From in%@vtcs1 Thu Jun  4 09:09:50 1987
Date: Thu, 4 Jun 87 09:09:30 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #138
Status: RO

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 4 Jun 87 03:18 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa04545; 4 Jun 87 0:00 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa05460; 3 Jun 87 23:58 EDT
Date: Wed  3 Jun 1987 20:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #138
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Thursday, 4 Jun 1987     Volume 5 : Issue 138

Today's Topics:
  Queries - Uncertainty in ART & Expert Systems for Debugging and Porting &
    Sources for June AI Expert & AAAI at Seattle &
    Common LISP on PRIME 50 & Connectionist AI Grad Schools,
  Education - AI Graduate Schools,
  Philosophy - Computational Complexity,
  Binding - Walter Bunch,
  Funding - Travel Funding,
  Education - Computer Grading and the Law,
  Theory - Linguistic Precision & Symbol Grounding

----------------------------------------------------------------------

Date: 1 Jun 87 16:02:51 GMT
From: ihnp4!alberta!calgary!arcsun!greg@ucbvax.Berkeley.EDU  (Greg Sidebottom)
Subject: Request for information: uncertainty in ART

I am using ART (from INFERENCE) and I am interested in implementing a
mechanism for dealing with uncertainty.  I would like to hear from anybody
who has addressed this problem.

Thanks in advance
Greg

--
Greg Sidebottom, Alberta Research Council
3rd Floor, 6815 8 Street N.E.
Calgary, Alberta CANADA  T2E 7H7
(403) 297-2677

UUCP:  ...!{ubc-vision, alberta}!calgary!arcsun!greg

------------------------------

Date: 3 Jun 87 03:15:49 GMT
From: eric@eddie.mit.edu  (Eric Van Tassell)
Subject: Expert Systems, Debugging and Porting


Hi,
        Does anyone have any experience in building expert systems to
assist in porting large C (or any language) programs to new hardware
and OS environments? I am interested in building a system to aid in
porting and debugging a 100K line relational database and 4GL. Please
e-mail to me any info you think might be relevant. (Success, failure,
elation, bitter dejection, or utter frustration) Thanks in advance.


                        Eric Van Tassell
                        eric@eddie.mit.edu

------------------------------

Date: 1 Jun 87 20:52:20 GMT
From: tektronix!tekcrl!tekchips!stever@ucbvax.Berkeley.EDU  (Steve
      Rehfuss)
Subject: sources for June AI Expert

Can someone send me the source code posted for the June issue of AI Expert?
I actually just want the prolog benchmark stuff, if you happen to have it
separated out.
Sorry about this, it expired before I knew I wanted it.

Thanks,
Steve R
stever%tekchips.tek.com@relay.cs.net

------------------------------

Date: 3 Jun 87 10:10:00 EST
From: "LIZ_FONG" <fong@icst-ise>
Reply-to: "LIZ_FONG" <fong@icst-ise>
Subject: Information on AAAI at Seattle

Can someone send me info on AAAI at Seattle, July 13-17.
E. Fong <fong@icst-ecf.arpa>

------------------------------

Date: 3 Jun 87 18:35:56 GMT
From: necntc!primerd!doug@ames.arpa  (Douglas Rand)
Subject: Common LISP on PRIME 50 Series

I'm interested in people's reaction to Lucid's CL on the Prime.  Are people
even aware that this exists?

Doug Rand (...!mit-eddie!primerd!doug, doug@primerd.prime.com)


--
Douglas Rand, Prime Computer Inc. (decvax!necntc!primerd!doug)
->  The above opinions are mine alone and are not influenced by narcotics,
    my employer,  my cat or the clothes I wear.

------------------------------

Date: 25 May 87 20:00:48 GMT
From: speech2.cs.cmu.edu!yamauchi@pt.cs.cmu.edu  (Brian Yamauchi)
Subject: Connectionist AI Grad Schools

I will be graduating from Carnegie-Mellon next May, with a BS in applied
math/computer science, and I am planning to attend graduate school with the
goal of a PhD in computer science.

My field of interest is artificial intelligence, specifically, connectionist
artificial intelligence.  I am currently considering Carnegie-Mellon, MIT,
Caltech, Stanford, UCSD, and the University of Rochester.  Are there any
other universities that I should be considering?  Are there any universities
conducting connectionist AI research that I have missed?

I would greatly appreciate any information that anyone could provide.  Also,
I would be interested in hearing any opinions about the relative merits of
the computer science graduate programs at these universities, both in
general and relative to my specific interests.

                                Thanks in advance,

                                Brian Yamauchi

Brian Yamauchi                      ARPANET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department

------------------------------

Date: 2 Jun 87 02:48:13 GMT
From: decvax!dartvax!takis@ucbvax.Berkeley.EDU  (Takis Metaxas)
Subject: Re: Need info on grad schools with a good AI program


        From my experience, I can point out two schools with
some projects in AI: Brown Univ. in Providence, RI, which has a specialty
in natural language representation, and Carnegie-Mellon, in searching.
        Good luck with the field you have chosen...


  [See back issues of AI Magazine and the SIGART Newsletter for
  descriptions of many graduate programs.  -- KIL]

------------------------------

Date: Tue, 2 Jun 87 04:43:53 EDT
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Re: philosophy and computational complexity


I second the suggestion of the Cherniak paper.  If you want a more complete
work try
Christopher Cherniak, Minimal Rationality.
I believe it is MIT Press, 1987.

------------------------------

Date: 1 Jun 87 09:56:12 GMT
From: mcvax!ukc!its63b!hwcs!aimmi!walt@seismo.css.gov  (Walter Bunch)
Subject: Conceptual Information Research


When I made my original posting, my .signature address was incorrect. Thanks
to those who got their response to me anyway. Our address changed recently.
Sorry for the trouble.
--
Walter Bunch, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
UUCP:   walt@uk.ac.hw.cs
ARPA:   walt%cs.hw.ac.uk@cs.ucl.ac.uk
JANET:  walt@uk.ac.hw.cs                            "Is that you, Dave?"

------------------------------

Date: Tue, 2 Jun 87 15:24:37 BST
From: "G. Joly" (Birkbeck) <gjoly@Cs.Ucl.AC.UK>
Subject: Travel Funding.

With reference to the articles requesting information on financial
support for travel to Milan for IJCAI-87, the following are
possible sources of support.
(1) Royal Society (U.K. residents and Ph.D. status only).
(2) British Computer Society (members only).
(3) AI and Simulation of Behaviour (members only).
(4) AAAI (members only?).
In the case of (1) above, the closing date has already passed,
but the information may be of use in the future. Most
professional bodies seem to have some funds available to
their own members.
I am not 100% sure of all of the above, but hope this short
list is a start (does anyone have a larger collection?).

Gordon Joly,
Computer Science,
Birkbeck College,
Malet Street,
LONDON WC1E 7HX.

+44 1 631 6468

ARPA: gjoly@cs.ucl.ac.uk
BITNET: UBACW59%uk.ac.bbk.cu@AC.UK
UUCP: ...!seismo!mvcax!ukc!bbk-cs!gordon

------------------------------

Date: Wed, 3 Jun 87 10:48:31 PDT
From: Neil Hunt <spar!hunt@decwrl.DEC.COM>
Subject: Re: Computer Grading and the Law


In V5 #135, Laurence Leff <leff%smu.csnet@RELAY.CS.NET>
makes a point about computer grading of student essays.

He proposes using computers to grade essays on a first pass, with
"some procedure for complaints to be made to a human being with an
appropriate hearing", and with the requirement that the computer
"must in some way indicate how the grade was determined".

I think that he has missed the point of earlier discussions
expressing concern over the educational consequences of having
students orient their efforts towards pleasing a machine rather than
a human grader.  I believe that the real lesson students would learn
in this situation is that it is much simpler to write their essays in
a style that satisfies the mechanical grader than to pursue
rectification of their grades by requesting a hearing with a human.
In fact, most students would probably soon discover how to beat the
machine at its own game, writing in a style which would be
unacceptable to a human grader, but which a machine with rules of a
limited scope might grade highly.

The opposite side of the coin, however, as most students are aware,
is that human graders all have their own preferences and foibles.
Students do learn to avoid certain techniques and foster others just
because their human graders seem to dislike the former and like the
latter, even if these feelings are not representative of all graders.
The advantage of human involvement is that the scope of the human
includes an understanding of this very problem, thereby providing a
curb on the possibility of either the teacher or the student
exploiting the situation too far.

Of course, the problem is a characteristic of our society, as one's
work is always judged by people with prejudices and biases.  I
believe that before we introduce additional computerised agents of
judgement, we should have a good understanding of all the problems
they might pose.

This is not to say that mechanical style checkers do not have their
place.  Perhaps all students should have the option of using such a
tool before submitting their work to the human grader, but they
should be encouraged to understand its limitations as well as its
strengths, and to avoid falling into the trap of assuming that if the
machine liked their essay, the intended readership would also like
it.

Perhaps it is a little premature to be considering the legality of
using computerised grading systems.  I am sure that there are many
legal options available to teachers and graders which we would not
expect them to utilise if they were not effective teaching and
learning tools.  I think that the desirability of using such an
option should be established before time is wasted debating whether
it is legal.

Neil/.

These are my own opinions and not those of my employer etc.

------------------------------

Date: 2 Jun 87 00:39:52 GMT
From: hoptoad!laura@ucbvax.Berkeley.EDU (Laura Creighton)
Reply-to: hoptoad!laura@ucbvax.Berkeley.EDU (Laura Creighton)
Subject: Re: framing problems


In article <8705280722.AA09419@ucbvax.Berkeley.EDU> DAVIS@EMBL.BITNET writes:
>
>I'd like to briefly say that perhaps an even more astounding problem
>than that proposed by Stevan Harnad is that connected with the means by
>which literate, intelligent and interested persons can totally obscure
>the central core of an idea by the use of unnecessarily obtuse jargon.
>
>If we're going to discuss the more philosophical end of AI (*yes please!*),
>then we don't *have* to throw everyone else off the track by bogging
>down the chat in a maze of terms intended to have *such* a precise meaning
>as to prevent anyone but the author from truly grasping the intended meaning.

Precision is a good thing.  If one can say precisely what one means then
one will not be misunderstood.  This, alas, is a pipe dream.  There
is no way to say precisely what one means -- what one says does not
have precise meaning embedded in the words or the relationships between
the words.  Rather, one shares a linguistic context with one's
audience.  This means that the search for precision is never ending.

Right now there are a good number of people who want to talk about
``psychic energy'' and ``interpersonal energy'' and the like.
Regardless of what these people mean by these terms, it is clear that
they do not mean m c-squared.  The search for precision continues.
--
(C) Copyright 1987 Laura Creighton - you may redistribute only if your
    recipients may.

        ``One must pay dearly for immortality:  one has to die several
        times while alive.'' -- Nietzsche

Laura Creighton
ihnp4!hoptoad!laura  utzoo!hoptoad!laura  sun!hoptoad!laura

------------------------------

Date: 2 Jun 87 12:54:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: physical invertibility and symbol grounding


S. Harnad writes:

> Now I conjecture that it is this physical invertibility -- the possibility
> of recovering all the original information -- that may be critical in
> cognitive representations. I agree that there may be information loss in
> A/A transformations (e.g., smoothing, blurring or loss of some
> dimensions of variation), but then the image is simply *not analog in
> the properties that have been lost*! It is only an analog of what it
> preserves, not what it fails to preserve.....
>
> A strong motivation for giving invertibility a central role in
> cognitive representations has to do with the second stage of A/D
> conversion: symbolization. The "symbol grounding problem" that has
> been under discussion here concerns the fact that symbol systems
> depend for their "meanings" on only one of two possibilities: One is
> an interpretation supplied by human users -- "`Squiggle' means `animal' and
> `Squoggle' means `has four legs'" -- and the other is a physical, causal
> connection with the objects to which the symbols refer. ....
>
> The reason the invertibility must be physical rather than merely
> formal or conceptual is to make sure the system is grounded rather
> than hanging by a skyhook from people's mental interpretations.

I wonder why the grounding is to depend on invertibility rather than
causation and/or resemblance?  Isn't it true that physically distinct
kinds of light (eg. #1 red-wavelength and green-wavelength vs.
#2 yellow-wavelength) can cause completely indistinguishable
sensations (ie subjective yellow)?  Is this not, then, a non-invertible,
but nonetheless grounded sensation?  When I experience something as
yellow, I have no way short of spectroscopy of knowing what the
"real" physical characteristics are of the light.  Nonetheless,
I know what "yellow" means, as do young children, scientifically
naive people, etc.

I don't have a ready-made candidate to substitute for invertibility as a
basis for symbol-grounding, although I suspect, as mentioned above,
that causation and resemblance are lurking around somewhere.
But how can invertibility serve if in fact our sensations are, in general,
not invertible?

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: Tue, 2 Jun 87 13:48:51 pdt
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: symbol grounding


stevan harnad writes guardedly:
> perhaps there was an element of incoherence in all but the most
> technical and restricted of signal-analytic candidates.

for the record, my suggestion was not signal-analytic, and no-one
showed any element of technical incoherence. however, it was met
with resounding uninterest, since it was a distinction from logic.
since most people want an inherent distinction, i.e. one that
maintains under translations and coding, my suspicion is still that
technical logic and complexity theory, not signal processing, is the
place to look for a solution.

peter ladkin
ladkin@kestrel.arpa

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jun 11 15:49:14 1987
Date: Thu, 11 Jun 87 15:49:06 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #139
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 11 Jun 87 15:47 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa04579; 10 Jun 87 3:53 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa06087; 10 Jun 87 3:46 EDT
Date: Wed 10 Jun 1987 00:13-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #139
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Wednesday, 10 Jun 1987    Volume 5 : Issue 139

Today's Topics:
  Conference - AAAI's Preregistration Deadline,
  Binding - Number Theory Net,
  Queries - Small Expert Systems & Speech Data Compression &
    Dominoes & Hofstadter's Waking Up From the Boolean Dream,
  Theory - Applying AI Models to Biology

----------------------------------------------------------------------

Date: Thu, 4 Jun 87 10:23:18 PDT
From: AAAI <AAAI-OFFICE@SUMEX-AIM.STANFORD.EDU>
Subject: AAAI's Preregistration Deadline


The AAAI would like to remind those individuals interested in attending
AAAI-87 in Seattle, July 13-17, that the preregistration deadline of Friday,
June 12, draws very near. If you would like registration materials, please
call or send us a msg with your name and mailing address.  Thanks!

AAAI
445 Burgess Drive
Menlo Park, CA 94025
(415) 328-3123
AAAI-Office@sumex-aim.stanford.edu

------------------------------

Date: 5 Jun 1987 16:45:12-EDT (Friday)
From: "Victor S. Miller" <VICTOR%YKTVMX.BITNET@forsythe.stanford.edu>
Reply-to: THEORYNT%YKTVMX.BITNET@forsythe.stanford.edu
Subject: Number Theory Net

                [Forwarded from the Stanford bboard.]


          Announcing Number Theory Net
   I would like to start a separate network for Number Theorists
around the world.  This would be similar in principle to Theory Net (and
probably have some overlap).  The purpose of Number Theory Net would be
to help communication among those who work in number theory.  Appropriate
submissions would be: problems, solutions, queries, notification of
address changes, announcement of results, etc.  For now, all submissions
will be handled by the same userids as for TheoryNet: TheoryNet@ibm.com
or theorynt@yktvmx.bitnet for submissions, and TheoryNet-Request@ibm.com
or theorynt@yktvmx.bitnet for administrative matters (e.g. additions or
deletions to the subscriber list, requests for back submissions, etc.).
All contributions should be clearly labeled as being for NumberTheoryNet.
I think that it should be useful and enlightening.
                    Victor S. Miller -- moderator

------------------------------

Date: 5 Jun 87 13:52:47 GMT
From: salveter@bu-cs.bu.edu  (Sharon Salveter)
Subject: Need small expert system for research

I am directing a research project on knowledge transfer for expert
systems.  Essentially, we are trying to automate the function of
the knowledge engineer in classification-type expert systems. We
are looking for small (fewer than 1000 rules) existing expert systems
to use as our domains and testbeds/benchmarks.  If you have such a system
that you would like to donate to research, please contact me.

Sharon Salveter  Computer Science  Boston University

------------------------------

Date: 2 Jun 87 22:10:53 GMT
From: imagen!auspyr!dlb!dana!rap@ucbvax.Berkeley.EDU  (Rob Peck)
Subject: Speech Data Compression

I am interested in finding some kind of data compression algorithm
that is suitable for compressing speech data.  As I understand it,
human speech has a great deal of redundancy to it, i.e. repetitions
of virtually the same waveforms over a period of time, as well
as slow changes in many cases from one waveform to the next.

However, if one takes a set of audio samples of a spoken word,
the samples will not fall in the right spots to show up any such
redundancy.  Thus, for a simplistic compression algorithm that
looks for repeated sequences, no opportunity to compress would
be noticed.
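
For concreteness, here is the kind of brute-force period finder I have
in mind -- a minimal sketch of my own (the function name, frame size,
and the 8 kHz sampling assumption are all just illustrative):

    /* Sketch: estimate the dominant repetition period (in samples) of
     * a speech frame by brute-force autocorrelation.  Illustrative
     * only; not production code. */
    #include <stdio.h>

    #define MIN_LAG  20             /* ~ 400 Hz at 8 kHz sampling */
    #define MAX_LAG  200            /* ~  40 Hz at 8 kHz sampling */

    static int best_period(const short *frame, int n)
    {
        int lag, i, best = MIN_LAG;
        double r, best_r = -1.0;

        for (lag = MIN_LAG; lag <= MAX_LAG && lag < n; lag++) {
            r = 0.0;                /* correlate frame with itself */
            for (i = 0; i + lag < n; i++)
                r += (double)frame[i] * (double)frame[i + lag];
            if (r > best_r) { best_r = r; best = lag; }
        }
        return best;                /* lag of strongest self-similarity */
    }

    int main(void)
    {
        short frame[400];
        int i;
        for (i = 0; i < 400; i++)   /* synthetic signal, period 50 */
            frame[i] = (short)((i % 50) - 25);
        printf("estimated period: %d samples\n", best_period(frame, 400));
        return 0;
    }

A repeat-coder could then store one period's worth of samples plus a
repetition count, which is the sort of compression I am after.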

Could someone point me to the appropriate literature?   Or is there
some public domain source code that is already available for this?

The code needn't be fast on the analysis and compression.  On
playback, it should be pretty easy to expand, though.  That is,
play so many repetitions of this waveform at this sampling rate,
then do this next one (or better still, adjust the current waveform
until it looks like this new one, as a slewing to the new output...
that'd be neat).

I've read a little about FFTs but, once one is calculated, I have no
idea how to use it or whether it gives me remotely what I am looking
for here.

Please EMAIL directly to me.  I will summarize any interesting
responses to the Net.   Thanks very much.

Rob Peck        ...ihnp4!hplabs!dana!rap

------------------------------

Date: 8 Jun 87 19:46:16 GMT
From: ai!gautier@rsch.wisc.edu  (Jorge Gautier)
Subject: WANTED: references on the game of dominoes

I am looking for references on computer implementations of the game of
dominoes.  I suspect there are many variations on the rules for this game,
but any pointers to papers, commercial products, Ph.D. theses :-), etc.
would be much appreciated.  Please reply by mail.

Jorge Gautier
gautier@ai.wisc.edu

"America is waiting for a message of some sort or another."

------------------------------

Date: Tue, 09 Jun 87 11:26:42 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: digging up the garbage....


Ok, here's a quick query for old timers on the AIList (no prizes KIL for
being the first in line). Did the list ever show much of a response to
Doug Hofstadter's "Waking up From the Boolean Dream or, Subcognition as
Computation" ? Yes, yes - I know thats it a wet, cloudy, amorphous piece
of writing, utterly unpublishable in any journal beginning with the
name "Transactions of....", but nevertheless, Hoftstadter's criticism
of 'traditional' AI ("high church computationalism") still seems well in
place amidst the countless "has anyone seen expert system EXSYS yet ?"
and "any clues on dealing with uncertainty within the context of a
WHITEWASH based frame-solving fourth generation language ?" that
dominate the list......

I don't want to dig up the past, but if it hasn't happened before, are there
any defenders of the EXSYS/4GL/"fuzzy reasoning"/etc., etc., approach willing to
correct my impressions of the right direction for movement ?
Or even just give me a few recent, decent rebuttals of Hofstadter's viewpoint?

yours in statistical emergence,

paul davis

"i wash my own clothes, i wash my own hair, i wash my own opinions"
nb: but my employers provide the washing machine, the shower & the computer.

davis@embl.bitnet

------------------------------

Date: 6 Jun 87 01:38:26 GMT
From: mnetor!yetti!unicus!craig@seismo.css.gov  (Craig D. Hubley)
Subject: Taking AI models and applying them to biology...

Forgive the wide cross posting, net.gods, but I am interested in gathering
an opinion from biological and artificial intelligence people on a model
that arises from AI but has (possibly) biological implications:

Foreword or WHY I'M WRITING THIS.
--------------------------------
I was semi-surprised in recent months to discover that cognitive psychology,
far from developing a bold new metaphor for human thinking, has (to a degree)
copied at least one metaphor from third-generation computer science.

This description of the human memory system, though cloaked in vaguer terms,
corresponds more or less one-to-one with the traditional computer
architecture we all know and love.  To wit:

        - senses have "iconic" and "echo" memories analogous to buffers.
        - short term memory holds information that is organized for quick
        processing, much like main storage in a computing system.
        - long term memory holds information in a sort of semantic
        association network where large related pieces of information
        reside, similar to backing or "archived" computing storage.

At least this far, this theory appears to owe a lot to computer science.
Granted, there is lots of empirical evidence in favour, but we all know
how a little evidence can go far too far towards developing an analogy.

What I think we may need are good parallel connectionist computing models
for the social sciences to copy, rather than these old ones that we are
beginning to fuse and modify and discard.  After all, engineering can
construct and test artifacts much quicker than psychologists can.  And
investigate their insides and their performance as well...

The Point or WHAT I'M THINKING ABOUT
------------------------------------
        Single cells are constructed according to instructions resident
        in their own DNA.  When their reproductive process fails, they
        die, become cancerous, etc...

        In computing terms, a self-reproducing program that messes up its
        code fails to function (it does not reproduce), or it may go on
        reproducing a flawed copy (cancer...).

        But a biological mechanism such as, say, a muscle or a brain is
        a massively parallel system consisting of many many redundant cells,
        each of which is capable of performing (at least almost) the same
        function.

        So many many parts would have to fail before the effect was enough
        to endanger the system as a whole.  That is, it degrades gracefully.
        This effect has been observed in parallel sensing systems, which
        use several low-resolution phased fields that redundantly cover
        the same area.  Removing one such field results in a loss of
        resolution, but not utter failure to detect a stimulus.  Details
        in Geoffrey Hinton and others... (Byte AI issue, 1985?)

        At some point of degradation, the whole parallel system will collapse.
        Or an aged human being will die of a cold.
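
        One way to make "degrades gracefully" concrete (a back-of-envelope
        formulation of mine, not from the Hinton article): if each of n
        redundant units independently survives with probability p, and the
        organ still functions with at least k working units, the standard
        k-out-of-n reliability is

            R = \sum_{i=k}^{n} \binom{n}{i} p^i (1-p)^{n-i}

        which stays near 1 while p is well above k/n, then collapses
        sharply as p falls toward k/n -- graceful degradation, followed by
        sudden failure of the whole parallel system.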

The Question or WHAT DO YOU THINK?
----------------------------------
Apparently, all human organ weights begin to decline shortly after puberty.
The cumulative effect of this seeming reduction of resources isn't felt so
strongly until middle-age, when we become more susceptible to disease.

So far, this is just a statement of the nature of parallel systems.

But does it hold up as a theory of aging?

- Is mitosis sufficiently prone to failure to account for organ decline?
        - Statistically, one would expect exponential distribution for
        failure of single cells, the rate dependent on mitosis failure,
        and perhaps modified by other cell-killing factors
        - Does organ failure, medically, occur at the point where
        a parallel processing system, mathematically, would fail?

I've heard that mammal cells appear to suffer a "hard" reproductive limit
of 52 mitosis operations, and that meiosis "resets this counter" to 0.

- any comment on this, bio-med types?  Is it true?
- Would a theory assuming a simple variable or random "counter" in each cell
limiting its reproductive span better explain aging (programmed cells...)

It doesn't seem so... regardless of the origin of the failure, the observed
degradation of the system as a whole would still follow this pattern.

The upshot of this is that a potentially useful life science model may have
just materialized in artificial intelligence.

The main flaw that I can see in it is that a cell is a complex mechanism in
and of itself, and so the success/failure of each might be subject to
many factors in parallel as well.  That is, it might not fail the way a
short subroutine would were it copied badly, which is the gist of this.
But then one might find a lower level where the parts were sufficiently
monolithic that the analogy held.

This seems to kick the butt of the good old 'Entropy' theory... cop-out.
Incompetent nineteenth-century philosophers leaned heavily on entropy.

Comments?  Flames?  The name of a good shrink?

Musing,
Craig.

------------------------------

Date: 10 Jun 87 02:49:55 GMT
From: hao!boulder!pell@husc6.harvard.edu  (Anthony Pelletier)
Subject: Re: Taking AI models and applying them to biology...

(Craig D. Hubley) writes:
(cognitive psychology)
>far from developing a bold new metaphor for human thinking, has (to a degree)
>copied at least one metaphor from third-generation computer science.
>

One of the things that has always amused me is that, to the extent that
I understand the structuring of computers, it seems that the cell
and the computer scientists have come up with similar solutions to
many of the same questions.  This is particularly true when one looks
at information flow in the cell.  I feel comfortable in assuming that
the cell had little help from the CS types in solving problems of information
flow.
It is likely to be true that contemporaries in different scientific
fields play with each other's ideas.  This is why "Nature" insists on being so
broad and why F.H.C.C. can get work.

But I should stay more to the point.

>The Question or WHAT DO YOU THINK?
>----------------------------------
>Apparently, all human organ weights begin to decline shortly after puberty.
>The cumulative effect of this seeming reduction of resources isn't felt so
>strongly until middle-age, when we become more susceptible to disease.

>- Is mitosis sufficiently prone to failure to account for organ decline?
>
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>

It would seem to me that the step that is likely to give the cell trouble
is not mitosis but DNA replication.  If a whole chromosome is lost or
non-disjoined, that cell is in some serious trouble.  Progressive
accumulation of mistakes through replication and general maintenance
seems a more likely culprit.

I confess that once the topic turns to outside the single cell or involves
more than, say, two cells, I am hopelessly lost.
So the question of aging is outside my capabilities.  This will not, of course,
stop me from volunteering the following:
I have never liked the "hard-wired-number-of-mitosis" model.
I am not sure why; it just seems implausible, or worse yet, unnecessary.
Supposedly "immortal" cells, like bacteria, actually have a rather high death
rate in the population (try doing a particle count then plating them out to see
how many are actually able to continue dividing).
Their apparent immortality is the result of unrestrained growth.
I suspect the failure rate is similar between bacteria and individual cells
of a metazoan.  The difference may be simply that a metazoan cannot tolerate
unrestrained growth of cell populations.  The cells are forced to stop
dividing when in contact with other cells.  They can be induced to re-enter
the cycle by growth factors released, for example, when the skin is cut.
I would guess that if one coupled the limitations on growth necessary to
be a metazoan with accumulated errors, both during replication and
simple maintenance, one could explain gradual breakdown of tissue without
invoking the "hard-wire" model.

oh well, I've gone on too long already.


tony (few degrees are worth remembering--and none are worth predicting)

Pelletier
Molecular etc. Bio
Boulder, Co. 80309-0347

P.S. I think a lot about information flow problems and would enjoy
discussions on that...if anyone wants to chat.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jun 11 03:21:50 1987
Date: Thu, 11 Jun 87 03:21:39 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #140
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 11 Jun 87 03:10 EDT
Received: from relay.cs.net by RELAY.CS.NET id ah07121; 10 Jun 87 12:12 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa09561; 10 Jun 87 12:10 EDT
Date: Wed 10 Jun 1987 00:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #140
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Wednesday, 10 Jun 1987    Volume 5 : Issue 140

Today's Topics:
  AI Tools - ID3 vs C4 & Expert Systems for CAD,
  Comment - Precision in Writing,
  Theory - Complexity Theory &
    Applying AI Models to Biology &
    Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 4 Jun 87 13:04:17 GMT
From: reiter@endor.harvard.edu  (Ehud Reiter)
Subject: Re: ID3 vs C4

In article <114@upba.UUCP> damon@upba.UUCP (Damon Scaggs) writes:
>I understand that Ross Quinlan, author of the ID3 classification algorithm
>has developed a better version with the designation C4.  I am looking for
>any papers or references about this new algorithm as well as any comments
>about what it does better.

The best reference I've seen on statistical algorithms for learning decision
trees is

        CLASSIFICATION AND REGRESSION TREES
                by L. Breiman, J. Friedman, R. Olshen, C. Stone
                Wadsworth Press, 1984

The book makes no specific mention of ID3 or C4, but it gives much more
detail about this class of learning algorithms than I've seen in any of
Quinlan's papers.
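
For readers who have not seen ID3 itself: its core is a greedy split on
information gain.  A minimal sketch of the underlying entropy
computation (my own illustration -- not code from Quinlan or from the
book):

    /* Entropy (in bits) of a class distribution: the quantity whose
     * expected reduction ("information gain") ID3 maximizes when it
     * chooses an attribute to split on.  counts[i] = number of
     * training examples in class i. */
    #include <math.h>
    #include <stdio.h>

    static double entropy(const int *counts, int nclasses)
    {
        int i, total = 0;
        double h = 0.0, p;

        for (i = 0; i < nclasses; i++)
            total += counts[i];
        for (i = 0; i < nclasses; i++) {
            if (counts[i] == 0)
                continue;
            p = (double)counts[i] / (double)total;
            h -= p * log(p) / log(2.0);
        }
        return h;
    }

    int main(void)
    {
        int parent[2] = { 9, 5 };   /* e.g. 9 positive, 5 negative */
        printf("H = %.3f bits\n", entropy(parent, 2));
        return 0;
    }

The gain for a candidate attribute is then the parent's entropy minus
the size-weighted entropies of the child partitions.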

I'm posting this response to the net because I really do think this is a
superb book.
                                        Ehud Reiter
                                        reiter@harvard  (ARPA,BITNET,UUCP)
                                        reiter@harvard.harvard.EDU  (new ARPA)

------------------------------

Date: 4 Jun 87 19:16:24 GMT
From: bolasov@athena.mit.edu (Benjamin I Olasov)
Reply-to: aphrodite!bolasov@RUTGERS.EDU (Benjamin I Olasov)
Subject: Re: Wanted: Information on current work in Expert Systems
         for CAD


In article <8705300459.AA08391@ucbvax.Berkeley.EDU> SPANGLER@gmr.COM writes:
>I am beginning a survey of the current status of work in applying Expert
>Systems technology to Computer Aided Design.  This survey is being done
>for the Knowledge Engineering group at General Motors.
>
>I would greatly appreciate any descriptions of or references to research
>in this area, as well as information on what CAD expert systems and
>expert system shells are available for purchase.
>
>       -- Scott Spangler, spangler@gmr.com
>       -- Advanced Engineering Staff, GM


I just finished writing a master's thesis on just this topic; however,
it focuses primarily on applications for architectural practice,
especially design with a structural pre-cast concrete panel system.
You may also write or send e-mail to Professor Sriram in the Civil
Engineering Department here at MIT, who has written an expert system
for structural design called DESTINY.  His e-mail address is
sriram@athena.mit.edu; mine is bolasov@aphrodite.mit.edu or
Olasov@MIT-MULTICS.ARPA.

Good luck!

Ben

------------------------------

Date: Tue, 09 Jun 87 11:26:05 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: precision in writing


I have no wish to make a big deal out of this point, but I feel that Laura
Creighton's remarks on precision in writing/expression must be dealt with.
She writes:

> Precision is a good thing.  If one can say precisely what one means then
> one will not be misunderstood.  This, alas, is a pipe dream.  There
> is no way to say precisely what one means -- what one says does not
> have precise meaning embedded in the words or the relationships between
> the words.  Rather, one shares a linguistic context with one's
> audience.  This means that the serach for precision is never ending.

I'm afraid that there is a gross difference between the precise delineation
of an idea, and over-precise word usage. To be sure, all of human activity
is constantly capable of generating new words, and new uses for old words
(radical! barf! hack! bug!) - but this alone does not justify the
`jargonising' of debate. I believe that if two (or more) people wish to
debate any issue, then they have a responsibility to do so on as much common
ground as humanly possible.  Do you think that Bertrand Russell was any
less capable of a meaningful debate on various aspects of
philosophy/cognition because he didn't have access to computerese?  The
delineation of an idea
is capable of being precise through carefully chosen analogy and metaphor.
Such a route is actually better than jargonising since the writer/speaker
stands a better chance of getting the audience to appreciate the *core*
of an idea, rather than sit back satisfied that they *think* they understand
his words......

Sorry to go on on this one, but so much of the debate in and around
AI/cognitive science/philosophy of mind gets bogged down by people jargonising
their positions, which forces replies to first hack through the cloud
that surrounds potentially good (or bad) opinions.

yours for jargon free AI,

paul davis

"i wash my own clothes, i wash my own hair, i wash my own opinions"
nb: but my employers provide the washing machine, the shower & the computer

davis@embl.bitnet

------------------------------

Date: 3 Jun 87 17:15:21 GMT
From: mcvax!botter!klipper!biep@seismo.css.gov  (J. A. "Biep" Durieux)
Subject: What philosophical problems does complexity theory yield?

Suppose P != NP. Then some things will take a long time to compute.
But so what?

Suppose someone finds out not all problems can be solved in constant
time. Now that comes as a philosophical shock, of course. That has
lots of implications.

But once one has overcome that shock, finding that some problems cannot
be solved in linear time may be annoying, but since the possibility
of constant time has already been destroyed, it's no great news.

As, one by one, all sorts of upper bounds on exponents prove false, and
finally it seems even polynomial time isn't good enough, one gets bored
by all those variations on the same theme, no?  So what exactly is so
exciting about that polynomial limit?


About constant time solutions: Seemingly linear-time solutions can often be
turned into constant-time solutions by applying parallelism. This is the
way the universe is able to simulate itself, however big it (object) may
be or grow.  I don't know what the complexity of the collapse of a
wave-function would be, supposing that all "time-space-points" of it
worked on it in parallel.
But, isn't anything which cannot be turned into a constant-time process
philosophically annoying? Why just hassling about non-polynomial time
solutions? Am I missing something? (Shouldn't I have asked that at the
beginning of this article? :-))
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
Popper tells us how science ought to be done - Kuhn tells us how it *is* done.
        And I ... I read neither.

------------------------------

Date: 4 Jun 87 16:09:44 GMT
From: ramesh@cvl.umd.edu  (Ramesh Sitaraman)
Subject: Re: What philosophical problems does complexity theory yield?

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:

>But, isn't anything which cannot be turned into a constant-time process
>philosophically annoying? Why just hassling about non-polynomial time
>solutions? Am I missing something? (Shouldn't I have asked that at the
>beginning of this article? :-))
>--

Yes, you are missing the point !!

The difference between a polynomial and non-polynomial solution for
a problem is the difference between structure and a complete lack
of it.  If P != NP, we would have shown that some problems can be
solved only by something similar to a dumb exhaustive search over
the solution space i.e. there is not enough structure in the
problem to constrain its solutions.

Graph theorists have found Eulerian circuits very interesting,
and there have been very strong theorems proved about graphs
with this property.  However, the seemingly similar problem
of Hamiltonian circuits has almost no characterisation in spite
of diligent efforts for the past 100 years or so.  The theory of
NP-completeness explains this anomaly: Eulerian circuit can be
solved in linear time while Hamiltonian circuit is NP-complete!

                                Ramesh

(Defn:
      Eulerian ckt: A circuit passing through all edges of a graph
        without repeating an edge.
      Hamiltonian ckt: A circuit passing through all the vertices
        of a graph without repeating a vertex.
)
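
For concreteness, here is a minimal sketch (mine, not from the
literature) of why the Eulerian side is easy.  Euler's theorem says a
connected undirected graph has an Eulerian circuit iff every vertex has
even degree, so a degree scan settles it in linear time; nothing
comparably local is known for the Hamiltonian property:

    #include <stdio.h>

    /* Euler's criterion: a connected undirected graph has an Eulerian
     * circuit iff every vertex has even degree.  (The connectivity
     * check is omitted here for brevity.) */
    static int has_eulerian_circuit(const int *degree, int n)
    {
        int v;
        for (v = 0; v < n; v++)
            if (degree[v] % 2 != 0)
                return 0;           /* an odd vertex rules it out */
        return 1;
    }

    int main(void)
    {
        int k4[] = { 3, 3, 3, 3 };      /* K4: all degrees odd  -> 0 */
        int k5[] = { 4, 4, 4, 4, 4 };   /* K5: all degrees even -> 1 */
        printf("K4: %d  K5: %d\n",
               has_eulerian_circuit(k4, 4),
               has_eulerian_circuit(k5, 5));
        return 0;
    }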

------------------------------

Date: 6 Jun 87 20:52:53 GMT
From: umnd-cs!umn-cs!moll@ucbvax.Berkeley.EDU  (Rick Moll)
Subject: Re: What philosophical problems does complexity theory yield?

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:
>Suppose P != NP. Then some things will take a long time to compute.
>But so what?
>As, one by one, all sorts of upper bounds on exponents prove false, and[...]

You seem to be implying that if P=NP then *all* problems can be solved in
polynomial time.  This is certainly not so.  Given any computable function
f(x), one can construct (by diagonalization) a problem which can be solved,
but cannot be solved in time f(n) on a Turing machine.  I believe
the same proof would work for parallel machines.
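
Stated as a formula, this is just the standard diagonalization
(time-hierarchy) fact:

    \forall f \text{ computable}\ \exists L \text{ decidable}:
        L \notin \mathrm{DTIME}(f(n))

so even P=NP would leave a strict hierarchy of ever-harder decidable
problems.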

>About constant time solutions: Seemingly linear-time solutions can often be
>turned into constant-time solutions by applying parallelism.  [...]

Note that P=NP is stated as a problem about Turing machines (or sometimes
single processor random access machines).  Any problem in the class NP
can definitely be solved in polynomial time if one is allowed to use an
arbitrary number (varying with the size of the problem instance) of
processors.

------------------------------

Date: 7 Jun 87 00:28:36 GMT
From: chiaraviglio@husc4.harvard.edu  (lucius chiaraviglio)
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP> craig@unicus.UUCP (Craig D. Hubley) writes:
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>
>- any comment on this, bio-med types?  Is it true?
>- Would a theory assuming a simple variable or random "counter" in each cell
>limiting its reproductive span better explain aging (programmed cells...)

        Random failure may be a significant factor in aging, but a hard limit
on the number of times a cell may divide before it self-destructs has been
observed in tissue culture, where the cells are for the most part not
dependent on each other.  Those cells which manage to get past the hard limit
are abnormal (although not necessarily cancerous) in ways beyond their mere
ability to keep dividing after they were supposed to self-destruct.  I don't
remember most of the details of this, but I do remember that they tend to
become tetraploid (I think also aneuploid) due to an increase in the rate of
mitotic failure.

        -- Lucius Chiaraviglio
           lucius%tardis@harvard.harvard.edu
           seismo!tardis.harvard.edu!lucius

------------------------------

Date: 9 Jun 87 00:02:19 GMT
From: pixar!davel@ucbvax.Berkeley.EDU  (David Longerbeam)
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
>
> This description of the human memory system, though cloaked in vaguer terms,
> corresponds more or less one-to-one with the traditional computer
> architecture we all know and love.  To wit:

  [description deleted]

> At least this far, this theory appears to owe a lot to computer science.
> Granted, there is lots of empirical evidence in favour, but we all know
> how a little evidence can go far too far towards developing an analogy.

One of my philosophy professors in college offered the observation that
models for the human mind have always seemed to correspond to the most
advanced form of technology at that given point in history.  He could
recall that when he was young, this technology was the combustion engine,
and lo, the cognitive psychologists' model at that time was the combustion
engine.

Of course, this technology is now the digital computer, and many psychologists,
linguists and computer scientists use it as a model to explain activities
of the human mind.  Some go so far as to say that intelligence is nothing
more than the result of following the same sorts of syntactical rules as
performed by a computer!

But I stray...
I wanted to point out that you didn't give the source of the above model/
comparison, and that if it is not entirely empirical in nature, it
may be a case of "use the latest technology as the best model".
--
David Longerbeam                        ||  The opinions expressed above
Pixar                                   ||  are not to be construed as the
San Rafael, CA                          ||  opinions, stated or otherwise,
ucbvax!pixar!davel  (415) 499-3600      ||  of Pixar.

------------------------------

Date: 5 Jun 87 04:45:45 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: symbol grounding and physical invertibility


John Cugini <Cugini@icst-ecf.arpa> asks:

>       (1) I wonder why the grounding is to depend on invertibility rather than
>       causation and/or resemblance?
>
>       (2) Isn't it true that physically distinct
>       kinds of light (eg. #1 red-wavelength and green-wavelength vs.
>       #2 yellow-wavelength) can cause completely indistinguishable
>       sensations (ie subjective yellow)?  Is this not, then, a non-invertible,
>       but nonetheless grounded sensation?

(1) According to my view, invertibility (and perhaps inversion)
captures just the relevant features of causation and resemblance that
are needed to ground symbols. The relation is between the proximal
projection (of a distal object) onto the sensory surfaces -- let's
call it P -- and an invertible transformation of that projection [I(P)].
The latter is what I call the "iconic representation." Note that the
invertibility is with the sensory projection, *not* the distal object. I
don't believe in distal magic. My grounding scheme begins at the
sensory surfaces ("skin and in"). No "wider" metaphysical causality is
involved, just narrow, local causality.

Of course the story is more complicated, because iconic
representations are not sufficient to ground a symbol referring to
an object. They're not even enough to allow a device to reliably pick
out the object and give it the right name (i.e., to categorize or
identify it). "Categorical representations" are needed next, but these
are no longer invertible into the sensory projection. They are
feature-filters preserving only the (proximal) properties of the object's
sensory projection that reliably distinguish the object (let's say
it's an "X") from the other objects that it can be confused with
(i.e., relevant "non-X's" in the particular context of confusable
alternatives sampled to date). Then finally the labels ("X," "non-X")
can be used as the primitive symbols in a (now *grounded*) symbol
system, to be combined and otherwise syntactically manipulated into
meaningful composite symbol-strings (descriptions).
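
To compress this into notation (my shorthand, not standard usage): an
iconic representation is a transformation I of the proximal projection
P for which an inverse exists,

    I^{-1}(I(P)) = P    for every sampled projection P,

whereas a categorical representation C has no such inverse, since
C(P) = C(P') whenever P and P' project confusable members of the same
category -- the feature-filter deliberately discards within-category
differences.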

(2) Your question about indistinguishable but distinct colors mistakes my
grounding scheme for a "wide" metaphysical grounding scheme -- one
where the critical "causality" would be in the relation between distal
objects and our internal representations of them, whereas mine is a narrow,
skin-and-in grounding proposal. I have dubbed this view
"approximationism," and, without going into details (for which you may
want to consult the CP book or a reprint of the theoretical chapter),
the essence of the idea is that internal representations are
always approximate rather than "exact," in two important senses. The
iconic representation is approximate up to its grain of resolution
(the "jnd" or "just-noticeable-difference"): Think of it as a Principle
of the "Iconic Identity of Iconic Indiscernibles": What you can't tell
apart is the same to you.

The categorical representations are approximate in an even more important
sense: The only features the category filter picks out are the ones
that are needed in order to identify the confusable alternatives in
the context you have sampled to date. Hence an X is just what your
current, approximate, provisional context-dependent feature-filter picks
out reliably from among the X's and Non-X's you have encountered so far:
"The Categorical Identity of Unsorted or Unsortable Members" (i.e.,
X's are identically X's unless and until reliably identified or identifiable
otherwise).

Since this is not a "wide" grounding, there is nothing oracular or
omniscient or metaphysical about what the X is that this picks out.
There is no God's-eye view from which you can say what X's "really"
are. There's just the mundane historical fact -- available to an
outside observer, if there is one -- about what the actual distal objects
were whose proximal projections you were sampling. Those furnished your
context, and your fallible, context-dependent representations will
always be approximate relative to those objects.

In conclusion, the only differences in the object that are reflected
in the iconic and categorical representations are the ones present in
the proximal projection of the alternatives sampled to date
(and preserved by the category-feature filter). The representations
are approximate (i.e., indifferent) with respect to any further distal
differences. Symbolic discourse may serve to further tighten the
approximation, but even that cannot be "exact," if for no other
reason than that there's always a tomorrow, in which the
context may be widened and the current representation may have to be
revised. -- But that's another story, and no longer concerns the
grounding problem but what's called "inductive risk."
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jun 11 03:21:30 1987
Date: Thu, 11 Jun 87 03:21:22 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #141
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 11 Jun 87 03:07 EDT
Received: from relay.cs.net by RELAY.CS.NET id ac04630; 10 Jun 87 3:59 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa06138; 10 Jun 87 3:57 EDT
Date: Wed 10 Jun 1987 00:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #141
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Wednesday, 10 Jun 1987    Volume 5 : Issue 141

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 5 Jun 87 17:12:10 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In reply to my objection that
>>      invertibility has essentially *nothing* to do with the difference
>>      between analog and digital representation according to anybody's
>>      intuitive use of the terms

Stevan Harnad (harnad@mind.UUCP) writes in message <792@mind.UUCP>:

>There are two stages of A/D even in the technical sense. ... Unless the
>original signal is already discrete, the quantization phase involves a
>loss of information. Some regions of input variation will not be retrievable
>from the quantized image. The transformation ... cannot be inverted so as to
>recover the entire original signal.

Well, what I think is interesting is not preserving the signal itself but
rather the *information* that the signal carries.  In this sense, an analog
signal conveys only a finite amount of information and it can in fact be
converted to digital form and back to analog *without* any loss.
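
(The standard result behind this is the Nyquist-Shannon sampling
theorem: a signal band-limited to B Hz is completely determined by its
samples taken at rate 2B, and can be reconstructed exactly as

    x(t) = \sum_{n=-\infty}^{\infty} x(n/2B) \, \mathrm{sinc}(2Bt - n)

Exactness assumes unlimited amplitude precision; in practice it is the
finite signal-to-noise ratio that makes the information content finite
and licenses the "no loss" claim.)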

But in any case the point I've been emphasizing remains: the A/A
transformations you envisage are not going to be perfect (no "skyhooks" now,
remember?), so preservation or loss of information alone won't distinguish an
(intuitively) A/A from an A/D transformation.  I think the following reply to
this point only muddies the waters:

>                           I agree that there may be information loss in
>A/A transformations (e.g., smoothing, blurring or loss of some
>dimensions of variation), but then the image is simply *not analog in
>the properties that have been lost*! It is only an analog of what it
>preserves, not what it fails to preserve.

You can take this line if you like, but notice that the same is true of a
*digitized* image -- in your terms, it is "analog" in the information it
preserves and not in the information lost.  This seems to me to be a very
unhappy choice of terminology!

Both analog and digitizing transformations must preserve *some* information.
If all you're *really* interested in is the quality of being (naturally)
information-preserving (i.e. physically invertible), than I'd strongly
recommend you just use one of these terms and drop the misleading use of
"analog", "iconic", and "digital".

>                           The "symbol grounding problem" that has
>been under discussion here concerns the fact that symbol systems
>depend for their "meanings" on only one of two possibilities: One is
>an interpretation supplied by human users... and the other is a physical,
>causal connection with the objects to which the symbols refer.
>The surprising consequence is that a "dedicated system" -- one that is
>hard-wired to its transducers and effectors... may be significantly different
>from the very *same* system as an isolated symbol-manipulating module,
>cut off from its peripherals ...

With regard to this "symbol grounding problem": I think it's been
well-understood for some time that causal interaction with the world is a
necessary requirement for artificial intelligence.  Recall that in his BBS
reply to Searle, Dennett dismissed Searle's initial target -- the "bedridden"
form of the Turing test -- as a strawman for precisely this reason. (Searle
believes his argument goes through for causally embedded AI programs as well,
but that's another topic.)

The philosophical rationale for this requirement is the fact that some causal
"grounding" is needed in order to determine a semantic interpretation.  A
classic example is due to Georges Rey: it's possible that a program for
playing chess could, when compiled, be *identical* to one used to plot
strategy in the Six Day War. If you look only at the formal symbol
manipulations, you can't distinguish between the two interpretations; it's
only by virtue of the causal relations between the symbols and the world that
the symbols could have one meaning rather than another.

But although everyone agrees that *some* kind of causal grounding is
necessary for intentionality, it's notoriously difficult to explain exactly
what sort it must be.  And although the information-preserving
transformations you discuss may play some role here, I really don't see how
this challenges the premises of symbolic AI in the way you seem to think it
does.  In particular you say that:

>The potential relevance of the physical invertibility criterion
>would only be to cognitive modeling, especially in the constraint that
>a grounded symbol system must be *nonmodular* -- i.e., it must be hybrid
>symbolic/nonsymbolic.

But why must the arrangement you envision be "nonmodular"?  A system
may contain analog and digital subsystems and still be modular if the
subsystems interact solely via well-defined inputs and outputs.

More importantly -- and this is the real motivation for my terminological
objections -- it isn't clear why *any* (intuitively) analog processing need
take place at all.  I presume the stance of symbolic AI is that sensory input
affects the system via an isolable module which converts incoming stimuli
into symbolic representations.  Imagine a vision sub-system that converts
incoming light into digital form at the first stage, as it strikes a grid of
photo-receptor surfaces, and is entirely digital from there on in.  Such a
system is still "grounded" in information-preserving representations in the
sense you require.

In short, I don't see any *philosophical* reason why symbol-grounding
requires analog processing or a non-modular structure.

Anders Weinstein
BBN Labs

------------------------------

Date: 7 Jun 87 18:25:00 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein)
of BBN Laboratories, Inc., Cambridge, MA writes:

>       [regarding invertibility, information preservation and the A/D
>       distinction]: what I think is interesting is not preserving the
>       signal itself but rather the *information* that the signal carries.
>       In this sense, an analog signal conveys only a finite amount of
>       information and it can in fact be converted to digital form and back
>       to analog *without* any loss.

This is an important point and concerns a matter that is at the heart
of the symbolic/nonsymbolic issue: What you're saying is appropriate for
ordinary communication theory and communication-theoretic
applications such as radio signals, telegraph, radar, CDs, etc. In all these
cases the signal is simply a carrier that encodes information which is
subsequently decoded at the receiving end. But in the case of human
cognition this communication-theoretic model -- of signals carrying
messages that are encoded/decoded on either end -- may not be
appropriate. (Formal information theory has always had difficulties
with "content" or "meaning." This has often been pointed out, and I take
this to be symptomatic of the fact that it's missing something as a
candidate model for cognitive "information processing.")

Note that the communication-theoretic, signal-analytic view has a kind of
built-in bias toward digital coding, since it's the "message" and not
the "medium" that matters. But what if -- in cognition -- the medium
*is* the message? This may well be the case in iconic processing (and
the performances that it subserves, such as discrimination, similarity
judgment, matching, short-term memory, mental rotation, etc.): It may
be the structure or "shape" of the physical signal (the stimulus) itself that
matters, not some secondary information or message it carries in coded
form. Hence the processing may have to be structure- or shape-preserving
in the physical analog sense I've tried to capture with the criterion
of invertibility.

>       a *digitized* image -- in your terms... is "analog" in the
>       information it preserves and not in the information lost. This
>       seems to me to be a very unhappy choice of terminology! Both analog
>       and digitizing transformations must preserve *some* information.
>       If all you're *really* interested in is the quality of being
>       (naturally) information-preserving (i.e. physically invertible),
>       then I'd strongly recommend you just use one of these terms and drop
>       the misleading use of "analog", "iconic", and "digital".

I'm not at all convinced yet that the sense of iconic and analog that I am
referring to is unrelated to the signal-analytic A/D distinction,
although I've noted that it may turn out, on sufficient analysis, to be
an independent distinction. For the time being, I've acknowledged that
my invertibility criterion is, if not necessarily unhappy, somewhat
surprising in its implications, for it implies (1) that being analog
may be a matter of degree (i.e., degree of invertibility) and (2) that even
a classical digital system must be regarded as analog to a degree if
one is considering a larger "dedicated" system of which it is a
hard-wired (i.e., causally connected) component rather than an
independent (human-interpretation-mediated) module.
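
One way to make "analog as a matter of degree" concrete -- this is my
numerical gloss on the invertibility criterion, not a formal proposal
of Harnad's -- is to measure how nearly invertible a quantizing
transformation is; the finer the quantization, the smaller the
worst-case inversion error:

    # Degree of invertibility, illustrated as reconstruction error
    # after quantizing a signal at different resolutions.
    import math

    signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]

    def quantize(x, levels):
        """Many-to-few mapping; more levels = closer to one-to-one."""
        step = 2.0 / (levels - 1)           # signal range is [-1, 1]
        return round((x + 1.0) / step) * step - 1.0

    for levels in (2, 4, 16, 256):
        err = max(abs(x - quantize(x, levels)) for x in signal)
        print(f"{levels:3d} levels: worst-case inversion error {err:.4f}")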

Let me repeat, though, that it could turn out that, despite some
suggestive similarities, these considerations are not pertinent to the
A/D distinction but, say, to the symbolic/nonsymbolic distinction --
and even that only in the special context of cognitive modeling rather than
signal analysis or artificial intelligence in general.

>       With regard to [the] "symbol grounding problem": I think it's been
>       well-understood for some time that causal interaction with the world
>       is a necessary requirement for artificial intelligence...
>       The philosophical rationale for this requirement is the fact that
>       some causal "grounding" is needed in order to determine a semantic
>       interpretation... But although everyone agrees that *some* kind of
>       causal grounding is necessary for intentionality, it's notoriously
>       difficult to explain exactly what sort it must be. And although the
>       information-preserving transformations you discuss may play some role
>       here, I really don't see how this challenges the premises of symbolic
>       AI in the way you seem to think it does.

As far as I know, there have so far been only two candidate proposals
to overcome the symbol grounding problem WITHOUT resorting to the kind
of hybrid proposal I advocate (i.e., without giving up purely symbolic
top-down modules): One proposal, as you note, is that a pure
symbol-manipulating system can be "grounded" by merely hooking it up
causally in the "right way" to the outside world with simple (modular)
transducers and effectors. I have conjectured that this strategy
will not work in cognitive modeling (and I have given my supporting
arguments elsewhere: "Minds, Machines and Searle"). The strategy may work
in AI and conventional robotics and vision, but that is because these
fields *do not have a grounding problem*! They're only trying to generate
intelligent *pieces* of performance, not to model the mind in *all* its
performance capacity. Only cognitive modeling has a symbol grounding
problem.

The second nonhybrid way to try to ground a purely symbolic system in
real-world objects is by cryptology. Human beings, knowing already at least
one grounded language and its relation to the world, can infer the meanings
of a second one [e.g., ancient cuneiform] by using its internal formal
structure plus what they already know: Since the symbol permutations and
combinations of the unknown system (i.e., its syntactic rules) are constrained
to yield a semantically interpretable system, sometimes the semantics can be
reliably and uniquely decoded this way (despite Quine's claims about the
indeterminacy of radical translation). It is obvious, however, that such
a "grounding" would be derivative, and would depend entirely on the
groundedness of the original grounded symbol system. (This is equivalent
to Searle's "intrinsic" vs. "derived intentionality.") And *that* grounding
problem remains to be solved in an autonomous cognitive model.

My own hybrid approach is simply to bite the bullet and give up on the
hope of an autonomous symbolic level, the hope on which AI and symbolic
functionalism had relied in their attempt to capture mental function.
Although you can get a lot of clever performance by building in purely
symbolic "knowledge," and although it had seemed so promising that
symbol-strings could be interpreted as thoughts, beliefs, and mental
propositions, I have argued that a mere extension of this modular "top-down"
approach, hooking up eventually with peripheral modules, simply won't
succeed in the long run (i.e., as we attempt to approach an asymptote of
total human performance capacity, or what I've called the "Total Turing Test")
because of the grounding problem and the nonviability of the two
"solutions" sketched above (i.e., simple peripheral hook-ups and/or
mediating human cryptology). Instead, I have described a nonmodular
hybrid representational system in which symbolic representations are
grounded bottom-up in nonsymbolic ones (iconic and categorical).
Although there is a symbolic level in such a system, it is not quite
the autonomous all-purpose level of symbolic AI. It trades its autonomy
for its groundedness.

>       [W]hy must the arrangement you envision be "nonmodular"? A system
>       may contain analog and digital subsystems and still be modular if
>       the subsystems interact solely via well-defined inputs and outputs.

I'll try to explain why I believe that a successful mind-model (one
able to pass the Total Turing Test) is unlikely to consist merely of a
pure symbol-manipulative module connected to input/output modules.
A pure top-down symbol system just consists of physically implemented
symbol manipulations. You yourself describe a typical example of
ungroundedness (from Georges Rey):

>               it's possible that a program for playing chess could,
>               when compiled, be *identical* to one used to plot
>               strategy in the Six Day War. If you look only at the
>               formal symbol manipulations, you can't distinguish between
>               the two interpretations; it's only by virtue of the causal
>               relations between the symbols and the world that the symbols
>               could have one meaning rather than another.

Now consider two cases of "fixing" the symbol interpretations by
grounding the causal relations between the symbols and the world. In
(1) a "toy" case -- a circumscribed little chunk of performance such as
chess-playing or war-games -- the right causal connections could be
wired according to the human encryption/decryption scheme: Inputs and
outputs could be wired into their appropriate symbolic descriptions.
There is no problem here, because the toy problems are themselves
modular, and we know all the ins and outs. But none but the most
diehard symbolic functionalist would want to argue that such a simple
toy model was "thinking," or even doing anything remotely like what we
do when we accomplish the same performance. The reason is that we are
capable of doing *so much more* -- and not by an assemblage of endless
independent modules of essentially the same sort as these toy models,
but by some sort of (2) integrated internal system. Could that "total"
system be just an oversized toy model -- a symbol system with its
interpretations "fixed" by a means analogous to these toy cases? I am
conjecturing that it is not.

Toy models don't think. Their internal symbols really *are*
meaningless, and hence setting them in the service of generating a toy
performance just involves hard-wiring our intended interpretations
of its symbols into a suitable dedicated system. Total (human-capacity-sized)
models, on the other hand, will, one hopes, think, and hence the
intended interpretations of their symbols will have to be intrinsic in
some deeper way than the analogy with the toy model would suggest, at
least so I think. This is my proposed "nonmodular" candidate:

Every formal symbol system has both primitive atomic symbols and composite
symbol-strings consisting of ruleful combinations of the atoms. Both
the atoms and the combinations are semantically interpretable, but
from the standpoint of the formal syntactic rules governing the symbol
manipulations, the atoms could just as well have been undefined or
meaningless. I hypothesize that the primitive symbols of a nonmodular
cognitive symbol system are actually the (arbitrary) labels of object
categories, and that these labels are reliably assigned to their referents
by a nonsymbolic representational system consisting of (i) iconic (invertible,
one-to-one) transformations of the sensory surface and (ii) categorical
(many-to-few) representations that preserve only the features that suffice to
reliably categorize and label sensory projections of the objects in
question. Hence, rather than being primitive and undefined, and hence
independent of interpretation, I suggest that the atoms of cognitive
symbol systems are grounded, bottom-up, in such a categorization
mechanism. The higher-order symbol combinations inherit the bottom-up
constraints, including the nonsymbolic representations to which they
are attached, rather than being an independent top-down symbol-manipulative
module with its connections to an input/output module open to being
fixed in various extrinsically determined ways.
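
As a toy illustration of the proposed architecture (my sketch of the
idea only, not an implementation of Harnad's model; the features,
thresholds, and labels are all invented), the pipeline below keeps an
invertible iconic record of the sensory projection, reduces it
many-to-few to a categorical representation, and only then attaches an
arbitrary label that can serve as a symbolic atom:

    # Toy grounding pipeline: iconic -> categorical -> symbolic.

    def iconic(projection):
        """One-to-one (invertible) copy of the sensory projection."""
        return tuple(projection)            # structure-preserving

    def categorical(icon):
        """Many-to-few: keep only features sufficient to categorize."""
        mean = sum(icon) / len(icon)
        return (mean > 0.5,)                # a single binary feature

    LABELS = {(True,): "BRIGHT", (False,): "DARK"}   # arbitrary atoms

    def ground(projection):
        return LABELS[categorical(iconic(projection))]

    print(ground([0.9, 0.8, 0.7]))   # -> BRIGHT
    print(ground([0.1, 0.2, 0.0]))   # -> DARK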

>       it isn't clear why *any* (intuitively) analog processing need
>       take place at all. I presume the stance of symbolic AI is that
>       sensory input affects the system via an isolable module which converts
>       incoming stimuli into symbolic representations. Imagine a vision
>       sub-system that converts incoming light into digital form at the
>       first stage, as it strikes a grid of photo-receptor surfaces, and is
>       entirely digital from there on in. Such a system is still "grounded"
>       in information-preserving representations in the sense you require.
>       In short, I don't see any *philosophical* reason why symbol-grounding
>       requires analog processing or a non-modular structure.

It is exactly this modular scenario that I am calling into question. It
is not clear at all that a cognitive system must conform to it. To get a
device to be able to do what we can do we may have to stop thinking in
terms of "isolable" input modules that go straight into symbolic
representations. That may be enough to "ground" a conventional toy
system, but, as I've said, such toy systems don't have a grounding problem
in the first place, because nobody really believes they're thinking. To get
closer to life-size devices -- devices that can generate *all* of our
performance capacity, and hence may indeed be thinking -- we may have to
turn to hybrid systems in which the symbolic functions are nonmodularly
grounded, bottom-up, in the nonsymbolic ones. The problem is not a
philosophical one, it's an empirical one: What looks as if it's likely
to work, on the evidence and reasoning available?

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 16 03:21:14 1987
Date: Tue, 16 Jun 87 03:21:07 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #142
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 16 Jun 87 03:06 EDT
Received: from relay.cs.net by RELAY.CS.NET id ab07350; 15 Jun 87 2:11 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10375; 15 Jun 87 2:13 EDT
Date: Sun 14 Jun 1987 22:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #142
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 15 Jun 1987      Volume 5 : Issue 142

Today's Topics:
  Program - Cognitive Science at Occidental College,
  Seminars - Universal Plans: Emergent Goal Structures (SRI) &
    Abductive Reasoning in Multifault Diagnostic Systems (UPenn) &
    Potential Histories and Inertial Theories (SU) &
    Controlling Execution of Logic Programs (MCC),
  Conferences - OOPSLA-87 &
    Architectures for Intelligent Interfaces

----------------------------------------------------------------------

Date: 04 Jun 87 09:00:45 PST
From: oxy!traiger@csvax.caltech.edu (Saul P. Traiger)
Subject: Program - Cognitive Science at Occidental College


Occidental College,  a liberal arts college which enrolls approximately
1600 students, is pleased to announce a new Program in Cognitive
Science. The Program offers an undergraduate major and minor in Cognitive
Science. Faculty participating in this program include members of the
departments of mathematics, linguistics, psychology, and philosophy.

The program is the result of one of the most exciting developments in
higher education today, namely the interaction among philosophers,
mathematicians, psychologists, linguists, and computer scientists. This
interaction is the result of common interests in cognitive science.
Computer architecture is now as likely to be discussed in a philosophy or
psychology seminar as it is in a computer science course. Shared
interests in cognitive science have led to the development and adoption of an
interdepartmental program in cognitive science at Occidental College.

The undergraduate major in Cognitive Science at Occidental College
includes courses in mathematics, philosophy, psychology and linguistics.
Instruction in mathematics introduces students to computer languages,
discrete mathematics,  logic, and the mathematics of computation.
Philosophy offerings  cover the philosophy of mind, with emphasis on
computational models of the mind, the theory of knowledge, the philosophy
of science, and the philosophy of language. Psychology courses include
basic psychology, learning, perception, and cognition. Courses in
linguistics provide a theoretical foundation in natural languages, their
acquisition, development, and structure.

For more information about Occidental College's Cognitive Science Program
please contact:

Professor Saul Traiger
Cognitive Science Program
1600 Campus Road
Occidental College
Los Angeles, CA 90041

ARPANET: oxy!traiger@CSVAX.Caltech.EDU
BITNET:  oxy!traiger@hamlet
CSNET:   oxy!traiger%csvax.caltech.edu@RELAY.CS.NET
UUCP:    ....{seismo, rutgers, ames}!cit-vax!oxy!traiger

------------------------------

Date: Thu, 11 Jun 87 11:29:04 PDT
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Universal Plans: Emergent Goal Structures (SRI)

                       EXECUTING UNIVERSAL PLANS:
                 EMERGENT GOAL STRUCTURES & THEIR USES

                    Marcel Schoppers (MARCEL@ADS.COM)
                        Advanced Decision Systems

                       11:00 AM, MONDAY, June 15
              SRI International, Building E, Room EJ228

``Universal plans'' are designed for execution in unpredictable state spaces,
refusing to over-commit to a specific future course of events, and deliberately
making no assumptions about how situations might follow one another. Instead,
plan synthesis becomes the goal-directed selection of reactions to possible
situations; plans become inherently conditional; and plan execution classifies
the current situation so as to respond with the selected reaction. Consequently
there is no inherent distinction between expected and unexpected events; the
concepts of success & failure are irrelevant for both synthesis and execution;
and "error recovery" needs no special mechanisms beyond those already present
for normal execution.
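
In programming terms -- a minimal sketch under my own assumptions, not
Schoppers' actual representation; the situation classes and reactions
are invented -- a universal plan is a total mapping from classified
situations to reactions, so "unexpected" states need no special
error-recovery machinery:

    # A universal plan as a total mapping from situation classes to
    # reactions; every state the sensors can report is covered.

    def classify(state):
        """Reduce a raw state to one of a fixed set of situation classes."""
        if state["holding_block"]:
            return "HOLDING"
        if state["block_visible"]:
            return "SEES_BLOCK"
        return "SEARCHING"

    UNIVERSAL_PLAN = {
        "HOLDING":    "place block on table",
        "SEES_BLOCK": "grasp block",
        "SEARCHING":  "scan for block",
    }

    def execute_step(state):
        # No success/failure bookkeeping: just classify and react.
        return UNIVERSAL_PLAN[classify(state)]

    print(execute_step({"holding_block": False, "block_visible": True}))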

After introducing the Universal Plan representation, this talk will show how at
any given instant, plan predicates can be interpreted as goals of achievement
or of maintenance, and that this interpretation can be used to reconstruct a
four-fold typing of events (of success, failure, serendipity and sabotage). In
other words, intentions emerge from the interaction of plan with environment
(the environment has a large hand in determining the agent's goals at each
moment), and the notions of success and failure are not primitive but
perceived (relative to the agent's goals).

The Universal Plan representation also indicates precisely which conditions
must be monitored at each instant to enable detection of all events of each
type. Two benefits follow; I will only mention them briefly. First, we can
get complexity estimates for detecting all serendipity and sabotage events,
and can produce informed strategies to alleviate sensing costs. Second, the
goal structure at each moment in time contains all the information required
to choose an appropriate action, thus facilitating incremental synthesis.

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: Thu, 11 Jun 87 14:13:25 EDT
From: tim@linc.cis.upenn.edu (Tim Finin)
Subject: Seminar - Abductive Reasoning in Multifault Diagnostic
         Systems (UPenn)


       ABDUCTIVE REASONING in MULTIPLE-FAULT DIAGNOSTIC SYSTEMS

                             Gary Morris
                   Computer and Information Science
                      University of Pennsylvania
                           Philadelphia PA


Abductive reasoning involves generating explanations for observed
facts or symptoms -- i.e. diagnosis.  Diagnosis is more difficult,
both theoretically and practically, when more than one disorder or
fault may occur simultaneously in the system being diagnosed.  Five
approaches to this problem are reviewed and contrasted:

  - Binary Choice Bayesian (Ben-Bassat: the MEDAS system)

  - Sequential Bayesian (Pople: INTERNIST)

  - Causal Model Reasoning (Patil: ABEL)

  - Parsimonious Set Covering (Reggia & Nau: various systems)

  - "Diagnosis From First Principles" (Reiter, deKleer: various)

Finally, an emerging convergence of these methods is described.
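
For readers unfamiliar with the set-covering approach, here is a
brute-force sketch loosely in the spirit of parsimonious covering (my
own illustration, not Reggia & Nau's algorithm; the disorder/symptom
data are invented): keep the smallest sets of disorders that jointly
explain all observed symptoms.

    # Parsimonious set covering, brute force: find the smallest sets
    # of disorders whose combined coverage explains all findings.
    from itertools import combinations

    CAUSES = {                      # disorder -> symptoms it can produce
        "d1": {"fever", "cough"},
        "d2": {"rash"},
        "d3": {"fever", "rash"},
    }

    def explanations(symptoms):
        for size in range(1, len(CAUSES) + 1):
            found = [set(combo) for combo in combinations(CAUSES, size)
                     if set().union(*(CAUSES[d] for d in combo)) >= symptoms]
            if found:
                return found        # all minimum-cardinality covers
        return []

    print(explanations({"fever", "rash"}))   # -> [{'d3'}]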


                  Friday, June 12,  3:30 pm
                        Room 554 Moore

------------------------------

Date: 01 Jun 87  1605 PDT
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Potential Histories and Inertial Theories (SU)

                [Forwarded from the Stanford bboard.]


Yoav Shoham asked me to send a nice little poem to this mailing list:

                 With logics that are monotonic
                 Relations are nice but platonic
                 It's when you permit
                 Just models that fit
                 That things become most erotonic

Yoav will also speak at our seminar on a related subject:

            POTENTIAL HISTORIES AND INERTIAL THEORIES

                           Yoav Shoham
                     Thursday, June 4, 4:15pm
                       Bldg. 160, Room 161K

In previous talks I never managed to get to my solution to the
extended-prediction problem (which is my name for the problem
subsuming the frame problem, a name that, shall we say, never
quite caught on). I'll describe the intuitive concept of a potential
history, which has a strong McDermott-like persistence flavor.
I'll then embed the concept formally within the logic of
chronological ignorance. I'll identify a class of theories, called
inertial theories, which extend causal theories, and yet which
a. are expressive enough to capture the notion of potential
histories, and b. have the "unique model" and easy computability
properties.

My intention is this time to go into some detail. I'm still
not sure I have enough material for an hour, and if I don't
I'll ask the audience some questions on TMS's.

------------------------------

Date: Mon 1 Jun 87 11:04:12-CDT
From: Ellie Huck <AI.ELLIE@MCC.COM>
Subject: Seminar - Controlling Execution of Logic Programs (MCC)


                              Madhur Kohli
                     Department of Computer Science
                         University of Maryland

                            June 4 - 10:30am
                       ACA Conference Room 2.806

              Controlling the Execution of Logic Programs

 The performance of a logic programming system is  dictated  by  the
 control  strategy  of  its  problem  solving  component.  This talk
 describes a methodology for the specification  and  utilization  of
 control knowledge for logic programs.

 We  describe  a  control  specification  system  developed  as   an
 experimental  tool  for  the  study  of  control  issues in problem
 solving.  Analysis of the control behavior  of  several  sequential
 problem  solvers and PRISM, a parallel logic programming system, is
 used to identify parameters to express control decisions and points
 at  which  they  apply.   These  results  form  the  basis  for the
 definition of a control language to specify the control behavior of
 problem solvers.  The language is expressive enough to specify many
 general  and  specialized  top-down  execution  schemes  for   both
 sequential  and  parallel  problem  solvers.   A  compiler has been
 developed to generate an interpreter which implements the specified
 control  strategy.   Experimental  results  show that the generated
 interpreters  provide  an  order  of  magnitude  improvement   over
 meta-interpretation of the control specification.
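
 As a rough illustration of separating control from logic -- a toy
 sketch under my own assumptions, far simpler than the system described
 above -- the interpreter below takes the goal-selection strategy as a
 parameter:

     # A tiny backward-chainer whose control (goal ordering) is a
     # parameter, illustrating the separation of logic from control.

     RULES = {"a": [["b", "c"]], "b": [["d"]], "c": [[]], "d": [[]]}

     def prove(goals, order):
         if not goals:
             return True
         goals = order(goals)          # control decision: which goal first
         first, rest = goals[0], goals[1:]
         return any(prove(body + rest, order)
                    for body in RULES.get(first, []))

     depth_first  = lambda gs: gs                    # leftmost goal first
     fewest_rules = lambda gs: sorted(gs, key=lambda g: len(RULES.get(g, [])))

     print(prove(["a"], depth_first), prove(["a"], fewest_rules))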

 Madhur Kohli
 June 4 - 10:30
 ACA Conference Room 2.806

------------------------------

Date: 5 Jun 87 18:50:31 GMT
From: ut-sally!home.csnet!im4u!ti-csl!fordyce@seismo.CSS.GOV (David
      Fordyce)
Subject: Conference - OOPSLA-87
Article-I.D.: home.444


        OBJECT-ORIENTED DATABASE WORKSHOP: Implementation Aspects

                 To be held in conjunction with the

                 Object-oriented Programming Systems,
          Languages and Applications (OOPSLA-87) Conference

                         October 5, 1987

                         Orlando, Florida


Object-oriented database systems combine the strengths of
object-oriented programming systems and data models, and database
systems.  This half-day workshop will be held on Monday morning, October
5, 1987.  The goal of the workshop is to study the implementation
aspects of object-oriented database systems.  The workshop will focus on
issues such as object fault management, storage management (buffering,
prefetching, clustering, etc.), object persistence, object sharing,
transactions on objects, concurrency control, recovery, and performance
issues.

The workshop panel will consist of: Timothy Andrewes (Ontologic), Umesh
Dayal (Computer Corporation of America), Prof.  David Maier (Oregon
Graduate Center and Servio Logic), Patrick O'Brien (Digital Equipment
Corporation), Prof.  Lawrence Rowe (University of California at
Berkeley), Prof.  Alfred Spector (Carnegie-Mellon University), David
Wells (Texas Instruments), and Prof.  Stan Zdonik (Brown University).
In the first 90 minutes, each panel member will present his position.
This will be followed by questions from the workshop participants and
discussions.

To encourage vigorous interactions and exchange of ideas between the
participants, the workshop will be limited to 50 qualified participants.
If you are interested in attending the workshop, please submit three
copies of a single page abstract to the workshop chairman describing
your work related to the implementation issues of object-oriented
database systems.  The workshop participants will be selected based on the
relevance and significance of their work described in the abstract.
There will be no proceedings for the workshop.

Abstracts should be submitted to the workshop chairman by August 1,
1987.  Selected participants will be notified by September 1, 1987.


Workshop Chairman:

Dr. Satish M. Thatte
Manager, Database Systems Branch
Artificial Intelligence Laboratory
Texas Instruments Incorporated
P.O. Box 226015, M/S 238
Dallas, TX 75266

Phone: (214)-995-0340
CSNet: Thatte@TI-CSL

--
Regards, David

------------------------------

Date: Fri, 5 Jun 87 09:42:22 PDT
From: wiley!sherman@lll-lcc.ARPA (Sherman Tyler)
Subject: Conference - Architectures for Intelligent Interfaces


                        Call for Participation

                             Workshop on

              Architectures for Intelligent Interfaces:

                       Elements and Prototypes


            March 29 - April 1, 1988, Monterey, California
                          Sponsored by AAAI


Objective:  The term ``Intelligent Interface'' characterizes  the  set
of   computer-human   interfaces   which  employ  AI  to  enhance  the
transactional nature of the interface.  The goal of the workshop is to
explore  ways  in which AI techniques (e.g., knowledge representation,
inference mechanisms, and heuristic search) can be used to provide the
adaptability   and   reasoning   capabilities   required  for  a  more
intelligent human-machine interaction.

Some possible areas for focused discussions might include:


      *  Models (user, system, task) - adapting the  dialogue  to  the
         current   context   of   the   interaction,  considering  the
         particular user, the target system, and the  high-level  task
         under execution;

      *  Channels of Communication -  allowing  users  to  communicate
         intentions  with  a  minimum  of  learning  and effort, using
         Natural Language, Graphics,  and  the  integration  of  mixed
         modalities of input;

      *  Planning - for  recognizing  user  plans  and  their  implied
         goals, generating plans to meet those goals, and planning how
         to best display the resulting information to communicate  the
         result of the executed action;

      *  Interface-Building  Tools  -  using  artificial  intelligence
         techniques   to   support   developers   in   designing   and
         constructing interfaces.


Attendance:   In  order  to  provide  an  intellectually   stimulating
environment  conducive  to  interaction  and  exchange  of  ideas, the
attendance will be limited  to  approximately  35  participants.   The
ideal   participant  is  an  individual  who  is  actively  addressing
theoretical,  research,  and/or  implementation  issues  relevant   to
Intelligent  Interfaces  (with a bias toward those who have dealt with
implementation issues at some level).   Limited  financial  assistance
will   be   available   for  graduate  students  who  are  invited  to
participate.

Review Process:  The submitted abstracts and autobiographies  will  be
reviewed  by  the  program  committee.   Invitation will be based upon
relevance of the work to the goals of the workshop, and on  the  basis
of significance, originality, and scientific quality.

Workshop Organization:   The  workshop  organizers  are  J.   Sullivan
(Lockheed  AI Center) and S.  Tyler (Lockheed AI Center).  The program
committee consists of J.  Mackinlay (Xerox PARC), R.  Neches
(USC Information Sciences Institute), E. Rissland (University of
Massachusetts), and N. Sondheimer (USC Information Sciences Institute).

Submission:   A  detailed  eight  page  abstract  and   a   one   page
biographical  sketch  (six  copies  of  each)  should  be submitted by
September 1, 1987.  Invitations for participation will be extended  by
October  16,  1987,  with  complete  papers  due by December 18, 1987.
Publication of the proceedings is planned; therefore  the  quality  of
the papers is important.

Submit abstracts to:   Joseph  W.   Sullivan  or  Sherman  W.   Tyler,
O/90-06  B/259, Lockheed AI Center, 2710 Sand Hill Rd., Menlo Park, CA
94025,      (415)      354-5200,       wiley!joe@lll-lcc.arpa       or
wiley!sherman@lll-lcc.arpa

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Jun 15 03:16:18 1987
Date: Mon, 15 Jun 87 03:16:09 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #143
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Mon, 15 Jun 87 03:10 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa07518; 15 Jun 87 2:44 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10475; 15 Jun 87 2:36 EDT
Date: Sun 14 Jun 1987 23:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #143
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 15 Jun 1987      Volume 5 : Issue 143

Today's Topics:
  Journal Issues - Neural Networks (IEEE Computer) &
    Smolensky on Connectionism (BBS) &
    Laming on Sensory Analysis (BBS),
  Conference - Genetic Algorithms

----------------------------------------------------------------------

Date: 8 June 1987, 16:25:07 EDT
From: Bruce Shriver <SHRIVER@ibm.com>
Subject: Journal Issue - Neural Networks

                          Call for Papers and Referees

                       Special Issue of Computer Magazine
                               on Neural Networks

          The March, 1988  issue of Computer magazine  will be devoted
          to a wide  range of topics in  Neural Computing. Manuscripts
          that are  either tutorial, survey,  descriptive, case-study,
          applications-oriented or pedagogic in nature are immediately
          sought in the following areas:

              o   Neural Network Architectures
              o   Electronic and Optical Neurocomputers
              o   Applications  of Neural  Networks in  Vision, Speech
                  Recognition and Synthesis,  Robotics, Image Process-
                  ing, and Learning
              o   Self-Adaptive and Dynamically Reconfigurable Systems
              o   Neural Network Models
              o   Neural Algorithms and Models of Computation
              o   Programming Neural Network Systems

                    INSTRUCTIONS FOR SUBMITTING MANUSCRIPTS

          Manuscripts  should  be  no  more  than  32-34  typewritten,
          double-spaced pages in length including all figures and ref-
          erences. No more than 12 references should be cited.  Papers
          must not  have been previously published  nor currently sub-
          mitted for publication elsewhere.  Manuscripts should have a
          title page that  includes the title of the  paper, full name
          of  its  author(s),  affiliation(s), complete  physical  and
          electronic address(es), telephone number(s),  a 200 word ab-
          stract, and a list of keywords that identify the central is-
          sues of the manuscript's content.

                                   DEADLINES

              o   A 200 word abstract on the manuscript is due as soon
                  as possible.
              o   Eight (8)  copies of the  full manuscript are  due by
                  August 30, 1987.
              o   Notification of acceptance is November 1, 1987.
              o   Final version of the manuscript is due no later than
                  December 1, 1987.

                       SEND SUBMISSIONS AND QUESTIONS TO

          Bruce D. Shriver
          Editor-in-Chief, Computer
          IBM T. J. Watson Research Center
          P. O.  Box 704
          Yorktown Heights, NY 10598
          Phone: (914) 789-7626

          Electronic Mail Addresses:
          arpanet:   shriver@ibm.com
          bitnet:    shriver at yktvmh
          compmail+: b.shriver

------------------------------

Date: 5 Jun 87 18:14:46 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Smolensky on Connectionism: BBS Call for Commentators


The following is the abstract of a forthcoming article on which BBS
[Behavioral and Brain Sciences -- An international, interdisciplinary
Journal of Open Peer Commentary, published by Cambridge University Press]
invites self-nominations by potential commentators.

(Please note that the editorial office must exercise selectivity among the
nominations received so as to ensure a strong and balanced cross-specialty
spectrum of eligible commentators. The procedure is explained after
the abstract.)

-----

          On the Proper Treatment of Connectionism

                       Paul Smolensky
               Institute of Cognitive Science
                   University of Colorado
                   Boulder CO 80309-0430

     A set of hypotheses is formulated for  a  connectionist
     approach  to  cognitive  modeling. These hypotheses are
     shown to be incompatible with the  hypotheses  embodied
     in  traditional  cognitive  models.  The  connectionist
     models considered are massively parallel numerical com-
     putational systems that are a kind of continuous dynam-
     ical  system.   The  numerical  values  in  the  system
     correspond  semantically to fine-grained features below
     the level of the concepts used  to  describe  the  task
     domain.  The  level of analysis is intermediate between
     that of symbolic cognitive models  and  neural  models.
     The explanations of behavior provided are like those in
     traditional physical sciences, unlike the  explanations
     provided by symbolic models.

     Higher-level analyses  of  these  connectionist  models
     reveal  subtle  relations to symbolic models. Fundamen-
     tally  parallel  connectionist  memory  and  linguistic
     processes  are  hypothesized  to give rise to processes
     that are describable at a higher  level  as  sequential
     rule  application.  At the lower level, computation has
     the character of  massively  parallel  satisfaction  of
     numerical  constraints;  at  the  higher level this can
     lead to competence characterizable by hard rules.  Per-
     formance  will  typically deviate from competence since
     behavior is achieved not by interpreting hard rules but
     by satisfying soft constraints. The result is a picture
     in which traditional and connectionist theoretical con-
     structs  collaborate  intimately  to  provide an under-
     standing of cognition.
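
     A much-simplified concrete instance of "massively parallel
     satisfaction of numerical constraints" is relaxation in a
     Hopfield-style network; the sketch below illustrates that general
     idea only, not Smolensky's model (weights and units are invented):

         # Soft-constraint satisfaction by iterative relaxation:
         # units repeatedly adjust to satisfy weighted (soft) constraints.
         import random

         W = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}  # +1 agree, -1 disagree

         def weight(i, j):
             return W.get((i, j), W.get((j, i), 0.0))

         units = [random.choice([-1, 1]) for _ in range(3)]
         for _ in range(20):                           # asynchronous updates
             i = random.randrange(3)
             net = sum(weight(i, j) * units[j] for j in range(3) if j != i)
             units[i] = 1 if net >= 0 else -1

         # A low-"energy" state; the constraints here are mutually
         # frustrated, so not all of them can be satisfied at once.
         print(units)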


-----

This is an experiment in using the Net to find eligible commentators
for articles in the Behavioral and Brain Sciences (BBS), an
international, interdisciplinary journal of "open peer commentary,"
published by Cambridge University Press, with its editorial office in
Princeton NJ.

The journal publishes important and controversial interdisciplinary
articles in psychology, neuroscience, behavioral biology, cognitive science,
artificial intelligence, linguistics and philosophy. Articles are
rigorously refereed and, if accepted, are circulated to a large number
of potential commentators around the world in the various specialties
on which the article impinges. Their 1000-word commentaries are then
co-published with the target article as well as the author's response
to each. The commentaries consist of analyses, elaborations,
complementary and supplementary data and theory, criticisms and
cross-specialty syntheses.

[...]   Eligible individuals who judge that they
would have a relevant commentary to contribute should contact the editor at
the e-mail address indicated at the bottom of this message, or should
write by normal mail to:

                        Stevan Harnad
                        Editor
                        Behavioral and Brain Sciences
                        20 Nassau Street, Room 240
                        Princeton NJ 08542
                        (phone: 609-921-7771)

Potential commentators should send their names, addresses, a description of
their general qualifications and their basis for seeking to comment on
this target article in particular to the address indicated earlier or
to the following e-mail address:

{seismo, psuvax1, bellcore, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet     harnad@mind.princeton.edu

[Subscription information is available from Harry Florentine at
Cambridge University Press:  800-221-4512]

  [Contact Harnad for further discussion of eligibility, application
  procedures, journal circulation, etc.  -- KIL]

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 10 Jun 87 04:42:54 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Laming on Sensory Analysis: BBS Multiple Book Review


The following is the abstract of a book that will be multiply reviewed in BBS
[Behavioral and Brain Sciences -- An international, interdisciplinary
Journal of Open Peer Commentary, published by Cambridge University Press].

Self-nominations by potential reviewers/commentators are invited. Please note
that the editorial office must exercise selectivity among the nominations
received so as to ensure a strong and balanced cross-specialty spectrum of
eligible commentators. The procedure is explained after the abstract.

-----

                      SENSORY ANALYSIS

                       Donald Laming
           Department of Experimental Psychology
                  University of Cambridge
                 Cambridge CB2 3EB ENGLAND

                        ABSTRACT


     Sensory analysis is that initial, preconscious stage of
     perception  at  which features (edges, temporal discon-
     tinuities, and periodicities) are picked out  from  the
     random  fluctuations  that  characterize  the  physical
     stimulation of sensory receptors. Sensory analysis  may
     be  studied by means of signal-detection, psychometric-
     function and threshold experiments, and my  book,  SEN-
     SORY  ANALYSIS, presents a succinct, quasi-quantitative
     account of the phenomena revealed thereby. This account
     covers  all  five  sensory  modalities, emphasizing the
     similarities between them.

     A succinct account depends on identifying simple  prin-
     ciples  of wide generality, of which the most fundamen-
     tal are that (a) sensory discriminations are  differen-
     tially  coupled  to  the  physical stimuli and that (b)
     small stimuli are subject  to  a  square-law  transform
     which makes them less detectable than they would other-
     wise be. These two principles are established  by  com-
     parisons   between   different  configurations  of  two
     stimulus levels to be discriminated; they are  realized
     within  a  simple physical-analogue model which affords
     certain low-level comparisons  with  neurophysiological
     observation. That physical-analogue model consists of a
     sequence of elementary operations on the stimulus  con-
     stituting  a stage of sensory processing.  The concate-
     nation of two or three stages in  cascade  accommodates
     an  increased  range  of  experimental phenomena, espe-
     cially the detection of sinusoidal gratings.
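
     The square-law point in (b) can be seen with a one-line
     computation (a numerical illustration only, not Laming's model):
     squaring compresses small amplitudes far more than large ones, so
     weak stimuli fall disproportionately far below any fixed
     detection threshold.

         # Square-law transform: small inputs are disproportionately reduced.
         for x in (0.1, 0.5, 1.0, 2.0):
             print(f"input {x:4.1f} -> squared {x*x:5.2f} (ratio {x*x/x:4.2f})")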

     My BBS precis is organized in three parts: Part I  sur-
     veys SENSORY ANALYSIS as economically as may be, begin-
     ning from the  simplest,  most  fundamental  ideas  and
     working  towards  phenomena of increasing complexity. A
     rather short Part II reviews the most important  alter-
     native  models  addressed  to some part or other of the
     phenomena surveyed. Finally, a very short Part III con-
     tributes  some metatheoretic remarks on the function of
     a theory of sensory discrimination.


Potential commentators/reviewers should send their names, addresses, a
description of their general qualifications and their basis for seeking to
review this book in particular to the following USmail or Email address:

                        Stevan Harnad, Editor
                        Behavioral and Brain Sciences
                        20 Nassau Street, Room 240
                        Princeton NJ 08542
                        (phone: 609-921-7771)
{seismo, psuvax1, bellcore, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet     harnad@mind.princeton.edu

  [See previous solicitations in AIList for the full blurb.  -- KIL]

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: Fri, 12 Jun 87 15:40:35 edt
From: John Grefenstette <gref@nrl-aic.ARPA>
Subject: Conference - Genetic Algorithms


                    Second International Conference on
                 Genetic Algorithms and Their Applications

                             July 28-31, 1987
                                    MIT
                         Cambridge, Massachusetts

                               Sponsored By
             American Association for Artificial Intelligence
                         Naval Research Laboratory
                       Bolt Beranek and Newman, Inc.

     Genetic algorithms are adaptive search techniques based on
     principles derived from natural population genetics, and are
     currently being applied to a variety of difficult problems in
     science, engineering, and artificial intelligence.  Topics for
     discussion will include:

             Fundamental research on genetic algorithms
             Machine learning using genetic algorithms
             Implementation techniques,
                     especially on parallel processors
             Relationships to connectionism and other
                     search and learning techniques
             Application of genetic algorithms
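
     For readers new to the area, a minimal genetic algorithm -- a
     generic sketch of the basic operators, not any particular system
     on the program below -- maximizing the number of 1-bits shows
     selection, crossover, and mutation at work:

          # Minimal genetic algorithm: maximize the number of 1-bits.
          import random

          def fitness(ind):
              return sum(ind)

          def crossover(a, b):
              cut = random.randrange(1, len(a))
              return a[:cut] + b[cut:]

          def mutate(ind, rate=0.01):
              return [1 - g if random.random() < rate else g for g in ind]

          pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
          for gen in range(50):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:10]                    # truncation selection
              pop = parents + [mutate(crossover(random.choice(parents),
                                                random.choice(parents)))
                               for _ in range(20)]
          print(fitness(max(pop, key=fitness)))     # approaches 20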

     Conference Committee:

     John H. Holland         University of Michigan
                                     (Conference Chair)
     Lashon B. Booker        Navy Center for Applied Research in AI
     Dave Davis              Bolt Beranek and Newman, Inc.
     Kenneth A. De Jong      George Mason University
     David E. Goldberg       University of Alabama
     John J. Grefenstette    Navy Center for Applied Research in AI
                                     (Program Chair)
     Stephen F. Smith        Carnegie-Mellon Robotics Institute
     Stewart W. Wilson       Rowland Institute for Science
                                     (Local Arrangements)

     The registration fee is $120 ($175 after June 15) and
     includes admission to all sessions, the Conference Proceedings,
     a Welcoming Reception, and all coffee breaks and lunches.
     The Conference Banquet is $30 additional per person. The
     Registration fee for students is $60. For registration forms
     and information concerning local arrangements, contact:

             Conference Services Office
             Room 7-111
             Massachusetts Institute of Technology
             77 Massachusetts Avenue
             Cambridge, MA 02139
             (617) 253-1703

     For copies of the Conference Proceedings, contact:

             Lawrence Erlbaum Associates, Publishers
             365 Broadway
             Hillsdale, New Jersey 07642





                            CONFERENCE PROGRAM


     TUESDAY, JULY 28, 1987

     5:00 - 9:00   REGISTRATION

     7:00 - 9:00   WELCOMING RECEPTION

     7:00 - 9:00   TUTORIAL (if sufficient interest)


     WEDNESDAY, JULY 29, 1987

     8:00           REGISTRATION

     9:00           OPENING REMARKS

     9:20 - 10:40   GENETIC SEARCH THEORY

     Finite Markov chain analysis of genetic algorithms
     David E. Goldberg and Philip Segrest

     An analysis of reproduction and crossover in a
     binary-coded genetic algorithm
     Clayton L. Bridges and David E. Goldberg

     Reducing bias and inefficiency in the selection algorithm
     James E. Baker

     Altruism in the bucket brigade
     Thomas H. Westerdale

     10:40 - 11:00   COFFEE BREAK

     11:00 - 12:00   ADAPTIVE SEARCH OPERATORS I

     Schema recombination in pattern recognition problems
     Irene Stadnyk

     An adaptive crossover distribution mechanism for
     genetic algorithms
     J. David Schaffer and Amy Morishima

     Genetic algorithms with sharing for multimodal
     function optimization
     David E. Goldberg and Jon Richardson

     12:00 - 2:00   LUNCH

     2:00 - 3:20   REPRESENTATION ISSUES

     The ARGOT strategy: adaptive representation genetic
     optimizer technique
     Craig G. Shaefer

     Nonstationary function optimization using genetic
     algorithms with dominance and diploidy
     David E. Goldberg and Robert E. Smith

     Genetic operators for high-level knowledge representations
     H. J. Antonisse and K. S. Keller

     Tree structured rules in genetic algorithms
     Arthur S. Bickel and Riva Wenig Bickel

     3:20 - 3:40   COFFEE BREAK

     3:40 - 5:00   KEYNOTE ADDRESS

     Genetic algorithms and classifier systems: foundations
     and future directions
     John H. Holland

     7:00 - 9:00   BUSINESS MEETING


     THURSDAY, JULY 30, 1987

     9:00 - 10:20   ADAPTIVE SEARCH OPERATORS II

     Greedy genetics
     G.E. Liepins, M.R. Hilliard, Mark Palmer
     and Michael Morrow

     Incorporating heuristic information into genetic search
     Jung Y. Suh and Dirk Van Gucht

     Using reproductive evaluation to improve genetic
     search and heuristic discovery
     Darrell Whitley

     Toward a unified thermodynamic genetic operator
     David J. Sirag and Paul T. Weisser

     10:20 - 10:40   COFFEE BREAK

     10:40 - 12:00   CONNECTIONISM AND PARALLELISM I

     Toward the evolution of symbols
     Charles P. Dolan and Michael G. Dyer

     SUPERGRAN: a connectionist approach to learning,
     integrating genetic algorithms and graph induction
     Deon G. Oosthuizen

     Parallel implementation of genetic algorithms in a
     classifier system
     George G. Robertson

     Punctuated equilibria: a parallel genetic algorithm
     J.P. Cohoon, S.U. Hegde, W.N. Martin and D. Richards

     12:00 - 2:00   LUNCH

     2:00 - 3:20    PARALLELISM II

     A parallel genetic algorithm
     Chrisila B. Pettey, Michael R. Leuze and John J. Grefenstette

     Genetic learning procedures in distributed environments
     Adrian V. Sannier II and Erik D. Goodman

     Parallelisation of probabilistic sequential search algorithms
     Prasanna Jog and Dirk Van Gucht

     Parallel genetic algorithms for a hypercube
     Reiko Tanese

     3:20 - 3:40   COFFEE BREAK

     3:40 - 5:00   CREDIT ASSIGNMENT AND LEARNING

     Bucket brigade performance: I. Long sequences of classifiers
     Rick L. Riolo

     Bucket brigade performance: II. Default hierarchies
     Rick L. Riolo

     Multilevel credit assignment in a genetic learning system
     John J. Grefenstette

     On using genetic algorithms to search program spaces
     Kenneth A. De Jong

     6:30 - 10:00   CLAM BAKE


     FRIDAY, JULY 31, 1987

     9:00 - 10:20   APPLICATIONS I

     A genetic system for learning models of consumer choice
     David Perry Greene and Stephen F. Smith

     A study of permutation crossover operators on the
     traveling salesman problem
     I.M. Oliver, D.J. Smith and J. R. C. Holland

     A classifier based system for discovering scheduling heuristics
     M.R. Hilliard, G.E. Liepins, Mark Palmer,
     Michael Morrow and Jon Richardson

     Using the genetic algorithm to generate LISP source code
     to solve the prisoner's dilemma
     Cory Fujiko and John Dickinson

     10:20 - 10:40   COFFEE BREAK

     10:40 - 12:00   APPLICATIONS II

     Optimal determination of user-oriented clusters:
     an application for the reproductive plan
     Vijay V. Raghavan and Brijesh Agarwal

     The genetic algorithm and biological development
     Stewart W. Wilson

     Genetic algorithms and communication link speed design:
     theoretical considerations
     Lawrence Davis and Susan Coombs

     Genetic algorithms and communication link speed design:
     constraints and operators
     Susan Coombs and Lawrence Davis

     12:00 - 2:00   LUNCH

     2:00 - 3:20    PANEL DISCUSSION: GA's and AI

     3:20 - 3:40    COFFEE BREAK

     3:40 - 5:00    INFORMAL DISCUSSION AND FAREWELL

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Jun 15 03:16:29 1987
Date: Mon, 15 Jun 87 03:16:21 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #144
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Mon, 15 Jun 87 03:14 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa07555; 15 Jun 87 2:50 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10534; 15 Jun 87 2:50 EDT
Date: Sun 14 Jun 1987 23:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #144
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 15 Jun 1987      Volume 5 : Issue 144

Today's Topics:
  Queries - Smalltalk-80 Implementations & AI Grad Schools &
    Machine Emotion Research & Neural Network Processors in High Technology &
    ML programming & ICOT Prolog,
  Theory - Complexity Theory,
  AI Tools - The ISI Grapher

----------------------------------------------------------------------

Date: 10 Jun 87 18:34:19 GMT
From: ihnp4!alberta!sask!kusalik@ucbvax.Berkeley.EDU  (Tony Kusalik)
Subject: request for info on Smalltalk-80 implementations


We are looking for a version of Smalltalk-80 for
SUN-3's.  We have contacted Berkeley about BSII,
but the blurb that came back states
        "It [BSII] has not been updated to run on SUN 3 or to run
        under the X window system, although others have made these
        changes"

Anyone know who these "others" might be? I.e.
can anyone out there point me in the direction of
a Smalltalk-80 system for SUN-3's?

The Berkeley blurb mentions a SUN implementation done
by L. Peter Deutsch and Allan M. Schiffman.  Does anyone
know of addresses (Email or snail-mail) for them?

        Tony Kusalik
         kusalik@sask.bitnet
         ...!{ihnp4,alberta}!sask!kusalik

------------------------------

Date: 8 Jun 87 13:59:44 GMT
From: spe@SPICE.CS.CMU.EDU  (Sean Engelson)
Subject: AI grad schools?


Can anyone give me any `inside' information on graduate CS-AI PhD
programs?  I know of a number of schools with such programs; I am
interested in the opinions of people who have been involved in such
programs, either as students or as professors.  My main interests are
in machine learning, analogical and common-sense reasoning, and
natural language processing.

Thank you,
--

Credo, ergo absurdum est.

LISP ::=
    ((())((Lots(())))(()(()(of(((Idiotic)())()()(Silly(()))()(Parentheses))))))
----------------------------------------------------------------------
Sean Philip Engelson                    I have no opinions.
Carnegie-Mellon University              Therefore my employer is mine.
Computer Science Department
----------------------------------------------------------------------
ARPA: spe@spice.cs.cmu.edu
UUCP: {harvard | seismo | ucbvax}!spice.cs.cmu.edu!spe

------------------------------

Date: 12 Jun 87 02:03:39 GMT
From: dartvax!uvm-gen!emerson.UUCP@seismo.css.gov (Tom "Oliver W.
      Jones" Emerson)
Subject: Machine emotion research


I would like information regarding emotion research in intelligent
computers, including references if possible.

If there is sufficient interest, I will report the contents of replies
to the net.

Thanx in advance,

Tom E.

------------------------------

Date: 12 Jun 87 15:40:00 EDT
From: LANTZ@RED.RUTGERS.EDU
Subject: Neural network processors in High Technology

        The May issue of High Technology has an(other) article on
neural networks.  Would someone please send me the names and addresses
of the four companies mentioned in the article.  Thanks.

Brian

------------------------------

Date: 9 Jun 87 19:36:37 GMT
From: uunet!steinmetz!philabs!sbcs!sbstaff2!allen@seismo.css.gov  (
      Allen Leung)
Subject: ML programming, anybody?


    Is there any one out there doing serious programming and/or research
 in ML (Meta Language)?  I would like to hear from you.


                                                   Don't trust me,
                                                   I'm just an undergrad.

                                                   Allen Leung,
                                                   SUNY at Stony Brook.

------------------------------

Date: 11 Jun 87 18:12:40 GMT
From: mit-vax!jouvelot@eddie.mit.edu  (Pierre Jouvelot)
Subject: Re: ML programming, anybody?

In article <665@sbstaff2.UUCP> allen@sbstaff2.UUCP ( Allen Leung) writes:
>
>    Is there any one out there doing serious programming and/or research
> in ML (Meta Language)?  I would like to hear from you.
>
Yes (I guess) !!

I used it as an executable specification language for the semantic
parallelization of imperative programs during my PhD research in France
(I'm currently a PostDoc in MIT/LCS Programming Research Group). The
overall program is about 3500 lines of ML code (with a few others in
FranzLisp and MLYacc). The overall idea is described in my POPL'87 paper
"Semantic Parallelization: A Practical Exercise in Abstract Interpretation"
where both the theory (abstract interpretation) and practice (use of ML)
are introduced (for courageous people, there is also my PhD thesis ... written
in French :-)

Note that I used the Cambridge/INRIA Version which is older and
slightly different from SML.  The main problem I had was related to
the lack of "real" separate-compilation facility. This should
disappear with newer versions that introduce modules. Besides this, ML is
a very fine language which should have a more widespread use.

Pierre
--
Pierre Jouvelot
Room NE43-403                           ARPA:   jouvelot@xx.lcs.mit.edu
Lab for Computer Science                USENET: decvax!mit-vax!jouvelot
MIT                                             (or mcvax!litp!pj)
545, Technology Square
Cambridge, MA 02139
USA

------------------------------

Date: Thu, 11 Jun 87 08:31:20 EDT
From: elsaesser%mwcamis@mitre.arpa
Subject: Say, what ever happened to ... ICOT Prolog?????

It seems ages ago that the 5th generation project was going to
reinvent AI in a Prolog "engine" that was to do 10 gazillion "LIPS".
Anyone know what happened?  I mean, if you can make so many
"quality" cars (sans auto transmission, useful A/C, paint that can take
rain and sun, etc.), why can't you make a computer that runs an NP-complete
applications language in real time???  Semi-seriously, what is the status
of the 5th generation project, anyone got an update?

chris (elsaesser%mwcamis@mitre.arpa)


  [See the June IEEE Spectrum, "Next-Generation Race Bogs Down", Karen
Fitzgerald and Paul Wallich, pp. 28-33, for a review.  The
  Japanese effort is doing well enough in its parallel architecture
  development and is making some progress in "knowledge programming",
  but has dropped VLSI technology and made little headway in AI and
  knowledge representation.  Competitive efforts in the U.S. and
  Europe have also had the most success in hardware.  The real question
  now is whether the 5th-generation push has given Japan the kind of
  computer-science infrastructure that it needs to compete and perhaps
  pull out ahead in algorithm development.  My guess is that it has not
  (because the software part of the effort was too small).  An interesting
  sign of change, though, is the Japanese government's invitation to
  Western universities to set up branches in Japan.  I assume that
  Japanese leaders will always come from Todai or Kyodai, but perhaps
  computer scientists will be educated in a different tradition.  -- KIL]

------------------------------

Date: 10 Jun 87 08:33:39 GMT
From: mcvax!botter!klipper!biep@seismo.css.gov  (J. A. "Biep" Durieux)
Subject: Re: What philosophical problems does complexity theory yield?

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:
>But, isn't anything which cannot be turned into a constant-time process
>philosophically annoying? Why just hassling about non-polynomial time
>solutions? Am I missing something?

In article <2258@cvl.umd.edu> ramesh@cvl.UUCP (Ramesh Sitaraman) writes:
>Yes, you are missing the point !!
>
>The difference between a polynomial and non-polynomial solution for
>a problem is the difference between structure and a complete lack
>of it.

Thanks a lot, this sounds much more relevant than just computation time.
But isn't finding the smallest element of a set also solvable only by "dumb
exhaustive search"?  Are people having that much trouble with such a
linear algorithm too?
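
  [To make the contrast concrete, an illustrative OCaml fragment (not
  from either poster): finding a minimum examines every element yet is
  linear, because the order structure lets each comparison discard a
  candidate for good; naive subset-sum search has no such structure
  and must consider all 2^n subsets. -- Ed.]

  (* O(n): one pass, n-1 comparisons. *)
  let find_min = function
    | [] -> invalid_arg "find_min: empty list"
    | x :: xs -> List.fold_left min x xs

  (* O(2^n): does some subset of xs sum to target?  Every subset is a
     live candidate -- the kind of search that makes NP-hardness
     "annoying". *)
  let rec subset_sum xs target = match xs with
    | [] -> target = 0
    | x :: rest -> subset_sum rest target || subset_sum rest (target - x)

  let () =
    assert (find_min [3; 1; 4; 1; 5] = 1);
    assert (subset_sum [3; 9; 8; 4] 12)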

Also, thanks for including the defs! Such things make the net a whole lot
more readable.

But: please don't put your mail address on the "Follow-up-to: " line.
I'm having a terrible time getting this article out!

Inews
feeding
time

------------------------------

Date: Sat, 13 Jun 87 13:35:12 PDT
From: Gabriel Robins <gabriel@vaxa.isi.edu>
Subject: The ISI Grapher


Greetings,

   Due to the considerable interest drawn by the ISI Grapher so far, I am
posting this abstract summarizing its function and current status.  Interested
parties may obtain further information by directly sending EMail to
"gabriel@vaxa.isi.edu" or by writing to:

             Gabriel Robins
             Intelligent Systems Division
             Information Sciences Institute
             4676 Admiralty Way
             Marina Del Rey, Ca 90292-6695

If you want documentation in hardcopy, please include your U.S. Mail address.

Gabe

----------------------------------------------------------------------

                             The ISI Grapher

                                June, 1987

                              Gabriel Robins
                       Intelligent Systems Division
                      Information Sciences Institute


   The ISI Grapher is a set of functions that converts an arbitrary graph
structure (or relation) into an equivalent pictorial representation and
displays the resulting diagram.  Nodes and edges in the graph become boxes and
lines on the workstation screen, and the user may then interact with the
Grapher in various ways via the mouse and the keyboard.
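
  [A hypothetical sketch of the conversion being described, written in
  OCaml rather than the Grapher's CommonLisp; the types and the
  trivial layout rule are inventions, not the ISI Grapher's actual
  interface. -- Ed.]

  type graph = { nodes : string list; edges : (string * string) list }
  type box = { text : string; x : int; y : int }

  (* Trivial layered layout: a node's depth fixes its column, its
     position in the node list fixes its row.  A real grapher would
     also work to minimize edge crossings. *)
  let layout (g : graph) (depth : string -> int) : box list =
    List.mapi
      (fun i n -> { text = n; x = 100 * depth n; y = 30 * i })
      g.nodes
  (* Each edge (a, b) in g.edges is then drawn as a line between the
     boxes laid out for a and b; drawing and mouse interaction are
     workstation-specific. *)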

   The fundamental motivation behind the ISI Grapher is the observation that
graphs are very basic and common structures, and the belief that the ability
to quickly display, manipulate, and browse through graphs may greatly enhance
the productivity of a researcher, both quantitatively and qualitatively.  This
seems especially true in knowledge representation and natural language
research.

   The ISI Grapher is both powerful and versatile, allowing an
application-builder to easily build other tools on top of it.  The ISI NIKL
Browser is an example of one such tool.  The salient features of the ISI
Grapher are its portability, speed, versatility, and extensibility.  Several
additional applications have already been built on top of the ISI Grapher,
providing the ability to graph lists, flavors, packages, divisors, functions,
and Common-Loops classes.

  Several basic Grapher operations may be user-controlled via the specification
of alternate functions for performing these tasks.  These operations include
the drawing of nodes and edges, the selection of fonts, the determination of
print-names, pretty-printing, and highlighting operations.  Standard
definitions are already provided for these operations and are used by default
if the application-builder does not override them by specifying his own
custom-tailored functions for performing the same tasks.
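
  [The mechanism just described is a default-with-override hook
  pattern.  Below is a minimal OCaml rendering (illustrative names
  only, with optional arguments standing in for whatever mechanism the
  CommonLisp original uses). -- Ed.]

  (* Standard definition, used by default. *)
  let default_print_name node = String.capitalize_ascii node

  (* The operation accepts an alternate function for the same task. *)
  let draw_node ?(print_name = default_print_name) node =
    print_endline (print_name node)

  let () =
    draw_node "parser";                              (* prints "Parser"  *)
    draw_node ~print_name:(fun n -> "<" ^ n ^ ">") "parser"
                                                     (* prints "<parser>" *)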

   The ISI Grapher now spans about 100 pages of CommonLisp code. The 120-page
ISI Grapher manual is available; this manual describes the general ideas, the
interface, the application-builder's back-end, the algorithms, the
implementation, and the data structures.  The ISI Grapher presently runs on
both Symbolics (6 & 7) and TI Explorer workstations.

   If you are interested in more information, the sources themselves, or just
the documentation/manual, please feel free to forward your U.S. Mail address to
"gabriel@vaxa.isi.edu" or write to "Gabriel Robins, c/o Information Sciences
Institute, 4676 Admiralty Way, Marina Del Rey, Ca 90292-6695."

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 16 03:21:33 1987
Date: Tue, 16 Jun 87 03:21:20 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #145
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 16 Jun 87 03:09 EDT
Received: from relay.cs.net by RELAY.CS.NET id ab07788; 15 Jun 87 3:13 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10641; 15 Jun 87 3:11 EDT
Date: Sun 14 Jun 1987 23:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #145
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 15 Jun 1987      Volume 5 : Issue 145

Today's Topics:
  Query - Why Did The $6,000,000 Man Run So Slowly?,
  Science - Applying AI Models to Biology

----------------------------------------------------------------------

Date: Fri, 12 Jun 87 00:51:41 EDT
From: tim@linc.cis.upenn.edu (Tim Finin)
Subject: why did the $6,000,000 man run so slowly?

Why did the six million dollar man run so slowly?

Some time ago, Pat Hays posted a message in which he asked people for
explanations for the fact that Dr. Who's tardis is bigger on the
inside than it appears to be from the outside.  He was trying, of
course, to discover something about our common sense model of the
physical world.

I have a similar question which might shed some light on our common
sense notions of time and actions: why did the six million dollar man
run so slowly?  As you recall, the six million dollar man (from the
popular TV show in the early '70's) had bionic legs which enabled him
to run at super-human speeds.  However, when the producers wanted to
show him doing this, they slowed down the image of him running.  That
is, to depict him running at incredibly fast speeds, they showed an
image of him moving in "slow motion".

I'd like to collect explanations for this fact.

Tim.

------------------------------

Date: 12 Jun 87 20:51:51 GMT
From: ihnp4!homxb!houxm!hou2d!avr@ucbvax.Berkeley.EDU  (Adam V. Reed)
Subject: Re: Why did the six-million dollar man run so slowly?

Slow motion is commonly used in TV (and before that, newsreel) reports
to represent very fast motion (e.g. in horse races and other sports
events). My guess is that this originated through use of free "photo
finish" footage, originally filmed for the use of sport-event judges,
in early movie newsreels. If my guess is right, the representation of
fast movement with slow-motion footage uses a learned but highly
familiar mental association.
                                Adam Reed (hou2d!adam)

------------------------------

Date: 13 Jun 87 03:16:03 GMT
From: code@sphinx.uchicago.edu  (paul robinson wilson)
Subject: Re: Why did the six-million dollar man run so slowly?

In article <1431@hou2d.UUCP> avr@hou2d.UUCP (Adam V. Reed) writes:
>Slow motion is commonly used in TV (and before that, newsreel) reports
>to represent very fast motion (e.g. in horse races and other sports
>events). My guess is that this originated through use of free "photo
>finish" footage, originally filmed for the use of sport-event judges,
>in early movie newsreels. If my guess is right, the representation of
>fast movement with slow-motion footage uses a learned but highly
>familiar mental association.

I think it may be more subtle than that.  There is a general tendency for
effective, competent motion to be smooth and for large motions to be
relatively slow.  A long-legged runner runs more "slowly" than a short-legged
one, but covers more ground.  A jaguar moves fluidly and less hurriedly than
its usual prey, making large bounds seemingly effortlessly.  By contrast,
the little kid trying to keep up with the big kids moves its legs very fast.

Naturally, if we saw speeded-up film of the $ 6 Meg man, we'd think he looked
comical, with his legs moving very rapidly, like a small (impotent) creature's.

Slow-motion, however, looks smooth and graceful, revealing the grace with
which we all move, but seldom notice.  Our ability to appreciate this
(intended) effect without the accompanying (unintended) impression of his
moving quite slowly, however, may in fact depend on our "being used to it"
from television sports, etc.  We appreciate the obvious grace while suspending
our judgement about speed.

The _right_ way to show it, I guess, would have been to have Lee Majors
bound 20 ft. (or thereabouts) at a time, and quickly.  Besides being a bit
difficult to accomplish, it's also a little hard on the skeletal structure.
They would have gone through stuntmen at quite a clip :-).

(By the way, I believe Lee Majors is a rather short guy, and would have looked
especially comical in sped-up film, covering significant ground, with normal
stuff to gauge him against.)

| Paul R. Wilson       ph.: (312) 947-0740      uucp: ...!ihnp4!uicbert!wilson |
| Electronic Mind Control Lab      if no answer: ...ihnp4!gargoyle!sphinx!code |
| UIC EECS Dept. (M/C 154)               arpa: uicbert!wilson@uxc.cso.uiuc.edu |
| P.O.Box 4348   Chicago,IL 60680                                              |

------------------------------

Date: 13 Jun 87 06:18:52 GMT
From: pattis@june.cs.washington.edu  (Richard Pattis)
Subject: Re: Why did the six-million dollar man run so slowly?

I've thought that the slowdown was not from the perspective of the viewer,
but from the perspective of the $6M man.  The viewer, viewing from the
frame of the $6M man, is moving so fast that everything else seems slowed
down.

------------------------------

Date: 13 Jun 87 17:49:31 GMT
From: super.upenn.edu!linc.cis.upenn.edu!mayerk@RUTGERS.EDU  (Kenneth Mayer)
Subject: Re: Why did the six-million dollar man run so slowly?

Occasionally, the producers _did_ show Lee Majors in a speeded-up shot.  The
effect was comical. (As I recall, there was this old farmer watching from the
porch of his house as Mr. $6million sprinted across his field.)  I like the
cougar metaphor.  Wildlife films of such an animal at normal speed are choppy,
incredibly brief, and usually end with the felling of the prey.  In slow-mo
we get a chance to see the beautiful detail of the predator flying by.

From a cinematic viewpoint, the camera director/special effects director had
to do something to show that Steve Austin wasn't simply jogging across a field
like the rest of us.  Slowing the film speed (and speeding up apparent time)
looks comical, like an old Keystone Cops film.  Stretching out the time line
increases tension.  The viewer gets a chance to examine more detail per second
of real time.  Exactly the way a novel will be incredibly brief during
transitions, and excruciatingly detailed during climaxes.  (I just finished
reading Misery, by Stephen King.  For a good reflective look at a writer's art,
packaged in a really good thriller, borrow this book from the library for a
summer weekend read.)
Kenneth Mayer                           mayerk@eniac.seas.upenn.edu

------------------------------

Date: 10 Jun 87 09:33:34 GMT
From: nosc!humu!uhccux!todd@sdcsvax.ucsd.edu  (The Perplexed Wiz)
Subject: Re: Taking AI models and applying them to biology...

In article <836@pixar.UUCP> davel@pixar.UUCP (David Longerbeam) writes:
>In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
>> This description of the human memory system, though cloaked in vaguer terms,
>> corresponds more or less one-to-one with the traditional computer
>> architecture we all know and love.  To wit:
>       [description deleted]
>> At least this far, this theory appears to owe a lot to computer science.
>> Granted, there is lots of empirical evidence in favour, but we all know
>> how a little evidence can go far too far towards developing an analogy.

>One of my philosophy professors in college offered the observation that
>models for the human mind have always seemed to correspond to the most
>advanced form of technology at that given point in history.  He could

It's true that theories of cognition often reflect the current popular
technology.  But before we start arguing current theories as reflections
of computer science and physiology, I suggest we at least have some
common starting point for our discussion.

I don't want to suggest that you need a Ph.D. in Cognitive Psychology
to discuss the subject, but you might want to consider reading one
of the many intro texts on the subject before leaping to any speculations
(wild or otherwise :-).

An intro text I often recommend to people with a more than casual
interest in cognition is:

        Anderson, John (1985).
                Cognitive Psychology and Its Implications. (2nd edition)
                New York:  W.H. Freeman and Co.

  [The 1st edition also has much to recommend it.  It was written from
  a psychological viewpoint, and introduces vocabulary and concepts that
  may be unfamiliar to computer scientists.  The 2nd edition was rewritten
  with an AI (or cognitive psychology!) vocabulary, hence risks echoing the
  preconceptions of the field instead of contributing fresh insights.  -- KIL]


If you are interested in a historical perspective of psychological
research, I suggest you take a peek at:

        Hearst, Eliot (Ed.) (1979).
                The First Century of Experimental Psychology.
                Hillsdale, New Jersey: Lawrence Erlbaum Associates, Pub.

And finally, though I don't always agree with what Richard Gregory has
to say, I always enjoy hearing or reading his ideas and theories.  His
"Mind in Science" is an interesting speculative book.

        Gregory, Richard (1981).
                Mind in Science: A History of Explanations in
                Psychology and Physics.
                Cambridge:  Cambridge University Press

Well, I hope we at least have some common reference point now...

Todd Ogasawara
        "With a good wind behind me and and a lot of luck...
                Ph.D. in Psychology later this year :-)"

--
Todd Ogasawara, U. of Hawaii Computing Center
UUCP:           {ihnp4,seismo,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU

------------------------------

Date: Wed, 10 Jun 87 09:51 EDT
From: Seth Steinberg <sas@bfly-vax.bbn.com>
Subject: Borrowing from Biology [Half in Jest]

Actually, the biologists have been borrowing from the history of the
Roman Empire.  Cincinnatus comes down from his farm and codifies the
laws for the Republic and creates a nearly perfect mechanism which
starts taking over the Mediterranean basin.  By providing for a means
of succession (read "DNA replication"), the Empire is able to achieve
higher levels of organization.  Unfortunately, the military (read "the
immune system") slowly grows in strength as the Empire expands and
finally reaches a limit to its expansion and spends the next millennium
rotting away in Byzantium.

Theories about entropy are about complex systems in general, not just
the behavior of energy in steam engines.  Biologists have latched onto
them to account for aging in organisms and to explain the epochs of
evolution. (Why aren't there any new phyla being created?)  If you've
ever tried to make a major change in a decade-old program, think of what
the biologists are up against with their billion-year-old kludges.
Last month, an article in Scientific American described a
glucose-complex-based aging mechanism, arguing that many aging effects could be
caused by very slow chemical reactions induced by the operating
environment.  Next month we may discover an actual internal counter
within each cell.  It is quite probable that there are dozens of
mechanisms at work.  With 90% of the genome encoding for garbage,
elegant design is more of a serendipity than the norm.

                                        Seth Steinberg
                                        sas@bbn.com

P.S. Did you notice the latest kludge?  They've found a gene whose DNA
complement also encodes a gene!  Kind of like a 68000 program you can
execute if you put a logical complement on each instruction fetch.
Neat, huh?
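
  [For readers without the biology background, a small OCaml
  illustration of the trick being marveled at; the sequence is made
  up, and reading direction is ignored.  One strand of bases and its
  base-pair complement can each carry a message. -- Ed.]

  let complement = function
    | 'A' -> 'T' | 'T' -> 'A' | 'C' -> 'G' | 'G' -> 'C'
    | _ -> invalid_arg "not a DNA base"

  (* Reading the complementary strand: every base is swapped for its
     pairing partner. *)
  let complement_strand s = String.map complement s

  let () =
    let gene = "ATGGCCATTG" in
    Printf.printf "strand:     %s\n" gene;
    Printf.printf "complement: %s\n" (complement_strand gene)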

------------------------------

Date: 12 Jun 87 16:08:04 GMT
From: hao!boulder!eddy@ames.arpa  (Sean Eddy)
Subject: Re: Taking AI models and applying them to biology...

In article <1331@sigi.Colorado.EDU> pell@boulder.Colorado.EDU writes:
>It would seem to me that the step that is likely to give the cell trouble
>is not mitosis but DNA replication.  If a whole chromosome lost or
>non-disjoined, that cell is in some serious trouble.  Progressive
>accumulation mistakes through replication and general maintanence seems a more
>likely culprit.

"General maintenance" is a very important thing to bring up. It seems
to me that replication/mitosis can't be the whole story in aging. One
must also propose other models because there are cells that do not
divide after a certain point, yet still age and die. Neurons are the
classic example; not only do they not divide, they cannot even
be replaced (in humans) if damaged.

- Sean Eddy
- MCD Biology; U. of Colorado at Boulder; Boulder CO 80309
- eddy@boulder.colorado.EDU             !{hao,nbires}!boulder!eddy
-
- "So what. Big deal."
-            - Emilio Lazardo

------------------------------

Date: 13 Jun 87 23:03:16 GMT
From: mcvax!lambert@seismo.css.gov  (Lambert Meertens)
Subject: Re: Taking AI models and applying them to biology...

In article <836@pixar.UUCP> davel@pixar.UUCP (David Longerbeam) writes:

> In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
|
| > This description of the human memory system, though cloaked in vaguer terms,
| > corresponds more or less one-to-one with the traditional computer
| > architecture we all know and love.  To wit:
|
|   [description deleted]
|
| > At least this far, this theory appears to owe a lot to computer science.
| > Granted, there is lots of empirical evidence in favour, but we all know
| > how a little evidence can go far too far towards developing an analogy.
|
| One of my philosophy professors in college offered the observation that
| models for the human mind have always seemed to correspond to the most
| advanced form of technology at that given point in history.

I find the connection between models of human memory as developed in
cognitive psychology and existing computer architectures rather tenuous.
The main similarity appears to be that several levels of memory can be
discerned, but the suggested analogy in function is a bit far-fetched.

It is perhaps worth pointing out that much in the current models of
cognitive psychology can already be found in the pioneering work of Otto
Selz (Muenchen, 1881 - Auschwitz, 1943), antedating the computer era.

--

Lambert Meertens, CWI, Amsterdam; lambert@cwi.nl

------------------------------

Date: Thu, 11 Jun 87 13:48:05 BST
From: Graham Higgins <gray%hplb.csnet@RELAY.CS.NET>
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:

> I was semi-surprised in recent months to discover that cognitive psychology,
> far from developing a bold new metaphor for human thinking, has (to a degree)
> copied at least one metaphor from third-generation computer science.

Psychology freely borrows *any* models that will help it get a grip on
characterising and explaining the phenomena of cognition. Over the years,
analogies of the workings of the mind have been constructed from: windmills,
hydraulic systems, telephone switching exchanges and, latterly, the computer (or
more properly, information-processing devices). The one thing that all these
analogies have in common is that they draw on the technological state-of-the-art
of the time. (The "internal combustion engine" analogy is a new one to me).

David Longerbeam's comment about the requirement for empiricism is valid in this
instance. Donald Hebb assumed a separation of STM and LTM in a 1949 paper (and
that's going back quite some time, only a year after Shockley's invention of the
transistor). It is unlikely that the computer-architecture construct of
"archived storage" played any part in Hebb's dichotomising of human memory. It
appears that this is one example of a model developed within cognitive
psychology, independently of developments in computer architecture. (I'm not
well-versed in comp.sci. history - but it seems reasonable to conjecture that
Hebb was unaware of the notions of "archived storage" when he was developing his
dichotomisation).


> This description of the human memory system, though cloaked in vaguer terms,
> corresponds more or less one-to-one with the traditional computer
> architecture we all know and love ...
>
>         - senses have "iconic" and "echo" memories analogous to buffers.
>         - short term memory holds information that is organized for quick
>         processing, much like main storage in a computing system.
>         - long term memory holds information in a sort of semantic
>         association network where large related pieces of information
>         reside, similar to backing or "archived" computing storage.

I think that this is somewhat of an over-simplification. There are quite a few
phenomena arising from studies of "iconic", "echoic", "short-term" and
"long-term" areas of human memory which do not fit so tamely into a
computer-architecture model. Thus, there has *not* been uncritical acceptance
either of the claim that the "iconic" and "echoic" aspects of memory are
passive or of the claim that memory can be simply dichotomised into STM and
LTM sections. In the absence of anything better, the analogies will do for
now, but there are too many phenomena which don't fit into these analogies
for them to be anything but conveniences for the moment.

One of the disciplinary traits actively promoted in psychology (be it cognitive,
social, experimental, etc.) is a high degree of circumspection. (There is a
tradition that one never sees a one-armed psychologist - "on one hand .... and
on the other ... "). Thus models and analogies *can* be freely borrowed from
other areas and exploited for what they offer, for as long as they exhibit some
level of descriptive utility. It is instructive to note that contemporary
cognitive psychologists no longer use windmills or telephone exchanges (or even
the internal combustion engine) as analogies of the workings of the mind. These
particular analogies have outlived their usefulness and have been discarded (I
hope!).

Graham Higgins                          ||  The opinions expressed above
Hewlett-Packard Labs                    ||  are not to be construed as the
Bristol, U.K.                           ||  opinions, stated or otherwise,
gjh@hplb.csnet  +44  272 799910 xt 4060 ||  of Hewlett-Packard

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jun 17 03:52:17 1987
Date: Wed, 17 Jun 87 03:52:07 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #146
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 17 Jun 87 03:13 EDT
Received: from relay.cs.net by RELAY.CS.NET id ae14738; 16 Jun 87 2:55 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10779; 16 Jun 87 2:55 EDT
Date: Mon 15 Jun 1987 23:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #146
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 146

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 9 Jun 87 22:12:32 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <812@mind.UUCP> Stevan Harnad <harnad@mind.UUCP> replies:

With regard to physical invertibility and the A/D distinction:
>
>>      a *digitized* image -- in your terms... is "analog" in the
>>      information it preserves and not in the information lost. This
>>      seems to me to be a very unhappy choice of terminology!
>
>                            For the time being, I've acknowledged that
>my invertibility criterion is, if not necessarily unhappy, somewhat
>surprising in its implications, for it implies (1) that being analog
>may be a matter of degree (i.e., degree of invertibility) and (2) even
>a classical digital system must be regarded as analog to a degree ...

Grumble. These consequences only *seem* surprising if we forget that you've
redefined "analog" in a non-standard manner; this is precisely I why I keep
harping on your terminology. Compare them with what you're really saying:
"physical invertibility is a matter of degree" or "a classical digital system
still employs physically invertible representations" -- both quite humdrum.

With regard to the symbolic AI approach to the "symbol-grounding problem":
>
>One proposal, as you note, is that a pure symbol-manipulating system can be
>"grounded" by merely hooking it up causally in the "right way" to the outside
>world with simple (modular) transducers and effectors. ...  I  have argued
>that [this approach] simply won't succeed in the long run (i.e., as we
>attempt to approach an asymptote of total human performance capacity ...)
>...In (1) a "toy" case ... the right causal connections could be wired
>according to the human encryption/decryption scheme: Inputs and outputs could
>be wired into their appropriate symbolic descriptions. ... But none but the
>most diehard symbolic functionalist would want to argue that such a simple
>toy model was "thinking," ...  The reason is that we are capable of
>doing *so much more* -- and not by an assemblage of endless independent
>modules of essentially the same sort as these toy models, but by some sort of
>(2) integrated internal system. Could that "total" system be just an
>oversized toy model -- a symbol system with its interpretations "fixed" by a
>means analogous to these toy cases? I am conjecturing that it is not.

I think your reply may misunderstand the point of my objection. I'm not
trying to defend the intentionality of "toy" programs.  I'm not even
particularly concerned to *defend* the symbolic approach to AI (I personally
don't even believe in it).  I'm merely trying to determine exactly what your
argument against symbolic AI is.

I had thought, perhaps wrongly, that you were claiming that the
interpretations of systems conceived by symbolic AI must somehow
inevitably fail to be "grounded", and that only a system which employed
"analog" processing in the way you suggest would have the causal basis
required for fixing an interpretation.  In response, I pointed out first that
advocates of the symbolic approach already understand that causal commerce
with the environment is necessary for intentionality: they envision the use
of complex perceptual systems to provide the requisite "grounding". So it's
not as though the symbolic approach is indifferent to this issue.  And your
remarks against "toy" systems and "hard-wiring" the interpretations of the
inputs are plain unfair -- the symbolic approach doesn't belittle the
importance or complexity of what perceptual systems must be able to do. It is
in total agreement with you that a truly intentional system must be capable
of complex adaptive performance via the use of its sensory input -- it just
hypothesizes that symbolic processing is sufficient to achieve this.

And, as I tried to point out, there is just no reason that a modular,
all-digital system of the kind envisioned by the symbolic approach could not
be entirely "grounded" BY YOUR OWN THEORY OF "GROUNDEDNESS":  it could employ
"physically inevertible" representations (only they would be digital ones),
from these it could induct reliable "feature filters" based on training (only
these would use digital rather than analog techniques), etc. I concluded that
the symbolic approach appears to handle your so-called "grounding problem"
every bit as well as any other method.

Now comes the reply that you are merely conjecturing that analog processing
may be required to realize the full range of human, as opposed to "toy",
performance -- in short, you think the symbolic approach just won't work.
But this is a completely different issue! It has nothing to do with some
mythical "symbol grounding" problem, at least as I understand it.  It's just
the same old "intelligent-behavior-generating" problem which everyone in AI,
regardless of paradigm, is looking to solve.

>From this reply, it seems to me that this alleged "symbol-grounding problem"
is a real red-herring (it misled me, at least).  All you're saying is that
you suspect that mainstream AI's symbol system hypothesis is false, based on
its lack of conspicuous performance-generating successes.  Obviously everyone
must recognize that this is a possibility -- the premise of symbolic AI is,
after all, only a hypothesis.

But I find this a much less interesting claim than I originally thought --
conjectures, after all, are cheap.  It *would* be interesting if you could
show, as, say, the connectionist program is trying to, how analog processing
can work wonders that symbol-manipulation can't. But this would require
detailed research, not speculation. Until then, it remains a mystery why your
proposed approach should be regarded as any more promising than any other.

Anders Weinstein
BBN Labs

------------------------------

Date: 10 Jun 87 21:28:23 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
Cambridge, MA writes:

>       There's no [symbol] grounding problem, just the old
>       behavior-generating problem

Before responding to the supporting arguments for this conclusion, let
me restate the matter in what I consider to be the right way. There is:
(1) the behavior-generating problem (what I have referred to as the problem of
devising a candidate that will pass the Total Turing Test), (2) the
symbol-grounding problem (the problem of how to make formal symbols
intrinsically meaningful, independent of our interpretations), and (3)
the conjecture (based on the existing empirical evidence and on
logical and methodological considerations) that (2) is responsible for
the failure of the top-down symbolic approach to solve (1).

>>my [SH's] invertibility criterion is, if not necessarily unhappy, somewhat
>>surprising in its implications, for it implies that (1) being analog may
>>be a matter of degree (i.e., degree of invertibility) and that (2) even
>>a classical digital system must be regarded as analog to a degree ...
>
>       These consequences only *seem* surprising if we forget that you've
>       redefined "analog" in a non-standard manner... you're really saying:
>       "physical invertibility is a matter of degree" or "a classical digital
>       system still employs physically invertible representations" -- both
>       quite humdrum.

You've bypassed the three points I brought up in replying to your
challenge to my invertibility criterion for an analog transform the
last time: (1) the quantization in standard A/D is noninvertible, (2) a
representation can only be analog in what it preserves, not in what it
fails to preserve, and, in cognition at any rate, (3) the physical
shape of the signal may be what matters, not the "message" it
"carries." Add to this the surprising logical consequence that a
"dedicated" digital system (hardwired to its peripherals) would be
"analog" in its invertible inputs and outputs according to my
invertibility criterion, and you have a coherent distinction that conforms
well to some features of the classical A/D distinction, but that may prove
to diverge, as I acknowledged, sufficiently to make it an independent,
"non-standard" distinction, unique to cognition and neurobiology. Would it be
surprising if classical electrical engineering concepts did not turn
out to be just right for mind-modeling?
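
  [Point (1) is easy to make concrete.  In the OCaml fragment below
  (step size arbitrary), two distinct inputs quantize to the same code
  word, so no function can recover the original signal: standard A/D
  quantization is many-to-one, hence noninvertible. -- Ed.]

  (* Uniform quantizer: snap x to the nearest multiple of step. *)
  let quantize step x = Float.round (x /. step) *. step

  let () =
    (* 0.52 and 0.49 collapse onto the same quantization level. *)
    Printf.printf "%.2f -> %.2f\n" 0.52 (quantize 0.1 0.52);
    Printf.printf "%.2f -> %.2f\n" 0.49 (quantize 0.1 0.49)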

>       I [AW] had thought, perhaps wrongly, that you were claiming that the
>       interpretations of systems conceived by symbolic AI must somehow
>       inevitably fail to be "grounded", and that only a system which employed
>       "analog" processing in the way you suggest would have the causal basis
>       required for fixing an interpretation.

That is indeed what I'm claiming (although you've completely omitted
the role of the categorical representations, which are just as
critical to my scheme, as described in the CP book). But do make sure you
keep my "non-standard" definition of analog in mind, and recall that I'm
talking about asymptotic, human-scale performance, not toy systems.
Toy systems are trivially "groundable" (even by my definition of
"analog") by hard-wiring them into a dedicated input/output
system. But the problem of intrinsic meaningfulness does not arise for
toy models, only for devices that can pass the Total Turing Test (TTT).
[The argument here involves showing that to attribute intentionality to devices
that exhibit sub-TTT performance is not justified in the first place.]
The conjecture is accordingly that the modular solution (i.e., hardwiring an
autonomous top-down symbolic module to conventional peripheral modules
-- transducers and effectors) will simply not succeed in producing a candidate
that will be able to pass the Total Turing Test, and that the fault
lies with the autonomy (or modularity) of the symbolic module.

But I am not simply proposing an unexplicated "analog" solution to the
grounding problem either, for note that a dedicated modular system *would*
be analog according to my invertibility criterion! The conjecture is
that such a modular solution would not be able to meet the TTT
performance criterion, and the grounds for the conjecture are partly
inductive (extrapolating symbolic AI's performance failures), partly
logical and methodological (the grounding problem), and partly
theory and data-driven (psychophysical findings in human categorical
perception). My proposal is not that some undifferentiated,
non-standard "analog" processing must be going on. I am advocating a
specific hybrid bottom-up, symbolic/nonsymbolic rival to the pure
top-down symbolic approach (whether or not the latter is wedded to
peripheral modules), as described in the volume under discussion
("Categorical Perception: The Groundwork of Cognition," CUP 1987).

>       advocates of the symbolic approach already understand that causal
>       commerce with the environment is necessary for intentionality: they
>       envision the use of complex perceptual systems to provide the
>       requisite "grounding". So it's not as though the symbolic approach
>       is indifferent to this issue.

This is the pious hope of the "top-down" approach: That suitably
"complex" perceptual systems will meet for a successful "hook-up"
somewhere in the middle. But simply reiterating it does not mean it
will be realized. The evidence to date suggests the opposite: That the
top-down approach will just generate more special-purpose toys, not a
general purpose, TTT-scale model of human performance capacity. Nor is
there any theory at all of what the requisite perceptual "complexity"
might be: The stereotype is still standard transducers that go from physical
energy via A/D conversion straight into symbols. Nor does "causal
commerce" say anything: It leaves open anything from the modular
symbol-cruncher/transducer hookups of the kind that so far only seem
capable of generating toy models, to hybrid, nonmodular, bottom-up
models of the sort I would advocate. Perhaps it's in the specific
nature of the bottom-up grounding that the nature of the requisite
"complexity" and "causality" will be cashed in.

>       your remarks against "toy" systems and "hard-wiring" the
>       interpretations of the inputs are plain unfair -- the symbolic
>       approach doesn't belittle the importance or complexity of what
>       perceptual systems must be able to do. It is in total agreement
>       with you that a truly intentional system must be capable of complex
>       adaptive performance via the use of its sensory input -- it just
>       hypothesizes that symbolic processing is sufficient to achieve this.

And I just hypothesize that it isn't. And I try to say why not (the
grounding problem and modularity) and what to do about it (bottom-up,
nonmodular grounding of symbolic representations in iconic and categorical
representations).

>       there is just no reason that a modular, all-digital system of the
>       kind envisioned by the symbolic approach could not be entirely
>       "grounded" BY YOUR OWN THEORY OF "GROUNDEDNESS":  it could employ
>       "physically inevertible" representations (only they would be digital
>       ones), from these it could induct reliable "feature filters" based on
>       training (only these would use digital rather than analog techniques),
>       etc. ...  the symbolic approach appears to handle your so-called
>       "grounding problem" every bit as well as any other method.

First of all, as I indicated earlier, a dedicated top-down symbol-crunching
module hooked to peripherals would indeed be "grounded" in my sense --
if it had TTT-performance power. Nor is it *logically impossible* that
such a system could exist. But it certainly does not look likely on the
evidence. I think some of the reasons we were led (wrongly) to expect it were
the following:

(1) The original successes of symbolic AI in generating intelligent
performance: The initial rule-based, knowledge-driven toys were great
successes, compared to the alternatives (which, apart from some limited
feats of Perceptrons, were nonexistent). But now, after a generation of
toys that show no signs of converging on general principles and growing
up to TTT-size, the inductive evidence is pointing in the other direction:
More ad hoc toys is all we have grounds to expect.

(2) Symbol strings seemed such hopeful candidates for capturing mental
phenomena such as thoughts, knowledge, beliefs. Symbolic function seemed
like such a natural, distinct, nonphysical level for capturing the mind.
Easy come, easy go.

(3) We were persuaded by the power of computation -- Turing
equivalence and all that -- to suppose that computation
(symbol-crunching) just might *be* cognition. If every (discrete)
thing anyone or anything (including the mind) does is computationally
simulable, then maybe the computational functions capture the mental
functions? But the fact that something is computationally simulable
does not entail that it is implemented computationally (any more than
behavior that is *describable* as ruleful is necessarily following an
explicit rule). And some functions (such as transduction and causality)
cannot be implemented computationally at all.

(4) We were similarly persuaded by the power of digital coding -- the
fact that it can approximate analog coding as closely as we please
(and physics permits) -- to suppose that digital representations were
the only ones we needed to think about. But the fact that a digital
approximation is always possible does not entail that it is always
practical or optimal, nor that it is the one that is actually being
*used* (by, say, the brain). Some form of functionalism is probably
right, but it certainly need not be symbolic functionalism, or a
functionalism that is indifferent to whether a mental function or
representation is analog or digital: The type of implementation may
matter, both to the practical empirical problem of successfully
generating performance and to the untestable phenomenological problem of
capturing qualitative subjective experience. And some functions (let
me again add), such as transduction and (continuous) A/A, cannot be
implemented purely symbolically at all.

A good example to bear in mind is Shepard's mental rotation
experiments. On the face of it, the data seemed to suggest that
subjects were doing analog processing: In making same/different
judgments of pairs of successively presented 2-dimensional projections
of 3-dimensional, computer-generated, unfamiliar forms, subjects' reaction
times for saying "same" when one stimulus was in a standard orientation and
the other was rotated were proportional to the degree of rotation. The
diehard symbolists pointed out (correctly) that the proportionality,
instead of being due to the real-time analog rotation of a mental icon, could
have been produced by, say, (1) serially searching through the coordinates
of a digital grid on which the stimuli were represented, with more distant
numbers taking more incremental steps to reach, or by (2) doing
inferences on formal descriptions that became more complex (and hence
time-consuming) as the orientation became more eccentric. The point,
though, is that although digital/symbolic representations were indeed
possible, so were analog ones, and here the latter would certainly seem to be
more practical and parsimonious. And the fact of the matter -- namely,
which kinds of representations were *actually* used -- is certainly
not settled by pointing out that digital representations are always
*possible.*
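
  [The chronometric finding is simple to state as a formula: under the
  analog-rotation account, reaction time is predicted to be linear in
  angular disparity.  The constants in this OCaml sketch are invented
  for illustration, not Shepard's estimates. -- Ed.]

  (* RT = intercept + slope * angle: the signature of rotating a
     mental icon at a constant angular velocity. *)
  let predicted_rt ~base_ms ~ms_per_degree angle =
    base_ms +. ms_per_degree *. angle

  let () =
    List.iter
      (fun a ->
         Printf.printf "%3.0f deg -> %4.0f ms\n" a
           (predicted_rt ~base_ms:400.0 ~ms_per_degree:18.0 a))
      [0.0; 60.0; 120.0; 180.0]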

Maybe a completely digital mind would have required a head the size of
New York State and polynomial evolutionary time in order to come into
existence -- who knows? Not to mention that it still couldn't do the
"A" in the A/D...

>       [you] reply that you are merely conjecturing that analog processing
>       may be required to realize the full range of human, as opposed to "toy",
>       performance -- in short, you think the symbolic approach just won't
>       work. But this... has nothing to do with some mythical "symbol
>       grounding" problem, at least as I understand it. It's just
>       the same old "intelligent-behavior-generating" problem which everyone
>       in AI, regardless of paradigm, is looking to solve... All you're
>       saying is that you suspect that mainstream AI's symbol system
>       hypothesis is false, based on its lack of conspicuous
>       performance-generating successes. Obviously everyone must recognize
>       that this is a possibility -- the premise of symbolic AI is, after
>       all, only a hypothesis.

I'm not just saying I think the symbolic hypothesis is false. I'm
saying why I think it's false (ungroundedness) and I'm suggesting an
alternative (a bottom-up hybrid).

>       But I find this a much less interesting claim than I originally
>       thought -- conjectures, after all, are cheap. It *would* be
>       interesting if you could show, as, say, the connectionist program
>       is trying to, how analog processing can work wonders that
>       symbol-manipulation can't. But this would require detailed research,
>       not speculation. Until then, it remains a mystery why your proposed
>       approach should be regarded as any more promising than any other.

Be patient. My hypotheses (which are not just spontaneous conjectures,
but are based on an evaluation of the available evidence, the theoretical
alternatives, and the logical and methodological problems involved)
will be tested. They even have a potential connectionist component (in
the induction of the features subserving categorization), although
connectionism comes in for criticism too. For now it would seem only
salutary to attempt to set cognitive modeling in directions that
differ from the unprofitable ones it has taken so far.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 11 Jun 87 15:24:22 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <828@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
> Cambridge, MA writes:
>
> >     There's no [symbol] grounding problem, just the old
> >     behavior-generating problem
>
> ..... There is:
> (1) the behavior-generating problem (what I have referred to as the problem of
> devising a candidate that will pass the Total Turing Test), (2) the
> symbol-grounding problem (the problem of how to make formal symbols
> intrinsically meaningful, independent of our interpretations), and (3) ...

Just incidentally, what is the intrinsic meaning of "intrinsically
meaningful"?  The Turing test is an objectively verifiable criterion.
How can we objectively verify intrinsic meaningfulness?

> .... Add to this the surprising logical consequence that a
> "dedicated" digital system (hardwired to its peripherals) would be
> "analog" in its invertible inputs and outputs according to my
> invertibility criterion, .....

Using "analog" to mean "invertible" invites misunderstanding, which
invites irrelevant criticism.

Human (in general, vertebrate) visual processing is a dedicated
hardwired digital system.  It employs data reduction to abstract such
features as motion, edges, and orientation of edges.  It then forms a
map in which position is crudely analog to the visual plane, but
quantized.  This map is sufficiently similar to maps used in image
processing machines so that I can almost imagine how symbols could be
generated from it.
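
  [The "crudely analog but quantized" map can be pictured with a tiny
  OCaml function (grid size arbitrary): continuous position reduces to
  a coarse grid cell, so adjacency is roughly preserved while fine
  detail is discarded. -- Ed.]

  (* Map a position in the unit square to a cell in an n-by-n grid. *)
  let cell n (x, y) =
    let q v = int_of_float (v *. float_of_int n) in
    (q x, q y)

  let () =
    (* Nearby points land in the same cell: analog in what the map
       preserves, noninvertible in what it throws away. *)
    assert (cell 10 (0.31, 0.72) = (3, 7));
    assert (cell 10 (0.34, 0.74) = (3, 7))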

By the time it gets to perception, it is not invertible, except with
respect to what is perceived.  Noninvertibility is demonstrated in
experiments in the identification of suspects.  Witnesses can report
what they perceive, but they don't always perceive enough to invert
the perceived image and identify the object that gave rise to the
perception.  If you don't agree, please give a concrete, objectively
verifiable definition of "invertibility" that can be used to refute my
conclusion.

If I am right, human intelligence itself relies on neither analog nor
invertible symbol grounding, and therefore artificial intelligence
does not require it.

By the way, there is an even simpler argument: even the best of us can
engage in fuzzy thinking in which our symbols turn out not to be
grounded.  Subjectively, we then admit that our symbols are not
intrinsically meaningful, though we had interpreted them as such.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jun 17 03:52:33 1987
Date: Wed, 17 Jun 87 03:52:21 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #147
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 17 Jun 87 03:17 EDT
Received: from relay.cs.net by RELAY.CS.NET id af14824; 16 Jun 87 3:12 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10856; 16 Jun 87 3:10 EDT
Date: Mon 15 Jun 1987 23:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #147
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 147

Today's Topics:
  Theory - Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 11 Jun 87 21:15:31 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <828@mind.UUCP> Stevan Harnad <harnad@mind.UUCP> writes
>
>>      There's no [symbol] grounding problem, just the old
>>      behavior-generating problem
>                                                              There is:
>(1) the behavior-generating problem (what I have referred to as the problem of
>devising a candidate that will pass the Total Turing Test), (2) the
>symbol-grounding problem (the problem of how to make formal symbols
>intrinsically meaningful, independent of our interpretations), and (3)
>the conjecture (based on the existing empirical evidence and on
>logical and methodological considerations) that (2) is responsible for
>the failure of the top-down symbolic approach to solve (1).

It seems to me that in different places, you are arguing the relation between
(1) and (2) in both directions, claiming both

        (A) The symbols in a purely symbolic system will always be
            ungrounded because such systems can't generate real performance;
and
        (B) A purely symbolic system can't generate real performance because
            its symbols will always be ungrounded.

That is, when I ask you why you think the symbolic approach won't work, one
of your reasons is always "because it can't solve the grounding problem", but
when I press you for why the symbolic approach can't solve the grounding
problem, it always turns out to be "because I think it won't work." I think
we should get straight on the priority here.

It seems to me that, contra (3), thesis (A) is the one that makes perfect
sense -- in fact, it's what I thought you were saying. I just don't
understand (B) at all.

To elaborate: I presume the "symbol-grounding" problem is a *philosophical*
question: what gives formal symbols original intentionality? I suppose the
only answer anybody knows is, in brief, that the symbols must be playing a
certain role in what Dennett calls an "intentional system", that is, a system
which is capable of producing complex, adaptive behavior in a rational way.

Since such a system must be able to respond to changes in its environment,
this answer has the interesting consequence that causal interaction with the
world is a *necessary* condition for original intentionality. It tells us
that symbols in a disconnected computer, without sense organs, could never be
"grounded" or intrinsically meaningful. But those in a machine that can
sense and react could be, provided the machine exhibited the requisite
rationality.

And this, as far as I can tell, is the end of what we learn from the "symbol
grounding" problem -- you've got to have sense organs. For a system that is
not causally isolated from the environment, the symbol-grounding problem now
just reduces to the old behavior-generating problem, for, if we could just
produce the behavior, there would be no question of the intentionality of the
symbols. In other words, once we've wised up enough to recognize that we must
include sensory systems (as symbolic AI has), we have completely disposed of
the "symbol grounding" problem, and all that's left to worry about is the
question of what kind of system can produce the requisite intelligent
behavior. That is, all that's left is the old behavior-generating problem.

Now as I've indicated, I think it's perfectly reasonable to suspect that the
symbolic approach is insufficient to produce full human performance. You
really don't have to issue any polemics on this point to me; such a suspicion
could well be justified by pointing out the triviality of AI's performance
achievements to date.

What I *don't* see is any more "principled" or "logical" or "methodological"
reason for such a suspicion; in particular, I don't understand how (B) could
provide such a reason.  My system can't produce intelligent performance
because it doesn't make its symbols meaningful? This statement has just got
things backwards -- if I could produce the behavior, you'd have to admit that
its symbols had all the "grounding" they needed for original intentionality.

In sum, apart from the considerations that require causal embedding, I don't
see that there *is* any "symbol-grounding" problem, at least not any problem
that is any different from the old "total-performance generating" problem.
For this reason, I think your animadversions on symbol grounding are largely
irrelevant to your position -- the really substantial claims pertain only to
"what looks like it's likely to work" for generating intelligent behavior.

On a more specific issue:
>
>You've bypassed the three points I brought up in replying to your
>challenge to my invertibility criterion for an analog transform the
>last time: (1) the quantization in standard A/D is noninvertible,

Yes, but *my* point has been that since there isn't necessarily any more loss
here than there is in a typical A/A transformation, the "degree of
invertibility" criterion cross-cuts the intuitive A/D distinction.

Look, suppose we had a digitized image, A, which is of much higher resolution
than another analog one, B.  A is more invertible since it contains more
detail from which to reconstruct the original signal, but B is more
"shape-preserving" in an intuitive sense.  So, which do you regard as "more
analog"?  Which does your theory think is better suited to subserving our
categorization performance? If you say B, then invertibility is just not what
you're after.

Anders Weinstein
BBN Labs

------------------------------

Date: 12 Jun 87 08:16:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: symbol grounding and physical invertibility


S. Harnad replies:

> According to my view, invertibility (and perhaps inversion)
> captures just the relevant features of causation and resemblance that
> are needed to ground symbols. The relation is between the proximal
> projection (of a distal object) onto the sensory surfaces -- let's
> call it P -- and an invertible transformation of that projection [I(P)].
> The latter is what I call the "iconic representation." Note that the
> invertibility is with the sensory projection, *not* the distal object. I
> don't believe in distal magic. My grounding scheme begins at the
> sensory surfaces ("skin and in"). No "wider" metaphysical causality is
> involved, just narrow, local causality.

Well, OK, glad you clarified that - I think there are issues here
about the difference between grounding symbols in causation emanating
from distal objects vs. grounding them in proximal sensory surfaces
(optical illusions, hallucinations, etc.), but let's pass over that
for now.

It still doesn't seem clear why invertibility should be necessary
for grounding (although it may be sufficient).  Frinstance, suppose
we humans, or a robot, had four kinds of color receptors lurking
behind our retinas (retinae?), which responded to red, green,
blue and yellow wavelengths.  And further suppose that stimulating
the yellow receptors alone produced the same iconic representation
as stimulating the red and green ones - ie both were experienced
as plain old yellow, nor could the experiencer in any way
distinguish between the yellows caused by the two different
stimulations.  (A fortiori, the experiencer would certainly not
have more than one categorical representation, nor symbol for
such experiences.)  In short, suppose that some information was
lost on the way in from the sensory surface, so we had a many
to one (hence non-invertible) mapping.
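
  [The thought experiment can be stated as a two-line program.  In
  this OCaml sketch (receptor names invented), two distinct
  stimulations map to one experienced color, so the mapping is
  many-to-one and no inverse exists. -- Ed.]

  type receptor = Red | Green | Blue | Yellow

  let experienced = function
    | [Yellow] | [Red; Green] -> "yellow"   (* two causes, one percept *)
    | [Blue] -> "blue"
    | _ -> "other"

  let () = assert (experienced [Yellow] = experienced [Red; Green])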

Would you then want to say that the symbol "yellow" was not grounded
for such a being?

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 12 Jun 87 15:52:40 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:

>       Human visual processing is neither analog nor invertible.

Nor is it understood nearly well enough to draw either of those conclusions,
it seems to me. If you are taking the discreteness of neurons, the
all-or-none nature of the action potential, and the transformation of
stimulus intensity to firing frequency as your basis for concluding
that visual processing is "digital," the basis is weak, and the
analogy with electronic transduction strained.

As the (unresolved) discussion of the logical basis of the A/D distinction
last year indicated, nature itself may not be continuous, but
quantized. This would make continuity-based definitions of A/D moot.
If discrete photons strike discrete photoreceptors, then discontinuity
is being transformed into discontinuity. Yet the question can still be
asked: Is the transformation preserving physical properties such as
intensity and spatial relations by transforming them to physical
properties that are isomorphic to them (e.g., intensity to frequency,
and spatial adjacency to spatial adjacency) as opposed to merely
"standing for" them in some binary code?

There is also the question of postsynaptic potentials, which, unlike
the all-or-none action potentials, are graded (to within the
pharmacological quantum of a neurotransmitter packet). What if
significant aspects of vision are coded at that level as fields or
gradients and their interactions? Or at the level of local or distributed
patterns of connectivity? Or at the chemical level? We don't even know
how to match up the various resolution-levels or "grains" of the inputs and
transformations involved: light quanta, neural quanta, psychophysical
quanta. What is discrete and above-threshold at one level may become
blurred, "continuous" and below-threshold at another.

>       what is the intrinsic meaning of "intrinsically meaningful"?
>       The Turing test is an objectively verifiable criterion. How can
>       we objectively verify intrinsic meaningfulness?

We cannot objectively verify intrinsic meaningfulness. The Turing test
is the only available criterion. Yet we can make inferences about it
(for example, that it is unlikely to be present in a thermostat or
lisp code running on a vax). And we have some direct (but subjective)
evidence that it exists in at least one case (namely, our own): We
know the difference between looking up a meaning in an English/English
dictionary versus a Chinese/Chinese dictionary (if we are nonspeakers
of Chinese): The former symbols are meaningful and the latter are
not. We also know that we could "ground" an understanding of Chinese
(by translation) in our prior understanding of English; and we assume
that our understanding of English is grounded in our prior perceptual
learning and understanding of categories in the real world of
objects. Objective evidence of this perceptual grounding is provided
by our ability to discriminate, manipulate, categorize, name and
describe real-world objects and our ability to produce and respond to
names and descriptions meaningfully (i.e., all Turing criteria).

So the empirical question becomes the following: Is a device that has
nothing but symbols and can only manipulate them on the basis of their
shape more likely to be like our own (intrinsically grounded) case, or
more like the Chinese/Chinese dictionary, whose meanings can only be
derived by the mediation of an intrinsically grounded system like our own?

But the issue is ultimately empirical. The logical and methodological
considerations can really only serve to motivate pursuing one empirical
hypothesis rather than another (e.g., top-down symbolic vs. bottom-up
hybrid). The final arbiter is the Total Turing Test. If a pure symbolic
module linked to transducers and effectors turns out to be able to
generate all of our performance capacity, then the grounding problem and
intrinsic intentionality were a red herring. As I make clear in the
paper "Minds, Machines and Searle," this is an empirical, not a
logical question. But on the evidence to date, this outcome looks
highly unlikely, and the obstacle seems to be the problem of bottom-up
grounding of symbols in nonsymbolic representations and in the real world
of objects.

>       Using "analog" to mean "invertible" invites misunderstanding,
>       which invites irrelevant criticism.

I have tried to capture with the invertibility criterion certain
features that may be important (perhaps even unique) to the case of
cognitive modeling -- features that fail to be captured by the
conventional electrical engineering criteria. I have acknowledged all
along that the physically invertible/noninvertible distinction may
turn out to be independent of the A/D distinction, although the
overlap looks significant. And I'm doing my best to sort out the
misunderstandings and irrelevant criticism...

>       Human (in general, vertebrate) visual processing is a dedicated
>       hardwired digital system.  It employs data reduction to abstract such
>       features as motion, edges, and orientation of edges.  It then forms a
>       map in which position is crudely analog to the visual plane, but
>       quantized.  This map is sufficiently similar to maps used in image
>       processing machines so that I can almost imagine how symbols could be
>       generated from it.

I am surprised that you state this with such confidence. In
particular, do you really think that vertebrate vision is well enough
understood functionally to draw such conclusions? And are you sure
that the current hardware and signal-analytic concepts from electrical
engineering are adequate to apply to what we do know of visual
neurobiology, rather than being prima facie metaphors?

>       By the time it gets to perception, it is not invertible, except with
>       respect to what is perceived.  Noninvertibility is demonstrated in
>       experiments in the identification of suspects.  Witnesses can report
>       what they perceive, but they don't always perceive enough to invert
>       the perceived image and identify the object that gave rise to the
>       perception.  If you don't agree, please give a concrete, objectively
>       verifiable definition of "invertibility" that can be used to refute my
>       conclusion. If I am right, human intelligence itself relies on neither
>       analog nor invertible symbol grounding, and therefore artificial
>       intelligence does not require it.

I cannot follow your argument at all. Inability to categorize and identify
is indeed evidence of a form of noninvertibility. But my theory never laid
claim to complete invertibility throughout. (For the disadvantages of
"total invertibility," see Luria's "The Mind of a Mnemonist," or, for a more
literary depiction of the same problem, Borges's "Funes the Memorious." Both
are discussed in a chapter of mine entitled "Metaphor and Mental Duality"
in Simon & Scholes's [eds.] book "Language, Mind and Brain," Academic Press 1978.
See also the literature on eidetic imagery.) Categorization and identification
itself *requires* selective non-invertibility: within-category differences
must be ignored and diminished, while between-category differences must
be selected and enhanced.
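
A one-function sketch in Python may help fix the idea (the wavelength
boundaries are invented): categorization is a deliberately many-to-one
map that discards within-category differences while preserving the
category boundary:

    def categorize(wavelength_nm):
        """A deliberately noninvertible map: many stimuli, few labels."""
        if 565 <= wavelength_nm < 590:
            return "yellow"
        if 590 <= wavelength_nm < 625:
            return "orange"
        return "other"

    assert categorize(570) == categorize(585)  # within-category: collapsed
    assert categorize(585) != categorize(595)  # between-category: preserved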

Although I do my best, it is not always possible to get all the relevant
background material for these Net discussions onto the Net. Sometimes I
must reluctantly refer discussants to a fuller text elsewhere. In
principle, though, I'm prepared to re-present any particular piece of
relevant material here. This particular misunderstanding, though,
sounds like it would call for the exposition of my entire theory of
categorization, which I am reluctant to impose on the entire Net
without a wider demand. So let me just say that invertibility is my
provisional criterion for what counts as an analog transformation, and
that I have claimed that symbolic representations must be grounded in
nonsymbolic ones, which include both invertible (iconic) and
noninvertible (categorical) representations.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: Sun 14 Jun 87 16:42:34-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Reply-to: AIList-Request@STRIPE.SRI.COM
Subject: [mind!harnad@princeton.edu  (Stevan Harnad): Re: The symbol
         grounding problem]

  Date: 12 Jun 87 15:52:40 GMT
  From: mind!harnad@princeton.edu  (Stevan Harnad)

  If discrete photons strike discrete photoreceptors, then discontinuity
  is being transformed into discontinuity. Yet the question can still be
  asked: Is the transformation preserving physical properties such as
  intensity and spatial relations by transforming them to physical
  properties that are isomorphic to them (e.g., intensity to frequency,
  and spatial adjacency to spatial adjacency) as opposed to merely
  "standing for" them in some binary code?

This makes me uncomfortable.  Consider a "hash transformation" that
maps a set of "intuitively meaningful" numeric symbols to a set of
seemingly random binary codes.  Suppose that the transformation
can be computed by some [horrendous] information-preserving
mapping of the reals to the reals.  Now, the hash function satisfies
my notion of an analog transformation (in the signal-processing sense).
When applied to my discrete input set, however, the mapping does not
seem to be analog (in the sense of preserving isomorphic relationships
between pairs -- or higher orders -- of symbolic codes).  Since
information has not been lost, however, it should be possible to
define "relational functions" that are analogous to "adjacency" and
other properties in the original domain.  Once this is done, surely
the binary codes must be viewed as isomorphic to the original symbols
rather than just "standing for them".
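
One concrete instance of such an information-preserving "hash" is
multiplication by an odd constant modulo 2^32, which is a bijection:
the codes look random, yet a "relational function" such as adjacency
remains definable on them through the inverse.  A Python sketch (the
constant is arbitrary; any odd one would do):

    M = 2654435761              # odd, hence invertible mod 2**32
    M_INV = pow(M, -1, 2**32)   # its modular inverse (Python 3.8+)

    def hash_code(x):           # information-preserving "hash"
        return (x * M) % 2**32

    def unhash(c):
        return (c * M_INV) % 2**32

    def adjacent(c1, c2):
        """A 'relational function' on the codes, analogous to adjacency
        in the original domain -- definable only because no information
        was lost."""
        return abs(unhash(c1) - unhash(c2)) == 1

    codes = [hash_code(x) for x in (41, 42, 43)]
    print(codes)                # three seemingly random codes
    assert adjacent(codes[0], codes[1]) and adjacent(codes[1], codes[2])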

The "information" in a signal is a function of your methods for
extracting and interpreting the information.  Likewise the "analog
nature" of an information-preserving transformation is a function
of your methods for decoding the analog relationships.

We should also keep in mind that information theorists have advanced
a great deal since the days of Shannon.  Perhaps they have too limited
(or general!) a view of information, but they have certainly considered
your problem of decoding signal shape (as opposed to detecting modulation
patterns).  I regret that I am not familiar with their results, but
I am sure that methods for decoding both discrete and continuous
information in continuous signals are well studied.  Not that all
the answers are in -- vision workers like myself are well aware that
there can be [obvious] information in a signal that is impossible to
extract without a good model of the generating process.

                                        -- Ken

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jun 17 03:52:50 1987
Date: Wed, 17 Jun 87 03:52:35 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #148
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 17 Jun 87 03:20 EDT
Received: from relay.cs.net by RELAY.CS.NET id ak14824; 16 Jun 87 3:14 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10883; 16 Jun 87 3:13 EDT
Date: Mon 15 Jun 1987 23:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #148
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 148

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 15 Jun 87 13:23:35 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on ailist)


Ken Laws <Laws@Stripe.SRI.Com> on ailist@Stripe.SRI.Com writes:

>       Consider a "hash transformation" that maps a set of "intuitively
>       meaningful" numeric symbols to a set of seemingly random binary codes.
>       Suppose that the transformation can be computed by some [horrendous]
>       information-preserving mapping of the reals to the reals.  Now, the
>       hash function satisfies my notion of an analog transformation (in the
>       signal-processing sense). When applied to my discrete input set,
>       however, the mapping does not seem to be analog (in the sense of
>       preserving isomorphic relationships between pairs -- or higher
>       orders -- of symbolic codes). Since information has not been lost,
>       however, it should be possible to define "relational functions" that
>       are analogous to "adjacency" and other properties in the original
>       domain.  Once this is done, surely the binary codes must be viewed
>       as isomorphic to the original symbols rather than just "standing for
>       them".

I don't think I disagree with this. Don't forget that I bit the bullet
on some surprising consequences of taking my invertibility criterion
for an analog transform seriously. As long as the requisite
information-preserving mapping or "relational function" is in the head
of the human interpreter, you do not have an invertible (hence analog)
transformation. But as soon as the inverse function is wired in
physically, producing a dedicated invertible transformation, you do
have invertibility, even if a lot of the stuff in between is as
discrete, digital and binary as it can be.
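
Put schematically in code (everything here is invented for
illustration): the intermediate stage is as discrete and binary as can
be, yet because the inverse is built into the dedicated system, the
end-to-end transformation is invertible:

    def encode(p):              # sensory projection -> discrete code
        return round(p * 1000)

    def digital_stage(c):       # arbitrary binary shuffling in between
        return c ^ 0xFFFF       # (a self-inverse bit flip, for brevity)

    def decode(c):              # the inverse, "wired in physically"
        return (c ^ 0xFFFF) / 1000

    p = 0.417
    assert decode(digital_stage(encode(p))) == p  # end-to-end invertible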

I'm not unaware of this counterintuitive property of the invertibility
criterion -- or even of the possibility that it may ultimately do the
criterion in as an attempt to capture the essential feature of an analog
transform in general. Invertibility could fail to capture the standard A/D distinction,
but may be important in the special case of mind-modeling. Or it could
turn out not to be useful at all. (Although Ken Laws's point seems to
strengthen rather than weaken my criterion, unless I've misunderstood.)

Note, however, that what I've said about the grounding problem and the role
of nonsymbolic representations (analog and categorical) would stand
independently of my particular criterion for analog; substituting a more
standard one leaves just about all of the argument intact. Some of the prior
commentators (not Ken Laws) haven't noticed that, criticizing
invertibility as a criterion for analog and thinking that they were
criticizing the symbol grounding problem.

>       The "information" in a signal is a function of your methods for
>       extracting and interpreting the information.  Likewise the "analog
>       nature" of an information-preserving transformation is a function
>       of your methods for decoding the analog relationships.

I completely agree. But to get the requisite causality I'm looking
for, the information must be interpretation-independent. Physical
invertibility seems to give you that, even if it's generated by
hardwiring the encryption/decryption (encoding/decoding) scheme underlying
the interpretation into a dedicated system.

>       Perhaps [information theorists] have too limited (or general!)
>       a view of information, but they have certainly considered your
>       problem of decoding signal shape (as opposed to detecting modulation
>       patterns)... I am sure that methods for decoding both discrete and
>       continuous information in continuous signals are well studied.

I would be interested to hear from those who are familiar with such work.
It may be that some of it is relevant to cognitive and neural modeling
and even the symbol grounding problems under discussion here.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 12 Jun 87 18:14:08 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein)
of BBN Laboratories, Inc., Cambridge, MA writes:

>       [1] [The only thing] we learn from the "symbol grounding" problem [is
>       that] you've got to have sense organs.
>       [2] For a system that is not causally isolated from the environment,
>       the symbol-grounding problem now just reduces to the old
>       behavior-generating problem, for, if we could just produce the behavior,
>       there would be no question of the intentionality of the symbols...
>       [3] [But claiming that a] system can't produce intelligent
>       performance *because* it doesn't make its symbols meaningful... has
>       just got things backwards -- if I could produce the behavior, you'd
>       have to admit that its symbols had all the "grounding" they needed
>       for original intentionality.
>       [4] For this reason, I think your animadversions on symbol
>       grounding are largely irrelevant to your position -- the really
>       substantial claims pertain only to "what looks like it's likely to
>       work" for generating intelligent behavior.

[1] No, we don't merely learn that you need sense organs from the symbol
grounding problem; we also learn that the nature of those sense organs,
and their functional inter-relation with whatever else is going on
downstream, may not be as simple as one might expect. The relation may
be non-modular. It may not be just a matter of a simple hookup between
autonomous systems -- sensory and symbolic -- as it is in current toy models.
I agree that the symbol grounding problem does not logically entail
this further conclusion, but it, together with the data, does suggest
it, and why it might be important for generating successful performance.

[2] I completely agree that a system that could pass the Total Turing
Test using nothing but an autonomous symbolic module hooked to simple
transducers would not be open to question about its "intrinsic
intentionality" (at least not from groundedness considerations of the
kind I've been describing here). But there's nothing circular about
arguing that skepticism about the possibility of successfully passing
the Total Turing Test with such a system is dictated in part by
grounding considerations. The autonomy of the symbolic level can be
the culprit in both respects. It can be responsible for the performance
failures *and* for the lack of intrinsic intentionality.

[3] Nor is there anything "backwards" about blaming the lack of
intrinsic intentionality for performance failures. Rather, *you* may be
engaging in counterfactual conditionals here.

[4] The symbol grounding problem can hardly be irrelevant to my
substantive hypotheses about what may work, since it is not only the
motivation for them, but part of the explanation of why and how they
may work.

>       since there isn't necessarily any more loss [in A/D] than there is
>       in a typical A/A transformation, the "degree of invertibility"
>       criterion cross-cuts the intuitive A/D distinction.... suppose we
>       had a digitized image, A, which is of much higher resolution
>       than another analog one, B. A is more invertible since it contains
>       more detail from which to reconstruct the original signal, but B is
>       more "shape-preserving" in an intuitive sense. So, which do you regard
>       as "more analog"?  Which does your theory think is better suited to
>       subserving our categorization performance? If you say B, then
>       invertibility is just not what you're after.

First, if A, the digital representation, is part of a dedicated
system, hardwired to inputs and outputs, and the input stimuli are
invertible, then, as I've said before, the whole system would be "analog"
according to my provisional criterion, perhaps even more analog than
B. If A is not part of a dedicated, physically invertible system, the
question is moot, since it's not analog at all. With equal
invertibility, it is an empirical question which is better suited to
subserve cognition in general, and probably depends on optimality and
capacity considerations. Finally, categorization performance in particular
calls for much more than invertibility, as I've indicated before. Only iconic
representations are invertible. Categorical representations require
selective *noninvertibility*. But that is another discussion...
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 12 Jun 87 21:36:13 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <6521@diamond.BBN.COM>, aweinste@Diamond.BBN.COM (Anders
Weinstein) writes:
> ....
>       (A) The symbols in a purely symbolic system will always be
>           ungrounded because such systems can't generate real performance;
> ...
> It seems to me that .... thesis (A) is the one that makes perfect
> sense ....
>
> ..... I think it's perfectly reasonable to suspect that the
> symbolic approach is insufficient to produce full human performance....

What exactly is this "purely" symbolic approach?  What impure approach
might be necessary?  "Purely symbolic" sounds like a straw man: a
system so purely abstract that it couldn't possibly relate to the real
world, and one that nobody seriously trying to mimic human behavior
would even try to build.

To begin with, any attempt to "produce full human performance" must
involve sensors, effectors, and motivations.  Does "purely symbolic"
preclude any of these?  If not, what is it in the definition of a
"purely symbolic" approach that makes it inadequate to pull these
factors together?

(Why do I so casually include motivations?  I'm an amateur actor.  Not
even a human can mimic another human without knowing about motivations.)

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 12 Jun 87 22:19:48 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <837@mind.UUCP> Stevan Harnad (harnad@mind.UUCP) writes:
>                                 But there's nothing circular about
>arguing that skepticism about the possibility of successfully passing
>the Total Turing Test with such a system is dictated in part by
>grounding considerations. The autonomy of the symbolic level can be
>the culprit in both respects. It can be responsible for the performance
>failures *and* for the lack of intrinsic intentionality.

I'm afraid I still don't understand this.  You write here as if these are
somehow two *different* things. I don't see them that way, and hence find
circularity.  That is, I view intentionality as a matter of rational
behavior. For me, the behavior is primary, and the notion of "symbol
grounding" or "intrinsic intentionality" is conceptually derivative; and I
thought from your postings that you shared this frankly behavioristic
philosophy.

Baldly put, here is the only plausible theory I know of "symbol grounding":

    X has intrinsic intentionality (is "grounded") iff X can pass the TTT.

If you have a better theory, I'd like to hear it, but until then I believe
that TTT-behavior is the very essence of intrinsic intentionality.

Note that since it's the behavior that has conceptual priority, it makes
sense to say that failure on the behavior front is, in a philosophical sense,
the *reason* for a failure to make intrinsic intentionality.  But to say the
reverse is vacuous: failure to make intrinsic intentionality just *is the
same thing* as failure to produce full TTT performance.  I don't see that
you can significantly distinguish these two failings.

So what could it mean to say that symbolic AI must inevitably choke on the
grounding problem? Since grounding == behavioral capability, all this claim
can mean is that symbolic AI won't be able to generate full TTT performance.

I think, incidentally, that you're probably right in this claim. However, I
also think that the supposed "symbol-grounding problem" *is* irrelevant. From
my point of view, it's just a fancy alternative name for the real issue, the
behavior-generating problem.

>[4] The symbol grounding problem can hardly be irrelevant to my
>substantive hypotheses about what may work, since it is not only the
>motivation for them, but part of the explanation of why and how they
>may work.

I still don't see how it explains anything.  The grounding problem *reduces*
to the behavior problem, not the other way around.  To say that your approach
is better grounded is only to say that it may work better (i.e., generate TTT
performance);  there's just no independent content to the claim of
"groundedness". Or do you have some non-behavioral definition of intrinsic
intentionality that I haven't yet heard?

Anders Weinstein
BBN Labs

------------------------------

Date: 13 Jun 87 19:59:12 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein)
of BBN Laboratories, Inc., Cambridge, MA writes:

>       X has intrinsic intentionality (is "grounded") iff X can pass the TTT.
>       I thought from your postings that you shared this frankly behavioristic
>       philosophy... So what could it mean to say that symbolic AI must
>       inevitably choke on the grounding problem? Since grounding == behavioral
>       capability, all this claim can mean is that symbolic AI won't be able
>       to generate full TTT performance. I think, incidentally, that you're
>       probably right in this claim. However,...To say that your approach
>       is better grounded is only to say that it may work better (i.e.,
>       generate TTT performance); there's just no independent content to the
>       claim of "groundedness". Or do you have some non-behavioral definition
>       of intrinsic intentionality that I haven't yet heard?

I think that this discussion has become repetitious, so I'm going to
have to cut down on the words. Our disagreement is not substantive.
I am not a behaviorist. I am a methodological epiphenomenalist.
Intentionality and consciousness are not equivalent to behavioral
capacity, but behavioral capacity is our only objective basis for
inferring that they are present. Apart from behavioral considerations,
there are also functional considerations: What kinds of internal
processes (e.g., symbolic and nonsymbolic) look as if they might work?
and why? and how? The grounding problem accordingly has functional aspects
too. What are the right kinds of causal connections to ground a
system? Yes, the test of successful grounding is the TTT, but that
still leaves you with the problem of which kinds of connections are
going to work. I've argued that top-down symbol systems hooked to
transducers won't, and that certain hybrid bottom-up systems might. All
these functional considerations concern how to ground symbols, they are
distinct from (though ultimately, of course, dependent on) behavioral
success, and they do have independent content.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 14 Jun 87 19:45:33 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <1163@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>>      (A) The symbols in a purely symbolic system ...
>
>What exactly is this "purely" symbolic approach?  What impure approach
>might be necessary?  "Purely symbolic" sounds like a straw man ...

The phrase "purely symbolic" was just my short label for the AI strategy that
Stevan Harnad has been criticizing. Yes this strategy *does* encompass the
use of sensors and effectors and (maybe) motivations. Sorry if the term was
misleading; I was only using it as a pointer.  Consult Harnad's postings for a
fuller characterization.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jun 17 03:53:11 1987
Date: Wed, 17 Jun 87 03:52:51 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #149
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 17 Jun 87 03:23 EDT
Received: from relay.cs.net by RELAY.CS.NET id ae14895; 16 Jun 87 3:26 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10935; 16 Jun 87 3:25 EDT
Date: Mon 15 Jun 1987 23:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #149
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 149

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 14 Jun 87 15:13:34 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <843@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
>
> Intentionality and consciousness are not equivalent to behavioral
> capacity, but behavioral capacity is our only objective basis for
> inferring that they are present. Apart from behavioral considerations,
> there are also functional considerations: What kinds of internal
> processes (e.g., symbolic and nonsymbolic) look as if they might work?
> and why? and how? The grounding problem accordingly has functional aspects
> too. What are the right kinds of causal connections to ground a
> system? Yes, the test of successful grounding is the TTT, but that
> still leaves you with the problem of which kinds of connections are
> going to work. I've argued that top-down symbol systems hooked to
> transducers won't, and that certain hybrid bottom-up systems might. All
> these functional considerations concern how to ground symbols, they are
> distinct from (though ultimately, of course, dependent on) behavioral
> success, and they do have independent content.

Harnad's terminology has proved unreliable: analog doesn't mean
analog, invertible doesn't mean invertible, and so on.  Maybe
top-down doesn't mean top-down either.

Suppose we create a visual transducer feeding into an image
processing module that could delineate edges, detect motion,
abstract shape, etc.  This processor is to be built with a
hard-wired capability to detect "objects" without necessarily
finding symbols for them.

Next let's create a symbol bank, consisting of a large storage
area that can be partitioned into spaces for strings of
alphanumeric characters, with associated pointers, frames, and
anything else you think will work to support a sophisticated
knowledge base.  The finite area means that memory will be
limited, but human memory can't really be infinite, either.

Next let's connect the two: any time the image processor finds
an object, the machine makes up a symbol for it.  When it finds
another object, it makes up another symbol and links that symbol
to the symbols for any other objects that are related to it in
ways that it knows about (some of which might be hard-wired
primitives): proximity in time or space, similar shape, etc.  It
also has to make up symbols for the relations it relies on to
link objects.  I'm over my head here, but I don't think I'm
asking for anything we think is impossible.  Basically, I'm
looking for an expert system that learns.
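
As a rough sketch of the hookup being proposed (all names invented, and
the image processor stubbed out as calls to a single function), the
machine makes up a symbol for each detected object and links it to the
symbols for related objects:

    import itertools

    _ids = itertools.count()
    symbols = {}                # detected object -> made-up symbol
    links = []                  # (symbol, relation, symbol) triples

    def see(obj, related_to=()):
        """Called whenever the image processor reports an object."""
        if obj not in symbols:
            symbols[obj] = "OBJ-%d" % next(_ids)
        for other, relation in related_to:
            links.append((symbols[obj], relation, symbols[other]))
        return symbols[obj]

    see("cup")
    see("saucer", related_to=[("cup", "near-in-space")])
    print(symbols)              # {'cup': 'OBJ-0', 'saucer': 'OBJ-1'}
    print(links)                # [('OBJ-1', 'near-in-space', 'OBJ-0')]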

Now we decide whether we want to play a game, which is to make
the machine seem human, or whether we want the machine to
exhibit human behavior on the same basis as humans, that is, to
survive.  For the game, the essential step is to make the
machine communicate with us both visually and verbally, so it
can translate the character strings it made up into English, so
we can understand it and it can understand us.  For the survival
motivation, the machine needs a full set of receptors and
effectors, and an environment in which it can either survive or
perish, and if we built it right it will learn English for its
own reasons.  It could also endanger our survival.

Now, Harnad, Weinstein, anyone: do you think this could work,
or do you think it could not work?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 14 Jun 87 14:15:55 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <835@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:
>
> >     Human visual processing is neither analog nor invertible.
>
> Nor understood nearly well enough to draw the former two conclusions,
> it seems to me. If you are taking the discreteness of neurons, the
> all-or-none nature of the action potential, and the transformation of
> stimulus intensity to firing frequency as your basis for concluding
> that visual processing is "digital," the basis is weak, and the
> analogy with electronic transduction strained.

No, I'm taking more than that as the basis.  I don't have any
names handy, and I'm not a professional in neurobiology, but
I've seen many articles in Science and Scientific American
(including a classic paper titled something like "What the
frog's eye tells the frog's brain") that describe the flow of
visual information through the layers of the retina, and through
the layers of the visual cortex, with motion detection, edge
detection, orientation detection, etc., all going on in specific
neurons.  Maybe a neurobiologist can give a good account of what
all that means, so we can guess whether computer image
processing could emulate it.

> >     what is the intrinsic meaning of "intrinsically meaningful"?
> >     The Turing test is an objectively verifiable criterion. How can
> >     we objectively verify intrinsic meaningfulness?
>
> We cannot objectively verify intrinsic meaningfulness. The Turing test
> is the only available criterion. Yet we can make inferences...

I think that substantiates Weinstein's position: we're back to
the behavior-generating problem.

> ....: We
> know the difference between looking up a meaning in an English/English
> dictionary versus a Chinese/Chinese dictionary (if we are nonspeakers
> of Chinese): The former symbols are meaningful and the latter are
> not.

Not relevant.  Intrinsically, words in both languages are
equally meaningful.

> >     Using "analog" to mean "invertible" invites misunderstanding,
> >     which invites irrelevant criticism.
>
> ..... I have acknowledged all
> along that the physically invertible/noninvertible distinction may
> turn out to be independent of the A/D distinction, although the
> overlap looks significant. And I'm doing my best to sort out the
> misunderstandings and irrelevant criticism...

Then please stop using the terms analog and digital.

>
> >     Human (in general, vertebrate) visual processing is a dedicated
> >     hardwired digital system.  It employs data reduction to abstract such
> >     features as motion, edges, and orientation of edges.  It then forms a
> >     map in which position is crudely analog to the visual plane, but
> >     quantized.  This map is sufficiently similar to maps used in image
> >     processing machines so that I can almost imagine how symbols could be
> >     generated from it.
>
> I am surprised that you state this with such confidence. In
> particular, do you really think that vertebrate vision is well enough
> understood functionally to draw such conclusions? ...

Yes. See above.

> ... And are you sure
> that the current hardware and signal-analytic concepts from electrical
> engineering are adequate to apply to what we do know of visual
> neurobiology, rather than being prima facie metaphors?

Not the hardware concepts.  But I think some principles of
information theory are independent of the medium.

> >     By the time it gets to perception, it is not invertible, except with
> >     respect to what is perceived.  Noninvertibility is demonstrated in
> >     experiments in the identification of suspects.  Witnesses can report
> >     what they perceive, but they don't always perceive enough to invert
> >     the perceived image and identify the object that gave rise to the
> >     perception....
> >     .... If I am right, human intelligence itself relies on neither
> >     analog nor invertible symbol grounding, and therefore artificial
> >     intelligence does not require it.
>
> I cannot follow your argument at all. Inability to categorize and identify
> is indeed evidence of a form of noninvertibility. But my theory never laid
> claim to complete invertibility throughout.....

First "analog" doesn't mean analog, and now "invertibility"
doesn't mean complete invertibility.  These arguments are
getting too slippery for me.

> .... Categorization and identification
> itself *requires* selective non-invertibility: within-category differences
> must be ignored and diminished, while between-category differences must
> be selected and enhanced.

Well, that's the point I've been making.  If non-invertibility
is essential to the way we process information, you can't say
non-invertibility would prevent a machine from emulating us.

Anybody can do hand-waving.  To be convincing, abstract
reasoning must be rigidly self-consistent.  Harnad's is not.
I haven't made any assertions as to what is possible.  All
I'm saying is that Harnad has come nowhere near proving his
assertions, or even making clear what his assertions are.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 15 Jun 87 01:43:55 GMT
From: berleant@sally.utexas.edu  (Dan Berleant)
Subject: Re: The symbol grounding problem


It is interesting that some (presumably significant) visual processing
occurs by graded potentials without action potentials. Receptor cells
(rods & cones), 'horizontal cells' which process the graded output of
the receptors, and 'bipolar cells' which do further processing, use no
action potentials to do it. This seems to indicate the significance of
analog processing to vision.

There may also be significant invertibility at these early stages of
visual processing in the retina: One photon can cause several hundred
sodium channels in a rod cell to close. Such sensitivity suggests a need
for precise representation of visual stimuli, which in turn suggests that
the representation might be invertible.

Furthermore, the retina cannot be viewed as a module, only loosely
coupled to the brain. The optic nerve, which does the coupling, has a
high bandwidth and thus carries much information simultaneously along
many fibers. In fact, the optic nerve carries a topographic
representation of the retina. To the degree that a topographic
representation is an iconic representation, the brain thus receives an
iconic representation of the visual field.

Furthermore, even central processing of visual information is
characterized by topographic representations. This suggests that iconic
representations are important to the later stages of perceptual
processing. Indeed, all of the sensory systems seem to rely on
topographic representations (particularly touch and hearing as well as
vision).

An interesting example in hearing is direction perception. Direction
seems to be found, as I understand it, largely by processing the
difference in time between when a sound reaches one ear and when it
reaches the other. The resulting direction is presumably an invertible
representation of that time difference.
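
For the idealized far-field case the relation is simple: the
interaural time difference is roughly dt = (d/c)*sin(theta), so the
direction theta = arcsin(c*dt/d) is recoverable from dt.  A quick
Python check (head width and speed of sound are rough textbook values):

    import math

    c = 343.0   # speed of sound in air, m/s (approximate)
    d = 0.21    # ear separation, m (approximate)

    def direction_from_itd(dt):
        """Invert an interaural time difference back to an azimuth,
        using dt = (d / c) * sin(theta)."""
        return math.degrees(math.asin(c * dt / d))

    print(direction_from_itd(0.0))     #  0 degrees: straight ahead
    print(direction_from_itd(0.0003))  # ~29 degrees to one side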

Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo,kpno,ctvax}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu

------------------------------

Date: 14 Jun 87 15:03:51 GMT
From: harwood@cvl.umd.edu  (David Harwood)
Subject: Re: The symbol grounding problem

In article <843@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
(... replying to Anders Weinstein ...who wonders "Where's the beef?" in
Steve Harnad's conceptual and terminological salad ...; uh - let me be first
to prophylactically remind us - lest there be any confusion and forfending
that he should perforce of intellectual scruple must need refer to his modest
accomplishments  - Steve Harnad is editor of Behavioral and Brain Sciences,
and I am not, of course. We - all of us - enjoy reading such high-class
stuff...;-)

        Anyway, Steve Harnad replies to A.W., re "Total Turing Tests",
behavior, and the (great AI) "symbol grounding problem":

>I think that this discussion has become repetitious, so I'm going to
>have to cut down on the words.

        Praise the Lord - some insight - by itself, worthy of a pass of
the "Total Turing Test."

>... Our disagreement is not substantive.
>I am not a behaviorist. I am a methodological epiphenomenalist.

        I'm not a behaviorist, you're not a behaviorist, he's not a
behaviorist too ... We are all methodological solipsists hereabouts
on this planet, having already, incorrigibly, failed the "Total Turing
Test" for genuine intergalactic First Class rational beings, but so what?
(Please, Steve - this is NOT a test - I repeat - this is NOT a test of
your philosophical intelligence. It is an ACTUAL ALERT of your common
sense, not to mention, sense of humor. Please do not solicit BBS review of
this thesis...)

>... Apart from behavioral considerations,
>there are also functional considerations: What kinds of internal
>processes (e.g., symbolic and nonsymbolic) look as if they might work?
>and why? and how? The grounding problem accordingly has functional aspects
>too. What are the right kinds of causal connections to ground a
>system? Yes, the test of successful grounding is the TTT, but that
>still leaves you with the problem of which kinds of connections are
>going to work. I've argued that top-down symbol systems hooked to
>transducers won't, and that certain hybrid bottom-up systems might. All
>these functional considerations concern how to ground symbols, they are
>distinct from (though ultimately, of course, dependent on) behavioral
>success, and they do have independent content.
>--
>
>Stevan Harnad                                  (609) - 921 7771
>{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
>harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

        You know what the real problem with your postings is - it's
what I would call "the symbol grounding problem". You want to say obvious
things in the worst possible way, or else say abstract things in the
worst possible way.  And ignore what others say. Also, for purposes of
controversial public discussion, ignore scientific 'facts' (eg about
neurologic perceptual equivalence), and standard usage of scientific
terminology and interpretation of theories. (Not that these are sacrosanct.)
        It seems to me that your particular "symbol grounding problem"
is indeed the sine qua non of the Total Turing Test for "real"
philosophers of human cognition. As I said, we are all methodological
solipsists hereabouts. However, if you want AI funding from me - I want to
see what real computing system, using your own architecture and object code
of at least 1 megabyte, has been designed by you. Then we will see how
your "symbols" are actually grounded, using the standard, naive but effective
denotational semantics for the "symbols" of your intention, qua "methodological
epiphenomenalist."

David Harwood

------------------------------

Date: 15 Jun 87 03:32:07 GMT
From: berleant@sally.utexas.edu  (Dan Berleant)
Subject: Re: The symbol grounding problem

In article <835@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>We cannot objectively verify intrinsic meaningfulness. The Turing test
>is the only available criterion.

Yes, the Turing test is by definition subjective, and also subject to
variable results from hour to hour even from the same judge.

But I think I disagree that intrinsic meaningfulness cannot be
objectively verified. What about the model theory of logic?


Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo,kpno,ctvax}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jun 17 03:53:26 1987
Date: Wed, 17 Jun 87 03:53:04 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #150
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 17 Jun 87 03:25 EDT
Received: from relay.cs.net by RELAY.CS.NET id ab14916; 16 Jun 87 3:32 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10975; 16 Jun 87 3:29 EDT
Date: Mon 15 Jun 1987 23:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #150
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 150

Today's Topics:
  Binding - comp.theory Newsgroup,
  Theory - Information Flow & The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 11 Jun 87 21:17:29 GMT
From: ramesh@cvl.umd.edu  (Ramesh Sitaraman)
Subject: New newsgroup: comp.theory

To those of you who haven't already noticed ....

A new newsgroup, "comp.theory", has commenced. This group presumably
deals with all aspects of theoretical computer science,
including complexity theory, algorithm analysis, logic and
theory of computation, denotational semantics, computational
geometry, etc.


                                Make merry,

                                Ramesh

------------------------------

Date: Wed 10 Jun 87 12:31:39-EDT
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Re: Information flow discussions


 Anthony Pelletier writes:
 P.S. I think a lot about information flow problems and would enjoy
 discussions on that...if anyone wants to chat.

For a real "juicy" discussion of information flow in non-linear
systems see:

"Strange Attractors, Chaotic Behavior, and Information Flow"
Robert Shaw, Z. Naturforsch. 36a, 80-112 (1981)

This discusses the information-flow characteristics of non-linear
systems in order to gain insight into how such systems self-organize.
(This self-organization aspect of non-linear dynamical systems
is an aspect of neural networks. See for example Kohonen's work
on self-organizing feature maps in "Self-Organization and
Associative Memory" Springer-Verlag 1984. This feature map stuff
is a type of unsupervised learning.)

Albert Boulanger
BBN Labs

------------------------------

Date: 15 Jun 87 02:37:00 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


In two consecutive postings marty1@houdi.UUCP (M.BRILLIANT)
of AT&T Bell Laboratories, Holmdel wrote:

>       the flow of visual information through the layers of the retina,
>       and through the layers of the visual cortex, with motion detection,
>       edge detection, orientation detection, etc., all going on in specific
>       neurons... Maybe a neurobiologist can give a good account of what
>       all that means, so we can guess whether computer image
>       processing could emulate it.

As I indicated the last time, neurobiologists don't *know* what all
those findings mean. It is not known how features are detected and by
what. The idea that single cells are doing the detecting is just a
theory fragment, and one that has currently fallen on hard times. Rivals
include distributed networks (of which the cell is just a component),
or spatial frequency detectors, or coding at some entirely different
level, such as continuous postsynaptic potentials, local circuits,
architectonic columns or neurochemistry. Some even think that the
multiple analog retinas at various levels of the visual system (12 on
each side, at last count) may have something to do with feature
extraction. One cannot just take current neurophysiological data and
replace the nonexistent theory by preconceptions from machine vision
-- especially not by way of justifying the machine-theoretic concepts.

>>              >[SH:] my theory never laid claim to complete invertibility
>>              >throughout.
>
>       First "analog" doesn't mean analog, and now "invertibility"
>       doesn't mean complete invertibility.  These arguments are
>       getting too slippery for me... If non-invertibility is essential
>       to the way we process information, you can't say non-invertibility
>       would prevent a machine from emulating us.

I have no idea what proposition you think you were debating here. I
had pointed out a problem with the top-down symbolic approach to
mind-modeling -- the symbol grounding problem -- which suggested that
symbolic representations would have to be grounded in nonsymbolic
representations. I had also sketched a model for categorization that
attempted to ground symbolic representations in two nonsymbolic kinds
of representations -- iconic (analog) representations and categorical
(feature-filtered) representations. I also proposed a criterion for
analog transformations -- invertibility. I never said that categorical
representations were invertible or that iconic representations were
the only nonsymbolic representations you needed to ground symbols. Indeed,
most of the CP book under discussion concerns categorical representations.

>       All I'm saying is that Harnad has come nowhere near proving his
>       assertions, or even making clear what his assertions are...
>       Harnad's terminology has proved unreliable: analog doesn't mean
>       analog, invertible doesn't mean invertible, and so on.  Maybe
>       top-down doesn't mean top-down either...
>       Anybody can do hand-waving.  To be convincing, abstract
>       reasoning must be rigidly self-consistent.  Harnad's is not.
>       I haven't made any assertions as to what is possible.

Invertibility is my candidate criterion for an analog transform. Invertible
means invertible, top-down means top-down. Where further clarification is
needed, all one need do is ask.

Now here is M. B. Brilliant's "Recipe for a symbol-grounder" (not to be
confused with an assertion as to what is possible):

>       Suppose we create a visual transducer... with hard-wired
>       capability to detect "objects"... Next let's create a symbol bank
>       Next let's connect the two... I'm over my head here, but I don't
>       think I'm asking for anything we think is impossible. Basically,
>       I'm looking for an expert system that learns... the essential step
>       is to make the machine communicate with us both visually and verbally,
>       so it can translate the character strings it made up into English, so
>       we can understand it and it can understand us. For the survival
>       motivation, the machine needs a full set of receptors and
>       effectors, and an environment in which it can either survive or
>       perish, and if we built it right it will learn English for its
>       own reasons. Now, Harnad, Weinstein, anyone: do you think this
>       could work, or do you think it could not work?

Sounds like a conjecture about a system that would pass the TTT.
Unfortunately, the rest seems far too vague and hypothetical to respond to.

If you want me to pay attention to further postings of yours, stay
temperate and respectful as I endeavor to do. Dismissive rhetoric will not
convince anyone, and will not elicit substantive discussion.

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 15 Jun 87 05:21:36 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


berleant@ut-sally.UUCP (Dan Berleant) of U. Texas CS Dept., Austin, Texas
has posted this welcome reminder:

>       the retina cannot be viewed as a module, only loosely
>       coupled to the brain. The optic nerve, which does the coupling, has a
>       high bandwidth and thus carries much information simultaneously along
>       many fibers. In fact, the optic nerve carries a topographic
>       representation of the retina. To the degree that a topographic
>       representation is an iconic representation, the brain thus receives an
>       iconic representation of the visual field.

>       Furthermore, even central processing of visual information is
>       characterized by topographic representations. This suggests that iconic
>       representations are important to the later stages of perceptual
>       processing. Indeed, all of the sensory systems seem to rely on
>       topographic representations (particularly touch and hearing as well as
>       vision).

As I mentioned in my last posting, at last count there were 12 pairs
of successively higher analog retinas in the visual system. No one yet
knows what function they perform, but they certainly suggest that it
is premature to dismiss the importance of analog representations in at
least one well optimized system...

>       Yes, the Turing test is by definition subjective, and also subject to
>       variable results from hour to hour even from the same judge.
>       But I think I disagree that intrinsic meaningfulness cannot be
>       objectively verified. What about the model theory of logic?

In earlier postings I distinguished between two components of the
Turing Test. One is the formal, objective one: Getting a system to generate
all of our behavioral capacities. The second is the informal,
intuitive (and hence subjective) one: Can a person tell such a device
apart from a person? This version must be open-ended, and is no better
or worse than -- in fact, I argue that it is identical to -- the
real-life turing-testing we do of one another in contending with the
"other minds" problem.

The subjective verification of intrinsic meaning, however, is not done
by means of the informal turing test. It is done from the first-person
point of view. Each of us knows that his symbols (his linguistic ones,
at any rate) are grounded, and refer to objects, rather than being
meaningless syntactic objects manipulated on the basis of their shapes.

I am not a model theorist, so the following reply may be inadequate, but it
seems to me that the semantic model for an uninterpreted formal system
in formal model-theoretic semantics is always yet another formal
object, only its symbols are of a different type from the symbols of the
system that is being interpreted. That seems true of *formal* models.
Of course, there are informal models, in which the intended interpretation
of a formal system corresponds to conceptual or even physical objects. We
can say that the intended interpretation of the primitive symbol tokens
and the axioms of formal number theory are "numbers," by which we mean
either our intuitive concept of numbers or whatever invariant physical
property quantities of objects share. But such informal interpretations
are not what formal model theory trades in. As far as I can tell,
formal models are not intrinsically grounded, but depend on our
concepts and our linking them to real objects. And of course the
intrinsic grounding of our concepts and our references to objects is
what we are attempting to capture in confronting the symbol grounding
problem.

I hope model theorists will correct me if I'm wrong. But even if the
model-theoretic interpretation of some formal symbol systems can truly
be regarded as the "objects" to which it refers, it is not clear that
this can be generalized to natural language or to the "language of
thought," which must, after all, have Total-Turing-Test scope, rather
than the scope of the circumscribed artificial languages of logic and
mathematics. Is there any indication that all that can be formalized
model-theoretically?
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Fri Jun 19 03:10:42 1987
Date: Fri, 19 Jun 87 03:10:33 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #151
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Fri, 19 Jun 87 03:06 EDT
Received: from relay.cs.net by RELAY.CS.NET id ac02471; 18 Jun 87 15:42 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa10652; 18 Jun 87 15:41 EDT
Date: Wed 17 Jun 1987 23:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #151
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Thursday, 18 Jun 1987     Volume 5 : Issue 151

Today's Topics:
  Seminars - Mechanization of Programmer's Knowledge (MCC) &
    Partial Order Programming (MCC) &
    AI at Vanderbilt & Comparative Induction (NASA Ames) &
    Default Reasoning and Stereotypes in User Modelling (UPenn),
  Conference - Last Call for AAAI-87 Volunteers &
    CADE-9: Automated Deduction &
    Office Knowledge &
    Second Eurographics Workshop on Intelligent CAD Systems

----------------------------------------------------------------------

Date: Mon 15 Jun 87 14:12:25-CDT
From: Ellie Huck <AI.ELLIE@MCC.COM>
Subject: Seminar - Mechanization of Programmer's Knowledge (MCC)

Please join the AI group for the following speaker:


            ON THE MECHANIZATION OF PROGRAMMER'S KNOWLEDGE

                        Henryk Jan Komorowski
                          Harvard University

                           June 17 - 10:00
                       MCC Balcones Auditorium

What do experienced programmers know, and how can this knowledge be
mechanized so that it can be used by a computer?  This talk presents an
informal overview of the foundations of mechanical support for software
design.  The goal of the mechanization is to provide an intelligent
assistant for the programmer that can uncover flaws in the design rather
than automatically generate programs.  What a programmer knows is divided
into knowledge of data structures, recursive schemata, assimilation rules,
and the process of designing a program, which is similar to the extension
of a theory.  A prototype system now implemented provides salient advice,
despite its limited knowledge base.


June 17 - 10:00
MCC Balcones Auditorium

------------------------------

Date: Mon 15 Jun 87 14:50:06-CDT
From: Ellie Huck <AI.ELLIE@MCC.COM>
Subject: Seminar - Partial Order Programming (MCC)

Please join the AI Group for the following speaker:

                         PARTIAL ORDER PROGRAMMING
                              D. Stott Parker
                      UCLA Computer Science Department

                           June 19 - 10:00
                       MCC Balcones Auditorium

We introduce a declarative programming paradigm that describes
computation with partial orders.  A partial order program corresponds
to a collection of constraints

                            u  >=  C(u)

where >= is a partial order on a domain of `objects' and `values',
u is an object, and C(u) is an object or a value.

Semantics of such a program consist of assignments of values to the
objects u that satisfy the inequalities.  When C is a monotone and
continuous function, fixpoint semantics of the program may be
obtained easily and naturally.
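
A minimal, hypothetical Python sketch of this fixpoint semantics
(illustrative only, not Parker's system): reading >= as superset
inclusion on the powerset lattice, the least assignment satisfying
u >= C(u) for a monotone C is the least fixpoint of C, reached by
Kleene iteration from the bottom element.

# Kleene iteration: bottom, C(bottom), C(C(bottom)), ... until stable.
def least_fixpoint(c, bottom=frozenset()):
    u = bottom
    while True:
        v = c(u)
        if v == u:            # stable: u = C(u), hence u >= C(u)
            return u
        u = v

# Example constraint: transitive closure of a relation.  C adds the
# base edges plus one composition step, so C is monotone in u.
edges = {(1, 2), (2, 3), (3, 4)}
def c(u):
    step = frozenset((a, d) for (a, b) in u for (x, d) in u if b == x)
    return frozenset(edges) | step | u

print(sorted(least_fixpoint(c)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]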

The partial order programming paradigm has interesting properties:

(1)  It generalizes various computational paradigms (logic,
     functional, object-oriented, and others) in a clean way.

(2)  It takes thorough advantage of known results for continuous
     functionals on partial orders, providing a clear semantics
     for the paradigm.

(3)  It presents a framework that may be more generally acceptable
     for dealing with `cognitive' computation problems, including
     natural language processing and knowledge representation.

(4)  It coincides with recent work on relaxation solutions of a
     variety of problems, including consistent labelling, path
     problems, and linear algebraic systems.


June 19 - 10:00
MCC Balcones Auditorium

------------------------------

Date: Tue, 16 Jun 87 15:15:50 PDT
From: JARED%PLU@ames-io.ARPA
Subject: Seminars - AI at Vanderbilt & Comparative Induction (NASA
         Ames)

                   NASA, Ames Research Center
                   Intelligent Systems Forum

                        TWO SPEAKERS:

                   Dr. Arthur J. Brodersen
                Center for Intelligent Systems
                    Vanderbilt University

         Expert Systems Research at Vanderbilt University

Abstract:
The current research activities at the Center for Intelligent Systems
at Vanderbilt University will be discussed.  These include
knowledge-based systems for test technologies, intelligent tutorial
systems for simulation tools, training systems, knowledge retrieval
systems, and diagnostic and repair systems.


                        David Hartzband
                 Digital Equipment Corporation
   Chief Scientist, Artificial Intelligence Technology Group
                             and
             Visiting Scholar, Stanford University

         Comparative Induction Methods for Problem Solving and
                       (Some) Learning



Date:     Thursday, June 18, 1987
Time:     2:00 to 4:30 PM
Location: Bldg. 245, Space Science Auditorium
Inquiries: David Jared, (415) 694-6525,  jared@ames-pluto.arpa


VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.

------------------------------

Date: 15 Jun 87 12:15:25 EDT
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Default Reasoning and Stereotypes in User
         Modelling (UPenn)


                        SPECIAL SEMINAR

SPEAKER:  Timothy Finin
          Computer and Information Science
          University of Pennsylvania, Philadelphia, PA

WHEN:     Thursday, June 18, 1987, 10:00 am

WHERE:    Doherty Hall 3313

TOPIC:    DEFAULT REASONING AND STEREOTYPES IN USER MODELLING


This talk discusses the application of various kinds of default reasoning in
systems that must maintain models of their users.  In particular, we
describe a general architecture of a domain-independent system for building
and maintaining long-term models of individual users.  The user modelling
system is intended to provide a well-defined set of services for an
application system which is interacting with various users and has a need to
build and maintain models of them.  As the application system interacts with
a user, it can acquire knowledge of him and pass that knowledge on to the
user model maintenance system for incorporation.  We describe a prototype
general user modelling system (hereafter called GUMS1) which we have
implemented in Prolog.  This system satisfies some of the desirable
characteristics we discuss.
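
A minimal, hypothetical Python sketch of the general idea -- stereotype
defaults that are defeated by explicit observations about a particular
user.  The names below are invented for illustration; they are not
GUMS1's (which is implemented in Prolog).

# Stereotypes supply default attribute values; facts asserted by the
# application override them (a simple form of default reasoning).
STEREOTYPES = {
    "novice": {"knows_regex": False, "wants_verbose_help": True},
    "expert": {"knows_regex": True,  "wants_verbose_help": False},
}

class UserModel:
    def __init__(self, stereotype):
        self.stereotype = stereotype
        self.facts = {}                    # explicit, acquired knowledge

    def tell(self, attribute, value):
        # Incorporate knowledge acquired by the application.
        self.facts[attribute] = value

    def ask(self, attribute):
        # Explicit facts defeat stereotype defaults.
        if attribute in self.facts:
            return self.facts[attribute]
        return STEREOTYPES[self.stereotype].get(attribute)

u = UserModel("novice")
u.tell("knows_regex", True)                # observation overrides default
print(u.ask("knows_regex"))                # True  (explicit fact)
print(u.ask("wants_verbose_help"))         # True  (stereotype default)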

------------------------------

Date: 16 Jun 87 20:18:07 GMT
From: feifer@locus.ucla.edu
Subject: Conference - Last call for AAAI-87 volunteers

Due to some last minute cancellations we have a few
openings for volunteers for AAAI-87.

Please see the original posting below for more information.
If interested, please respond immediately.


ANNOUNCEMENT:
Student Volunteers Needed for
Artificial Intelligence Conference
AAAI-87


AAAI-87 (American Association for Artificial Intelligence) will
be held July 13-17, 1987 in beautiful Seattle, Washington.
Student volunteers are needed to help with local arrangements
and staffing of the conference.  To be eligible for a Volunteer
position, an individual must be an undergraduate or graduate
student in any field at any college or university.

This is an excellent opportunity for students to participate in
the conference.   Volunteers receive FREE registration at AAAI-87,
conference proceedings, "STAFF" T-shirt, and are invited to the
volunteer party. More importantly, by participating as a volunteer,
you become more involved and meet students and researchers with
similar interests.

Volunteer responsibilities are varied, including conference
preparation, registration, staffing of sessions and tutorials
and organizational tasks.  Each volunteer will be assigned
twelve (12) hours.

If you are interested in participating in AAAI-87 as a Student
Volunteer, apply by sending the following information:

Name
Electronic Mail Address
USMail Address
Telephone Number(s)
Dates Available
Student Affiliation
Advisor's Name

to:

feifer@locus.ucla.edu

 or

Richard Feifer
UCLA
Center for the Study of Evaluation
145 Moore Hall
Los Angeles, California  90024


Thanks, and I hope you join us this year!

Richard Feifer
Student Volunteer Coordinator
AAAI-87 Staff



- Richard

------------------------------

Date: Mon, 15 Jun 87 20:07:49 cdt
From: stevens@anl-mcs.ARPA (Rick L. Stevens)
Subject: Conference - CADE-9: Automated Deduction


                Preliminary Announcement and Call for Papers

         9th International Conference on Automated
                         Deduction

                      May 23-26, 1988



CADE-9 will be held at Argonne National Laboratory (near Chicago)
in celebration of the 25th anniversary of the discovery of the
resolution principle at Argonne in the summer of 1963.  Papers are
invited in the following or related fields:

Theorem Proving                  Logic Programming
Unification                      Deductive Databases
Term Rewriting                   ATP for Non-Standard Logics
Program Verification             Inference Systems


The Program Committee includes:

Peter Andrews                            Hans-Jorgen Ohlbach
W.W. Bledsoe                             Ross Overbeek
Alan Bundy                               William Pase
Seif Haridi                              Jorg Siekmann
Larry Henschen                           Jim Williams
Jean-Louis Lassez                        Mark Stickel
Dallas Lankford
Ewing Lusk
Michael McRobbie


Papers should be sent to arrive before November  23rd,  1987
to

        Ewing Lusk and Ross Overbeek, chairmen
        CADE-9
        Mathematics and Computer Science Division
        Argonne National Laboratory
        Argonne, IL 60439

------------------------------

Date: Mon, 15 Jun 87 17:17:53 edt
From: rba@flash.bellcore.com (Robert B. Allen)
Subject: Conference - Office Knowledge


            Office Knowledge: Representation, Management and Utilization
                                University of Toronto
                             IFIP WG8.4 Workshop Program

                For information contact: Fred Lochovsky, fred@csri.toronto.edu

                              Monday, August 17th, 1987
          8:00-9:00 Registration
          9:00-9:15 Workshop Opening Remarks
          9:15-10:45 Session: Invited Talk
             Objects and Things.   D. Tsichritzis,  Universite  de  Geneve,
               Switzerland
          11:15-12:45 Session: Supporting Organizational Activities
             Ubik: A System for Conceptual and Organizational  Development.
               P. de Jong, MIT, U.S.A.
             KNOOM - KNowledge  Oriented  Office  Model  Representation  of
               Knowledge  in  the  Office.   M. Hofmann, Universitaet Wien,
               Austria
             OTM: A Language  for  Representing  Concurrent  Office  Tasks.
               J. Hogg, University of Toronto, Canada
          2:00-3:30 Session: Invited Talk
             Representing  Office  Work   with   Goals   and   Constraints.
               W.B. Croft, University of Massachusetts, U.S.A.
          4:00-5:30   Session: Representing,   Querying   and    Generating
                              Office Objects
             Time  Management  in  the  Office-net  System.    R. Maiocchi,
               R. Zicari,   Politecnico   di   Milano,   Italy,  M. Fugini,
               Universita di Brescia, Italy
             Towards  a  Graphic  Query  Interface  for  Complex   Objects.
               G. Lausen, Universitaet Mannheim, West Germany, A. Oberweis,
               Technische Hochschule Darmstadt, West Germany
             Knowledge Representation and Utilization in  Automatic  Office
               Form  Generation.   K. Watabe, K. Tsuruoka, NEC Corporation,
               Japan
          5:30-6:30 Reception
          6:30-7:30 Demonstration
             Meta-Data for Automating the Management of Office Information.
               R.E.A. Mason, A. Benjamin, J.R. Tessier, Online People Inc.,
               Canada
          _________________________________________________________________
                             Tuesday, August 18th, 1987
          9:00-10:30 Session: Invited Talk
             Organizational Semantics.  C. Hewitt, MIT, U.S.A.
          11:00-12:30 Session: Problem Solving
             An   Office   Environment   to   Support   Problem    Solving.
               P.W.G. Bots,   H.G. Sol,  Delft  University  of  Technology,
               Netherlands
             Generic   Knowledge   in   Office   Activities.    A.A. Araya,
               M.J. Stefik, Xerox PARC, U.S.A.
             EXPERTNET: An Approach to Resource Sharing  on  a  Network  of
               Workstations.    A. Allam,  Northern  Telecom  Canada  Ltd.,
               Canada, G.M. White, University of Ottawa, Canada
          2:00-3:30 Session: Text and Pictures
             Semantics and Conceptual Modelling of  Documents.   F. Barbic,
               S. Daneluzzi,    F. Garzotto,    S. Mainetti,    P. Paolini,
               Politecnico di Milano, Italy
             Knowledge-Based Text Processing in  Office  Environments:  The
               Text   Condensation  System  TOPIC.   U. Hahn,  Universitaet
               Passau, West Germany, U. Reimer, Universitaet Konstanz, West
               Germany
             Knowledge  Base  for  Storage  and  Retrieval   of   Pictures.
               B. Beetz, SEL Research Center, West Germany
          4:00-5:30 Session: Poster Session
             Artificial Intelligence and Organizational  Design:  Prospects
               of  Integrating  Two  Perspectives.   U. Frank, Universitaet
               Mannheim, West Germany
             Intermediate  Knowledge  Representation  for  Extended  Office
               Systems.  E.S. Cordingley, University of Surrey, England
             Intelligent  Interfaces  for   Office   Information   Systems.
               B.C. Desai,   Concordia   University,   Canada,  C. Frasson,
               J. Vaucher, Universite de Montreal, Canada
             Managing  Office  Knowledge  through  Conceptual   Structures.
               G. Berg-Cross, Advanced Decision Systems, U.S.A.
             Picture Management on Optical Disks: A Practical  Approach  on
               Micro-computers.   S. Miranda,  N. Le  Thanh,  A.C. Salgado,
               E. Borelli-Vittori, Universite de Nice, France
             Managing    Replicas    in    Distributed    Telephone/Address
               Directories.   H.M. Gladney,  IBM  Almaden  Research Center,
               U.S.A.
          7:30 Banquet
          _________________________________________________________________
                            Wednesday, August 19th, 1987
          9:00-10:30 Session: Invited Talk
             NICK:  Intelligent  Computer   Supported   Cooperative   Work.
               C. Ellis, MCC, U.S.A.
          11:00-12:30 Session: Office Communication
             Solving the Connection  Problem.   M.S. Mazer,  University  of
               Toronto, Canada
             Viewing  Communication  as  a  Problem  Solving  Activity:  An
               Enrichment   Towards  Supporting  Cooperative  Office  Work.
               C.C. Woo, F.H. Lochovsky, University of Toronto, Canada
             CHAOS: A Knowledge-Based System for Conversing Inside Offices.
               F. De    Cindio,    C. Simone,   R. Vassallo,   A. Zanaboni,
               Universita di Milano, Italy
          2:00-3:30 IFIP WG8.4 Business Meeting

------------------------------

Date: Mon, 15 Jun 87 17:29:16 +0200
From: mcvax!cwi.nl!tomi@seismo.CSS.GOV (Tetsuo Tomiyama)
Reply-to: mcvax!cwi.nl!tomi@seismo.CSS.GOV
Subject: Conference - Second Eurographics Workshop on Intelligent CAD
         Systems

                         Call For Papers

     SECOND EUROGRAPHICS WORKSHOP ON INTELLIGENT CAD SYSTEMS

                    -Implementational Issues-

               APRIL 12-15, 1988, THE NETHERLANDS


                          Organized by
  CENTRE FOR MATHEMATICS AND COMPUTER SCIENCE (CWI), AMSTERDAM

                          Sponsored by
                          EUROGRAPHICS


AIM AND SCOPE

     This is the second workshop in a series of three Eurographics
     workshops on Intelligent CAD Systems, which have the following
     main topics:
          - 1st, 1987: Theoretical and methodological aspects.
          - 2nd, 1988: Implementational issues.
          - 3rd, 1989: Practical experiences and evaluation.
     Since applying knowledge engineering to CAD systems seems very
     promising as a way to solve the problems of conventional CAD
     systems, it has drawn attention not only from CAD researchers but
     also from AI researchers.  The first workshop, held on April
     21-24, 1987, in the Netherlands, aimed at discussing the results
     and problems in this highly interesting field.  We have realized
     that ad hoc approaches will eventually result in increased
     complexity of CAD applications and that we need a robust
     theoretical basis for development.

     This second workshop in 1988 is planned to discuss
     implementational issues and to clarify problems associated with
     developing intelligent CAD systems based on those theoretical and
     methodological considerations.  The scope of the workshop
     includes, but is not limited to:
          1) Role   of  theories  to  implement  intelligent  CAD
             systems.
          2) Implementations  of  theories  for  intelligent  CAD
             systems.
          3) Architecture of intelligent CAD systems.
          4) Techniques and tools to  implement  intelligent  CAD
             systems.
          5) Acquisition and maintenance of design knowledge.
          6) Innovative  and   large-scale   implementations   of
             intelligent CAD systems.
          7) Problems and  future  tasks  in  implementations  of
             intelligent CAD systems.
     We are especially interested in reports on how theoretical work
     influenced implementations.

SCHEDULE FOR THE SECOND WORKSHOP

     November 1, 1987: Deadline   for   extended   abstracts  and
                       position papers.
     December 1987: Notification of acceptance for presentation.
     February 1988: Acceptance of participation.
     April 12-15, 1988: Workshop (Full papers are submitted  just
                        before the workshop).
     May 1988: Deadline for final manuscripts for publication.

SERIES SCHEDULE

     Approximately 15 reviewed papers will be presented  in  this
     second  workshop.   Participants will be limited to about 50
     based on invitation.  Intended authors and participants  are
     invited  to  submit  extended  abstracts or position papers.
     The results of  this  series  of  three  workshops  will  be
     published  by Springer-Verlag as Eurographics Seminar Books.
     The report  on  the  first  workshop  held  in  April  1987,
     "Intelligent  CAD  Systems 1: Theoretical and Methodological
     Aspects," will be published in August 1987.

     This series of workshops is being organized in cooperation with
     the IFIP Working Group 5.2 Workshops on Intelligent CAD Systems,
     but with different scopes.

ORGANIZATION

Co-Chairmen
     P.J.W. ten Hagen (CWI, NL)
     T. Tomiyama (The University of Tokyo, J)
Technical Secretary
     P.J. Veerkamp (CWI, NL)
Workshop Secretary
     E. Both (CWI, NL)
Program Committee
     A.M. Agogino (University of California, Berkeley, USA)
     V. Akman (CWI, NL)
     F. Arbab (University of Southern California, USA)
     P. Bernus (Hungarian Academy of Sciences, H)
     A. Bijl (University of Edinburgh, UK)
     J. Encarnacao (TH Darmstadt, D)
     S.J. Fenves (Carnegie Mellon University, USA)
     D. Gossard (MIT, USA)
     F. Kimura (The University of Tokyo, J)
     T. Kjellberg (Royal Institute of Technology, S)
     G.A. Kramer (Schlumberger Palo Alto Research, USA)
     M. Mac an Airchinnigh (University of Dublin, IRL)
     K. MacCallum (University of Strathclyde, UK)
     S. Murthy (IBM Thomas J. Watson Research, USA)
     F.J. Schramel (Philips, NL)
     D. Sriram (MIT, USA)
     W. Strasser (Universitaet Tuebingen, D)
     T. Takala (Technical University of Helsinki, SF)
     F. Tolman (TNO, NL)

INFORMATION

     Please submit 5 copies of an extended abstract or a position
     paper of up to 1,000 words (figures and references do not count)
     on A4 sheets before November 1, 1987, to: (Submission by
     electronic mail is accepted)
          Ms. Elisabeth Both
          Centre for Mathematics and Computer Science
          Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
          Telephone: (overseas) +31-20-592-4171
          (from the Netherlands) 020-592-4171
          Telex: 12571 mactr nl
           Electronic Mail: pauljan@cwi.nl (Internet, Bitnet),
                         ...!mcvax!pauljan (Usenet)

     The extended abstract or position paper should  contain  the
     following information:
        - Name, Address (Postal, Phone, Telex, E-mail), Keywords,
          References.
        - Statements on how you define "design" and  "intelligent
          CAD systems."

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jun 18 16:56:15 1987
Date: Thu, 18 Jun 87 16:56:06 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #152
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 18 Jun 87 16:54 EDT
Received: from relay.cs.net by RELAY.CS.NET id ac28701; 18 Jun 87 3:37 EDT
Received: from [10.4.0.2] by RELAY.CS.NET id aa04521; 18 Jun 87 3:38 EDT
Date: Thu 18 Jun 1987 00:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #152
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Thursday, 18 Jun 1987     Volume 5 : Issue 152

Today's Topics:
  Queries - Nano-Engineering & AI Research in Network Management &
    K-B Expert Systems for Manufacturing &
    Information Management in Software Engineering & Unix Lisps in C,
  AI Tools - ID3 vs C4 & Smalltalk-80 & ML,
  Comments - AI Models in Biology

----------------------------------------------------------------------

Date: 16 Jun 1987 09:10-EDT
From: DAVSMITH@A.ISI.EDU
Subject: Nano-Engineering

The recent discussion of the $6M man reminded me of an oddity
which someone out there in Net-land might be able to clarify.  Early
one morning on NPR (National Public Radio) I was surprised to hear
a feature from someone at the MIT  AI Lab entitled Nano-Engineering.

I hasten to add that it was several months ago, but _not_ on April 1st,
although the following synopsis may lead you to believe such.  The
general thesis was a genetic engineering exercise whereby a little genetic
robot would be created to "assemble" genes.  The really interesting
part was the observation that since these things would naturally be
very small, their first assignment would be to assemble clones of themselves.

Recall that I said this was early in the morning,  but I did check
with another NPR fan in our office who also heard the same feature.

Can anyone confirm (a) that this was perpetrated and (b) that
it came from MIT?

David Smith - DAVSMITH@A.ISI.EDU

------------------------------

Date: 16 Jun 87 21:33:30 GMT
From: dvorak@im4u.utexas.edu  (Daniel L. Dvorak)
Subject: AI research in network management


This is a brainstorming exercise, folks --- all ideas are welcome.
I'm trying to select a PhD research topic in artificial intelligence
that is applicable to network management (of data or voice networks)
or, more liberally, the management of distributed computing environments.

Network management, roughly, is concerned with the operation, administration
and maintenance of communication networks, whether it be the campus network
here at The University of Texas at Austin or the nationwide telephone network.
The term encompasses issues such as congestion control, fault diagnosis,
capacity planning, security, availability, etc.

My questions for you are:
-- What are the important unsolved (or poorly solved) problems here
   that might yield to AI?  Please be specific.
-- What AI research issues should be tested in this domain?
-- Are there any papers that you would recommend to me?
--
-----
Dan Dvorak          UUCP:  {harvard,ihnp4,seismo}!ut-sally!im4u!dvorak
(512) 472-6671      ARPA:  dvorak@im4u.utexas.edu

------------------------------

Date: 16 Jun 87 13:31:30 GMT
From: pt!andrew.cmu.edu!dg1v+@cs.rochester.edu  (David Greene)
Subject: K-B Expert Sys for Manufacturing


Could anyone tell me where I might obtain the following proceedings:

Knowledge-Based Expert Systems for Manufacturing.  Proceedings
                of the Winter Annual Meeting of the American Society of
                Mechanical Engineers (ASME), S. C-Y. Lu and R. Komanduri
                (eds.), Anaheim (CA), December 7-12, 1986.


Please leave mail for:

    dg1v@andrew.cmu.edu          David Greene
                                 GSIA
                                 Carnegie Mellon Univ.
                                 Pittsburgh, PA 15213

------------------------------

Date: 16 Jun 87 17:41:17 GMT
From: pollux.usc.edu!garg@OBERON.USC.EDU  (Pankaj Garg)
Subject: Information management in software engg.


Hi,

I am doing a survey on information management in the development, use,
and maintenance of large scale software systems. I know about several
efforts, but would like to be comprehensive, hence this posting.

If you know of any efforts in databases, information science, or
knowledge representation, in this direction, please let me know.

I can post summaries to those interested.

                regards...


                                ...pankaj


US MAIL: Computer Science Department
         Sal 200
         University of Southern California
         Los Angeles, CA 90089-0782

E-Mail: garg@pollux.usc.edu or garg@cse.usc.edu

Phone:  (213)743-7995, 735-2843

------------------------------

Date: Wed, 17 Jun 87 13:21:22 BST
From: A system manager <root%maths.qmc.ac.uk@Cs.Ucl.AC.UK>
Subject: Unix Lisps in C?

I am seeking information about Unix lisps written entirely in
(hopefully not VAX-specific) C. Pointers to such beasts would be
gratefully received. Anybody who wants a copy of information thus
obtained should let me know - I will be happy to forward it.
        Malcolm MacCallum (mm@maths.qmc.ac.uk)
                           Relays: UKACRL (Bitnet), ucl-cs (arpa)

------------------------------

Date: 14 Jun 87 21:05:39 GMT
From: mcvax!ukc!stc!praxis!gerry@seismo.css.gov  (Gerry Wolff)
Subject: Re: ID3 vs C4

In article <114@upba.UUCP> damon@upba.UUCP (Damon Scaggs) writes:

>I understand that Ross Quinlan, author of the ID3 classification algorithm
>has developed a better version with the designation C4.  I am looking for
>any papers or references about this new algorithm as well as any comments
>about what it does better.

I can't speak for C4, but I will claim, immodestly, that an inductive
learning program I wrote (and reported) a few years ago is, in certain
respects, more sophisticated than ID3.  In particular, it integrates
the learning of segmental structure with the learning of disjunctive
(class) structure.  The program (called SNPR) also has the ability
to generalize structures and to correct overgeneralizations
*without correction by a 'teacher' or the provision of 'negative'
samples*.

The reference is: Wolff J G (1982).  Language acquisition, data
compression and generalization.  Language & Communication 2, 57-89.
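
For readers who know ID3 only by name, here is a minimal, hypothetical
Python sketch of its core splitting criterion, information gain; this
is not Quinlan's code and says nothing about what C4 adds.

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits.
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def information_gain(examples, attribute, label="class"):
    # Expected reduction in entropy from splitting on `attribute`.
    labels = [e[label] for e in examples]
    remainder = 0.0
    for v in {e[attribute] for e in examples}:
        subset = [e[label] for e in examples if e[attribute] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

# ID3 greedily splits on the attribute with the highest gain:
data = [
    {"outlook": "sunny", "windy": False, "class": "play"},
    {"outlook": "sunny", "windy": True,  "class": "stay"},
    {"outlook": "rain",  "windy": False, "class": "play"},
    {"outlook": "rain",  "windy": True,  "class": "stay"},
]
print(information_gain(data, "windy"))    # 1.0: windy determines the class
print(information_gain(data, "outlook"))  # 0.0: outlook is uninformative here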

*-----------------------------------------------------------------------*
| Dr Gerry Wolff                |       Phone: (44) 225 335855          |
| Praxis Systems plc            |       UUCP: gerry@praxis.co.uk        |
| 20 Manvers Street             |       Telex: 445848 PRAXIS G          |
| Bath                          |       Facsimile Groups 2 & 3          |
| BA1 1PX                       |           (44) 225 65205              |
| UK                            |                                       |
*-----------------------------------------------------------------------*

------------------------------

Date: Mon, 15 Jun 87 16:04:21 PDT
From: "William J. Fulco" <lcc.bill@CS.UCLA.EDU>
Subject: Smalltalk-80 for Sun 3 (and others)

I saw a really nice system (I mean REALLY nice - with good color support)
from a Xerox PARC marketing spinoff at the 1986 AAAI show.  It was running
on a Sun 3/260 and it really sizzles.....

I believe they are going to show this and some other versions (a la Mac)
at the U of WA/Seattle AAAI '87 show in July.  I'll be there, Mac II in
hand, drooling !!!!!

I have been waiting patiently (2 months) for information from:

          ParcPlace Systems
          3330 Coyote Hill Road
          Palo Alto, CA 94304
          (415) 859-1000

(bill)

P.S. I would be interested in any implementations you find out about.

------------------------------

Date: 16 Jun 87 01:14:50 GMT
From: mcvax!diku!carllp@seismo.css.gov  (Carl-Lykke Pedersen)
Subject: Re: ML programming, anybody?


Yes, we are some people at diku (datalogisk institut ved
Københavns Universitet -> Computer Science Department at
the University of Copenhagen) who are trying to work with SML.

We are supposed to make a user manual for the implementation -
but ....

Right now I'm working on a self-interpreter for SML, and it
seems to be ok.

We are using a version from Edinburgh.  It's rather old, but we
are having some problems getting a newer version.

Regards
Carl-Lykke

------------------------------

Date: 11 Jun 87 16:10:31 GMT
From: ptsfa!hoptoad!academ!uhnix1!uhnix2!bchso@ames.arpa  (Dan
      Davison)
Subject: Re: Taking AI models and applying them to biology...

In article <1331@sigi.Colorado.EDU> pell@boulder.Colorado.EDU (Anthony
Pelletier) writes:
>P.S. I think alot about information flow problems and would enjoy
>discussions on that...if anyone wants to chat.

Do you know about the "Matrix of Biological Knowledge Workshop" in Santa Fe,
NM, July 13-August 14 this year?  One of the subjects is "information flow
from DNA to cells," led by Dickerson of UCLA, Hershman, also of UCLA, and
Smith from MBCRR at Harvard.

For information, contact Ms. Ginger Richardson at The Santa Fe Institute,
P.O. Box 9020, Santa Fe, N.M. 87504-9020; phone 505-984-8800.


dr. dan davison/ Dept of Biochemical and Biophysical Sciences/ U. of Houston
bitnet: bchs6\@uhupvm1.bitnet           |      4800 Calhoun/ Houston, Tx 77004
arpanet: davison\@sumex-aim.stanford.edu|uucp:...rice!academ!uhnix1!uhnix2!bchso

------------------------------

Date: 15 Jun 87 13:06:19 GMT
From: edwards@unix.macc.wisc.edu  (mark edwards)
Subject: Re: Taking AI models and applying them to biology...

In article <7416@boring.cwi.nl> lambert@cwi.nl (Lambert Meertens) writes:
:| > This description of the human memory system, though cloaked in
:| > vaguer terms,
:| > corresponds more or less one-to-one with the traditional computer
:| > architecture we all know and love.  To wit:
:|
:|   [description deleted]
:|
:| > At least this far, this theory appears to owe a lot to computer science.
:| > Granted, there is lots of empirical evidence in favour, but we all know
:| > how a little evidence can go far too far towards developing an analogy.
:The main similarity appears to be that several levels of memory can be
:discerned, but the suggested analogy in function is a bit far-fetched.
:
:It is perhaps worth pointing out that much of the current models in
:cognitive psychology can already be found in the pioneering work of Otto
:Selz (Muenchen, 1881 - Auschwitz, 1943), antedating the computer era.


    What? You cite facts from the pre-computer age? Shame, shame. Don't
  you know that life began with the creation of the computer, as well
  as all the other sciences! Without the computer, all other life would
  cease to exist!

    It's a sad fact that the above holds true for many computer scientists,
  especially those in AI. Many still believe that the sacred words AI were
  really coined in the late fifties, and that LISP and Leibniz have
  no connection, when in fact my prof has given references from a Latin
  book dating back to the 13th century, with the Latin words for AI.
    The state of the art of computer science is in bad shape when the
  computer wheel must be re-invented every year because the CS people refuse
  to read any book that seemingly has nothing to do with CS or computers.

    Thanks for the reference. I may be the only one who benefits from
  it, because those CS'ers practicing the art would certainly declare it
  blasphemous and maybe just short of heresy.

  I may be stoned for this.....

  mark
--
    edwards@unix.macc.wisc.edu
    {allegra, ihnp4, seismo}!uwvax!uwmacc!edwards
    UW-Madison, 1210 West Dayton St., Madison WI 53706

------------------------------

Date: 15 Jun 87 03:47:26 GMT
From: ihnp4!alberta!mnetor!utzoo!utgpu!utcsri!utegc!utai!tjhorton@ucbv
      ax.Berkeley.EDU
Subject: Re: Taking AI models and applying them to biology...

>lambert@cwi.nl (Lambert Meertens) writes:
>It is perhaps worth pointing out that much of the current models in
>cognitive psychology can already be found in the pioneering work of Otto
>Selz (Muenchen, 1881 - Auschwitz, 1943), antedating the computer era.

1943 was at least 7 years after Turing published his paper
(fifty years ago, last November) and 5 years after Shannon
completed his master's thesis on switching theory.  Although I
don't know Selz, his life definitely extended into the dawn
of the "computer era".  It's interesting - do these models
of his predate these "computeresque" notions?

Timothy J Horton <tjhorton@utai.toronto.edu>

------------------------------

Date: 16 Jun 87 18:04:06 GMT
From: pyramid!prls!philabs!aecom!diaz@decwrl.dec.com  (Dizzy Dan)
Subject: Re: Taking AI models and applying them to biology...

In article <395@uhnix2.UUCP>, bchso@uhnix2.UUCP (Dan Davison) writes:
> Do you know about the "Matrix of Biological Knowledge Workshop" in Santa Fe,
> NM
> July 13-August 14 this year?  One of the subjects is "information flow from
> DNA to cells" lead by Dickerson of UCLA, Hershman, also UCLA, and Smith from
> MBCRR at Harvard.
>
> For information, contact Ms. Ginger Richardson at The Santa Fe Institute,
> P.O. Box 9020, Santa Fe, N.M. 87504-9020; phone 505-984-8800.
>

Sorry gang, but applications for the Matrix Workshop were due in April.
If you are interested, the Santa Fe Institute may be able to put you in
touch with some of the faculty running the workshops.
--
5'gtacggagc dn/dx = Dan Diaz    (philabs!aecom!diaz)
            Department of Molecular Biology & Snake Oil Dynamics
            Albert Slimestein College of Medicine ctataacagcta 3'

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jun 20 03:24:19 1987
Date: Sat, 20 Jun 87 03:24:13 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #153
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Sat, 20 Jun 87 03:20 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa11864; 20 Jun 87 1:33 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa26357; 20 Jun 87 1:27 EDT
Date: Fri 19 Jun 1987 22:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #153
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Saturday, 20 Jun 1987     Volume 5 : Issue 153

Today's Topics:
  Queries - Computer Composition of Music &
    AI in Criminal Investigation &
    Expert Systems in Process Control,
  AI Tools - Unix LISPs in C,
  Speculation - Nanotechnology,
  Psychology - Why Did The $6,000,000 Man Run So Slowly?

----------------------------------------------------------------------

Date: 17 Jun 87 19:50:07 GMT
From: pwa-b!mmintl!johnt@gr.utah.edu  (John Tangney)
Subject: Computer composition of music


Composing music -- procedurally:


Until 1981 I had been doing some research on computer composition
of music.  I have now taken up where I left off.  Even back then
I was not up to date with the latest advances.  A lot must have
happened in this field since then.

I want to contact people who know something about composing by
computer.  Some of the researchers I read about (like Max
Mathews, Lejaren Hiller, Iannis Xenakis, Stephen Smoliar to name
a few off the top of my head) must still be out there.  Also,
people like myself, who do this in their spare time, must have
ideas, suggestions, sources of information.  What about journals,
books, papers on the subject?

If anyone in net.land knows anything about computer composition
of music, or knows anyone who does, then I beg you to let me
know.  E-mail is most sensible.  Who knows?  Maybe we could end
up with our own news.group!  A phone call or Snailmail reply
would be most welcome too.

John Tangney                ...inhp4!philabs!pwa-b!mmintl!johnt
52 Oakland Ave, East Hartford, CT 06108.  Phone: (203) 522-2116

------------------------------

Date: 19 Jun 87 08:21 PDT
From: gaska.pasa@Xerox.COM
Subject: Use of AI in Criminal Investigation

Does anyone have any pointers to papers, books, persons, etc. that deal
with the use of AI in criminal investigation and forensic science? Any
and all leads will be greatly appreciated.

Len Gaska
GASKA@PASA.XEROX.COM

------------------------------

Date: 18 Jun 1987 11:58:37 EST
From: Herve.Lambert@PS3.CS.CMU.EDU
Subject: Expert Systems in Process Control

I am trying to find literature about actually operational expert systems in
process control.  All I know of is the example of PICON used at a Texaco
refinery.  Any information or pointers would be very much appreciated...
If I get interesting enough info, and if some people express the desire to
have the results of my query posted, I will do so...

Thanks in advance

- Herve

Net-address: herve@ps3.cs.cmu.edu

  [How about this blurb from Business Week, "The 'Renaissance Man' of
  Expert Systems?", Emily T. Smith, May 11, 1987, p. 141:

    The trouble with using so-called expert system computer programs
    in the factory is that very few manufacturing operations involve
    only one realm of expertise.  It's tough enough getting two experts
    in the same field to agree, let alone a gaggle of experts from
    different disciplines.  So Major Stephen R. LeClair, head of
    research in artificial intelligence for manufacturing at the
    Materials Laboratory at Wright-Patterson Air Force Base, decided
    it was time to devise a new type of expert system -- one that
    could embrace multiple "domains" of expertise, automatically
    resolve any conflicts, and "learn" from the process.

    In its first real-world test, LeClair's multiexpert knowledge
    system (MKS) recently turned in a stunning performance.  It
    discovered, on its own, that the accepted guidelines for curing
    complex plastics composites are all wet.  The aerospace industry
    has been taking 12 hours to bake a 256-layer, graphite-reinforced
    lamination used for airframe parts.  By synthesizing the knowledge
    from various fields, MKS came up with a complicated scheme for
    curing the composite in less than three hours.  No one believed
    it could work, but it does.  LeClair asserts that MKS may similarly
    confound conventional wisdom in other process-control applications.

  The same page has another short report about a system that measures
  rough gemstones (other than diamonds), plans the optimal cuts, and
  cuts the stones.  It reduces wasted stone by 10%, cost by 70%,
  and makes marginal stones usable.  -- KIL]

------------------------------

Date: Fri, 19 Jun 87 12:11:16 EDT
From: dml@nadc.arpa (D. Loewenstern)
Subject: Unix LISPs in C

In response to your request for Unix LISPs written entirely in C, I
believe I can recommend Kyoto Common Lisp.  It has no real editor (it
shells out to vi!!)  but it implements nearly the entirety of Common
LISP.  The compiler translates to C, then invokes the C compiler.  I
know of VAX, ECLIPSE, and Sun versions.  Write to:
IBUKI
399 Main Street
Los Altos, CA 94022

David Loewenstern
Naval Air Development Center
code 7013
Warminster, PA 18974-5000

<dml@nadc.arpa>

------------------------------

Date: Thu 18 Jun 87 17:21:23-CDT
From: Jonathan Slocum <AI.Slocum@MCC.COM>
Subject: nanomachinery

The book "Engines of Creation," by one K. Eric Drexler, describes this
technology and discusses the societal ramifications of its introduction.
Whether one believes in the possibility of such things or not (Drexler
is a persuasive advocate), it makes for good reading in my opinion.  He
was (is?), I believe, associated with MIT in some way (don't have the
book with me, so can't refer to the jacket) -- perhaps a student??

-Jonathan Slocum

------------------------------

Date: 18 Jun 87 07:35:44 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Nano-Engineering

   Date: 16 Jun 1987 09:10-EDT
   From: DAVSMITH@A.ISI.EDU

   The recent discussion of the $6M man reminded me of an oddity
   which someone out there in Net-land might be able to clarify.  Early
   one morning on NPR (National Public Radio) I was surprised to hear
   a feature from someone at the MIT  AI Lab entitled Nano-Engineering.

                * * *

   Can anyone confirm (a) that this was perpetrated and (b) that
   it came from MIT?

Its proponents call it Nanotechnology.  The best-known spokesman
seems to be Eric Drexler, who has written a book about it called
"Engines of Creation."  I think it's from MIT Press.  Below I have
included an announcement of a two-day symposium that was held during
IAP (Independent Activities Period, known as "January" to the world
outside MIT).  As you can see from the header of the message, there
is a mailing list called nanotechnology@oz.ai.mit.edu; from outside
MIT, a better bet would be nanotechnology@ai.ai.mit.edu.

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

Date: Fri, 16 Jan 87 02:35 EST
From: Christopher Fry <cfry@OZ.AI.MIT.EDU>
Subject: Nanotechnology Symposium
To: nanotechnology@OZ.AI.MIT.EDU, MACROMOLECULES-MIT@OZ.AI.MIT.EDU,
        ROBOTICS-SEMINARS@OZ.AI.MIT.EDU, *BBOARD@OZ.AI.MIT.EDU

Exploring Nanotechnology
An IAP 87 Symposium

All technology rests ultimately on our ability to arrange atoms.
Foreseeable technological advances will enable us to build devices
to atomic specifications.  This "nanotechnology" will have profound
consequences, forcing a reevaluation of our expectations regarding the next
several decades.  In the symposium, we will explore paths to the
development of nanotechnology, consequences of the technology in
various disciplines, and we will critically examine the premises of
these assertions via panel discussions which will include experts
in several fields.

Tuesday, 20 January 1987, 10-250
10:00 - 11:00 am  Overview: Eric Drexler (BS '77, MS '79)
will describe various paths to the
development of replicating assembler systems, capable of manufacturing
complex components to atomic specification.  Some potential
applications, such as mechanical nanocomputers, and their
consequences will be discussed.  We strongly recommend you attend this talk
in order to follow the subsequent discussions in context.

11:05 - 11:45 am Materials Science and Protein Engineering: Kevin Ulmer
will discuss the protein engineering techniques which could be used to
create new alloys and composites.  New materials made
to atomic specifications promise order of magnitude improvements in
performance.  One consequence is space transportation costs equivalent
to current airline costs.

Noon - 1:00 pm Lunch Break

1:00 - 1:40 pm Panel Discussion I.
A panel of experts will discuss the technical
feasibility of various aspects of nanotechnology, including consideration
of the time frame.  A panel moderator will take questions from the
audience.

1:45 - 2:25 pm Economics:  David Friedman
will discuss the consequences of
nanotechnology, such as extreme decentralization of the economy.
On-site, personal manufacturing stations could virtually eliminate
mass production.  What will happen to our economy during the transition
to this technology?

2:30 - 3:10 pm Society, Technology and Policy:  Arthur Kantrowitz
will share his thoughts on
how society may be affected, and what kind of future may be in store
for the human race.  How can our government adapt to this new technology
and what legislation, if any, should be enacted to control its development?

3:10 - 3:25  Break.

3:25 - 4:05 pm Thought and Intelligence:  Marvin Minsky
will speak on intelligent
systems which could employ Avogadro's number of parallel nanocomputers.
Achieving artificial intelligence by mimicking human brain architecture
is a rapid route to true AI with nanotechnology.

4:10 - 4:40 pm  Concluding Points:  Eric Drexler
will wrap up by describing life
extension possibilities using cell repair machines.

4:10 - 5:00 pm Panel Discussion II.
A panel of experts will discuss the
societal implications of nanotechnology, including steps we might take
to avoid some of the dangerous consequences of nanotechnology.  A panel
moderator will take questions from the audience.

Thursday, 22 January 1987  7:30 - 10:00 pm  Advanced Topics:
NE43-773
As an extension to the symposium we will hold a special session
during the regular meeting time of the MIT Nanotechnology
Study Group.  We will discuss, in depth, critical issues regarding the
development of nanotechnology such as control of assemblers, guidance
of technology development, and prevention
of abuse.  Eric Drexler will be with us.  Recommended only for those who
attend the symposium on Tuesday, or who have attended NSG introductory lectures
in the past.

Sponsored by the MIT Nanotechnology Study Group,
the Dept. of Applied Biological Sciences,
the Artificial Intelligence Laboratory,
the Office of the Associate Provost,
the Graduate Student Council,
the Dept. of Materials Science and Engineering,
and the Dept. of Political Science.

Special thanks to the AI Lab for its generous support of this activity.

Contact cfry@MIT-OZ

------------------------------

Date: 18 Jun 87 08:53:01 CDT (Thu)
From: ernst%home%ti-csl.csnet@RELAY.CS.NET
Subject: Nanotechnology


The "nano-engineering" that David Smith heard about on an
early-morning NPR show is, indeed, no joke.  Its chief proponent is
K. Eric Drexler, who describes the theory in his 1985 book _Engines of
Creation_.  He is associated with MIT, and he has a stong following
there.  In particular, Marvin Minsky wrote the forward to the
above-mentioned book and spoke, along with Eric Drexler and others, at
a recent day-long symposium on nano-technology at MIT.

The idea behind nanotechnology is the creation of tiny machines which
would be built up molecule by molecule by "molecular assemblers",
which would function much like DNA or RNA in fishing for the right
component to add to a structure.  Because of their small size, their
manipulators would move a million times a second, resulting in
extraordinarily quick construction.  Mechanical nanocomputers (that
is, machines containing tiny gears made of a handful of atoms, on the
order of Hillis's mechanical tic-tac-toe player), orders of magnitude
more powerful than current machines yet small enough to fit in
dust-speck-sized nanomachines, would carry instructions and direct
work.

Drexler envisions the construction of assemblers within a few decades
as a result of advances in bioengineering and other sciences.  It is a
technology he believes will powerfully leverage off itself: after the
first assembler is built, uncountable trillions more will follow
almost immediately, and scientific breakthroughs in many fields (all
of which will be able to use nanotechnology or its products as a tool)
will be made in days rather than years.

Drexler's book is more about what changes will be made in society with
the advent of nanomachines than about their technical aspects; after all,
no one is close to the advances he envisions.  He discusses jet engines
built in hours, self-repairing machines, AI workstations of
unprecedented power, and world hypertext systems as well as more
sinister possibilities like the capability to build tiny airborne
surveillance devices or supergerms that could destroy life on earth in
hours.

Although much of the material is hard to believe, I recommend the book
for its interesting mix of philosophy and forward-looking scientific
thought (or science fiction, call it what you like).

                                        -Michael Ernst

MIT AI Lab                      Texas Instruments AI Lab
mernst@oz.ai.mit.edu            ernst%home%ti-csl@csnet-relay.arpa
...!eddie!mernst

The opinions expressed above are not only not those of my employer,
they may well not be my own.

------------------------------

Date: Mon, 15 Jun 87 14:48 EDT
From: Nichael Cramer <nichael@JASPER.PALLADIAN.COM>
Reply-to: Nichael Cramer <NICHAEL%JASPER@LIVE-OAK.LCS.MIT.EDU>
Subject: Re: Why Did The $6,000,000 Man Run So Slowly?

>>
>>    Date: Fri, 12 Jun 87 00:51:41 EDT
>>    From: tim@linc.cis.upenn.edu (Tim Finin)
>>    Subject: why did the $6,000,000 man run so slowly?
>>

I had always assumed that he ran slowly for the same reason that the
people on "Kung Fu" always fought so slowly; namely that it's
technically much easier to depict graphic, physical motion (and
violence) in this way.  You have the first actor throwing punches that
actually connect with the second actor's jaw, except that he's moving
more slowly in real time, and so not crippling the second actor with
every blow.  Once you slow this down a lot, the viewer loses all sense
of how much the time is really altered; i.e. the slow motion camera
technique masks the slowed down "acting".

In the present case, slow motion has the effect of distorting your
time sense and allowing the film makers to use other (cheaper?)
methods to suggest high-speed to the viewer, e.g. swooshing sounds or
tense music.

(With regard to these non-visual cues used to suggest high speed: as
others have pointed out, watch the opening of "Star Trek" with the
sound turned off.  The Enterprise, which would normally be sweeping
across the screen at Warp N, will, with the usual swooshing sound
missing, simply creep across the screen.)



                                                        NICHAEL

------------------------------

Date: 15 Jun 87 19:36:28 GMT
From: tektronix!teklds!zeus!bobr@ucbvax.Berkeley.EDU  (Robert Reed)
Subject: Re: Why did the six-million dollar man run so slowly?

Because it was cheaper to take slow-motion footage to show SOMETHING was
happening than to make a believable high-speed effect.  Of course, they
could have taken blue-screen shots of Steve Austin running normally and
composited in a high-speed background, but many of the shots involved his
feet.  Making a believable shot under those conditions would have been a
lot more expensive.

It is interesting to note that the recent reunion of the "bionic family"
represented the new generation of bionic technology by having his son blur
(it actually looked like defocused multiple exposures) during the slow
motion "high speed" running shots.
--
Robert Reed, Tektronix CAE Systems Division, bobr@zeus.TEK

------------------------------

Date: Mon, 15 Jun 87 17:15:04 edt
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: Re:  AIList Digest   V5 #145

Incidentally.... Re: Dr. Who's TARDIS. I've decided most of the
discussions were wrong. Few people considered the function of the
`relative dimensions stabilizer circuits' which are intended to
compensate for dimensional anomalies. It would be QUITE possible to
have the inside view of the TARDIS look either miniaturized or like
a small window into a larger room. One should recall that anomalies in
the circuit can cause the TARDIS inhabitants to actually BE smaller
when they emerge. Anyway... wrong discussion. `pop'

Re: bionics. It has been my belief for some time that the mind
operates using movie techniques when examining moving image memories.
That is, we employ cuts, zooms, view angles, props, etc. in such
memory recording and dreams. It would seem reasonable that we have
borrowed this acceptable form of imaging and used it in films--why,
for instance, should a cut between two views be acceptable
cinematography?  Some cinematographic techniques violate our `dream'
view methods. For instance, when one holds the camera at a bad angle
the impact is typically to introduce the concept of the camera into
the film, i.e. one way to show something is being seen through a
camera lens in a film is to have the camera do bad cinematographic
techniques--ones which make the artificiality of the instrument apparent
(another problem is whenever things get on the lens, such as rain or
ocean spray or dust, etc.)

Now, the use of slow motion to suggest speed is interesting in that I
don't believe it has a natural counterpart in human moving-image memory.
We never see things in slow motion ourselves, except as they have
been slowed down by the use of film etc.  That indeed explains to me
why this is being discussed in AILIST.  I.e., it is an artificial,
learned moving-image association.  The interesting thing is that it
SEEMS to be possible to introduce this into the visual recording
system for memories in the brain without causing the ``Oh, this is
being shot through a camera'' phenomenon.

I suspect what is happening is that this is analogous to the focusing
of attention on the events which happened in a real moving image
memory.  That is, if one attempts to reconstruct an event that
happened very quickly in real time after the fact, one will
artificially create something like slow motion.

---- Note: I am NOT saying that we really have moving images in the
brain. It is unclear whether we have images at ALL; however, the mapping
between what we do have and what we accept in cinematographic
portrayals is an interesting one.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sun Jun 21 03:31:16 1987
Date: Sun, 21 Jun 87 03:31:06 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #154
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Sun, 21 Jun 87 03:26 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa02085; 20 Jun 87 19:39 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa02779; 20 Jun 87 19:25 EDT
Date: Sat 20 Jun 1987 16:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #154
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Sunday, 21 Jun 1987      Volume 5 : Issue 154

Today's Topics:
  Theory - Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 16 Jun 87  1559 PDT
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: Symbol Grounding Problem and Disputes

[In reply to message sent Mon 15 Jun 1987 23:23-PDT.]

This dispute strikes me as unnecessarily longwinded.  I imagine that the
alleged point at issue and a few of the positions taken could be
summarized for the benefit of those of us whose subjective probability
that there is a real point at issue is too low to motivate studying the
entire discussion but high enough to motivate reading a summary.

------------------------------

Date: 16 Jun 87 17:41:50 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on
         ailist)

In article <849@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> .... Invertibility could fail to capture the standard A/D distinction,
> but may be important in the special case of mind-modeling. Or it could
> turn out not to be useful at all....

So what do you think is essential: (A) literally analog transformation,
(B) invertibility, or (C) preservation of significant relational
functions?

> ..... what I've said about the grounding problem and the role
> of nonsymbolic representations (analog and categorical) would stand
> independently of my particular criterion for analog; substituting a more
> standard one leaves just about all of the argument intact.....

Where does that argument stand now?  Can we restate it in terms whose
definitions we all agree on?

> ..... to get the requisite causality I'm looking
> for, the information must be interpretation-independent. Physical
> invertibility seems to give you that......

I think invertibility is too strong.  It is sufficient, but not
necessary, for human-style information-processing.  Real people forget
awesome amounts of detail, we misunderstand each other (our symbol
groundings are not fully invertible), and we thereby achieve levels of
communication that often, but not always, satisfy us.

Do you still say we only need transformations that are analog
(invertible) with respect to those features for which they are analog
(invertible)?  That amounts to limited invertibility, and the next
essential step would be to identify the features that need
invertibility, as distinct from those that can be thrown away.

> Ken Laws <Laws@Stripe.SRI.Com> on ailist@Stripe.SRI.Com writes:
> >     ... I am sure that methods for decoding both discrete and
> >     continuous information in continuous signals are well studied.
>
> I would be interested to hear from those who are familiar with such work.
> It may be that some of it is relevant to cognitive and neural modeling
> and even the symbol grounding problems under discussion here.

I'm not up to date on these methods.  But if you want to get responses
from experts, it might be well to be more specific.  For monaural
sound, decoding can be done with Fourier methods that are in principle
continuous.  For monocular vision, Fourier methods are used for image
enhancement to aid in human decoding, but I think machine decoding
depends on making the spatial dimensions discontinuous and comparing the
content of adjacent cells.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: Wed 17 Jun 87 23:33:01-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Visual Decoding

  From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)

  For monaural
  sound, decoding can be done with Fourier methods that are in principle
  continuous.  For monocular vision, Fourier methods are used for image
  enhancement to aid in human decoding, but I think machine decoding
  depends on making the spatial dimensions discontinuous and comparing the
  content of adjacent cells.

Marty is right; one must be specific about the types of signals that are
carrying the information.  Information theorists tend to work with
particular types of modulation (e.g., radar returns), but are interested
in the general principles of information transmission.  Some of the
spread spectrum work is aimed at concealing evidence of modulation while
still being able to recover the encoded information.

Fourier techniques are particularly appropriate for speech processing
because sinusoidal waveforms (the basis of Fourier analysis) are the
eigenforms of acoustic channels.  In other words, the sinusoidal components
of speech are transmitted relatively unharmed, although the phase relationships
between the components can be scrambled.  Any process that decodes acoustic
signals must be prepared to deal with a little phase spreading.  Other
1-D signals (e.g., spectrographic signatures of chemicals) may be composed
of Gaussian pulses or other basis forms.  Yet others may be generated by
differential equations rather than composition or modulation of basis
functions.  Decoding generally requires models of the generating process
and of the channel or sensing transformations, particularly if the latter
are invertible.
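
A small numerical sketch of that point (assuming NumPy; the test
signal is invented): scrambling the phases of a signal's sinusoidal
components changes the waveform but leaves the magnitude spectrum --
what a magnitude-based decoder sees -- untouched.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 512, endpoint=False)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    spectrum = np.fft.rfft(signal)
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.shape)
    phases[0] = phases[-1] = 0.0      # keep DC and Nyquist bins real
    scrambled = np.abs(spectrum) * np.exp(1j * phases)
    distorted = np.fft.irfft(scrambled)

    # The waveforms differ, but the magnitude spectra agree.
    assert not np.allclose(signal, distorted)
    assert np.allclose(np.abs(np.fft.rfft(distorted)), np.abs(spectrum))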

Images are typically captured in discrete arrays, although we know that
biological retinas are neither limited to one kind of detector/resolution
nor so spatially regular.  Discrete arrays are convenient, and the Nyquist
theorem (combined with the limited spatial resolution of typical imaging
systems) gives us assurance that we lose nothing below a specific cutoff
frequency -- we can, if we wish, reconstruct the true image intensity at
any point in the image plane, regardless of its relationship to the pixel
centers.  (In practice this interpolation is exceedingly difficult and is
almost never done -- but enough pixels are sampled to make interpolation
unnecessary for the types of discrimination we need to perform.)  The
discrete pixel grid is often convenient but is not fundamental to the
enterprise of image analysis.
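
As a toy version of that reconstruction claim (a sketch assuming
NumPy; the sampling rate and frequency are made up), Whittaker-Shannon
interpolation recovers a band-limited signal at an arbitrary point
between sample centers:

    import numpy as np

    fs = 8.0                          # sampling rate
    n = np.arange(-512, 512)          # sample grid (the "pixel centers")
    f = 1.7                           # signal frequency, below fs / 2
    samples = np.cos(2 * np.pi * f * n / fs)

    def reconstruct(x):
        """Whittaker-Shannon interpolation at continuous position x."""
        return np.sum(samples * np.sinc(fs * x - n))

    x = 3.1416                        # an arbitrary off-grid point
    assert abs(reconstruct(x) - np.cos(2 * np.pi * f * x)) < 1e-2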

A difficulty in image analysis is that we rarely know the shapes of the
basis functions that carry the information; that, after all, is what we
are trying to determine by parsing a scene into objects.  We do have
models of the optical channels, but they are generally noninvertible.
Our models of the generating processes (e.g., real-world scenes) are
exceedingly weak.  We have some approaches to decoding these signals,
but nothing approaching the power of the human visual system except in
very special tasks (such as analysis of bubble chamber photographs).

                                        -- Ken

------------------------------

Date: 17 Jun 87 08:02:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: symbol grounding and physical invertibility


I hate to nag but...

In all the high-falutin' philosophical give-and-take (of which, I admit,
I am actually quite fond) there's been no response to a much more
*specific* objection/question I raised earlier:

What if there were a few-to-one transformation between the skin-level
sensors (remember Harnad proposes "skin-and-in" invertibility
as being necessary for grounding) and the (somewhat more internal)
iconic representation?  My example was to suppose that #1, a
combination of both red and green retinal receptors, and #2, a yellow
receptor, BOTH generated the same iconic yellow.

Clearly this iconic representation is non-invertible back out to the
sensory surfaces, but intuitively it seems like it would be grounded
nonetheless - how about it?


John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 17 Jun 87 18:32:20 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on
         ailist)

In article <849@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>                                   As long as the requisite
>information-preserving mapping or "relational function" is in the head
>of the human interpreter, you do not have an invertible (hence analog)
>transformation. But as soon as the inverse function is wired in
>physically, producing a dedicated invertible transformation, you do
>have invertibility, ...

This seems to relate to a distinction between "physical invertibility" and
plain old invertibility, another of your points which I haven't understood.

I don't see any difference between "physical" and "merely theoretical"
invertibility.  If a particular physical transformation of a signal is
invertible in theory, then I'd imagine we could always build a device to
perform the actual inversion if we wanted to. Such a device would of course
be a physical device; hence the invertibility would seem to count as
"physical," at least in the sense of "physically possible".

Surely you don't mean that a transformation-inversion capability must
actually be present in the device for it to count as "analog" in your sense.
(Else brains, for example, wouldn't count). So what difference are you trying
to capture with this distinction?

Anders Weinstein
BBN Labs

------------------------------

Date: 17 Jun 87 20:12:22 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) asks:

>       what do you think is essential: (A) literally analog transformation,
>       (B) invertibility, or (C) preservation of significant relational
>       functions?

Essential for what? For (i) generating the pairwise same/different judgments,
similarity judgments and matching that I've called, collectively,
"discrimination", and for which I've hypothesized that there are
iconic ("analog") representations? For that I think invertibility is
essential. (I think that in most real cases what is actually
physically invertible in my sense will also turn out to be "literally
analog" in a more standard sense. Dedicated digital equivalents that
would also have yielded invertibility will be like a Rube-Goldberg
alternative; they will have a much bigger processing cost. But for my
purposes, the dedicated digital equivalent would in principle serve
just as well. Don't forget the *dedicated* constraint though.)

For (ii) generating the reliable sorting and labeling of objects on the
basis of their sensory projections, which I've called collectively,
"identification" or "categorization"? For that I think only distinctive
features need to be extracted from the sensory projection. The rest need
not be invertible. Iconic representations are one-to-one with the
sensory projection; categorical representations are many-to-few.

But if you're not talking about sensory discrimination or about
stimulus categorization but about, say, (iii) conscious problem-solving,
deduction, or linguistic description, then relation-preserving
symbolic representations would be optimal -- only the ones I advocate
would not be autonomous (modular). The atomic terms of which they were
composed would be the labels of categories in the above sense, and hence they
would be grounded in and constrained by the nonsymbolic representations.
They would preserve relations not just in virtue of their syntactic
form, as mediated by an interpretation; their meanings would be "fixed"
by their causal connections with the nonsymbolic representations that
ground their atoms.

But if your question concerns what I think is necessary to pass the
Total Turing Test (TTT), I think you need all of (i) - (iii), grounded
bottom-up in the way I've described.

>       Where does [the symbol grounding] argument stand now? Can we
>       restate it in terms whose definitions we all agree on?

The symbols of an autonomous symbol-manipulating module are
ungrounded. Their "meanings" depend on the mediation of human
interpretation. If an attempt is made to "ground" them merely by
linking the symbolic module with input/output modules in a dedicated
system, all you will ever get is toy models: Small, nonrepresentative,
nongeneralizable pieces of intelligent performance (a valid objective for
AI, by the way, but not for cognitive modeling). This is only a
conjecture, however, based on current toy performance models and the
kind of thing it takes to make them work. If a top-down symbolic
module linked to peripherals could successfully pass the TTT that way,
however, nothing would be left of the symbol grounding problem.

My own alternative has to do with the way symbolic models work (and
don't work). The hypothesis is that a hybrid symbolic/nonsymbolic
model along the lines sketched above will be needed in order to pass
the TTT. It will require a bottom-up, nonmodular grounding of its
symbolic representations in nonsymbolic representations: iconic
( = invertible with the sensory projection) and categorical ( = invertible
only with the invariant features of category members that are preserved
in the sensory projection and are sufficient to guide reliable
categorization).
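
A toy sketch of the contrast (illustrative Python; the four-element
"sensory projection" and the feature test are invented): the iconic
stage is an invertible transform of the projection, while the
categorical stage keeps a single coarse feature and discards the rest.

    import numpy as np

    projection = np.array([0.2, 0.9, 0.4, 0.7])    # toy sensory projection

    # Iconic: a dedicated invertible transform (here a fixed linear map).
    A = np.array([[2., 1., 0., 0.],
                  [0., 2., 1., 0.],
                  [0., 0., 2., 1.],
                  [0., 0., 0., 2.]])
    icon = A @ projection
    assert np.allclose(np.linalg.solve(A, icon), projection)  # invertible

    # Categorical: many-to-few feature extraction; not invertible.
    def category(p):
        return "bright" if p.mean() > 0.5 else "dark"

    # Many distinct projections share one label, so the projection
    # cannot be recovered from "bright" alone.
    assert category(projection) == category(np.full(4, 0.6)) == "bright"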

>       I think invertibility is too strong. It is sufficient, but not
>       necessary, for human-style information-processing. Real people
>       forget... misunderstand...

I think this is not the relevant form of evidence bearing on this
question.  Sure we forget, etc., but the question concerns what it takes
to get it right when we actually do get it right. How do we discriminate,
categorize, identify and describe things as well as we do (TTT-level)
based on the sensory data we get? And I have to remind you again:
categorization involves at least as much selective *non*invertibility
as it does invertibility. Invertibility is needed where it's needed;
it's not needed everywhere, indeed it may even be a handicap (see
Luria's "Mind of a Mnemonist," which is about a person who seems to
have had such vivid, accurate and persisting eidetic imagery that he
couldn't selectively ignore or forget sensory details, and hence had
great difficulty categorizing, abstracting and generalizing; Borges
describes a similar case in "Funes the Memorious," and I discuss the
problem in "Metaphor and Mental Duality," a chapter in Simon & Sholes' (eds.)
"Language, Mind and Brain," Academic Press 1978).

>       Do you still say [1] we only need transformations that are analog
>       (invertible) with respect to those features for which they are analog
>       (invertible)?  That amounts to limited invertibility, and the next
>       essential step would be [2] to identify the features that need
>       invertibility, as distinct from those that can be thrown away.

Yes, I still say [1]. And yes, the category induction problem is [2].
Perhaps with the three-level division-of-labor I've described a
connectionist algorithm or some other inductive mechanism would be
able to find the invariant features that will subserve a sensory
categorization from a given sample of confusable alternatives. That's
the categorical representation.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: Thu, 18 Jun 87 10:08:27 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: The Symbol Grounding Answer

I have enjoyed the ailist's tone of rarified intellectual inquiry,
but lately I have begun to think the form of the question "What is
the solution to the Symbol Grounding Problem" has unduly influenced
the content of the answer, as in "How many angels can dance on the
head of a pin?"

You are solemnly discussing angels and pinheads.

There is no "Symbol Grounding Problem"; the things are *not* grounded.

The only relationship a symbol has with anything is that the physical
effects (electrical and chemical) of its perception in the brain of a
perceiver co-exist with the physical effects of other perceptions, and
are consequently associated in that individual's brain, and therefore
mind.  It happens when we direct our baby's attention at a bovine and
clearly enunciate "COW".  There is no more "physical invertibility" in
that case than there is between you and your name, and there is no other
physical relationship.  And, as we computer hackers are wont to say,
"That's a feature, not a bug".  It means we can and do "think" about
things and relationships which may not "exist".   (BTW, it's even better!
You are right now using second-level symbols.  The visual patterns you
are perceiving on paper or on a display screen are symbols for sounds,
which in turn are symbols for experiences.)

Last year's discussion of the definitions of "analog" and "digital" is
relevant to the present topic.  In the paragraph above, the electrical
and chemical effects in the observer's brain are an *analogy* (we
hypothesize) of external "reality".  These events are *determined* (we
believe) by that reality, i.e., for each external situation there is one
and only one electro-chemical state of the observer's brain.  Now, the
brain effects appear pretty abstracted, or attenuated, so "complete
invertibility" is unlikely, though it may be approachable if we can
devise a fancy enough brain.  No such deterministic relationship holds between
external "reality" and symbols.  As I noted above, symbols are related
to their referents by totally arbitrary association.

Thus, there is nothing subtle about the distinction between "analog"
and "digital"; they are two profoundly different things.  The "digital"
side of an A/D relationship is *symbolic*.  The relationship (we humans
create) between a symbol and a quantity is wholly arbitrary.  The value
here is that we can use *deductive* relationships in our manipulation
of quantities, rather than, say, pouring water back and forth among a set
of containers to balance our bank account.

I am one of those convinced by such considerations that purely symbolic
means, which includes most everything we do on digital computers, are
*insufficient in principle* to duplicate human behavior.  And I have some
ideas about the additional things we need to investigate.  (By the way,
whose behavior are we to duplicate?  Genghis Khan?  William Shakespeare?
Joe Sixpack?  All of the above in one device?  The Total Turing Test is
just academic obfuscation of "If it walks like a duck, and quacks like
a duck ...").

------------------------------

Date: 18 Jun 87 18:26:23 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <861@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>                                   The atomic terms of which they were
>composed would be the labels of categories in the above sense, and hence they
>would be grounded in and constrained by the nonsymbolic representations.
>They would preserve relations not just in virtue of their syntactic
>form, as mediated by an interpretation; their meanings would be "fixed"
>by their causal connections with the nonsymbolic representations that
>ground their atoms.

I don't know how significant this is for your theory, but I think it's worth
emphasizing that the *semantic* meaning of a symbol is still left largely
unconstrained even after you take account of its "grounding" in perceptual
categorization.  This is because what matters for intentional content is not
the objective property in the world that's being detected, but rather how the
subject *conceives* of that external property, a far more slippery notion.

This point is emphasized in a different context in the Churchlands' BBS reply
to Dretske's "Knowledge and the Flow of Information." To paraphrase one of
their examples: primitive people may be able to reliably categorize certain
large-scale atmospheric electrical discharges; nevertheless, the semantic
content of their corresponding states might be "Angry gods nearby" or some
such. Indeed, by varying their factual beliefs we could invent cases where
the semantic content of these states is just about anything you please.
Semantic content is a holistic matter.

Another well-known obstacle to moving from an objective to an intentional
description is that the latter contains an essentially normative component,
in that we must make some distinction between correct and erroneous
classification. For example, we'd probably like to say that a frog has a
fly-detector which is sometimes wrong, rather than a "moving-spot-against-a-
fixed-background" detector which is infallible. Again, this distinction seems
to depend on fuzzy considerations about the purpose or functional role of the
concept in question.

Some of the things you say also suggest that you're attempting to resuscitate
a form of classical empiricist sensory atomism, where the "atomic" symbols
refer to sensory categories acquired "by acquaintance" and the meaning of
complex symbols is built up from the atoms "by description". This approach
has an honorable history in philosophy; unfortunately, no one has ever been
able to make it work. In addition to the above considerations, the main
problems seem to be: first, that no principled distinction can be made
between the simple sensory concepts and the complex "theoretical" ones; and
second, that very little that is interesting can be explicitly defined in
sensory terms (try, for example, "chair").

I realize the above considerations may not be relevant to your program -- I
just can't tell to what extent you expect it to shed any light on the problem
of explaining semantic content in naturalistic terms. In any case, I think
it's important to understand why this fundamental problem remains largely
untouched by such theories.

Anders Weinstein
BBN Labs

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jun 27 03:30:44 1987
Date: Sat, 27 Jun 87 03:30:34 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #155
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Sat, 27 Jun 87 03:20 EDT
Received: from relay.cs.net by RELAY.CS.NET id af28970; 26 Jun 87 20:02 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa08353; 26 Jun 87 20:01 EDT
Date: Fri 26 Jun 1987 00:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #155
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Friday, 26 Jun 1987      Volume 5 : Issue 155

Today's Topics:
  Seminars - Acquiring Knowledge from the Outside (Rutgers) &
    AI Research at Edinburgh (SRI) &
    Nonmonotonic Multiple Inheritance Systems (Bell Labs)
  Conference - Advanced Computing Symposium &
    European Conference on AI in Medicine

----------------------------------------------------------------------

Date: 18 Jun 87 11:47:53 EDT
From: KALANTARI@RED.RUTGERS.EDU
Subject: Seminar - Acquiring Knowledge from the Outside (Rutgers)


                     R U T G E R S     U N I V E R S I T Y

                        Department of Computer Science

                              C O L L O Q U I U M



SPEAKER:         Paul Rosenbloom
                 Stanford University

TITLE:           ACQUIRING KNOWLEDGE FROM THE OUTSIDE
                 SOME RECENT PROGRESS ON LEARNING IN SOAR

DATE:           Monday, June 29, 1987
TIME:           10:00 a.m.
PLACE:          Hill Center, Room 705

In  previous  work  on  learning in Soar we have focused on how the chunking of
internal problem solving can acquire the varieties of knowledge required  by  a
general  problem solver; for example, productions can be acquired which perform
operator retrieval, instantiation, selection, and implementation.    One  major
form  of  learning  not  covered  by  this  previous work is the acquisition of
knowledge from external sources.  In this talk  I  will  describe  two  current
projects  which  are examining how the techniques utilized in the previous work
can be employed to learn from external knowledge sources.  The first project is
working  on  the  acquisition of general search control knowledge from external
advice.    This  work  touches  on  issues  of   operationalization,   learning
apprentices, analogy, and generalization.  The second project is working on the
acquisition of declarative knowledge.  This work  demonstrates  for  the  first
time  in  Soar  what Dietterich termed "knowledge level learning"; that is, the
acquisition of knowledge not already in the system's deductive  closure.    One
implication of this demonstration is that explanation-based learning mechanisms
are not inherently limited to symbol level learning.  Issues that  have  arisen
during  this  work include: how to decouple new facts from the context in which
they were learned, how to be able to distinguish what  has  been  learned  from
what   hasn't,  and  how  to  index  declarative  information  for  appropriate
retrieval.

------------------------------

Date: Wed, 24 Jun 87 11:32:42 PDT
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - AI Research at Edinburgh (SRI)

           AI RESEARCH AT EDINBURGH - PAST, PRESENT, AND FUTURE

                            Roberto Desimone
                         University of Edinburgh

                       2:00 PM, FRIDAY, June 26
              SRI International, Building E, Room EK242


This talk will comprise a review of AI research and other AI
activities that have been and are being pursued at Edinburgh.  I will start
with a short history of the early days of AI in Edinburgh in the 1960s
and 1970s, the transition period in the mid to late 1970s and the
revival in the 1980s.  Then, I will stress the basic research currently
being conducted within the Dept. of AI at the University of Edinburgh.
Some of the activities conducted within the AI Applications Institute,
also in Edinburgh, will be discussed.  Finally, I will offer some
thoughts on the future of AI research at Edinburgh.


VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

NOTE: Different time and place

------------------------------

Date: Thu 25 Jun 1987  14:10:08
From: dlm@allegra.csnet
Subject: Seminar - Nonmonotonic Multiple Inheritance Systems (Bell
         Labs)

Date:  Thursday, July 2
Time:  2:00 PM
Place: AT&T Bell Laboratories MH 3D-473

                       David S. Touretzky

                    Computer Science Department
                    Carnegie Mellon University


                 A Clash of Intuitions:  The Current State of
                   Nonmonotonic Multiple Inheritance Systems


Early attempts at combining multiple inheritance with nonmonotonic reasoning
were based on straightforward extensions to tree-structured inheritance
systems, and were theoretically unsound.  In The Mathematics of Inheritance
Systems, or TMOIS, I described two basic problems that these systems cannot
handle.  One involves reasoning with true but redundant assertions; the other
involves ambiguity.

TMOIS provided the definition and analysis of a theoretically sound multiple
inheritance system, accompanied by inference algorithms.  Other definitions for
inheritance have since been proposed by Sandewall and by Horty, Thomason, and
Touretzky that are equally sound and intuitive, but do not always agree with
the system defined in TMOIS.  At the heart of the controversy is a clash of
intuitions about certain fundamental issues such as skepticism versus
credulity, the direction in which inheritance paths are extended, and classical
versus intuitive notions of consistency.  In this talk I will catalog the
issues, map out a design space, and describe interesting properties that result
from certain choices of definitions.  Just as there are alternative logics,
there may be no single ``best'' approach to nonmonotonic multiple inheritance.

This is joint work with Richmond Thomason of the University of Pittsburgh and
John Horty of CMU.



Sponsor: Ron Brachman

------------------------------

Date: Thu, 18 Jun 87 13:14:49 pdt
From: Douglas Schuler <douglas@BOEING.COM>
Subject: Conference - Advanced Computing Symposium


            DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING

            A One Day Symposium - July 12, 1987
            University of Washington, Seattle, Washington


PROGRAM

On-Site Registration (8:00 - 9:00)

PLENARY SESSION (9:00 - 10:30)

Robert Kahn and Terry Winograd with Gary Chapman

The featured speakers will discuss the role of funding on computer science
research.  How and why are projects selected for funding?  What are the
roles of the Department of Defense, civilian agencies and private sources?
Does it matter where research money comes from?

Robert Kahn is the founder of the non-profit Corporation for National
Research Initiatives, in Washington, D.C.  Until 1985, Kahn was director of
the Information Processing Techniques Office at the Defense Advanced
Research Projects Agency (DARPA).

Terry Winograd is an associate professor of computer science at Stanford
University.  He is author of "Understanding Natural Language", "Language as
a Cognitive Process" and (with Fernando Flores) "Understanding Computers
and Cognition".  Winograd is the national president of Computer
Professionals for Social Responsibility (CPSR).

The discussion will be moderated by Gary Chapman, Executive Director of
CPSR.  He is co-editor of the book, "Computers in Battle" to be published
this fall.  Chapman is a former member of the U.S. Special Forces.

PARALLEL SESSIONS

FUNDING (11:00 - 12:00)
David Bushnell - The Promise and Reality of ARPANET: A Brief History
Joel Yudken and Barbara Simons - Project on Funding in Computer Science:
  A Preliminary Report

AI PROSPECTS I (11:00 - 12:00)
Juergen Koenemann - Artificial Intelligence and the Future of Work
Reinhard Keil-Slawik - An Ecological Approach to Responsible
  Systems Development

LUNCH (12:00 - 1:30)

MILITARY/RELIABILITY  (1:30 - 3:00)
Richard Hamlet - Testing for Trustworthiness
David Bella - Fault-tolerant Ballistic Missile Defense
Erik Nilsson - The Costs of Computing Star Wars

EXPERT SYSTEMS  (1:30 - 3:00)
Matthew Lewis and Seth Chaikin - Will There Be Teachers in the Classroom
  of the Future?
Rolf Engelbrecht - Expert Systems in Medicine - A Technology Assessment
Carole Hafner and Donald Berman - The Potential of AI to Help Solve the
  Crisis in Our Legal System

BREAK (3:00 - 3:30)

RESEARCH PRIORITIES (3:30 - 4:30)
Douglas Schuler - A Civilian Computing Initiative: Three Modest Proposals
Jack Beusmans and Karen Wieckert - Artificial Intelligence and the Military

AI PROSPECTS II (3:30 - 4:30)
Susan Landau - The Responsible Use of 'Expert' Systems
K. Eric Drexler - Technologies of Danger and Wisdom

VIDEO
Daressa - Computers in Context
CPSR - Reliability and Risk
Videotape on DBNET (a computer mail network for the deaf-blind)


Registration fees

Regular $50 ____
CPSR Member $30 ____
Student/Low Income $20 ____

Proceedings only (cannot attend symposium) $15 ____

Proceedings will be distributed to symposium registrants on day of
symposium.  Lunch is included.

DIAC '87
CPSR/Seattle
P.O. Box 85481
Seattle, WA  98105

        Sponsored by Computer Professionals for Social Responsibility

------------------------------

Date: 23 Jun 1987 12:32:28 EST
From: Herve.Lambert@PS3.CS.CMU.EDU
Subject: Conference - European Conference on AI in Medicine


                                EUROPEAN CONFERENCE on

                                ARTIFICIAL INTELLIGENCE

                                   in   MEDICINE

                                      __________



                         Marseilles (France), Aug 31st - Sept 3rd




Organized by: AIME, European Society for Artificial Intelligence in Medicine

In  cooperation  with:  IIRIAM,  International  Institute  of  Robotics and
                        Artificial Intelligence of Marseilles.

                        ICRF, Imperial Cancer Research Fund Laboratories, UK

                        GSF-MEDIS, Gesellschaft fur Strahlen und
                        umweltforschung mbH Munchen

                        Laboratoire d'Informatique Medicale de la Faculte de
                        Medecine de Marseille.




                                PROGRAM
                                _______


                        WORKSHOP and TUTORIALS
                        ----------------------


Monday, August 31st:

Tutorial 1:
9.00 - 13.00    Acquisition of Knowledge from Medical databases
Gio C.M.  Wiederhold, M.  Walker, R.L. Blum, Stanford University (USA)

Tutorial 2:
14.00 - 18.00   Methods and Techniques used in Expert Systems
Jan L. Talmon, Henny P.A. Boshuizen, University of Limburg (NL)

Tutorial 3:
14.00 - 18.00   Knowledge representation
Steen Andreassen, University of Aalborg (DK), Mike Wellman, MIT (USA)

Workshop:
9.00 - 13.00    From Mycin to Oncocin
Larry Fagan, Stanford University (USA)



                                CONFERENCE
                                ----------


Tuesday, September 1st, 1987

9.00 - 9.30     Opening Session
9.30  - 10.30   Invited Keynote speaker:  J.H.  Van Bemmel, Free University of
                Amsterdam
10.30 - 11.00   Break


Session 1: Methodology

11.00 - 11.30   "INTERMED": a medical language interface.
                Mery C., Normier B., Orgonowski A. (F)

11.30 - 12.00   Inference engineering through prototyping in Prolog
                Van Thilo J., Mulders A. (B)

12.00 - 12.30   The  evaluation  of  clinical  decision  support  systems:  a
                discussion of the methodology used in the ACORN project.
                Wyatt J. (UK)
12.30 - 13.00   Matching patients: an approach to decision support in liver
                transplantation.
                Tusch G., Bernauer J., Reichertz P.L. (FRG)

13.00 - 14.00   Lunch


Session 2: Clinical Applications (1)

14.30 - 15.00   An  expert  system  for  diagnosis  and  therapy  planning  in
                patients with peripheral vascular disease.
                Talmon  J.L.,  Schijven  R.A.J.,  Kitslaar  P.J.E.H.M.,
                Penders R. (NL)

15.00 - 15.30   An  expert  system  for the  classification  of Dizziness  and
                Vertigo.
                Schmid R.,  Zanocco P.,  Buizza A.,  Magenes G.,  Manfrin M.,
                Mira E. (I)

15.30 - 16.00   The SENEX system, a microcomputer-based expert system built by
                oncologists for breast cancer management.
                Renaud-Salis J.L.,  Bonichon F.,  Durand M.,  Avril A.,
                Lagarde C. (F).

16.00 - 16.30   Break


Session 3: Qualitative Reasoning

16.30 - 17.00   The use of  QSIM for Qualitative  simulation of  physiological
                systems.
                Nicolosi E., Leaning M. (UK)

17.00 - 17.30   Qualitative description of electrophysiologic measurements:
                toward automatic data interpretation.
                Irler W.J., Antolini R., Kirchner M., Stringa L. (I)

17.30 - 18.00   A qualitative spatial representation for cardiac
                electrophysiology.
                Gotts N. (UK)

18.45           Cocktail at the city hall of Marseilles.



Wednesday, September 2nd, 1987


Session 4: Knowledge acquisition and representation


9.00 - 9.30     Knowledge acquisition in expert system assisted diagnosis:
                a machine learning approach
                Funk M., Appel R.D., Roch Ch., Hochstrasse D., Pellegrini Ch.,
                Muller A.F., (CH)

9.30 - 10.00    Knowledge representation for cooperative medical systems
                Rector A.L. (UK)

10.00 - 10.30   A representation of time for medical expert systems
                Hamlet I., Hunter J. (UK)

10.30 - 11.00   Break


Session 5: Management of uncertainty

11.00 - 11.30   TOULMED, an inference engine which deals with imprecise and
                uncertain aspects of medical knowledge.
                Buisson J.C., Farreny H., Prade H., Turnin M.C., Tauber J.P.,
                Bayard F. (F)

11.30 - 12.00   Coherent handling of uncertainty via localized computation in
                an expert system for therapeutic decision.
                Berzuini C., Barosi G., Polino G. (I)

12.00 - 12.30   MUNIN, on the case for probabilities in medical expert systems:
                a practical exercise.
                Jensen F.V., Andersen S.K., Kjaerulff U., Andreassen S. (DK)

12.30 - 13.00   Rule based expert systems in gynecology: statistical versus
                heuristic approach
                Riss P.A., Koelbl H., Reinthaller A., Deutinger J. (Austria)


Afternoon: Social Program


Thursday September 3rd


Session 6: Knowledge Engineering tools.

9.00 - 9.30     A radiological expert system for the P.C.: design and
                implementation issues.
                Horn W., Imhof H., Pfahringer B., Salamonowitz E., (Austria)

9.30 - 10.00    A P.C.-based shell for clinical information systems with
                reasoning capabilities
                Wiener F., Groth T. (Israel, Sweden)

10.00 - 10.30   The   kernel  mechanism  for  handling  assumptions  and
                justifications and its applications to the biotechnologies
                Cherubini M.A., Cerri S.A., Sbarbati R. (I)

10.30 - 11.00   Break


Session 7: General Session

11.00 - 12.00   Invited Lecture
                Larry Fagan, Stanford University (USA)

12.00 - 12.30   Man-machine interaction in CHECK
                Console L., Fossa M., Torasso P., Molino G., Cravetto G (I)

12.30 - 13.00   The Oxford system of medicine
                Fox J., Glowinski A., O'Neil M. (UK)

13.00 - 14.30   Lunch


Session 8: Clinical Applications (2)

14.30 - 15.00   Evaluating the performance of ANEMIA
                Quaglini S., Stefanelli M., Barosi G., Berzuini A. (I)

15.00 - 15.30   Computer aided diagnosis and treatment of brachial plexus
                injuries.
                Jaspers R.B.M., Van der Helm F.C.T. (NL)

15.30 - 16.00   Representation of embryonic development and its anomalies.
                Goutal J.M., Philip N., Griffiths M., Ayme S. (F)

16.00 - 16.30   A microcomputer based decision support for lipid disorders.
                Fhaircheallaigh D.N., Sinnot M., Grimson J., O'Moore R. (Eire)

16.30 - 17.00   Closing session



Program Committee:

J. Fox, London                  Chairman
P. Adlassnig, Vienna
R. Engelbrecht, Munich          Tutorial
M. Fieschi, Marseille
F. Gremy, Montpellier (F)
T. Groth, Uppsala
A. Hasman, Maastricht
A.L. Rector, Manchester
P.L. Reichertz, Hannover
P. Smets, Brussels
M. Stefanelli, Pavia

Organizing committee:
M. Fieschi                      Chairman
V. Bernadac                     Organization
P. Dujol
B. Guisiano                     Social events
M. Joubert                      Local arrangements
D. Riouall                      Liaison
M. Roux
G. Soula                        Exhibition





Additional information:
                                Viviane Bernadac
                                IIRIAM
                                2 rue Henri Barbusse, CMCI
                                13241 Marseille Cedex 1 - FRANCE
                                tel: (33) 91 91 36 72
                                telex: 440 860
                                telefax: (33) 91 91 70 24

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 30 08:20:57 1987
Date: Tue, 30 Jun 87 08:20:52 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #156
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 30 Jun 87 07:51 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa11054; 29 Jun 87 2:15 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa22196; 29 Jun 87 2:14 EDT
Date: Sun 28 Jun 1987 22:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #156
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 156

Today's Topics:
  Theory - Symbol Grounding and Invertibility

----------------------------------------------------------------------

Date: Mon, 22 Jun 87 10:19:59 PDT
From: Neil Hunt <spar!hunt@decwrl.dec.com>
Subject: Symbol grounding and invertibility.

John Cugini <Cugini@icst-ecf.arpa> writes:

> What if there were a few-to-one transformation between the skin-level
> sensors ...
> My example was to suppose that #1:
> a combination of both red and green retinal receptors and #2 a yellow
> receptor BOTH generated the same iconic yellow.

We humans see the world (to a first order at least) through red, green and
blue receptors. We are thus unable to distinguish between light of a yellow
frequency, and a mixture of light of red and green frequencies, and we assign
to them a single token - yellow. However, if our visual apparatus were
equipped with yellow receptors as well, then these two input stimuli
would *appear* quite different, as indeed they are. In that case I
think it highly unlikely that we would use the same symbol to
represent the two cases.

Consider a species with only two classes of colour receptors, low
frequency and high frequency, roughly equivalent to our concepts of
red and blue, but with no middle-frequency receptors corresponding to
the human concept of green. Creatures of such a species, when shown
pure green light, would receive reduced levels from the receptors on
each side of the green frequency, thus receiving some combination of
blue and red signals. This would be indistinguishable from a mixture
of blue and red light, which we call magenta. Such creatures might
then reason (incorrectly) about the possibility of having a
middle-frequency receptor, postulate a many-to-one mapping between
case #1, pure green light, and case #2, a mixture of red and blue,
and wonder how that affects questions of invertibility. As we humans
know, if these creatures had such a visual capability, they would
invent a new symbol for magenta, and there would be no many-to-one
mapping.
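
A toy numerical rendering of this argument (a sketch; the Gaussian
sensitivity curves and their peak wavelengths are invented, not real
physiology): under three receptor classes a red+green mixture can be
matched to pure yellow light, while a fourth "yellow" receptor
immediately separates the two.

    import numpy as np

    wl = np.linspace(380.0, 720.0, 1000)       # wavelength axis (nm)

    def gauss(center, width):
        return np.exp(-((wl - center) / width) ** 2)

    # Invented receptor sensitivity curves.
    blue, green, red = gauss(450, 40), gauss(540, 40), gauss(610, 40)
    yellow_rec = gauss(580, 40)

    def responses(light, receptors):
        return np.array([np.sum(r * light) for r in receptors])

    pure_yellow = gauss(580, 5)                # narrow-band lights
    red_light, green_light = gauss(610, 5), gauss(540, 5)

    # Solve for a red+green mixture matching yellow's 3-receptor response.
    three = [blue, green, red]
    A = np.column_stack([responses(red_light, three),
                         responses(green_light, three)])
    (a, b), *_ = np.linalg.lstsq(A, responses(pure_yellow, three), rcond=None)
    mixture = a * red_light + b * green_light

    print(responses(pure_yellow, three) - responses(mixture, three))  # ~ 0
    print(responses(pure_yellow, three + [yellow_rec])
          - responses(mixture, three + [yellow_rec]))  # last entry differs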

> Clearly this iconic representation is non-invertible back out to the
> sensory surfaces, but intuitively it seems like it would be grounded
> nonetheless - how about it?

The fallacy is that the iconic representation described is indeed
non-invertible, but it is also clearly not grounded: if we had yellow
receptors, we would perceive a difference between the two stimuli and
would require a new symbol for one of them.

Neil/.



------------------------------

Date: 21 Jun 87 22:55:09 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <6670@diamond.BBN.COM>, aweinste@Diamond.BBN.COM (Anders
Weinstein) writes, with reference to article <861@mind.UUCP>
harnad@mind.UUCP (Stevan Harnad):
>
> Some of the things you say also suggest that you're attempting to resuscitate
> a form of classical empiricist sensory atomism, where the "atomic" symbols
> refer to sensory categories acquired "by acquaintance" and the meaning of
> complex symbols is built up from the atoms "by description". This approach
> has an honorable history in philosophy; unfortunately, no one has ever been
> able to make it work. In addition to the above considerations, the main
> problems seem to be: first, that no principled distinction can be made
> between the simple sensory concepts and the complex "theoretical" ones; and
> second, that very little that is interesting can be explicitly defined in
> sensory terms (try, for example, "chair").
>
I hope none of us are really trying to resuscitate classical philosophies,
because the object of this discussion is to learn how to use modern
technologies.  To define an interesting object in sensory terms requires
an intermediary module between the sensory system and the symbolic system.

With a chair in the visual sensory field, the system will use hard-coded
nonlinear (decision-making) techniques to identify boundaries and shapes
of objects, and identify the properties that are invariant to rotation
and translation.  A plain wooden chair and an overstuffed chair will be
different objects in these terms.  But the system might also learn to
identify certain types of objects that move, i.e., those we call people.
If it notices that people assume the same position in association with
both chair-objects, it could decide to use the same category for both.

The key to this kind of classification is that the chair is not defined in
explicit sensory terms but in terms of filtered sensory input.
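
A minimal sketch of such rotation- and translation-invariant
properties (illustrative Python; the four-point "object" is invented):
the sorted pairwise distances among an object's points are unchanged
when the object is moved or turned.

    import numpy as np

    def invariant_features(points):
        """Sorted pairwise distances: unchanged by rotation/translation."""
        diffs = points[:, None, :] - points[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(-1))
        return np.sort(dists[np.triu_indices(len(points), k=1)])

    obj = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 2.]])  # toy object

    theta = 0.7                            # rotate and translate the object
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = obj @ R.T + np.array([3.0, -2.0])

    assert np.allclose(invariant_features(obj), invariant_features(moved))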

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

P.S. Sorry for the double posting of my previous article.

------------------------------

Date: 20 Jun 87 02:17:09 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <861@mind.UUCP>, harnad@mind.UUCP writes:
> marty1@houdi.UUCP (M.BRILLIANT) asks:
>
> >     what do you think is essential: (A) literally analog transformation,
> >     (B) invertibility, or (C) preservation of significant relational
> >     functions?
>
Let me see if I can correctly rephrase his answer:

(i) "discrimination" (pairwise same/different judgments) he associates
with iconic ("analog") representations, which he says have to be
invertible, and will ordinarily be really analog because "dedicated"
digital equivalents will be too complex.

(ii) for "identification" or "categorization" (sorting and labeling of
objects), he says only distinctive features need be extracted from the
sensory projection; this process is not invertible.

(iii) for "conscious problem-solving," etc., he says relation-preserving
symbolic representations would be optimal, if they are not "autonomous
(modular)" but rather are grounded by deriving their atomic symbols
through the categorization process above.

(iv) to pass the Total Turing Test he wants all of the above, tied
together in the sequence described.

I agree with this formulation in most of its terms.  But some of the
terms are confusing, in that if I accept what I think are good
definitions, I don't entirely agree with the statements above.

"Invertible/Analog": The property of invertibility is easy to visualize
for continuous functions. First, continuous functions are what I would
call "analog" transformations.  They are at least locally image-forming
(iconic). Then, saying a continuous transformation is invertible, or
one-to-one, means it is monotonic, like a linear transformation, rather
than many-to-one like a parabolic transformation.  That is, it is
unambiguously iconic.
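
Numerically, the contrast is simple (a toy sketch): a monotonic map
such as 2x + 1 can be undone point by point, while a parabolic map
sends x and -x to the same value, so no inverse exists.

    import numpy as np

    x = np.linspace(-1.0, 1.0, 201)

    monotonic = 2.0 * x + 1.0
    assert np.allclose((monotonic - 1.0) / 2.0, x)   # inverse recovers x

    parabolic = x ** 2
    assert np.allclose(parabolic, (-x) ** 2)         # x and -x collide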

It might be argued that physical sensors can be ambiguously iconic,
e.g., an object seen in a half-silvered mirror.  Harnad would argue
that the ambiguity is inherent in the physical scene, and is not
dependent on the sensor.  I would agree with that if no human sensory
system ever gave ambiguous imaging of unambiguous objects.  What about
the ambiguity of stereophonic location of sound sources?  In that case
the imaging (i) is unambiguous; only the perception (ii) is ambiguous.

But physical sensors are also noisy.  In mathematical terms, that noise
could be modeled as discontinuity, as many-to-one, as one-to-many, or
combinations of these.  The noisy transformation is not invertible.
But a "physically analog" sensory process (as distinct from a digital
one) can be approximately modeled (to within the noise) by a continuous
transformation.  The continuous approximation allows us to regard the
analog transformation as image-forming (iconic).  But only the
continuous approximation is invertible.

"Autonomous/Modular": The definition of "modular" is not clear to me.
I have Harnad's definition "not analogous to a top-down, autonomous
symbol-crunching module ... hardwired to peripheral modules."  The
terms in the definition need defining themselves, and I think there are
too many of them.

I would rather look at the "hybrid" three-layer system and say it does
not have a "symbol-cruncher hardwired to peripheral modules" because
there is a feature extractor (and classifier) in between.  The main
point is the presence or absence of the feature extractor.

The symbol-grounding problem arises because the symbols are discrete,
and therefore have to be associated with discrete objects or classes.
Without the feature extractor, there would be no way to derive discrete
objects from the sensory inputs.  The feature extractor obviates the
symbol-grounding problem.  I consider the "symbol-cruncher hardwired to
peripheral modules" to be not only a straw man but a dead horse.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 26 Jun 87 04:38:02 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


John Cugini <Cugini@icst-ecf.arpa> on ailist@stripe.sri.com writes:

>       What if there were a few-to-one transformation between the skin-level
>       sensors (remember Harnad proposes "skin-and-in" invertibility
>       as being necessary for grounding) and the (somewhat more internal)
>       iconic representation.  My example was to suppose that #1:
>       a combination of both red and green retinal receptors and #2 a yellow
>       receptor BOTH generated the same iconic yellow.
>       Clearly this iconic representation is non-invertible back out to the
>       sensory surfaces, but intuitively it seems like it would be grounded
>       nonetheless - how about it?

Invertibility is a necessary condition for iconic representation, not
for grounding.  Grounding symbolic representations (according to my
hypothesis) requires both iconic and categorical representations. The
latter are selective, many-to-few, invertible only in the features
they pick out and, most important, APPROXIMATE (e.g., as between
red-green and yellow in your example above). This point has by now
come up several times...
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 26 Jun 87 05:07:40 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: McCarthy's query


In article 208 of comp.ai.digest: JMC@SAIL.STANFORD.EDU (John McCarthy)
asks:

>       I imagine that the alleged point at issue and a few of the positions
>       taken could be summarized for the benefit of those of us whose
>       subjective probability that there is a real point at issue is too
>       low to motivate studying the entire discussion but high enough to
>       motivate reading a summary.

The point at issue concerns how symbols in a symbol-manipulative
approach to the modeling of mind can be grounded in something other
than more symbols so that their meanings and their connections to
objects can be independent of people's interpretations of them. One of
the positions taken was that connecting a purely symbolic module to
peripheral (transducer/effector) modules in the right way should be
all you need to ground the symbols. I suggested that all this is
likely to yield is more of the toy models that symbolic AI has produced
until now. To get human-scale (Total Turing Test) performance
capacity, a bottom-up hybrid nonsymbolic/symbolic system may be
needed, one in which the elementary symbols are the names of sensory
categories picked out by inductive (possibly connectionist) feature-filters
(categorical representations) and invertible analogs of sensory projections
(iconic representations). This model is described in "Categorical Perception:
The Groundwork of Cognition" (Cambridge University Press 1987,
S. Harnad, ed., ISBN 0-521-26758-7). Other alternatives that have been
mentioned by others in the discussion included: (1) symbol-symbol "grounding"
is already enough and (2) connectionist nets already generate grounded
"symbols." If you want the entire file, I've saved it all...
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 26 Jun 87 17:19:29 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <914@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> Invertibility is a necessary condition for iconic representation, not
> for grounding.  Grounding symbolic representations (according to my
> hypothesis) requires both iconic and categorical representations...

Syllogism:
    (a) grounding ... requires ... iconic ... representation....
    (b) invertibility is ... necessary ... for iconic representation.
    (c) hence, grounding must require invertibility.

Why then does harnad say "invertibility is a necessary condition
for ..., NOT for grounding" (caps mine, of course)?

This discussion is getting hard to follow.  Does it have to be carried
on simultaneously in both comp.ai and comp.cog-eng?  Could harnad, who
seems to be the major participant, pick one?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 26 Jun 87 18:03:26 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem: McCarthy's query

Will the proponents of the various views described below, and those
whose relevant views have not been described below, please stand up?

In article <915@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> In article 208 of comp.ai.digest: JMC@SAIL.STANFORD.EDU (John McCarthy)
> asks:
>
> >     I imagine that the alleged point at issue and a few of the positions
> >     taken could be summarized .....
>
> The point at issue concerns how symbols in a symbol-manipulative
> approach to the modeling of mind can be grounded in something other
> than more symbols so that their meanings and their connections to
> objects can be independent of people's interpretations of them.

> ..... One of
> the positions taken was that connecting a purely symbolic module to
> peripheral (transducer/effector) modules IN THE RIGHT WAY should be
> all you need to ground the symbols.

Caps mine. Position 1 is that the peripherals and the symbolic module
have to be connected in the right way.  Harnad's position is that

> .... a bottom-up hybrid nonsymbolic/symbolic system may be
> needed, one in which the elementary symbols are the names of sensory
> categories picked out by inductive (possibly connectionist) feature-filters
> (categorical representations) and invertible analogs of sensory projections
> (iconic representations).....

This looks like a way to connect peripherals to a symbolic module. To
the extent that I understand it, I like it, except for the invertibility
condition.  If it's the right way, it's a special case of position 1.
Harnad has called the "right way" of position 1 "top-down,"
"hard-wired," and other names, to distance himself from it.  I'm not
sure there are any real proponents of position 1 in such a narrow
sense.  I support position 1 in the wide sense, and I think Harnad does.

> ..... Other alternatives that have been
> mentioned by others in the discussion included: (1) symbol-symbol "grounding"
> is already enough ....

They don't care about the problem, so either they or we can go away.
They (and I) want this discussion to go to another newsgroup.

> ..... and (2) connectionist nets already generate grounded "symbols."

Is that a variant of Harnad's position, i.e., "(possibly connectionist)"?

I think the real subject of discussion is the definition of some of the
technical terms in Harnad's position, and the identification of which
elements are critical and which might be optional.  Might some of the
disagreement disappear if the definitions were more concrete?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Jun 29 03:31:04 1987
Date: Mon, 29 Jun 87 03:30:56 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #157
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Mon, 29 Jun 87 03:21 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa11103; 29 Jun 87 2:28 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa22235; 29 Jun 87 2:25 EDT
Date: Sun 28 Jun 1987 22:26-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #157
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 157

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 26 Jun 87 19:41:11 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


berleant@ut-sally.UUCP (Dan Berleant) of U. Texas CS Dept., Austin, Texas
writes:

>       Are you saying that the categorical representations are to be
>       nonsymbolic?  The review of human concept representation I recently read
>       (Smith and Medin, Categories and Concepts, 1981) came down... hard on
>       the holistic theory of concept representation... The alternative
>       nonsymbolic approach would be the 'dimensional' one. It seems a
>       strongish statement to say that this would be sufficient, to the
>       exclusion of symbolic properties... However, the metric
>       hypothesis -- that a concept is sufficiently characterized by a point
>       in a multi-dimensional space -- seems wrong, as experiments have shown.

Categorical representations are the representations of purely SENSORY
categories, and I am indeed saying that they are to be NONsymbolic.
Let me also point out that the theory I am putting forward represents
a direct challenge to the Roschian line of category research in which
the book you cite belongs. To put it very briefly, I claim that that
line of experimental and theoretical work is not really investigating
the representations underlying the capacity to categorize at all; it is
only looking at the fine tuning of category judgments. The experiments
are typically not addressing the question of how it is that a device
or organism can successfully categorize the inputs in question in the
first place; instead they examine (1) how QUICKLY or EASILY subjects do it,
(2) how TYPICAL (of the members of the category in question) subjects rate
the inputs to be and (3) what features subjects INTROSPECT that they are
using. This completely bypasses the real question of how anyone or anything
actually manages to accomplish the categorization at all.

Let me quickly add that there is nothing wrong with reaction-time
experiments if they suggest hypotheses about the basic underlying
mechanism, or provide ways of testing them. But in this case -- as in
many others in experimental cognitive psychology -- the basic
mechanisms are bypassed and the focus is on fine-tuning questions
that are beside the point (or premature) -- if, that is, the objective
is to explain how organisms or devices actually manage to generate
successful categorization performance given the inputs in question. As
an exercise, see where the constructs you mention above -- "holistic,"
"dimensional," or "metric" representations -- are likely to get you if
you're actually trying to get a device to categorize, as we do.

There is also an "entry point" problem with this line of research,
which typically looks willy-nilly at higher-order, abstract
categories, as well as "basic level" object categories (an incoherent
concept, in my opinion, except as an arbitrary default level), and
even some sensory categories. But it seems obvious that the question
of how the higher-order categories are represented is dependent on how
the lower-order ones are represented, the abstract ones on the
concrete ones, and perhaps all of these depend on the sensory ones.
Moreover, often the inputs used are members of familiar, overlearned
categories, and the task is a trivial one, not engaging the mechanisms
that were involved in their acquisition. In other experiments,
artificial stimuli are used, but it is not clear how representative
these are of the category acquisition process either.

Finally, and perhaps most important: In bypassing the problem of
categorization capacity itself -- i.e., the problem of how devices
manage to categorize as correctly and successfully as they do, given
the inputs they have encountered -- in favor of its fine tuning, this
line of research has unhelpfully blurred the distinction between the
following: (a) the many all-or-none categories that are the real burden
for an explanatory theory of categorization (a penguin, after all, be it
ever so atypical a bird, and be it ever so time-consuming for us to judge
that it is indeed a bird, is, after all, indeed a bird, and we know
it, and can say so, with 100% accuracy every time, irrespective of
whether we can successfully introspect what features we are using to
say so) and (b) true "graded" categories such as "big," "intelligent,"
etc. Let's face the all-or-none problem before we get fancy...

>       To discuss "invariant features... sufficient to guide reliable
>       categorization" sounds like the "classical" theory (as Smith & Medin
>       call it) of concept representation: Concepts are represented as
>       necessary and sufficient features (i.e., there are defining features,
>       i.e. there is a boolean conjunction of predicates for a concept).  This
>       approach has serious problems, not the least of which is the inability
>       of humans to describe these features for seemingly elementary concepts,
>       like "chair", as Weinstein and others point out. I contend that a
>       boolean function (including ORs as well as ANDs) could work, but that
>       is not what was mentioned. An example might be helpful: A vehicle must
>       have a steering wheel OR handlebars. But to remove the OR by saying,
>       a vehicle must have a means of steering, is to rely on a feature which
>       is symbolic, high level, functional, which I gather we are not allowing.

It certainly is the "classical" theory, but the one with the serious
problems is the fine-tuning approach I just described, not the quite
reasonable assumption that if 100% correct, all-or-none categorization
is possible at all (without magic), then there must be a set of features
in the inputs that is SUFFICIENT to generate it. I of course agree
that disjunctive features are legitimate -- but whoever said they
weren't? That was another red herring introduced by this line of
research. And, as I mentioned, "the inability of humans to describe
these features" is irrelevant. If they could do it, they'd be
cognitive modelers! We must INFER what features they're using to
categorize successfully; nothing guarantees they can tell us.

(If by "Weinstein" you mean "Wittgenstein" on "games," etc., I have to remind
you that Wittgenstein did not have the contemporary burden of speaking
in terms of internal mechanisms a device would have to have in order to
categorize successfully. Otherwise he would have had to admit that
"games" are either (i) an all-or-none category, i.e., there is a "right" or
"wrong" of the matter, and we are able to sort accordingly, whether or
not we can introspect the basis of our correct sorting, or (ii) "games"
are truly a fuzzy category, in which membership is arbitrary,
uncertain, or a matter of degree. But if the latter, then games are
simply not representative of the garden-variety all-or-none
categorization capacity that we exercise when we categorize most
objects, such as chairs, tables, birds. And again, there's nothing
whatsoever wrong with disjunctive features.)

Finally, it is not that we are not "allowing" higher-order symbolically
described features. They are the goal of the whole grounding project.
But the approach I am advocating requires that symbolic descriptions
be composed of primitive symbols which are in turn the labels of sensory
categories, grounded in nonsymbolic (iconic and categorical) representations.

>       [Concerning model-theoretic "grounding":] The more statements
>       you have (that you wish to be deemed correct), the more the possible
>       meanings of the terms will be constrained. To illustrate, consider
>       the statement FISH SWIM. Think of the terms FISH and SWIM as variables
>       with no predetermined meaning -- so that FISH SWIM is just another way
>       of writing A B. What variable bindings satisfy this?  Well, many do...
>       Now consider the statement FISH LIVE, where FISH and LIVE are variables.
>       Now there are two statements to be satisfied. The assignment to the
>       variable LIVE restricts the possible assignments to the variable SWIM...
>       Of course, we have many many statements in our minds that must be
>       simultaneously satisfied, so the possible meanings that each word name
>       can be assigned is correspondingly restricted. Could the restrictions be
>       sufficient to require such a small amount of ambiguity that the word
>       names could be said to have intrinsic meaning?...  footnote: This
>       leaves unanswered the question of how the meanings themselves are
>       grounded. Non-symbolically, seems to be the gist of the discussion,
>       in which case logic would be useless for that task even in an
>       "in principle" capacity since the stuff of logic is symbols.

I agree that there are constraints on the correlations of symbols in a
natural language, and that the degrees of freedom probably shrink, in
a sense, as the text grows. That is probably the basis of successful
cryptanalysis. But I still think (and you appear to agree) that even if
the degrees of freedom are close to zero for a natural language's
symbol combinations and their interpretations, this still leaves the
grounding problem intact: How are the symbols connected to their
referents? And what justifies our interpretation of their meanings?
With true cryptanalysis, the decryption of the symbols of the unknown
language is always grounded in the meanings of the symbols of a known
language, which are in turn grounded in our heads, and their
understanding of the symbols and their relation to the world. But
that's the standard DERIVED meaning scenario, and for cognitive
modeling we need INTRINSICALLY grounded symbols. (I do believe,
though, that the degrees-of-freedom constraint on symbol combinations
does cut somewhat into Quine's claims about the indeterminacy of
radical translation, and ESPECIALLY for an intrinsically grounded
symbol system.)
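
To make the shrinking-degrees-of-freedom point concrete, here is a toy
sketch in Python (the vocabulary, "facts," and all names are purely
illustrative assumptions, not anything from either poster's actual work):

    from itertools import product

    # Hypothetical mini-world: candidate meanings, and which
    # (subject, predicate) meaning pairs count as true.
    MEANINGS = ["fish", "bird", "stone", "swimming", "living"]
    TRUE_FACTS = {("fish", "swimming"), ("fish", "living"),
                  ("bird", "living")}

    def consistent_bindings(statements, symbols):
        # Yield every assignment of meanings to symbols under which
        # all the statements come out true.
        for values in product(MEANINGS, repeat=len(symbols)):
            env = dict(zip(symbols, values))
            if all((env[s], env[p]) in TRUE_FACTS for s, p in statements):
                yield env

    symbols = ["FISH", "SWIM", "LIVE"]
    one = list(consistent_bindings([("FISH", "SWIM")], symbols))
    two = list(consistent_bindings([("FISH", "SWIM"),
                                    ("FISH", "LIVE")], symbols))
    print(len(one), len(two))   # 15 -> 5: each added statement prunes readings
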
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 26 Jun 87 22:17:16 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
Cambridge, MA writes:

>       I don't see any difference between "physical" and "merely theoretical"
>       invertibility... Surely you don't mean that a transformation-inversion
>       capability must actually be present in the device for it to count as
>       "analog" in your sense.  (Else brains, for example, wouldn't count).

I think this is partly an empirical question. "Physically possible"
invertibility is enough for an analog transformation, but actual
physical invertibility may be necessary for an iconic representation
that can generate all of our discrimination capacities. Avoiding
"merely theoretical" invertibility is also part of avoiding any reliance
on mediation by our theoretical interpretations in order to get an
autonomous, intrinsically grounded system.

>       the *semantic* meaning  of a symbol is still left largely unconstrained
>       even after you take account of its "grounding" in perceptual
>       categorization. This is because what matters for intentional content
>       is not the objective property in the world that's being detected, but
>       rather how the subject *conceives* of that external property, a far
>       more slippery notion... primitive people may be able to reliably
>       categorize certain large-scale atmospheric electrical discharges;
>       nevertheless, the semantic content of their corresponding states might
>       be "Angry gods nearby" or some such.

I agree that symbol grounding cannot be based on the "objective
property" that's being detected. Categorical representations in my
grounding model are approximate. All they do is sort and label the confusable
alternatives that have been sampled, using the provisional features
that suffice to generate reliable sorting performance according to the feedback
that defines "right" and "wrong." There is always a context of
confusable alternatives, and which features are used to sort reliably
is always a "compared to what?" matter. The exact "objective property" they
pick out is never an issue, only whether they can generate reliable
asymptotic categorization performance given that sample and those
feedback constraints. The representation is indifferent to whether
what you are calling "water," is really "twin-water" (with other
objective properties), as long as you can sort it "correctly" according
to the feedback (say, from the dictates of thirst, or a community of
categorizing instructors).
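
One crude way to picture "provisional features tuned by feedback" is a
learner that revises a threshold whenever corrective feedback says its
label was wrong. The sketch below is only an illustrative gloss (the
function, data and learning rule are assumptions, not the model itself):

    def learn_threshold(samples, labels, lr=0.1, passes=20):
        # Start with a provisional boundary and revise it on each
        # miscategorization signalled by feedback.
        t = 0.0
        for _ in range(passes):
            for x, y in zip(samples, labels):
                guess = x > t
                if guess != y:               # feedback says "wrong"
                    t += lr if not y else -lr
        return t

    # "Confusable alternatives" sampled to date:
    t = learn_threshold([0.2, 0.8, 1.3, 1.9], [False, False, True, True])
    print(t)   # a boundary sufficient for this sample -- and revisable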

As to what people "conceive" themselves to be categorizing: My model
is proposed in a framework of methodological epiphenomenalism. I'm
interested in what's going on in people's heads only inasmuch as it is
REALLY generating their performance, not just because they think or
feel it is. So, for example, in criticizing the Roschian approach to
categorization in my reply to Dan Berleant I suggested that it was
irrelevant what features subjects BELIEVED they were using to
categorize, say, chairs; what matters is what features they (or any
organism or device in a similar input situation) really ARE using.
[This does not contradict my previous point about the irrelevance of
"objective properties." "Features" refers to properties of the
proximal projection on the device's sense receptors, whereas
"properties" would be the essential characteristics of distal objects
in the world. Feature detectors are blind to distal differences that
are not preserved in the proximal projection.]

On the other hand, "Angry gods nearby" is not just an atomic label for
"thunder" (otherwise it WOULD be equivalent to it in my model -- both
labels would pick out approximately the same thing); in fact, it is
decomposable, and hence has a different meaning in virtue of the
meanings of "angry" and "gods." There should be corresponding internal
representational differences (iconic, categorical and symbolic) that
capture that difference.

>       Another well-known obstacle to moving from an objective to an
>       intentional description is that the latter contains an essentially
>       normative component, in that we must make some distinction between
>       correct and erroneous classification. For example, we'd probably
>       like to say that a frog has a fly-detector which is sometimes wrong,
>       rather than a "moving-spot-against-a-fixed-background" detector
>       which is infallible. Again, this distinction seems to depend on fuzzy
>       considerations about the purpose or functional role of the concept
>       in question... [In his reply on this point to Dan Berleant,
>       Weinstein continues:] the philosophical problem is to say why any
>       response should count as an *error* at all. What makes it wrong?
>       I.e. who decides which "concept" -- "fly" or "moving-spot..." -- the
>       frog is trying to apply? The objective facts about the frog's
>       perceptual abilities by themselves don't seem to tell you that in
>       snapping out its tongue at a decoy, it's making a *mistake*. To
>       say this, an outside interpreter has to make some judgement about what
>       the frog's brain is trying to accomplish by its detection of moving
>       spots. And this makes the determination of semantic descriptions a
>       fuzzy matter.

I don't think there's any problem at all of what should count as an "error"
for my kind of model. The correctness or incorrectness of a label is
always determined by feedback -- either ecological, as in evolution
and daily nonverbal learning, or linguistic, where it is conventions
of usage that determine what we call what. I don't see anything fuzzy about
such a functional framework. (The frog's feedback, by the way,
probably has to do with edibility, so (i) "something that affords eating"
is probably a better "interpretation" of what it's detecting. And, to
the extent that (ii) flies and (iii) moving spots are treated indifferently by
the detector, the representation is approximate among all three.
The case is not like that of natives and thunder, since the frog's
"descriptions" are hardly decomposable. Finally, there is again no
hope of specifying distal "objective properties" ["bug"/"schmug"] here
either, as approximateness continues to prevail.)

>       Some of the things you say also suggest that you're attempting to
>       resuscitate a form of classical empiricist sensory atomism, where the
>       "atomic" symbols refer to sensory categories acquired "by acquaintance"
>       and the meaning of complex symbols is built up from the atoms "by
>       description". This approach has an honorable history in philosophy;
>       unfortunately, no one has ever been able to make it work. In addition
>       to the above considerations, the main problems seem to be: first,
>       (1) that no principled distinction can be made between the simple
>       sensory concepts and the complex "theoretical" ones; and second,
>       (2) that very little that is interesting can be explicitly defined in
>       sensory terms (try, for example, "chair")...[In reply to Berleant,
>       Weinstein continues:] Of course *some* concepts can be acquired by
>       definition. However, the "classical empiricist" doctrine is committed
>       to the further idea that there is some privileged set of *purely
>       sensory* concepts and that all non-sensory concepts can be defined in
>       terms of this basis. This is what has never been shown to work. If you
>       regard "juice" as a "primitive" concept, then you do not share the
>       classical doctrine. (And if you do not, I invite you to try giving
>       necessary and sufficient conditions for juicehood.)

You're absolutely right that this is a throwback to seventeenth-century
bottom-upism.  In fact, in the CP book I call the iconic and
categorical representations the "acquaintance system" and the symbolic
representations the "description system." The only difference is that
I'm only claiming to be giving a theory of categorization. Whether or
not this captures "meaning" depends (for me at any rate) largely on
whether or not such a system can successfully pass the Total Turing
Test. It's true that no one has made this approach work. But it's also
true that no one has tried. It's only in today's era of computer
modeling, robotics and bioengineering that these mechanisms will begin
to be tested to see whether or not they can deliver the goods.

To reply to your "two main problems": (1) Even an elementary sensory
category such as "red" is already abstract once you get beyond the
icon to the categorical representation. "Red" picks out the
electromagnetic wave-lengths that share the feature of being above and
below a certain threshold. That's an abstraction. And in exchange for
generating a feature-detector that reliably picks it out, you get a
label -- "red" -- which can now enter into symbolic descriptions (e.g.,
"red square"). Categorization is abstraction. As soon as you've left
the realm of invertible icons, you've begun to abstract, yet you've
never left the realm of the senses. And so it goes, bottom up, from
there onward.
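
As a toy sketch of that progression (the wavelength band, names and
shape encoding are illustrative assumptions only):

    def red(wavelength_nm):
        # Categorical feature-filter: label inputs inside a band.
        # The cutoffs are merely illustrative.
        return 620.0 <= wavelength_nm <= 750.0

    def square(shape):
        return shape == "square"

    # The grounded labels can now enter symbolic descriptions:
    def red_square(wavelength_nm, shape):
        return red(wavelength_nm) and square(shape)

    print(red_square(700.0, "square"))   # True
    print(red_square(530.0, "square"))   # False: a green square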

(2) As to sensory "definitions": I don't think this is the right thing
to look for, because it's too hard to find a valid "entry point" into
the bottom-up hierarchy. I doubt that "chair" or "juice" are sensory
primitives, picked out purely by sensory feature detectors. They're
probably represented by symbolic descriptions such as "things you can
sit on" and "things you can drink," and of course those are just the
coarsest of first approximations. But the scenario looks pretty
straightforward: Even though it's flexible enough to be revised to
include a chair (suitably homogenized) as a juice and a juice (for a
bug?) as a chair, it seems very clear that it is the resources of (grounded)
symbolic description that are being drawn upon here in picking out
what is and is not a chair, and on the basis of what features.

The categories are too interrelated (and approximate, and provisional) for
an exhaustive "definition," but provisional descriptions that will get
you by in your sorting and labeling -- and, more important, are
revisable and updatable, to tighten the approximation -- are certainly
available and not hard to come by. "Necessary and sufficient conditions for
juicehood," however, are a red herring. All we need is a provisional
set of features that will reliably sort the instances as environmental and
social feedback currently dictates. Remember, we're not looking for
"objective properties" or ontic essences -- just something that will
guide reliable sorting according to the contingencies sampled to date.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 30 08:21:23 1987
Date: Tue, 30 Jun 87 08:21:13 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #158
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 30 Jun 87 07:54 EDT
Received: from relay.cs.net by RELAY.CS.NET id af11103; 29 Jun 87 2:31 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa22249; 29 Jun 87 2:28 EDT
Date: Sun 28 Jun 1987 22:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #158
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 158

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 27 Jun 87 01:09:41 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: McCarthy's query


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:

>       But a "physically analog" sensory process (as distinct from a digital
>       one) can be approximately modeled (to within the noise) by a continuous
>       transformation. The continuous approximation allows us to regard the
>       analog transformation as image-forming (iconic). But only the
>       continuous approximation is invertible.

I have no quarrel with this, in fact I make much the same point --
that iconic representations are approximate too -- in the chapter
describing the three kinds of representation. Is there any reason for
expecting I would object?

>       the "hybrid" three-layer system... does not have a "symbol-cruncher
>       hardwired to peripheral modules" because there is a feature extractor
>       (and classifier) in between.  The main point is the presence or
>       absence of the feature extractor...  The symbol-grounding problem
>       arises because the symbols are discrete, and therefore have to be
>       associated with discrete objects or classes.  Without the feature
>       extractor, there would be no way to derive discrete objects from the
>       sensory inputs. The feature extractor obviates the symbol-grounding
>       problem.

The problem certainly is not just that of discrete symbols needing to pick
out discrete objects. You are vastly underestimating the problem of
sensory categorization, sensory learning, and the relation between
lower and higher-order categories. Nor is it obvious that symbol manipulation
can still be regarded as just symbol manipulation when the atomic symbols
are constrained to be the labels of sensory categories. That's a
bottom-up constraint, and symbolic AI normally expects to float down
onto its sensors top-down. Imagine if your "setq" statements were
constrained by what your elementary symbols were connected to, and by
their respective causal interrelations with other nonsymbolic sensory
representations and their associated labels.
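
A whimsical sketch of that constraint (entirely a construction for
illustration; the binder, registry and detector names are invented):

    groundings = {}          # atomic symbol -> nonsymbolic detector

    def define_grounded(symbol, detector):
        groundings[symbol] = detector

    def setq(env, symbol, value):
        # Only atomic symbols with a registered sensory grounding
        # may be bound.
        if symbol not in groundings:
            raise ValueError("ungrounded atomic symbol: " + symbol)
        env[symbol] = value

    define_grounded("RED", lambda wl: 620.0 <= wl <= 750.0)
    env = {}
    setq(env, "RED", 700.0)          # fine: RED has a detector
    try:
        setq(env, "JUSTICE", 1)      # rejected: no sensory grounding
    except ValueError as e:
        print(e)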

>       Why does Harnad say "invertibility is a necessary condition
>       for iconic representations..., NOT for grounding"

Because the original statement of mine that you quote was a reply to a
query about whether ALL representations had to be invertible for grounding.
(It was accompanied by alleged counterexamples -- grounded but noninvertible
percepts.) My reply indicated that only iconic ones had to be invertible,
but that both iconic and categorical (noninvertible) ones were needed to
ground symbols.

>       Position 1 [on the symbol grounding problem] is that the peripherals
>       and the symbolic module have to be connected in the right way. Harnad's
>       position is... a special case of position 1.

I'm afraid not. I don't think there will be independent peripheral
modules and symbolic modules suitably interconnected in the hybrid
device that passes the Total Turing Test. I think a lot of what we
consider cognition will be going on in the nonsymbolic iconic and categorical
systems (discrimination, categorization, sensory learning and
generalization) and that symbol manipulation will be constrained in
ways that don't leave it in any way analogous to the notion of an
independent functional module, operating on its own terms (as in
standard AI), but connected at some critical point with the
nonsymbolic modules. When I spoke earlier of the "connections" of the
atomic symbols I had in mind something much more complexly
interdigitated and interdependent than can be captured by anything
that remotely resembles position 1. Position 1 is simply AI's pious
hope that a pure "top-down" approach can expect to meet up with a
bottom-up one somewhere in between. Mine is not a special case of
this; it's a rival.

>       "...and (2) connectionist nets already generate grounded "symbols." Is
>       that a variant of Harnad's position, i.e., "(possibly connectionist)"?

No. In my model connectionistic processes are just one possible
candidate for the mechanism that finds the features that will reliably
pick out a learned category. They would just be a component in the
categorical representational system. But there are much more ambitious
connectionistic views than that, for example, that connectionism can
usurp the role of symbolic representations altogether or (worse) that
they ARE symbolic (in some yet to be established sense). As far as I'm
concerned, the latter would entail a double grounding problem for
connectionism, the first to ground its interpretation of its states as
symbolic states, and then to ground the interpretations of the
symbolic states themselves (which is the standard symbol grounding problem).

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 27 Jun 87 14:32:42 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: Correction re.
         Approximationism


In responding to Cugini and Brilliant I misinterpreted a point that
the former had made and the latter reiterated. It's a point that's
come up before: What if the iconic representation -- the one that's
supposed to be invertible -- fails to preserve some objective property
of the sensory projection? For example, what if yellow and blue at the
receptor go into green at the icon? The reply is that an analog
representation is only analog in what it preserves, not in what it fails
to preserve. Icons are hence approximate too. If all retinal squares,
irrespective of color, go into gray icons, I have icons of the
squareness, but not of the colors. Or, to put it another way, the
grayness is approximate as between all the actual colors (and gray).

There is no requirement that all the features of the sensory
projection be preserved in icons; just that some of them should be --
enough to subserve our discrimination capacities. This is analogous to
the fact that the sensory projection itself need not (and does not,
and cannot) preserve all of the properties of the distal object. To
those it fails to preserve -- and that we cannot detect by instruments
or inference -- we are fated to remain "blind." But none of this
information loss in either sensory projections or icons (or, for that
matter, categorical representations) compromises groundedness. It just
means that our representations are doomed to be approximations.
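
In code the point might look like this (a toy transform for
illustration; the tuple layout is assumed): the icon is analog --
invertible -- with respect to the shape it preserves, and approximate
with respect to the color it discards:

    def icon(projection):
        shape, color = projection
        return shape        # the "gray" icon keeps shape, drops color

    # Discrimination by shape survives; discrimination by color cannot
    # be recovered from the icon alone:
    print(icon(("square", "yellow")) == icon(("circle", "blue")))   # False
    print(icon(("square", "yellow")) == icon(("square", "blue")))   # True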

Finally, it must be recalled that my grounding scheme is proposed in a
framework of methodological epiphenomenalism: It only tries to account
for performance capacity (discrimination, identification,
description), not qualitative experience. So "what it is like to see
yellow" is not part of my evidential burden: just what it takes to
discriminate, identify and describe colors as those who see yellow do...
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 27 Jun 87 13:22:19 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <917@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> ...  blurred the distinction between the
> following: (a) the many all-or-none categories that are the real burden
> for an explanatory theory of categorization (a penguin, after all, be it
> ever so atypical a bird, ... is, after all, indeed a bird, and we know
> it, and can say so, with 100% accuracy every time, ....
> ... and (b) true "graded" categories such as "big," "intelligent," ...

> ......
> "games" are either (i) an all-or-none category, i.e., there is a "right" or
> "wrong" of the matter, and we are able to sort accordingly, ...
> ... or (ii) "games"
> are truly a fuzzy category, in which membership is arbitrary,
> uncertain, or a matter of degree. But if the latter, then games are
> simply not representative of the garden-variety all-or-none
> categorization capacity that we exercise when we categorize most
> objects, such as chairs, tables, birds....

Now, much of this discussion is out of my field, but (a) I would like
to share in the results, and (b) I understand membership in classes
like "bird" and "chair."

I learned recently that I can't categorize chairs with 100% accuracy.
A chair used to be a thing that supported one person at the seat and
the back, and a stool had no back support.  Then somebody invented a
thing that supported one person at the seat, the knees, but not the
back, and I didn't know what it was.  As far as my sensory
categorization was concerned at the time, its distinctive features were
inadequate to classify it.  Then somebody told me it was a chair.  Its
membership in the class "chair" was arbitrary.  Now a "chair" in my
lexicon is a thing that supports the seat and either the back or the
knees.
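
That revised lexical entry is just a disjunctive boolean feature set; as
a sketch (the predicate and argument names are invented for illustration):

    def is_chair(supports_seat, supports_back, supports_knees):
        # Revised definition: seat support AND (back OR knee support).
        return supports_seat and (supports_back or supports_knees)

    print(is_chair(True, True, False))    # ordinary chair -> True
    print(is_chair(True, False, True))    # kneeling chair -> True
    print(is_chair(True, False, False))   # backless stool -> False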

Actually, I think I perceive most chairs by recognizing the object
first as a familiar thing like a kitchen chair, a wing chair, etc., and
then I name it with the generic name "chair."  I think Harnad would
recognize this process.  The class is defined arbitrarily by inclusion
of specific members, not by features common to the class.  It's not so
much a class of objects, as a class of classes....

If that is so, then "bird" as a categorization of "penguin" is purely
symbolic, and hence is arbitrary, and once the arbitrariness is defined
out, that categorization is a logical, 100% accurate, deduction.  The
class "penguin" is closer to the primitives that we infer inductively
from sensory input.

But the identification of "penguin" in a picture, or in the field, is
uncertain because the outlines may be blurred, hidden, etc.  So there
is no place in the pre-symbolic processing of sensory input where 100%
accuracy is essential.  (This being so, there is no requirement for
invertibility.)

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 28 Jun 87 17:52:03 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: The symbol grounding problem: Against Rosch & Wittgenstein


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:

>       Why require 100% accuracy in all-or-none categorizing?... I learned
>       recently that I can't categorize chairs with 100% accuracy.

This is a misunderstanding. The "100% accuracy" refers to the
all-or-none-ness of the kinds of categories in question. The rival
theories in the Roschian tradition have claimed that many categories
(including "bird" and "chair") do not have "defining" features. Instead,
membership is either fuzzy or a matter of degree (i.e., percent), being
based on degree of similarity to a prototype or to prior instances, or on
"family resemblances" (as in Wittgenstein on "games"), etc.. I am directly
challenging this family of theories as not really providing a model for
categorization at all. The "100% accuracy" refers to the fact that,
after all, we do succeed in performing all-or-none sorting and
labeling, and that membership assignment in these categories is not
graded or a matter of degree (although our speed and "typicality
ratings" may be).

I am not, of course, claiming that noise does not exist and that errors
may not occur under certain conditions. Perhaps I should have put it this way:
Categorization performance (with all-or-none categories) is highly reliable
(close to 100%) and MEMBERSHIP is 100%. Only speed/ease of categorization and
typicality ratings are a matter of degree. The underlying representation must
hence account for all-or-none categorization capacity itself first,
then worry about its fine-tuning.

This is not to deny that even all-or-none categorization may encounter
regions of uncertainty. Since ALL category representations in my model are
provisional and approximate (relative to the context of confusable
alternatives that have been sampled to date), it is always possible that
the categorizer will encounter an anomalous instance that he cannot classify
according to his current representation. The representation must
hence be revised and updated under these conditions, if ~100% accuracy
is to be re-attained. This still does not imply that membership is
fuzzy or a matter of degree, however, only that the (provisional
"defining") features that will successfully sort the members must be revised
or extended. The approximation must be tightened. (Perhaps this is
what happened to you with your category "chair.") The models for the
true graded (non-all-or-none) and fuzzy categories are, respectively,
"big" and "beautiful."

>       The class ["chair," "bird"] is defined arbitrarily by inclusion
>       of specific members, not by features common to the class. It's not so
>       much a class of objects, as a class of classes.... If that is so,
>       then "bird" as a categorization of "penguin" is purely symbolic, and
>       hence is arbitrary, and once the arbitrariness is defined
>       out, that categorization is a logical, 100% accurate, deduction.
>       The class "penguin" is closer to the primitives that we infer
>       inductively [?] from sensory input... But the identification of
>       "penguin" in a picture, or in the field, is uncertain because the
>       outlines may be blurred, hidden, etc.  So there is no place in the
>       pre-symbolic processing of sensory input where 100% accuracy is
>       essential. (This being so, there is no requirement for invertibility.)

First, most categories are not arbitrary. Physical and ecological
constraints govern them. (In the case of "chair," this includes the
Gibsonian "affordance" of whether they're something that can be sat
upon.) One of the constraints may be social convention (as in
stipulations of what we call what, and why), but for a
categorizer that must learn to sort and label correctly, that's just
another constraint to be satisfied. Perhaps what counts as a "game" will
turn out to depend largely on social stipulation, but that does not make
its constraints on categorization arbitrary: Unless we stipulate that
"gameness" is a matter of degree, or that there are uncertain cases
that we have no way to classify as "game" or "nongame," this category
is still an all-or-none one, governed by the features we stipulate.
(And I must repeat: Whether or not we can introspectively report the features
we are actually using is irrelevant. As long as reliable, consensual,
all-or-none categorization performance is going on, there must be a set of
underlying features governing it -- both with sensory and more
abstract categories. The categorization theorist's burden is to infer
or guess what those features really are.)

Nor is "symbolic" synonymous with arbitrary. In my grounding scheme,
for example, the primitive categories are sensory, based on
nonsymbolic representations. The primitive symbols are then the names
of sensory categories; these can then go on to enter into combinations
in the form of symbolic descriptions. There is a very subtle "entry-point"
problem in investigating this bottom-up quasi-hierarchy, however:
Is a given input sensory or symbolic? And, somewhat independently, is
its categorization mediated by a sensory representation or a symbolic
one (or both, since there are complicated interrelations [especially
inclusion relations] between them, including redundancies and sometimes
even incoherencies)? The Roschian experimental and theoretical line of
work I am criticizing does not attempt to sort any of this out, and no
wonder, because it is not really modeling categorization performance
in the first place, just its fine tuning.

As to invertibility: I must again repeat, an iconic representation is
only analog in the properties of the sensory projection that it
preserves, not those it fails to preserve. Just as our successful
all-or-none categorization performance dictates that a reliable
feature set must have been selected, so our discrimination performance
dictates the minimal resolution capacity and invertibility there must be
in our iconic representations.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: Sun 28 Jun 87 15:27:22-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Fuzzy Symbolism

  From: mind!harnad@princeton.edu  (Stevan Harnad)

  Finally, and perhaps most important: In bypassing the problem of
  categorization capacity itself -- i.e., the problem of how devices
  manage to categorize as correctly and successfully as they do, given
  the inputs they have encountered -- in favor of its fine tuning, this
  line of research has unhelpfully blurred the distinction between the
  following: (a) the many all-or-none categories that are the real burden
  for an explanatory theory of categorization (a penguin, after all, be it
  ever so atypical a bird, and be it ever so time-consuming for us to judge
  that it is indeed a bird, is, after all, indeed a bird, and we know
  it, and can say so, with 100% accuracy every time, irrespective of
  whether we can successfully introspect what features we are using to
  say so) and (b) true "graded" categories such as "big," "intelligent,"
  etc. Let's face the all-or-none problem before we get fancy...

Is a mechanical rubber penguin a penguin?  Is a dead or dismembered
penguin a penguin?  How about a genetically damaged or altered penguin?
When does a penguin embryo become a penguin?  When does it become a
bird?  I think your example depends on circularities inherent in our
use of natural language.  I can't unambiguously define the class of
penguins, so how can I be 100% certain that every penguin is a bird?
If, on the other hand, we are dealing only in abstractions, and the
only "penguin" involved is a idealized living adult penguin bird, then
the question is a tautology.  We would then be saying that we are 100%
certain that our abstraction satisfies its own sufficient conditions --
and even that could change if scientists someday discover incontrovertible
evidence that penguins are really fish.

In short, every category is a graded one except for those that we
postulate to be exact as part of their defining characteristics.


After writing the above, I saw the following reply:

  I am not, of course, claiming that noise does not exist and that errors
  may not occur under certain conditions. Perhaps I should have put it
  this way: Categorization performance (with all-or-none categories) is
  highly reliable (close to 100%) and MEMBERSHIP is 100%. Only
  speed/ease of categorization and typicality ratings are a matter of
  degree. The underlying representation must hence account for
  all-or-none categorization capacity itself first, then worry about its
  fine-tuning.

  This is not to deny that even all-or-none categorization may encounter
  regions of uncertainty. Since ALL category representations in my model are
  provisional and approximate (relative to the context of confusable
  alternatives that have been sampled to date), it is always possible that
  the categorizer will encounter an anomalous instance that he cannot classify
  according to his current representation. The representation must
  hence be revised and updated under these conditions, if ~100% accuracy
  is to be re-attained. This still does not imply that membership is
  fuzzy or a matter of degree, however, only that the (provisional
  "defining") features that will successfully sort the members must be revised
  or extended. The approximation must be tightened.

You are entitled to such an opinion, of course, but I do not accept the
position as proven.  We do, of course, sort and categorize objects when
forced to do so.  At the point of observable behavior, then, some kind
of noninvertible or symbolic categorization has taken place.  Such
behavior, however, is distinct from any of the internal representations
that produce it.  I can carry fuzzy and even conflicting representations
until -- and often long after -- the behavior is initiated.  Even at
the instant of commitment, my representations need be unambiguous only
in the implicit sense that one interpretation is momentarily stronger
than the other -- if, indeed, the choice is not made at random.

It may also be true that I do reduce some representations to a single
neural firing or to some other unambiguous event -- e.g., when storing
a memory.  I find this unlikely as a general model.  Coarse coding,
graded or frequency encodings, and widespread activation seem better
models of what's going on.  Symbolic reasoning exists in pure form
only on the printed page; our mental manipulation even of abstract
symbols is carried out with fuzzy reasoning apparatus.
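
For instance, in a coarse code no single unit's firing is an unambiguous
event; the stimulus is carried by the graded pattern across overlapping
detectors. A minimal sketch (tuning centers and width are arbitrary
assumptions):

    import math

    def coarse_code(x, centers=(0.0, 0.5, 1.0), width=0.35):
        # Graded activations of a few broadly tuned, overlapping units.
        return [math.exp(-((x - c) / width) ** 2) for c in centers]

    print(coarse_code(0.4))   # 0.4 is encoded only by the whole pattern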

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Jun 29 03:31:18 1987
Date: Mon, 29 Jun 87 03:31:07 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #159
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Mon, 29 Jun 87 03:25 EDT
Received: from relay.cs.net by RELAY.CS.NET id ab11152; 29 Jun 87 2:38 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa22302; 29 Jun 87 2:37 EDT
Date: Sun 28 Jun 1987 23:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #159
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 159

Today's Topics:
  Future Directions - Drexler and Nanotechnology,
  History - AI in the 13th Century & Otto Selz,
  Binding - Computer Composition of Music

----------------------------------------------------------------------

Date: Mon, 22 Jun 87 11:03:03 EDT
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: nano-engineering


There is a good summary article in _Whole Earth Review_ (Spring 1987),
pp.  8-14:  A Technology of Tiny Things, Nanotechnics and Civilization,
by K.  Eric Drexler.  The bio in the footnote at the beginning says
Drexler got his SB from MIT in interdisciplinary science, followed by a
Master's in Aeronautics and Astronautics also at MIT.  Recently he
founded the MIT Nanotechnology Study Group to develop the science
described in the article and book.  Some excerpts from the former:

        Whatever is, is obviously possible.  Life is.  Therefore that
        demonstrates the possibility of molecular machines able to build
        other molecular machines--the essence of both life and a new
        method called nanotechnology. . . .

        Whatever obeys natural law is also possible.  Science now
        understands the laws of ordinary matter and energy well enough
        for most engineering purposes.  Nanotechnology will enable us to
        build new kinds of things.  Physical laws let us calculate what
        some of these things will be able to do.

        The basic idea of nanotechnology is straightforward. . . .
        Molecular machines are simply machines made of molecular-scale
        parts having carefully arranged atoms.

        . . . Nanotechnology assemblers will be molecular machines that
        grab reactive molecules and bring them together in a controlled
        way, building up a complex structure a few atoms at a time.

        . . .

        There is no new science in nanotechnology, only new engineering.
        The possibility of nanotechnology was implicit in the science
        known over 30 years ago, though no one saw it then.  During the
        1940s and 1950s, biochemistry revealed more and more of the
        molecular machinery of the cell.  In 1959, physicist Richard
        Feynman touched on a similar idea in a talk:  he spoke of using
        small machines to build smaller machines ( . . . and so on).  He
        suggested that the smallest machines would be able to "put atoms
        down where a chemist says" to make a "chemical substance." But
        Feynman didn't explain how these machines were to work, and said
        they "will really be useless," because chemists will be able to
        make whatever they want without them.  Decades passed with
        little followup.

        [Molecular biology advanced, Drexler's work at MIT indicated in
        winter of 1976 the possibility of "what we now call assemblers";
        he describes several paths for evolution of nanotechnics from
        present science and technology. --BN]

        As you can see, the starting point will make little difference.
        All roads lead to assemblers, and assemblers will let us make
        almost anything we are clever enough to design.

        . . .

        In a world full of competing companies and governments, only
        global disaster or global domination could block the advance of
        technology.  This seems to be a fundamental principle; if so, it
        must guide our plans.

        . . .

        What can nanotechnology do for us?  Almost anything we want, in
        physical terms.  Once we have the software to direct them,
        replicating assemblers can build almost anything, including
        more of themselves, without human labor.  Because they will
        handle matter atom by atom, as trees do, they can be as clean
        as trees, or cleaner.  They need not produce smoke or sludge or
        toxic chemical byproducts.

        . . .

        One important application will be the further miniaturization of
        computers.  Detailed study shows that assemblers could build the
        equivalent of a large, modern computer in about 1/1000 of the
        volume of a typical human cell.  This could be a mechanical
        computer (they're easier to analyze than electronic computers),
        but moving parts on this scale can be small and fast enough to
        make the computer faster than today's electronic machines.

        . . .

Drexler also writes at some length about the enormous potential for
danger and disruption of society and biosphere.

        Our survival may depend on our ability to tell sense from
        nonsense regarding a complex technology that doesn't exist yet.
        The nonsense will be abundant, no matter what we do:  any field
        on the borders of science fiction, quantum mechanics, and
        biology is well positioned to import a lot of prefabricated
        crap; any field where experiments and experience aren't yet
        possible is going to have great trouble getting rid of that
        crap.  When someone says "nanotechnology" and begins to expound,
        beware!

        . . . a political movement to deal with nanotechnology must be a
        movement to guide advance, not to stop it.  I've already argued
        that attempts to stop it would be futile; here are some reasons
        for thinking such efforts would be socially irresponsible.

I leave this and much more for the interested reader to follow up in the
Spring issue of WER.

(This same issue by the way has Schank's `Reality Club' contribution
on why math should not be taught in public schools.  As you know from
his AI work, it cannot be because he dislikes math or is bad at it.)


Bruce Nevin
bn@cch.bbn.com

(This is my own personal communication, and in no way expresses or
implies anything about the opinions of my employer, its clients, etc.)

------------------------------

Date: 24 Jun 87 15:39 PDT
From: JJD.MDC@OFFICE-1.ARPA
Subject: Drexler and Nanotechnology

Sorry I missed the NPR report on K. Eric Drexler and _Engines of Creation_.
Here's some background:

The book _Engines of Creation_ was published by Anchor Press / Doubleday in
1986.  The excited foreword is by Marvin Minsky.  I know from the copy that I
just checked out that Drexler discusses AI in the book, but I am not sure what
his vantage point is.  At minimum, of course, is the potential of
nanotechnology as a way to build much denser hardware.  I suspect that Drexler
also at least touches on the idea of this enabling a critical mass and
consciousness.  This will come later.  It's a good read.

The book was excerpted in the Spring 1987 issue of _Whole Earth Review_.  This
article is provocative and has some good conceptual illustrations.

The cover bio of Drexler identifies him as a "Research Affiliate at the MIT
Space Systems Laboratory."  He is pictured with his back to the camera, staring
at an imposing and eclectic pile of books and a terminal.

I first encountered Drexler in the pages of the Summer 1976 issue of
_CoEvolution Quarterly_ (ancestor of _Whole Earth Review_).  The theme of that
issue (later extended as a book) was Gerard O'Neill's concept of space
colonization and industrialization.  Drexler was a very articulate advocate who
was actually doing something about it.  He was a graduate student at the time,
and had built a six-foot-long track that could electromagnetically launch a
bucket of water into a wall at 80 miles per hour.  It was a demonstration of
the feasibility of a mass driver to be built on the moon and to launch 10
kilogram sacks of material to colony construction sites.  He contributed to the
book that came from the initial article, demolishing his opponents with gleeful
arrogance (apparently since moderated, perhaps by exposure to audiences in the
last few years).

I suspect that Drexler has a general interest in big fixes in answer to
contemporary dilemmas, motivating his fascination with both space
industrialization and with nanotechnology.  More specifically, his earlier
interest in space has probably driven his interest in nanotechnology.
Nanotechnology promises to revolutionize both the ways things are made, and
their resulting performance characteristics.  It makes vast systems of
capital-intensive, high-performance technology seem more approachable.

I leave further analysis to the next century's graduate theses re: contemporary
intellectual history.

------------------------------

Date: 26 Jun 87 18:28:37 GMT
From: jbn@glacier.stanford.edu  (John B. Nagle)
Subject: Re: AI in the 13th Century


     A thorough discussion of the Ars Magna ("Great Art") of Ramon Lull
can be found in Martin Gardner's "Science - Good, Bad and Bogus" (ISBN
0-87975-144-4).  The Great Art is basically a system for exhaustively
combining terms, using a stack of disks, each containing a set of related
terms.  For example, one set of Lull's disks contained the following words:

        1.  God, creature, operation
        2.  difference, similarity, contrariety
        3.  beginning, middle, end
        4.  majority, equality, minority
        5.  affirmation, negation, doubt

In operation, one chooses one term from each set, more or less at random.
One can thus explore, Gardner writes, "such topics as the beginning and
end of God, differences and similarities of animals, and so on."

The Great Art provides no assistance in selecting useful combinations from
the many produced, or for doing anything with them once selected.  It
provides only a means for enumerating the possibilities inherent in some
taxonomic scheme.  So, while the Great Art may be useful as a prod
for creative thinking by humans, it does not provide anything more profound.
It does, though, generate the illusion of profundity, which provides much of
its appeal.
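
The Great Art mechanizes trivially; a few lines of Python (a sketch
using the five sets quoted above) enumerate all 3**5 = 243 combinations:

    from itertools import product

    disks = [
        ["God", "creature", "operation"],
        ["difference", "similarity", "contrariety"],
        ["beginning", "middle", "end"],
        ["majority", "equality", "minority"],
        ["affirmation", "negation", "doubt"],
    ]

    for combo in product(*disks):     # one term from each disk
        print(", ".join(combo))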


                                        John Nagle

------------------------------

Date: 24 Jun 87 13:23:51 GMT
From: edwards@unix.macc.wisc.edu  (mark edwards)
Subject: AI in the 13th Century


 A number of people have asked about the reference to AI in the
 13th Century. Well I finally dug up the ole notebook and picked
 it out. Unfortunately all I have is a name. The name is

  Ramon Lull


 Since the book was in Latin, very old and so forth, I guess I thought
 I'd never check it out. Apparently Ramon was a popular person in the
 sciences, black magic and those sorts of things. His name appears
 with other terms like shamans in my notebook.

 I hope that helps.

 mark
--
    edwards@unix.macc.wisc.edu
    {allegra, ihnp4, seismo}!uwvax!uwmacc!edwards
    UW-Madison, 1210 West Dayton St., Madison WI 53706

------------------------------

Date: 24 Jun 87 17:03:34 GMT
From: duke!mps@mcnc.org  (Michael P. Smith)
Subject: Re: AI in the 13th Century


In article <1654@uwmacc.UUCP> edwards@uwmacc.UUCP (mark edwards) writes:
>
> A number of people have asked about the reference to AI in the
> 13th Century. Well I finally dug up the ole notebook and picked
> it out. Unfortunately all I have is a name. The name is
>
>  Ramon Lull
>
>
> Since the book was in Latin, very old and so forth, I guess I thought
> I'd never check it out. Apparently Ramon was a popular person in the
> sciences, black magic and those sorts of things. His name appears
> with other terms like shamans in my notebook.
>

I'm no Lull expert, but here's part of an entry from W.L. Reese's
DICTIONARY OF PHILOSOPHY AND RELIGION (Humanities, 1980), p. 319:
\begin{quotation}
Lull, Raymond. 1236-1315.
        Philosopher and missionary.  Born in Palma, Majorca.  Taught
several years at Paris.  His goal was to state the truths of Christianity
so succinctly that the infidels could not possibly deny them.  To this
end he wrote the *Ars Magna*, a mechanical method of exhaustively
stating the possible relations of a topic.  The method requires three
concentric circles divided into compartments.  One circle is divided
into nine relevant subjects; a second circle is divided into nine
relevant predicates; the third circle is divided into nine questions:
whether? what? whence? why? how large? of what kind? when? where? how?
One circle is fixed; the others rotate, providing a complete series of
questions, and of statements in relation to them.
\end{quotation}
Lull is usually dismissed as a crackpot by historians, but had
influence on the likes of Descartes and Leibniz centuries later.
I believe that much of Lull's work is available in English translation.

No doubt some interesting comparisons can be drawn between Lull's
program and, say, conceptual dependency theory.  But as to Mark's
claim that Lull used the term 'artificial intelligence', I suspect
that such usage occurs only in the mind of the translator.

----------------------------------------------------------------------------
Michael P. Smith                "The world of the happy man is a different
ARPA: mps@duke.cs.duke.edu      one from that of the unhappy man."
                                        Wittgenstein

------------------------------

Date: 24 Jun 87 19:05:25 GMT
From: duke!jds@mcnc.org  (Joseph D. Sloan)
Subject: Re: AI in the 13th Century


Martin Gardner devotes a chapter to Ramon Lull in
LOGIC, MACHINES AND DIAGRAMS, 2e, 1982, University
of Chicago Press.
                        Joe Sloan
                        jds@duke

------------------------------

Date: 25 Jun 87 14:03:37 GMT
From: nosc!humu!uhccux!stampe@sdcsvax.ucsd.edu  (David Stampe)
Subject: Re: AI in the 13th Century

The nine questions of Ramon Lull's Ars Magna (whether? what? whence?
why? how large? of what kind? when? where? how?) seem to be what were
called the "modes of being" in the grammatical theories of the
"Modistae" during the middle ages.  They were based ultimately on
Aristotle's Categories, which have been claimed during this century
(by Ryle?) to have been based on the Greek interrogative pronouns.

Regarding the similarities to conceptual dependency theory, it's
interesting that in *syntactic* dependency theory, in a phrase, it is
only the dependent member (adjunct, modifier, operator) that can be
interrogated vis-à-vis the independent (head, operand) member, not
vice versa.

Examples, with (Head (Adjunct)), and * for the bad cases:
 (Verb (Object))  Q: Who does he like?  A: Mary.
                 *Q: What he Mary?   A: Likes her.
 ((Adj) Noun)     Q: Which hat did she wear?   A: The straw hat.
                 *Q: What straw did she wear?   A: The hat.
 ((Adv) Adj)      Q: How hot was it?  A: Too hot.
                 *Q: Too what was it?  A: Hot.

Etc.  Typically the head is implied by the adjunct (e.g. to like Mary
is to like [someone], a straw hat is a hat, too hot is hot).  That is,
adjuncts are rather like predicates.  That is, they correspond to the
modes of being, the ways things can be.

There's not much new under the sun.

                        David Stampe, Linguistics, Univ. of Hawaii
                        uhccux!stampe@nosc.mil

------------------------------

Date: Sun, 21 Jun 87 19:24:42 +0300
From: NYSTERN%WEIZMANN.BITNET@wiscvm.wisc.edu
Subject: Re: Re: Taking AI models and applying them to biology...


I have two comments:

a) As far as I understand from the article, Otto Selz DIED in 1943 at
   Auschwitz (given what Auschwitz was in that period, this seems quite
   logical), so he could not have published his work in 1943.  If one
   remembers what kinds of people died at Auschwitz (i.e., Jews), and
   takes into account that they were expelled from the universities and
   research institutions from about 1933 (Hitler's appointment as
   Chancellor of Germany), then the only logical conclusion is that he
   did not publish his theory after 1933, since he was banned.
   1943 - 1933 = 10 years, which means he published his theory before
   Turing or Shannon published theirs.  WWII was probably the main reason
   for the lack of knowledge about his work.  (Remember that the war
   ended two years later, and the world had more on its mind than Selz's
   theory.)  BTW, the comments above aren't based on facts, since I know
   very little about his life and death.  (I may be wrong and may find
   out that he died as a top Nazi SS officer of cancer, but that
   possibility seems remote to me.)

b) As far as I know, AI is based on mathematics and biology.  Both of
   those sciences, and many of the disciplines adopted by AI
   researchers, were formed long ago without being influenced by AI or
   computers.  (As a matter of fact, until now biology and computing
   have been combined not at the level of theory, but only at the level
   of tools: calculation programs and DNA-decoding algorithms.)


To sum up my point: I feel that the computer-science field will benefit
more from the work of Hopfield than from any theoretical axiom.  The
scientific world has become too specialized; I believe that combining
all forces, instead of working in parallel on the same topic, would be
more fruitful for science and for the world (with one proviso: one
field should not impose its theory on another; let a thousand flowers
grow together, but TOGETHER).  There is a Master's student at Weizmann
who did his thesis on vision in the Department of Applied Math.  His
"problem" is that his thesis relates to many fields (physics,
neurobiology) and not only to applied math; moreover, he has argued
empirically, citing famous researchers in the field, that math is
dispensable in this specific area.  Of course, no one in the Math
Department liked his thesis.  As far as I know he will get his Master's
(he scored above 80 in the oral exam), BUT how much credit will he get
for his work?  No one knows.  His work is great, but it doesn't fit
into the categories of our formal science.  There is not yet a field
named applied neuro-math, applied psycho-physics, or even applied
neuro-physics.  I have brought this story up 1) to show the situation
in the scientific world nowadays, 2) to point out the trends of science
as I see them, and 3) to back up the notion that applied math and AI
would benefit a lot from examining the work of other scientific fields.

I believe that's all.

------------------------------

Date: 21 Jun 87 19:42:03 GMT
From: sunybcs!rapaport@ames.arpa  (William J. Rapaport)
Subject: Re: Computer composition of music

In article <2198@mmintl.UUCP> johnt@mmintl.UUCP (John Tangney) writes:
>Some of the researchers I read about (like Max
>Mathews, Lejaren Hiller, Iannis Xenakis, Stephen Smoliar to name
>a few off the top of my head) must still be out there.

Lejaren Hiller is Prof. of Music at SUNY Buffalo and an adjunct prof. in
our CS dept.  His email address is muslah@buffalo.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 30 08:23:41 1987
Date: Tue, 30 Jun 87 08:23:33 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #160
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 30 Jun 87 08:10 EDT
Received: from relay.cs.net by RELAY.CS.NET id ac17446; 30 Jun 87 2:06 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa02809; 30 Jun 87 2:05 EDT
Date: Mon 29 Jun 1987 22:17-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #160
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 30 Jun 1987     Volume 5 : Issue 160

Today's Topics:
  Queries - Plausible Reasoning &
    Natural Language - Predicate Calculus - Theorem Proving &
    Automatic Programming Bibliographies &
    Frame Matching and Chaining,
  Psychology - Why Did The $6,000,000 Man Run So Slowly?

----------------------------------------------------------------------

Date: Tue, 23 Jun 87 20:57:05 SST
From: Jenny <ISCLIMEL%NUSVM.BITNET@wiscvm.wisc.edu>
Subject: so what about plausible reasoning ?

As I read articles on plausible reasoning in expert systems, I come to the
conclusion that experts themselves do not really work with numbers as they
solve problems, and many of them are not willing to commit themselves to
specifying a figure to signify their belief in a rule.  The deductive process
that occurs in their brains cannot be replicated by any known plausible
reasoning model.  Expert-system technology is already weak per se;
why introduce further complexity, and another bottleneck in the acquisition of
knowledge, knowing full well that the numbers are probably inconsistent?
If one obtains two conclusions with numbers indicating some significance,
say 75% and 80%, can one say that the conclusion with 80% significance is
the correct one and ignore the other?  These numbers do not seem
to mean much, since they are just beliefs or probabilities.
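
For comparison, the arithmetic inside such systems is quite simple.
MYCIN's certainty-factor calculus, for example, combines two positive
factors that support the *same* conclusion as CF1 + CF2*(1 - CF1).  A
minimal Common Lisp sketch, for illustration only:

;; MYCIN-style combination of two positive certainty factors that
;; support the same conclusion.
(defun combine-cf (cf1 cf2)
  (+ cf1 (* cf2 (- 1 cf1))))

;; (combine-cf 0.75 0.80) => 0.95

Note that this combines evidence for one conclusion; it says nothing
about choosing between a 75% conclusion and a rival 80% one, which is
exactly the difficulty raised above.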


Lim Eng-Lian
National University of Singapore
-- this opinion is my own and is not influenced by the color of my office

------------------------------

Date: Wed, 24 Jun 87 16:37:04 SET
From: "Adlassnig, Peter" <ADLASSNI%AWIIMC11.BITNET@wiscvm.wisc.edu>
Subject: natural language - predicate calculus - theorem proving

concerning my ph.d. thesis i would like to know who has already dealt
with the following themes:

1) translation of indefinite pronouns into predicate calculus:
   parsing only simple english sentences (subj pred obj), with
   reference to the distribution and interpretation of wh-words
   and quantifiers.  (a lexicon should be minimized to the syntax
   and not include rules for semantic ambiguities.)

2) representation of quantifiers in frames.

3) an automated theorem-proving algorithm which is easy to implement
   for first-order predicate logic.  (see the unification sketch below.)
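
Most first-order provers (resolution-style systems in particular) are
built around unification.  A minimal Common Lisp sketch, omitting the
occurs check and assuming variables are symbols whose names begin
with "?":

(defun var-p (x)
  ;; A variable is a symbol whose print name begins with "?".
  (and (symbolp x)
       (> (length (symbol-name x)) 0)
       (char= (char (symbol-name x) 0) #\?)))

(defun unify (x y &optional (bindings '()))
  ;; Return a binding alist unifying X with Y, or the symbol FAIL.
  (cond ((eq bindings 'fail) 'fail)
        ((equal x y) bindings)
        ((var-p x) (unify-var x y bindings))
        ((var-p y) (unify-var y x bindings))
        ((and (consp x) (consp y))
         (unify (rest x) (rest y)
                (unify (first x) (first y) bindings)))
        (t 'fail)))

(defun unify-var (var val bindings)
  ;; Bind VAR to VAL, or unify VAL against an existing binding.
  (let ((b (assoc var bindings)))
    (if b
        (unify (cdr b) val bindings)
        (cons (cons var val) bindings))))

;; (unify '(likes ?x mary) '(likes john ?y))
;; => ((?Y . MARY) (?X . JOHN))

A full prover adds clause-form conversion and a resolution loop on top
of unification.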

are logic grammars the right field for 1)?

the aim of the whole system is to implement an expert system in logo,
to demonstrate in schools that computers can "think".

i would be thankful for any help!
which literature would you advise me to read?
                                             ruth gruenberger


Please send response to    adlassni%awiimc11.bitnet

                           Thank you Peter Adlassnig

------------------------------

Date: 29 Jun 87 19:36:07 GMT
From: pratt@vanhalen.rutgers.edu (Lorien Y. Pratt)
Subject: Request for automatic programming bibliographies


Has anyone recently put together a bibliography of work in automatic
program generation?  I'd appreciate any pointers that you can give me.
   --Lorien Pratt

------------------------------

Date: 23 Jun 87 21:13:20 GMT
From: ihnp4!drutx!mcp@ucbvax.Berkeley.EDU  (Mike Paugh)
Subject: Need info on frame matching and chaining


I am looking for good reference material on building expert
systems using frames in Lisp. The environment will be GCLISP.
What I need is a good basic understanding of how to chain
through the frames and do the pattern matching.

I am new to this, so any good pointers will be appreciated.


Mike Paugh
AT&T IS Labs Denver
ihnp4!drutx!mcp

------------------------------

Date: 29 Jun 87 12:34:12 GMT
From: ihnp4!homxb!mtuxo!mtune!akgua!cpsc53!dwb@ucbvax.Berkeley.EDU 
      (Summer Hire)
Subject: Re: Need info on frame matching and chaining

>
> I am looking for good reference material on building expert
> systems using frames in Lisp. The environment will be GCLISP.
> What I need is a good basic understanding of how to chain
> through the frames and do the pattern matching.
>
> I am new to this, so any good pointers will be appreciated.
>
>
> Mike Paugh
> AT&T IS Labs Denver
> ihnp4!drutx!mcp

 Hi, I am a summer hire at AT&T, and I just finished a two-term course on
conceptual dependencies (frame-style inference netting) and pattern matching.
We conducted six-member projects building MARGIE.  This MARGIE took
an English story and converted it into "frames," which were then
pattern-matched against set scripts (scenarios or events).  From this matching
the system could then construct an inferred sequence of actions from the given
input.  It took about six months to develop.  The book we used as an outline,
which gave us a firm grasp of the basics of the entire system, was Roger
Schank's book "INSIDE COMPUTER UNDERSTANDING."  It gave us a great lead.  We
did vary from it to a certain extent in the actual development, but the
basics are still there.
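
To make "frames" and "matching" concrete: the simplest scheme treats a
frame as a list of slot/filler pairs, and matching a story frame against
a script frame just checks that every slot the script demands is
compatibly filled.  A minimal Common Lisp sketch (the slot names and the
? wildcard are hypothetical conventions, not GCLISP or MARGIE specifics):

;; A frame is a property list of slots and fillers; the symbol ? in a
;; script slot matches any filler.
(defun slot-match-p (pattern filler)
  (or (eq pattern '?) (equal pattern filler)))

(defun frame-match-p (script frame)
  ;; True if every slot demanded by SCRIPT is compatibly filled in FRAME.
  (do ((s script (cddr s)))
      ((null s) t)
    (unless (slot-match-p (second s) (getf frame (first s)))
      (return nil))))

;; (frame-match-p '(:action ptrans :actor ? :to restaurant)
;;                '(:action ptrans :actor john :to restaurant))
;; => T

Chaining is then a matter of letting a filler be the name of another
frame and matching it recursively.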

  I would be glad to aid you in any way with further developments, because
frames provided a more natural way of demonstrating working cognitive
structures.  (In theory, of course.)  I must say that the book is a must
to get you going.  Another great lead in this search would be to
contact George Stockman, Prof. at Michigan State University.  He was the
developer of our course project and is a biggie on frame representation of
knowledge and expert systems.  (He taught me everything I know.)  Please
contact me if you need any assistance at all.


Dave Bigelow (summer hire and damn well worth it!)

------------------------------

Date: 20 Jun 87 15:44:33 GMT
From: mit-amt!mob@mit-amt.media.mit.edu (Mario O. Bourgoin)
Reply-to: mit-amt!mob@media-lab.media.mit.edu (Mario O. Bourgoin)
Subject: Re: Why Did The $6,000,000 Man Run So Slowly?


Because it made the special effects scenes last longer.

------------------------------

Date: Mon 22 Jun 87 10:12:25-CDT
From: Art Flatau <CMP.FLATAU@R20.UTEXAS.EDU>
Subject: Re: why did the $6,000,000 man run so slowly?

I think people have missed the obvious reason that the $6 Meg man ran so
slowly: to stretch the plots out to fill an hour time slot.

Art

------------------------------

Date: Mon, 22 Jun 87 09:47:32 PDT
From: lambert%cod@nosc.mil
Subject: Why did $6M man run so slowly?

Re: Why did $6M man run so slowly?

Why would a producer use slow motion to depict very fast movement?  I suggest
the following reasons be added to the list:

1. ACHIEVE THE TECHNICAL EFFECT.  The slow motion points out to the viewer the
fact that the flow of time is different.  The context around the slow-motion
scene makes the magnitude and direction of this change obvious.  This is all
the viewer really needs to sense the effect that the $6M man is moving much
faster than normal.

2. TAKE ADVANTAGE OF THE VIEWER'S IMAGINATION.  The slow motion gives the
viewer's mind time to realize that fast motion is being represented, and to
appreciate the non-triviality of it (unlike a realtime presentation which
would tend to make it seem easy). It allows the viewer's imagination to be
creative, to draw on previous experience, and to construct the concepts and
images necessary to represent something so complex and
magnificently-engineered happening so fast.  This increases the impact on the
viewer by enhancing appreciation of the $6M man's feats.  Indeed, it can give
the viewer an experience far beyond what the producer can actually achieve on
the screen.

3. TAKE ADVANTAGE OF THE VIEWER'S INTEREST IN LEARNING ABOUT HIMSELF.   The
viewer is treated to a slow-motion presentation of human qualities difficult
or impossible to observe at normal or faster speeds. This allows him to learn
new things about the actor, himself, and other humans.

4. ACHIEVE ARTISTIC EFFECT.  The producer also achieves beautiful artistic
effect by allowing viewing of the visible signs of forces and motion,
observation of facial expressions, and contemplation of the beauty of the $6M
man's athletic qualities such as speed, power, grace, and coordination.



lambert@cod.nosc.mil (David R. Lambert)

------------------------------

Date: Mon, 22 Jun 87 16:38:32 PDT
From: "William J. Fulco" <lcc.bill@CS.UCLA.EDU>
Subject: Slow-motion / $6E6 man


amsler@flash.bellcore.com:
> ....
> I suspect what is happening is that this is analogous to the focusing
> of attention on the events which happened in a real moving image
> memory.  That is, if one attempts to reconstruct an event that
> happened very quickly in real time after the fact, one will
> artificially create something like slow motion.

This "slo-motion" effect of perception also appears to work in real-time.
A good everyday example of this is (for people that play sports)
a pass or "drive" in basketball, a volly in tennis or hitting a baseball.

Professional baseball players talk about learning to see the ball they are
trying to hit.  They say that they actuall see the ball - an object the size
of an orange, traveling at 90+ mph from 66 feet away.

I used to think that this wasn't really what was happening, but I have
been involved in basketball games where, for less than 1 second
(real time), I have had an open lane to the basket, or an opportunity to
make a pass.  The perceived time was far slower, on the order of several
seconds.

During these perceived seconds, I had time to "think" about my options -
actually make verbal & image (mind's eye) judgments about what to do or
not to do, commit and make or skip the play.

One case of this that really stands out: playing basketball several weeks
ago I was left wide open for a drive to the basket.  I remember that
I couldn't believe I was left this wide open and I started to think
"what's the catch".  I then remember thinking that "I don't have time to
be thinking about thinking about what I should be doing - I should just go",
and with this I drove down the key (-: missed the shot :-).

The point is, I had time to "argue" with myself, "verbally", in my mind
before I took action, but in real time no more than a second passed.

The first time you notice this effect, it is truly eerie.

(bill)


  [Yup.  It happened to me once, in 1962, as I was jumping out of a
  swing into a sandlot.  I had done this (at full speed from maximum
  height) hundreds of times, and did so again afterwards, but only
  this once did time slow to about 1/4 speed.  I wonder if a similar
  effect might be a part of the "born again" religious conversion
  that sometimes hits people during routine activities.  -- KIL]

------------------------------

Date: 27 Jun 87 13:41:23 GMT
From: winfree!uucp@seismo.CSS.GOV (Unix Chit-Chat at winfree.n3eua.cos.ampr)
Subject: Submission for comp-ai-digest

From: ben@hpldola.HP.COM (Benjamin Ellsworth)
Newsgroups: comp.ai.digest
Subject: Re: Why Did The $6,000,000 Man Run So Slowly?
Date: 26 Jun 87 20:42:14 GMT
Organization: HP Logic Design Oper. -ColoSpgs


From my film classes at school, I had gathered that the action
sequences in Kung Fu were slowed down for emphasis.  When you
slow a scene down, whatever the content, you emphasize the action of
that scene.  This is especially effective for violent action.  Any good
anti-hunting film will slow down any shots of an actual Bambi kill.
The effect of slowing is to force the viewer to perceive the action in
more detail (and hence with greater emphasis) than he/she could view it
at normal speed.  Speeding up a scene has the opposite effect.

Benjamin Ellsworth
hplabs!hpldola!ben

*** This posting is about the use of temporal distortion in film
    making, not a statement regarding the morality of hunting.

------------------------------

Date: 24 Jun 87 21:34:36 GMT
From: ihnp4!chinet!nucsrl!coray@ucbvax.Berkeley.EDU  (Elizabeth)
Subject: Re: Why did the six-million dollar man run so slowly?

/ nucsrl:comp.ai / tim@linc.cis.upenn.edu (Tim Finin) / 11:47 pm  Jun 11, 1987 /
Why did the six million dollar man run so slowly?


The guy moves slowly in the same way that a car accident happens "slowly".
Slow motion simulates the increase in attention to detail and reaction
times which go with an increase in adrenaline.  This makes slow motion,
oddly enough, exciting.

The thing with the cougar is right on because the predator in the hunt
is just the sort of thing for which adrenaline evolved.

M. E. Corey

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 30 08:23:57 1987
Date: Tue, 30 Jun 87 08:23:47 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #161
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 30 Jun 87 08:13 EDT
Received: from relay.cs.net by RELAY.CS.NET id ad17446; 30 Jun 87 2:07 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa02814; 30 Jun 87 2:08 EDT
Date: Mon 29 Jun 1987 22:26-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #161
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 30 Jun 1987     Volume 5 : Issue 161

Today's Topics:
  Query - Mega-Monitor,
  Robotics - Vectrobot Recommendation,
  AI Tools - Object-Oriented Languages & CLP(R) Announcement

----------------------------------------------------------------------

Date: 26 Jun 87 15:50:15 GMT
From: stride!tahoe!unsvax!jimi!asci!brian@gr.utah.edu  (Brian Douglass)
Subject: Mega-Monitor

I've been asked by a friend to research information about a super-size
monitor.  Essentially, what I am looking for is a color monitor that is 10
feet by 10 feet with a resolution of say 13,000 by 13,000--don't ask me what
for because I don't know what for, I'm just the gumshoe--and any necessary
equipment to drive it.  (can you imagine what kind of equipment is necessary
to drive 169 million pixels!) Basically, my friend needs to generate some very
large images with extremely fine details.  Color is preferable, but not
absolute.  Money is not really a concern at this moment, so let's hear anything
you've got.  Also, if it means building it custom, that is what my friend wants
to know.  So IBM, RCA, Tektronix, HP, etc., if you're listening and have any
experience with a monitor this large, I would like to hear from you; and
if you have any "off-the-top-of-your-head" price estimates, send those along.  I
fully expect to hear in the millions, but that's okay.  Right now, my friend
needs to know if anybody has done this, or if anybody can do this.

I know that there are also some analog systems out and about of this type of
magnitude.  Although not preferable, I would like to hear about them as well.

Please e-mail your responses to me (rather than posting), as I am gone
periodically on business and we keep only a day's worth of news on our
system for a myriad of reasons.
However, I will summarize periodically to comp.graphics the responses I do
receive, as I am sure there are others who are as fascinated as I with the
leviathan proportions of this Mega-Monitor.

Brian Douglass
Applied Systems Consultants, Inc. (ASCI)
P.O. Box 13301
Las Vegas, NV 89103
Office: (702) 733-6761
Home: (702) 871-8182
brian@asci.uucp
UUCP:    {akgua,ihnp4,mirror,psivax,sdcrdcf}!otto!jimi!asci!brian

------------------------------

Date: 22 Jun 87 19:49:48 GMT
From: linus!alliant!sullivan@husc6.harvard.edu  (Mike Sullivan)
Subject: Re: Search and Employ (Mobile robot)


        If your work involves mobile robot navigation, or other research
in robotics requiring sturdy, reliable hardware, I recommend a small three
wheel drive, three wheel steer (synchronous drive) chassis called a
"Vectrobot".  It is manufactured by a company in New Hampshire called
Real World Interface.

        For info, or references call Grinnell or Curt at (603) 654-6334

#include <std/disclaimer.h>

                                 ______
                                / \    \
Michael J Sullivan             /   \____\               Alliant
decvax!linus!alliant!sullivan /    /     \      ComputerSystemsCorporation
                             /____/_______\

------------------------------

Date: 21 Jun 87 11:42:01 GMT
From: munnari!koel.rmit.oz!rcopm@seismo.CSS.GOV (Paul Menon)
Subject: Re: Smalltalk-80 for Sun 3 ... (LONG)


In article <8706180728.AA10707@ucbvax.Berkeley.EDU>,
lcc.bill@CS.UCLA.EDU ("William J. Fulco") writes:
> I saw a really nice system, (I mean REALLY nice - with good color support)
> from Xerox PARC marketing spinoff at the 1986 AAAI show.  It was running
> on a Sun 3/260 and it really sizzles.....

I can believe that; I was introduced to the 3/260 just recently.  It would
make anything sprout wings.

  The reason for my addition has a little to do with Suns, Smalltalk, and
technology in general, so please bear with me.  This is long.

  I have just completed some sizable programs (well, to me they were), and
am in the "recovery stage", i.e., sizing up what I have done... was it all
worthwhile, etc., etc.  I have reached a few (frustrating) conclusions/opinions...

    *   If I leave these programs for a while, and then come back to change
        them in the name of maintenance or further enhancement, I am not
        much better off than someone who has never seen the package before.
        I don't mean that in a positive sense, nor is it that I would have
        forgotten the techniques I used; it is that knowledge of the data
        is spread all over the program.  It was written in Pascal.  How
        many of you have decided to change a data structure halfway through
        a program, not because of bad planning in the first place, but
        because a "new" and more efficient technique requires extra "bits"
        embedded in a data structure?  Does "grep 'structtype' *.h *.p"
        ring a bell?  Not even then am I sure that everything is covered,
        especially if it belongs to an overall complex data structure with
        crosslinks.  No amount of documentation or cross-references will
        relieve the manual task ahead.  Good programming style can minimize
        this only to a certain extent.  C, Algol, Modula-2, perhaps even
        Ada suffer from this.

    *   If I want to re-use techniques in another program, major surgery is
        required.  Some call this hacking.  It's only ok if the same types are
        being used by the new program.  There is static binding available from
        Ada, if you wish to learn such a complex language.  But none of the
        "standard languages" allow complete type independence.  Lisp and
        Prolog programs will suffer the same scalpel treatment as the others.

  If you haven't already guessed where I am heading, object-oriented
programming languages will (in my opinion) relieve me of these woes.  Ok says
I, which one do I use?  There is the Grand-Daddy, Smalltalk-80; the pure one.
Then there are the nouveau hybrids C++, Objective-C and MacApp.  Others come
in different Flavors, Loops or feathered Flamingos and Owls.  Lisp and Prolog
do not satisfy my requirements because I cannot easily "build" on previous
applications experience.

  I don't hold the generally dismal performance of Smalltalk against it.
Hardware is zooming ahead, as witnessed on Suns, and soon on the Mac II
(I hope).  My questions to all who have not gone to sleep are...

   Will Smalltalk mature from being the toy that it was, i.e., become a full
32-bit machine with > 32000 objects, etc.?  Methinks this is the ideal language
to be using no matter how big or small the program.  The objection to being
such an "open" system can be countered by their "change management tools", if
I may be permitted to steal the phrase.

  Of the hybrids, Objective-C appears to be my favourite.  Although I have
never used it, I delight in its Smalltalk-like syntax.  It will be a good
stand-in until Smalltalk meets its hardware match.  Could any user out there
please comment on Objective-C, including its ease of use, availability,
portability (i.e., which OS's it can run on), and price?

  My preference for Objective-C over C++ is that I "feel uncomfortable"
with the way the latter has been implemented.  The extended syntax does not
"stand out": either it melds into the other hieroglyphs, so I cannot see the
wood for the trees, or it further confuses my understanding of C.  I wish I
had a video of myself reading a C program... I must have this perpetual frown.
Is it common?  I would love to hear from C++ users, especially those who
have used both C++ and Objective-C.  Note that my primary preference for
Objective-C is its syntactic similarity to Smalltalk.

  Why not MacApp as an interim?  Why not indeed!  It is another example of
brilliance on the part of Apple, and once I get over the confusion of records
and messages/methods, all should be swell.  One hitch, though: Apple has
deemed it necessary to inflict a licensing fee on anyone producing/marketing
software that uses MacApp, as well as restricting all such programs to
the Macintosh.  I don't know whether this still holds.  I have noted MacApp
being used on a 4.2 BSD system (refer to OOPSLA '86 proceedings, pp. 186-201).
Pity.

  My main hope is Smalltalk.  It is a pity that the ones who could really
benefit from such a system are usually the last to see it: kids.  It is
the big kids, i.e., those who have been steeped in (or fed up with)
procedural languages, who get to use it.  Does this make us shortsighted?
Or perhaps fatally dependent on the past?

  This isn't a plug for trendy software.  This is frustration with writing
applications from scratch that use (nearly) the same techniques time and
time again.  I use the hardware of tomorrow, but give it the brains of
yesterday. I am supposed to build on experience; all I do is
re-invent the wheel.

  If you have read the book
        "Object-Oriented Programming: An Evolutionary Approach"
                                                        by Brad J. Cox,
  then a major part of my article echoes its theme.  I could not have read
  it at a more pertinent time.

Thank you,

    Paul Menon.

    Dept of Communication & Electronic Engineering,
    Royal Melbourne Institute of Technology,
    124 Latrobe St, Melbourne, 3000, Australia

ACSnet: rcopm@koel             UUCP: ...!seismo!munnari!koel.rmit.oz!rcopm
CSNET:  rcopm@koel.rmit.oz     ARPA: rcopm%koel.rmit.oz@seismo
BITNET: rcopm%koel.rmit.oz@CSNET-RELAY
PHONE:  +61 3 660 2619.

------------------------------

Date: Fri, 26 Jun 87 15:18:25 est
From: munnari!moncsbruce.oz!clp@seismo.CSS.GOV (The CLP(R) Personae)
Subject: CLP(R) Distribution Announcement

                    DISTRIBUTION NOTICE
                    ___________________


We  are  pleased  to  announce  the  availability   of   our
interpreter for CLP(R), the new Constraint Logic Programming
language.  This is being distributed in source code  written
in  C  and it is compatible with most machines running UNIX,
eg. Vaxen, Pyramids and Suns. This is not intended to be a
commercial announcement and is targeted at educational or
research usage.

The distribution includes:

1.   CLP(R) interpreter (source code);

2.   Example CLP(R) programs;

3.   Installation  Manual  and  Programmer's  Manual   (hard
     copies).

Further information can be found in the following papers:

1.   J.  Jaffar   and   J-L.   Lassez,   "Constraint   Logic
     Programming",  Proc.  14th  ACM-POPL,  Munich,  January
     1987.

2.   J.  Jaffar   and   S.   Michaylov,   "Methodology   and
     Implementation  of  a  CLP  System",  Proc.  4th  ICLP,
     Melbourne, May 1987.

3.   N.C. Heintze, S. Michaylov and  P.J.  Stuckey,  "CLP(R)
     and  Some  Electrical  Engineering Problems", Proc. 4th
     ICLP, Melbourne, May 1987.

4.   C. Lassez, K. McAloon and  R.  Yap,  "Constraint  Logic
     Programming  and  Option  Trading",  IEEE  Expert, Fall
     Issue 1987, to appear.

If you would like a Site licence for educational or research
purposes,  please  send  a  request  for more information to
either,

(a)  Electronic Mail address:
     ACSNET:             clp@moncsbruce.oz
     ARPANET,CSNET:      clp@moncsbruce.oz.au
     UUCP:               seismo!munnari!moncsbruce.oz!clp

(b)  Paper Mail address:
     CLP(R) Distribution
     Department of Computer Science
     Monash University
     Clayton
     Victoria 3168
     Australia

In order to cover distribution and media  costs,  a  license
fee of $150 will apply.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Jun 30 08:24:17 1987
Date: Tue, 30 Jun 87 08:24:00 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #162
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Tue, 30 Jun 87 08:14 EDT
Received: from relay.cs.net by RELAY.CS.NET id ac17505; 30 Jun 87 2:19 EDT
Received: from stripe.sri.com by RELAY.CS.NET id ab02860; 30 Jun 87 2:18 EDT
Date: Mon 29 Jun 1987 22:32-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #162
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest            Tuesday, 30 Jun 1987     Volume 5 : Issue 162

Today's Topics:
  AI Tools - Kyoto Common Lisp

----------------------------------------------------------------------

Date: Mon 22 Jun 87 20:24:22-CDT
From: CL.BOYER@R20.UTEXAS.EDU
Subject: Kyoto Common Lisp

Kyoto Common Lisp (KCL) is a complete implementation of Common Lisp written
by T. Yuasa and M. Hagiya working under Professor R. Nakajima at the
Research Institute for Mathematical Sciences, Kyoto University.  It runs on
many different machines and is highly portable.  It executes very
efficiently and it is superbly documented.  KCL is being made available at
no fee through the implementors' generosity.  The complete sources are
included.  One channel of distribution is via ftp on the Arpanet/Internet.

                               LICENSE REQUIRED!

IMPORTANT: Although there is no fee, KCL is not in the public domain.  You
are authorized to obtain it only after signing and mailing in a license
agreement.  Before you ftp KCL files you MUST fill out and send in the
license agreement included in this message.  Otherwise, you are not
permitted to make copies of KCL.

                           COPYING KCL VIA INTERNET

KCL may be obtained from Internet source rascal.ics.utexas.edu [128.83.144.1],
a Sun-3 at the University of Texas at Austin.  To obtain KCL, login as "ftp"
with password "guest".  There are three tar files:

   /pub/kcl.tar, 4.0 megabytes
   /pub/kcl.tar.C, produced by compact from kcl.tar, 2.8 megabytes
   /pub/kcl.tar.Z, produced by compress from kcl.tar, 1.2 megabytes

Any of the three files is sufficient to generate KCL.  Please ftp the
compressed file if possible.  Please use ftp at an odd hour if possible to
reduce traffic on a sometimes heavily loaded network.  Be sure to use binary
mode with ftp.  A current version of this message may be found as the file
/pub/kcl.broadcast.

                          MACHINES ON WHICH KCL RUNS

KCL runs on many machines.  With the sources provided in the ftp file, KCL
may be executed on the following machines (and operating systems).

        VAX/UNIX (4.2BSD)
        SUN2 (OS2, 3) SUN3 (OS3)
        SONY'S NEWS (4.2BSD)
        ATT3B2 (System V)
        Fujitsu S3000 (System V)
        Sumitomo's E15 (Uniplus System V)
        Data General MV (DGUX)

Instructions for making the system are in the file doc/porting in the ftp
tar file.

                               KCL LICENSE FORM

To obtain the right to copy KCL, sign this license form and send it and a copy
to the Kyoto address at the end of the form.  ONCE YOU HAVE MAILED THE SIGNED
LICENSE FORM, YOU MAY COPY KCL.  YOU DO NOT HAVE TO WAIT FOR RECEIPT OF THE
SIGNED FORM.
--------------------------- cut here ----------------------------


                               LICENSE AGREEMENT
                                      FOR
                               KYOTO COMMON LISP

The Special Interest Group in LISP (Taiichi Yuasa and Masami Hagiya) at the
Research Institute for Mathematical Sciences, Kyoto University (hereinafter
referred to as SIGLISP) grants to

USER NAME: _________________________________________

USER ADDRESS: ______________________________________
              ______________________________________

(hereinafter referred to as USER), a non-transferable and non-exclusive license
to copy and use Kyoto Common LISP (hereinafter referred to as KCL) under the
following terms and conditions and for the period of time identified in
Paragraph 6.

1.  This license agreement grants to the USER the right to use KCL within their
own home or organization.  The USER may make copies of KCL for use within their
own home or organization, but may not further distribute KCL except as provided
in paragraph 2.

2.  SIGLISP intends that KCL be widely distributed and used, but in a
manner which preserves the quality and integrity of KCL.  The USER may send
a copy of KCL to another home or organization only after either receiving
permission from SIGLISP or after seeing written evidence that the other
home or organization has signed this agreement and sent a hard copy of it
to SIGLISP.  If the USER has made modifications to KCL and wants to
distribute that modified copy, the USER will first obtain permission from
SIGLISP by written or electronic communication.  Any USER which has
received such a modified copy can pass it on as received, but must receive
further permission for further modifications.  All modifications to copies
of KCL passed on to other homes or organizations shall be clearly and
conspicuously indicated in all such copies.  Under no other circumstances
than provided in this paragraph shall a modified copy of KCL be represented
as KCL.

3.  The USER will ensure that all their copies of KCL, whether modified or not,
carry as the first information item the following copyright notice:

(c) Copyright Taiichi Yuasa and Masami Hagiya, 1984.  All rights reserved.
Copying of this file is authorized to users who have executed the true and
proper "License Agreement for Kyoto Common LISP" with SIGLISP.

4.  Title to and ownership of KCL and its copies shall at all times remain
with SIGLISP and those admitted by SIGLISP as contributors to the
development of KCL.  The USER will return to SIGLISP for further
distribution modifications to KCL, modifications being understood to mean
changes which increase the speed, reliability and existing functionality of
the software delivered to the USER.  The USER may make for their own
ownership and use enhancements to KCL which add new functionality and
applications which employ KCL.  Such modules may be returned to SIGLISP at
the option of the USER.

5.  KCL IS LICENSED WITH NO WARRANTY OF ANY KIND.  SIGLISP WILL NOT BE
RESPONSIBLE FOR THE CORRECTION OF ANY BUGS OR OTHER DEFICIENCIES.  IN NO
EVENT SHALL SIGLISP BE LIABLE FOR ANY DAMAGES OF ANY KIND, INCLUDING
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF KCL.

6.  This license for KCL shall be effective from the date hereof and shall
remain in force until the USER discontinues use of KCL.  In the event the USER
neglects or fails to perform or observe any obligations under this Agreement,
this Agreement and the License granted hereunder shall be immediately
terminated and the USER shall certify to SIGLISP in writing that all copies of
KCL in whatever form in its possession or under its control have been
destroyed.

7.  Requests.  KCL is provided by SIGLISP in a spirit of friendship and
cooperation.  SIGLISP asks that people enjoying the use of KCL cooperate in
return to help further develop and distribute KCL.  Specifically, SIGLISP
would like to know which machines KCL gets used on.  A brief notice form is
appended to this agreement which the user is requested to send by email or
otherwise.  Please send in further notifications at reasonable intervals if
you increase the number and type of machines on which KCL is loaded.  You
may send these notices to another USER which is cooperating with SIGLISP
for this purpose.

USER

  DATE:  _________________________________________

  BY:  ___________________________________________

  TITLE:  ________________________________________

  ADDRESS:  ______________________________________
            ______________________________________


SIGLISP

  DATE:  _________________________________________

  BY:  ___________________________________________
       Taiichi Yuasa                    Masami Hagiya
       Special Interest Group in LISP
       Research Institute for Mathematical Sciences
       Kyoto University
       Kyoto, 606,  JAPAN
       Telex:  05422020 RIMS J
       JUNET:  siglisp@kurims.kurims.kyoto-u.junet
       CSNET:  siglisp%kurims.kurims.kyoto-u.junet@utokyo-relay.csnet



USER has loaded KCL on the following machines since (date):

Model Number     Production Name       Number of Machines






                    END OF LICENSE FORM
--------------------------- cut here ------------------------

                                 DOCUMENTATION

The principal documentation for KCL is, of course, the book "Common Lisp
The Language" by Guy L. Steele, Jr. with contributions by Scott E. Fahlman,
Richard P. Gabriel, David A. Moon, and Daniel L. Weinreb, Digital Press,
1984.  Implementation-specific details of KCL (debugging, garbage
collection, data structure format, declarations, operating system
interface, installation) may be found in the 131 page "Kyoto Common Lisp
Report" by Taiichi Yuasa and Masami Hagiya, the authors of KCL.  This
report is available from:

        Teikoku Insatsu Inc.
        Shochiku-cho,
        Ryogae-cho-dori Takeya-machi Sagaru,
        Naka-gyo-ku,
        Kyoto, 604, Japan
        tel: 075-231-4757

for 5,000 yen plus postage.

The KCL Report is produced by the text-formatter KROFF (Kyoto ROFF), which
is used locally within Kyoto University.  Currently KROFF works only on
printers available in Japan.  It is possible that an American
distributorship of this report will be arranged.  The source of the report,
with KROFF commands, is found in the file doc/report on the ftp tar file.
It is possible to read this source, though it is as hard on the eyes as TeX
or Scribe source.  A translation of this source into TeX is underway and
will be available as part of the distribution tape.  Future information
about the availability of the KCL Report will be available in updated
versions of this message, in the file /pub/kcl.broadcast.

A document describing how to port KCL to other systems is available at no
charge from the authors of KCL.

Each of the KCL primitives is thoroughly described by the "describe"
function, which is based on 340K bytes of documentation.

                                    SUPPORT

KCL is one of the most bug-free large software systems that we have ever used.
However, when bugs are found, they may be reported to the implementors:

  hagiya%kurims.kurims.kyoto-u.junet%utokyo-relay.csnet@RELAY.CS.NET
  yuasa%kurims.kurims.kyoto-u.junet%utokyo-relay.csnet@RELAY.CS.NET

We have found them extremely responsive to bug reports and suggestions.

                               SAMPLE TRANSCRIPT

Below is a complete transcript for obtaining and installing KCL on a Sun-3.

        Make a directory for locating KCL

tutorial% mkdir /usr/joe/kcl

        Get the compressed tar file

tutorial% cd /usr/joe/kcl
tutorial% ftp 128.83.144.1
220 rascal FTP server (Version 4.7 Sun Sep 14 12:44:57 PDT 1986) ready.
Name: ftp
Password: guest
ftp>binary
ftp>get /pub/kcl.tar.Z kcl.tar.Z
ftp>quit

        Build the KCL directory structure

tutorial% uncompress kcl.tar.Z
tutorial% tar -xvf kcl.tar .
tutorial% rm kcl.tar

        Make KCL

tutorial% cd /usr/joe/kcl/
tutorial% su
password: super-user-password
tutorial# cp  h/cmpinclude.h /usr/include
tutorial# exit
tutorial% make

        Edit and Install Two Files

We wish to replace "~" by "/usr/joe" in lc and kcl, and put
them in a directory on the search path e.g. "/usr/joe/bin"

tutorial% cd /usr/joe/kcl/unixport
tutorial% mkdir /usr/joe/bin
tutorial% sed -e "s.~./usr/joe/kcl.g" lc >  /usr/joe/bin/lc
tutorial% sed -e "s.~./usr/joe/kcl.g" kcl > /usr/joe/bin/kcl
tutorial% chmod a+x   /usr/joe/bin/lc /usr/joe/bin/kcl

It is now possible to run kcl:

tutorial% /usr/joe/bin/kcl
KCL (Kyoto Common Lisp)

>(+ 2 3)
5
>(bye)

It is best to become super user and execute the following commands, so that all
users may execute kcl.

tutorial% su
Password: superuser-password
tutorial# cp  /usr/joe/bin/kcl /usr/local/bin
tutorial# cp  /usr/joe/bin/lc  /usr/local/bin
tutorial# exit

This transcript puts the entire kcl system, including sources and
documentation, in the directory /usr/joe/kcl.  Any other directory name
would do as well, e.g., /usr/local instead of /usr/joe.  Although this
transcript has worked perfectly for us on a Sun-3, it might not work for
you if you are running under NFS but not logged into the right machine:
you may need to login as root on the system where /usr/include and
/usr/local/bin are really located to do the super-user things.  Immediately
after the make is finished, about 8.4 megabytes of disk space are in use.


                                SINCERELY YOURS

     Robert S. Boyer            William F. Schelter
     cl.boyer@r20.utexas.edu    atp.schelter@r20.utexas.edu

This message was written by Robert S. Boyer and William F. Schelter.  The
opinions expressed are theirs and are not necessarily those of the authors of
KCL, the University of Texas, or MCC.  The authors of KCL have, however,
indicated that they have no objection to our distributing this message.

P.S. Thanks to Dave Capshaw, George Fann, Warren Hunt, Ken Rimey, and Carl
Quillen for helping debug this message.  Ken Rimey,
rimey@ernie.Berkeley.EDU, makes the following remarks about bringing up
this release of KCL under BSD 4.3 on a Vax.

1.  Bringing up KCL under BSD4.3.  The machine on which I installed kcl was
a Vax 88xx running Ultrix V2.0.  Kcl crashed when executing the final
save-system command in init_kcl.lsp.  It also did so on a Vax running
BSD4.3.  (I don't know of any Vaxen still running 4.2.)  The problem is
caused by some highly non-portable code introduced into Lsave() in
c/unixsave.c since the version of March 28, 1986.  I deleted the new code
and reintroduced the old which had been commented out.  Here is the
resulting working Lsave():

Lsave()
{
        char filename[256];

        check_arg(1);
        check_type_or_pathname_string_symbol_stream(&vs_base[0]);
        coerce_to_filename(vs_base[0], filename);

        _cleanup();
        memory_save(kcl_self, filename);
        exit(0);
        /*  no return  */
}

KCL ran successfully after fixing only the Lsave problem.

2. The files o/makefile and unixport/makefile define variables that need to be
changed when compiling for any machine other than a Sun-3.  These definitions
are found at the heads of these files.  Here is the head of my copy of
o/makefile:

        MACHINE = VAX
        #       Select 'VAX', 'SUN', 'SUN2R3', 'SUN3', 'ISI', 'SEQ', 'IBMRT',
        #       or 'NEWS'.

        CHTAB   = char_table.s
        #       Select 'char_table.s' or 'sun_chtab.s'.
        #       1) char_table.s : for VAX, SEQ and NEWS
        #       2) sun_chtab.s  : for SUN, SUN2R3 and SUN3
        #       3) isi_chtab.s  : for ISI
        #       4) ibmrt_chtab.s: for IBMRT

For machines other than Sun-3, one might change the MAKE KCL
section of this message to:

tutorial% cd /usr/joe/kcl/
tutorial% vi o/makefile   (If not Sun-3, change definitions MACHINE and CHTAB.)
tutorial% vi unixport/makefile   (If not Sun-3, change definition of MACHINE.)
tutorial% su
password: super-user-password
tutorial# cp  h/cmpinclude.h /usr/include
tutorial# exit
tutorial% make

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jul  2 03:15:19 1987
Date: Thu, 2 Jul 87 03:15:09 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #163
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 2 Jul 87 03:09 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa25944; 1 Jul 87 8:39 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa07078; 1 Jul 87 8:38 EDT
Date: Tue 30 Jun 1987 23:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #163
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Wednesday, 1 Jul 1987     Volume 5 : Issue 163

Today's Topics:
  Theory - The Symbol Grounding Problem & Graded Categories

----------------------------------------------------------------------

Date: 28 Jun 87 23:56:43 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem....

In article <919@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:
> ......
> >     .... The feature extractor obviates the symbol-grounding
> >     problem.
>
> ..... You are vastly underestimating the problem of
> sensory categorization, sensory learning, and the relation between
> lower and higher-order categories. Nor is it obvious that symbol manipulation
> can still be regarded as just symbol manipulation when the atomic symbols
> are constrained to be the labels of sensory categories....

I still think we're having more trouble with terminology than we
would have with the concepts if we understood each other.  To
get a little more concrete, how about walking through what a machine
might do in perceiving a chair?

I was just looking at a kitchen chair, a brown wooden kitchen
chair against a yellow wall, in side light from a window.  Let's
let a machine train its camera on that object.  Now either it
has a mechanical array of receptors and processors, like the
layers of cells in a retina, or it does a functionally
equivalent thing with sequential processing.  What it has to do
is compare the brightness of neighboring points to find places
where there is contrast, find contrast in contiguous places so
as to form an outline, and find closed outlines to form objects.
There are some subtleties needed to find partly hidden objects,
but I'll just assume they're solved.  There may also be an
interpretation of shadow gradations to perceive roundness.
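
The first step of that sketch (comparing the brightness of neighboring
points) is easy to make concrete.  A minimal Common Lisp fragment, with
all names hypothetical and the harder parts (linking contrast points
into outlines, testing outlines for closure) left out:

;; IMAGE is a 2-D array of brightness values.  Collect the points
;; whose right or lower neighbor differs by more than THRESHOLD.
(defun contrast-points (image threshold)
  (let ((rows (array-dimension image 0))
        (cols (array-dimension image 1))
        (points '()))
    (dotimes (r rows (nreverse points))
      (dotimes (c cols)
        (let ((right (if (< (1+ c) cols)
                         (abs (- (aref image r (1+ c)) (aref image r c)))
                         0))
              (down  (if (< (1+ r) rows)
                         (abs (- (aref image (1+ r) c) (aref image r c)))
                         0)))
          (when (> (max right down) threshold)
            (push (list r c) points)))))))

The genuinely hard steps are the ones the paragraph passes over:
grouping contiguous contrast points into outlines and deciding which
outlines close on themselves.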

Now the machine has the outline of an object in 2 dimensions,
and maybe some clues to the 3rd dimension.  There are CAD
programs that, given a complete description of an object in
3D, can draw any 2D view of it.  How about reversing this
essentially deductive process to inductively find a 3D form that
would give rise to the 2D view the machine just saw?  Let the
machine guess that most of the odd angles in the 2D view are
really right angles in 3D.  Then, if the object is really
unfamiliar, let the machine walk around the chair, or pick it
up and turn it around, to refine its hypothesis.

Now the machine has a form.  If the form is still unfamiliar,
let it ask, "What's that, Daddy?"  Daddy says, "That's a chair."
The machine files that information away.  Next time it sees a
similar form it says "Chair, Daddy, chair!"  It still has to
learn about upholstered chairs, but give it time.

That brings me to a question: do you really want this machine
to be so Totally Turing that it grows like a human, learns like
a human, and not only learns new objects, but, like a human born
at age zero, learns how to perceive objects?  How much of its
abilities do you want to have wired in, and how much learned?

But back to the main question.  I have skipped over a lot of
detail, but I think the outline can in principle be filled in
with technologies we can imagine even if we do not have them.
How much agreement do we have with this scenario?  What are
the points of disagreement?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 29 Jun 87 08:49:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: invertibility as a graded category ?


Harnad writes:

> In responding to Cugini and Brilliant I misinterpreted a point that
> the former had made and the latter reiterated. It's a point that's
> come up before: What if the iconic representation -- the one that's
> supposed to be invertible -- fails to preserve some objective property
> of the sensory projection? ... The reply is that an analog
> representation is only analog in what it preserves, not in what it fails
> to preserve. Icons are hence approximate too. ...
> There is no requirement that all the features of the sensory
> projection be preserved in icons; just that some of them should be --
> enough to subserve our discrimination capacities.
> ... But none of this
> information loss in either sensory projections or icons (or, for that
> matter, categorical representations) compromises groundedness. It just
> means that our representations are doomed to be approximations.

But then why say that icons, but not categorical representations or symbols,
are/must be invertible?  (This was *your* original claim, after all.)
Isn't it just a vacuous tautology to claim that icons are invertible
wrt the information they preserve, but not wrt the information they
lose?  How could it be otherwise?  Aren't even symbols likewise
invertible in that weak sense?

(BTW, I quite agree that the information loss does not compromise
grounding - indeed my very point was that there is nothing especially
scandalous about non-invertible icons.)

Look, there's information loss (many-to-one mapping) at each stage of the game:

1. distal object

2. sensory projection

3. icons

4. categorical representation

5. symbols


It was you who seemed to claim that there was some special invertibility
between stages 2 and 3 - but now you claim for it invertibility in
only such a vitiated sense as to apply to all the stages.

So a) do you still claim that the transition between 2 and 3 is invertible
in some strong sense which would not be true of, say, [1 to 2] or [3 to 4], and
b) if so, what is that sense?

Perhaps you just want to say that the transition between 2 and 3 is usually
more invertible than the other transitions?

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 29 Jun 87 10:35:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: epistemological exception-taking


> > From me:
> >
> > What if there were a few-to-one transformation between the skin-level
> > sensors ...
> > My example was to suppose that #1:
> > a combination of both red and green retinal receptors and #2 a yellow
> > receptor BOTH generated the same iconic yellow.

> From: Neil Hunt <spar!hunt@decwrl.dec.com>
>
> We humans see the world (to a first order at least) through red, green and
> blue receptors. We are thus unable to distinguish between light of a yellow
> frequency, and a mixture of light of red and green frequencies, and we assign
> to them a single token - yellow. However, if our visual apparatus was
> equipped with yellow receptors as well, then these two input stimuli
> would *appear* quite different, as indeed they are. ...

Oh, really?  How do you claim to know what the mental effect would be
of a hypothetical visual nerve apparatus?  Do you know what it feels
like to be a bat?

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 29 Jun 87 20:53:28 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem: Against Rosch &
         Wittgenstein


In article <931@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> >     Why require 100% accuracy in all-or-none categorizing?... I learned
> >     recently that I can't categorize chairs with 100% accuracy.
>
> This is a misunderstanding. The "100% accuracy" refers to the
> all-or-none-ness of the kinds of categories in question. The rival
> theories in the Roschian tradition have claimed that many categories
> (including "bird" and "chair") do not have "defining" features. Instead,
> membership is either fuzzy or a matter of degree (i.e., percent)....

OK: once I classify a thing as a chair, there are no two ways about it:
it's a chair.  But there can be a stage when I can't decide.  I
vacillate: "I think it's a chair."  "Are you sure?"  "No, I'm not sure,
maybe it's a bed."  I would never say seriously that I'm 40 percent
sure it's a chair, 50 percent sure it's a bed, and 10% sure it's an
unfamiliar object I've never seen before.

I think this is in agreement with Harnad when he says:

> Categorization performance (with all-or-none categories) is highly reliable
> (close to 100%) and MEMBERSHIP is 100%. Only speed/ease of categorization and
> typicality ratings are a matter of degree....
> This is not to deny that even all-or-none categorization may encounter
> regions of uncertainty. Since ALL category representations in my model are
> provisional and approximate .....  it is always possible that
> the categorizer will encounter an anomalous instance that he cannot classify
> according to his current representation.....
> ...... This still does not imply that membership is
> fuzzy or a matter of degree.....

So to pass the Total Turing Test, a machine should respond the way a
human does when faced with inadequate or paradoxical sensory data: it
should vacillate (or bluff, as some people do).  In the presence of
uncertainty it will not make self-consistent statements about
uncertainty, but uncertain and possibly inconsistent statements about
absolute membership.
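
To make that behavioral claim concrete, here is a minimal sketch (the
scores, threshold, and categories are invented for illustration, not
anyone's actual model): the internal evidence is graded, but what gets
reported is always an absolute membership claim, which can flip from
one query to the next in the uncertain region.

    import random

    # Hypothetical graded internal evidence for each candidate category.
    evidence = {"chair": 0.4, "bed": 0.5, "unfamiliar": 0.1}

    def judge(evidence):
        """Report absolute membership, never a percentage."""
        best, runner_up = sorted(evidence, key=evidence.get, reverse=True)[:2]
        if evidence[best] - evidence[runner_up] < 0.2:   # region of uncertainty
            best = random.choice([best, runner_up])      # vacillate (or bluff)
            return "I think it's a %s ... no, I'm not sure." % best
        return "It's a %s." % best

    for _ in range(3):
        print(judge(evidence))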

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 29 Jun 87 23:34:58 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: The symbol grounding problem: "Fuzzy" categories?


In comp.ai.digest: Laws@STRIPE.SRI.COM (Ken Laws) asks re. "Fuzzy Symbolism":

>       Is a mechanical rubber penguin a penguin?... dead...dismembered
>       genetically damaged or altered...? When does a penguin embryo become
>       a penguin?... I can't unambiguously define the class of penguins, so
>       how can I be 100% certain that every penguin is a bird?... and even
>       that could change if scientists someday discover incontrovertible
>       evidence that penguins are really fish. In short, every category is a
>       graded one except for those that we postulate to be exact as part of
>       their defining characteristics.

I think you're raising the right questions, but favoring the wrong
answers. My response to this argument for graded or "fuzzy" categories
was that our representations are provisional and approximate. They
converge on the features that will reliably sort members from
nonmembers on the basis of the sample of confusable alternatives
encountered to date. Being always provisional and approximate, they
are always susceptible to revision should the context of confusable
alternatives be widened.
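
As a toy illustration of "provisional and approximate" (the features
and instances here are invented): the representation keeps just the
features that have so far sufficed to sort members from the confusable
alternatives sampled, and it gets revised when the sample widens.

    # Each instance is the set of features observed on it.
    members    = [{"feathers", "beak", "flies"},
                  {"feathers", "beak", "flies"}]
    nonmembers = [{"fur", "tail"}]

    def provisional_features(members, nonmembers):
        """Features shared by every member so far and absent from
        every confusable nonmember so far."""
        shared = set.intersection(*members)
        return {f for f in shared
                if not any(f in n for n in nonmembers)}

    print(provisional_features(members, nonmembers))
    # {'feathers', 'beak', 'flies'} -- good enough for the sample so far

    members.append({"feathers", "beak", "swims"})   # a penguin widens it
    print(provisional_features(members, nonmembers))
    # {'feathers', 'beak'} -- revised, but membership stays all-or-none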

But look at the (not so hidden) essentialism in Ken's query: "how can I
be 100% certain that every penguin is a bird?". I never promised that!
We're not talking about ontological essences here, about the way things
"really are," from the God's Eye" or omniscient point of view! We're
just talking about how organisms and other devices can sort and label
APPEARANCES as accurately as they do, given the feedback and
experiential sample they get. And this sorting and labeling is
provisional, based on approximate representations that pick out
features that reliably handle the confusable alternatives sampled to
date. All science can do is tighten the approximation by widening the
alternatives (experimentally) or strengthening the features
(theoretically).

But provisionally, we do alright, and it's NOT because we sort things
as being what they are as a matter of degree. A penguin is 100% a bird
(on current evidence) -- no more or less a bird than a sparrow. If
tomorrow we find instances that make it better to sort and label them
as fish, then tomorrow's approximation will be better than today's,
but they'll then be 100% fish, and so on.

Note that I'm not denying that there are graded categories; just that
these aren't them. Examples of graded categories are: big,
intelligent, beautiful, feminine, etc.

>       You are entitled to such an opinion, of course, but I do not
>       accept the position as proven...

(Why opinion, by the way, rather than hypothesis, on the evidence and
logical considerations available? Nor will this hypothesis be proven:
just supported by further evidence and analysis, or else supplanted by
a rival hypothesis that accounts for the evidence better; or the
hypothesis and its supporting arguments may be shown to be incoherent
or imparsimonious...)

>       ...We do, of course, sort and categorize objects when forced to do so.
>       At the point of observable behavior, then, some kind of noninvertible
>       or symbolic categorization has taken place.  Such behavior, however,
>       is distinct from any of the internal representations that produce it.
>       I can carry fuzzy and even conflicting representations until -- and
>       often long after -- the behavior is initiated.  Even at the instant of
>       commitment, my representations need be unambiguous only in the
>       implicit sense that one interpretation is momentarily stronger than
>       the other -- if, indeed, the choice is not made at random.

I can't follow some of this. Categorization is the performance
capacity under discussion here. ("Force" has nothing to do with it!).
And however accurately and reliably people can actually categorize things,
THAT'S how accurately our models must be able to do it under the same
conditions. If there's successful all-or-none performance, the
representational model must be able to generate it. How can the
behavior be "distinct from" the representations that produce it?

This is not to say that representations will always be coherent, or
even that incoherent representations can't sometimes generate correct
categorization (up to a point). But I hardly think that the basis of
the bulk of our reliable all-or-none sorting and labeling will turn
out to be just a matter of momentary relative strengths -- or even
chance -- among graded representations. I think probabilistic mechanisms
are more likely to be involved in feature-finding in the training
phase (category learning) than in the steady-state phase, when
a (provisional) performance asymptote has been reached.
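
One way to read that division of labor (purely a sketch; the data,
sampling scheme, and scoring are all invented): chance enters while
searching for a reliable feature during learning, and drops out of the
settled categorizer.

    import random

    # Invented training sample: (feature-set, is-member) pairs.
    data = [({"feathers", "beak"}, True),  ({"fur", "tail"}, False),
            ({"feathers", "swims"}, True), ({"scales", "swims"}, False)]
    all_features = sorted(set.union(*[fs for fs, _ in data]))

    def accuracy(feature, data):
        return sum((feature in fs) == label for fs, label in data) / len(data)

    # Training phase: probabilistic search over candidate features.
    best, best_acc = None, 0.0
    for _ in range(100):
        f = random.choice(all_features)
        a = accuracy(f, data)
        if a > best_acc:
            best, best_acc = f, a

    # Steady state: the learned detector is deterministic, all-or-none.
    categorize = lambda fs: best in fs
    print(best, categorize({"feathers", "swims"}))   # feathers True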

>       It may also be true that I do reduce some representations to a single
>       neural firing or to some other unambiguous event -- e.g., when storing
>       a memory.  I find this unlikely as a general model.  Coarse coding,
>       graded or frequency encodings, and widespread activation seem better
>       models of what's going on.  Symbolic reasoning exists in pure form
>       only on the printed page; our mental manipulation even of abstract
>       symbols is carried out with fuzzy reasoning apparatus.

Some of this sounds like implementational considerations rather than
representational ones. The question was: Do all-or-none categories
(such as "bird") have "defining" features that can be used to sort
members from nonmembers at the level of accuracy (~100%) with which we
sort? However they are coded, I claim that those features MUST exist
in the inputs and must be detected and used by the categorizer. A
penguin is not a bird as a matter of degree, and the features that
reliably assign it to "bird" are not graded. Nor is "bird" a fuzzy
category such as "birdlike." And, yes, symbolic representations are
likely to be more apodictic (i.e., categorical) than nonsymbolic ones.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Jul  2 03:15:03 1987
Date: Thu, 2 Jul 87 03:14:54 edt
From: in%AIList@stripe.sri.com@vtcs1
To: ailist@stripe.sri.com
Subject: AIList Digest   V5 #164
Status: R

Received: from relay.cs.net by vtcs1.cs.vt.edu; Thu, 2 Jul 87 03:06 EDT
Received: from relay.cs.net by RELAY.CS.NET id aa24605; 1 Jul 87 3:39 EDT
Received: from stripe.sri.com by RELAY.CS.NET id aa05269; 1 Jul 87 3:36 EDT
Date: Tue 30 Jun 1987 23:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@stripe.sri.com>
Subject: AIList Digest   V5 #164
To: AIList@stripe.sri.com
Reply-to: AIList@stripe.sri.com
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest           Wednesday, 1 Jul 1987     Volume 5 : Issue 164

Today's Topics:
  Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 30 Jun 87 00:19:12 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:

>       how about walking through what a machine might do in perceiving a chair?
>       ...let a machine train its camera on that object.  Now either it
>       has a mechanical array of receptors and processors, like the layers
>       of cells in a retina, or it does a functionally equivalent thing with
>       sequential processing.  What it has to do is compare the brightness of
>       neighboring points to find places where there is contrast, find
>       contrast in contiguous places so as to form an outline, and find
>       closed outlines to form objects... Now the machine has the outline
>       of an object in 2 dimensions, and maybe some clues to the 3rd
>       dimension...  inductively find a 3D form that would give rise to the
>       2D view the machine just saw... Then, if the object is really
>       unfamiliar, let the machine walk around the chair, or pick it
>       up and turn it around, to refine its hypothesis.

So far, apart from its understandable bias toward current engineering
hardware concepts, there is no particular objection to this description
of a stereoptic sensory receptor.
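
The first step in that walk-through -- comparing the brightness of
neighboring points to find contrast -- is easy to make concrete. A
minimal version, with an invented brightness array and an invented
contrast threshold (no claim about retinal processing):

    # A bright square on a dark ground.
    img = [[0, 0, 0, 0, 0],
           [0, 9, 9, 9, 0],
           [0, 9, 9, 9, 0],
           [0, 9, 9, 9, 0],
           [0, 0, 0, 0, 0]]
    THRESH = 4

    def edges(img):
        """Mark points whose brightness differs sharply from a neighbor."""
        h, w = len(img), len(img[0])
        out = [["."] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):    # right and down neighbors
                    y2, x2 = y + dy, x + dx
                    if y2 < h and x2 < w and \
                       abs(img[y][x] - img[y2][x2]) > THRESH:
                        out[y][x] = out[y2][x2] = "#"
        return out

    for row in edges(img):
        print("".join(row))
    # The contiguous '#' marks trace the closed outline of the square.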

>       Now the machine has a form.  If the form is still unfamiliar,
>       let it ask, "What's that, Daddy?"  Daddy says, "That's a chair."
>       The machine files that information away.  Next time it sees a
>       similar form it says "Chair, Daddy, chair!"  It still has to
>       learn about upholstered chairs, but give it time.

Now you've lost me completely. Having acknowledged the intricacies of
sensory transduction, you seem to think that the problem of categorization
is just a matter of filing information away and finding "similar forms."

>       do you really want this machine to be so Totally Turing that it
>       grows like a human, learns like a human, and not only learns new
>       objects, but, like a human born at age zero, learns how to perceive
>       objects?  How much of its abilities do you want to have wired in,
>       and how much learned?

That's an empirical question. All it needs to do is pass the Total
Turing Test -- i.e., exhibit performance capacities that are
indistinguishable from ours. If you can do it by building everything
in a priori, go ahead. I'm betting it'll need to learn -- or be able to
learn -- a lot.

>       But back to the main question.  I have skipped over a lot of
>       detail, but I think the outline can in principle be filled in
>       with technologies we can imagine even if we do not have them.
>       How much agreement do we have with this scenario?  What are
>       the points of disagreement?

I think the main details are missing, such as how the successful
categorization is accomplished. Your account also sounds as if it
expects innate feature detectors to pick out objects for free, more or
less nonproblematically, and then serve as a front end for another
device (possibly a conventional symbol-cruncher a la standard AI?)
that will then do the cognitive heavy work. I think that the cognitive
heavy work begins with picking out objects, i.e., with categorization.
I think this is done nonsymbolically, on the sensory traces, and that it
involves learning and pattern recognition -- both sophisticated
cognitive activities. I also do not think this work ends, to be taken
over by another kind of work: symbolic processing. I think that ALL of
cognition can be seen as categorization. It begins nonsymbolically,
with sensory features used to sort objects according to their names on
the basis of category learning; then further sorting proceeds by symbolic
descriptions, based on combinations of those atomic names. This hybrid
nonsymbolic/symbolic categorizer is what we are; not a pair of modules,
one that picks out objects and the other that thinks and talks about them.
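
To make the hybrid picture concrete, a bare sketch (detectors,
features, and definitions all invented): atomic names are assigned
nonsymbolically from sensory features, and further categories are
symbolic descriptions composed of those grounded atoms.

    # Nonsymbolic stage: sensory features -> grounded atomic names.
    def atomic_labels(features):
        atoms = set()
        if {"four_legs", "mane"} <= features:
            atoms.add("horse")
        if "stripes" in features:
            atoms.add("striped")
        return atoms

    # Symbolic stage: new categories as combinations of atomic names.
    symbolic_defs = {"zebra": {"horse", "striped"}}

    def categorize(features):
        names = atomic_labels(features)
        for name, definition in symbolic_defs.items():
            if definition <= names:        # all defining atoms present
                names.add(name)
        return names

    print(categorize({"four_legs", "mane", "stripes"}))
    # {'horse', 'striped', 'zebra'} -- one categorizer, two kinds of step
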
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 30 Jun 87 20:52:21 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In reply to my statement that
>>      the *semantic* meaning of a symbol is still left largely unconstrained
>>      even after you take account of its "grounding" in perceptual
>>      categorization. This is because what matters for intentional content
>>      is not the objective property in the world that's being detected, but
>>      rather how the subject *conceives* of that external property, a far
>>      more slippery notion...

Stevan Harnad (harnad@mind.UUCP) writes:
>
> As to what people "conceive" themselves to be categorizing: My model
> is proposed in a framework of methodological epiphenomenalism. I'm
> interested in what's going on in people's heads only inasmuch as it is
> REALLY generating their performance, not just because they think or
> feel it is.

I regret the subjectivistic tone of my loose characterization; what people
can introspect is indeed not at issue. I was merely pointing out that the
*meaning* of a symbol is crucially dependent on the rest of the cognitive
system, as shown in the Churchlands' example:

>>                          ... primitive people may be able to reliably
>>      categorize certain large-scale atmospheric electrical discharges;
>>      nevertheless, the semantic content of their corresponding states might
>>      be "Angry gods nearby" or some such.
>>
>                ... "Angry gods nearby" is not just an atomic label for
> "thunder" (otherwise it WOULD be equivalent to it in my model -- both
> labels would pick out approximately the same thing); in fact, it is
> decomposable, and hence has a different meaning in virtue of the
> meanings of "angry" and "gods." There should be corresponding internal
> representational differences (iconic, categorical and symbolic) that
> capture that difference.

"Angry gods nearby" is composite in *English*, but it need not be composite
in native, or, more to the point, in the supposed inner language of the
native's categorical mechanisms. They may have a single word, say "gog",
which we would want to translate as "god-noise" or some such. Perhaps they
train their children to detect gog in precisely the same way we train
children to detect thunder -- our internal thunder-detectors are identical.
Nevertheless, the output of their thunder-detector does not *mean* "thunder".

Let me try to clarify the point of these considerations.  I am all for an
inquiry into the mechanisms underlying our categorization abilities. Anything
you can discover about these mechanisms would certainly be a major
contribution to psychology.  My only concern is with semantics:  I was piqued
by what seemed to be an ambitious claim about the significance of the
psychology of categorization for the problem of "intentionality" or intrinsic
meaningfulness. I merely want to emphasize that the former, interesting
though it is, hardly makes a dent in the latter.

As I said, there are two reasons why meaning resists explication by this kind
of psychology:  (1) holism: the meaning of even a "grounded" symbol will
still depend on the rest of the cognitive system; and (2) normativity:
meaning is dependent upon a determination of what is a *correct* response,
and you can't simply read such a norm off from a description of how the
mechanism in fact performs.

I think these points, particularly (1), should be quite clear.  The fact that
a subject's brain reliably asserts the symbol "foo" when and only when
thunder is presented in no way "fixes" the meaning of "foo". Of course it is
obviously a *constraint* on what "foo" may mean: it is in fact part of what
Quine called the "stimulus meaning" of "foo", his first constraint on
acceptable translation.  Nevertheless, by itself it is still way too weak to
do the whole job, for in different contexts the positive output of a reliable
thunder-detector could mean "thunder", something co-extensive but
non-synonymous with "thunder", "god-noise", or just about anything else.
Indeed, it might not *mean* anything at all, if it were only part of a
mechanical thunder-detector which couldn't do anything else.

I wonder if you disagree with this?

As to normativity, the force of problem (2) is particularly acute when
talking about the supposed intentionality of animals, since there aren't any
obvious linguistic or intellectual norms that they are trying to adhere to.
Although the mechanics of a frog's prey-detector may be crystal clear, I am
convinced that we could easily get into an endless debate about what, if
anything, the output of this detector really *means*.

The normativity problem is germane in an interesting way to the problem of
human meanings as well.  Note, for example, that in doing this sort of
psychology, we probably won't care about the difference between correctly
identifying a duck and mis-identifying a good decoy -- we're interested in
the perceptual mechanisms that are the same in both cases.  In effect, we are
limiting our notion of "categorization" to something like "quick and largely
automatic classification by observation alone".

We pretty much *have* to restrict ourselves in this way, because, in the
general case, there's just no limit to the amount of cognitive activity that
might be required in order to positively classify something.  Consider what
might go into deciding whether a dolphin ought to be classified as a fish,
whether a fetus ought to be classified as a person, etc.  These decisions
potentially call for the full range of science and philosophy, and a
psychology which tries to encompass such decisions has just bitten off more
than it can chew:  it would have to provide a comprehensive theory of
rationality, and such an ambitious theory has eluded philosophers for some
time now.

In short, we have to ignore some normative distinctions if we are to
circumscribe the area of inquiry to a theoretically tractable domain of
cognitive activity.  (Indeed, in spite of some of your claims, we seem
committed to the notion that we are limiting ourselves to particular
*modules* as explained in Fodor's modularity book.) Unfortunately -- and
here's the rub -- these normative distinctions *are* significant for the
*meaning* of symbols.  ("Duck" doesn't *mean* the same thing as "decoy").

It seems that, ultimately, the notion of *meaning* is intimately tied to
standards of rationality that cannot easily be reduced to simple features of
a cognitive mechanism.  And this seems to be a deep reason why a descriptive
psychology of categorization barely touches the problem of intentionality.

Anders Weinstein
BBN Labs

------------------------------

Date: 30 Jun 87 19:02:28 GMT
From: teknowledge-vaxc!dgordon@beaver.cs.washington.edu  (Dan Gordon)
Subject: Re: The symbol grounding problem: Against Rosch &
         Wittgenstein

In article <931@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>(And I must repeat: Whether or not we can introspectively report the features
>we are actually using is irrelevant. As long as reliable, consensual,
>all-or-none categorization performance is going on, there must be a set of
>underlying features governing it -- both with sensory and more

Is this so?  There is no reliable, consensual all-or-none categorization
performance without a set of underlying features?  That sounds like a
restatement of the categorization theorist's credo rather than a thing
that is so.

Dan Gordon

------------------------------

Date: 30 Jun 87 20:49:32 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <937@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> ...
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> >     how about walking through what a machine might do in perceiving a chair?
> >     ... (a few steps skipped here)
> >     Now the machine has a form.  If the form is still unfamiliar,
> >     let it ask, "What's that, Daddy?"  Daddy says, "That's a chair."
> >     The machine files that information away.  Next time it sees a
> >     similar form it says "Chair, Daddy, chair!" ...
>
> Now you've lost me completely. Having acknowledged the intricacies of
> sensory transduction, you seem to think that the problem of categorization
> is just a matter of filing information away and finding "similar forms."

I think it is.  We've found a set of lines, described in 3 dimensions,
that can be rotated to match the outline we derived from the view of a
real chair.  We file it in association with the name "chair."  A
"similar form" is some other outline that can be matched (to within
some fraction of its size) by rotating the same 3D description.
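
A bare-bones rendering of "rotate the same 3D description to match the
outline" (the wireframe, rotation grid, and orthographic projection are
all invented for the sketch): project the stored model at trial
orientations and keep the best-fitting one.

    import math

    # Stored 3D description: endpoints of a few model edges.
    model = [((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0))]
    # Outline derived from the view: the matching 2D edges.
    outline = [((0, 0), (0.7, 0)), ((0, 0), (0, 1))]

    def rotate_y(p, a):
        x, y, z = p
        return (x * math.cos(a) + z * math.sin(a), y,
                -x * math.sin(a) + z * math.cos(a))

    def project(p):
        return (p[0], p[1])               # orthographic: drop depth

    def fit_error(angle):
        """Total distance between projected model and seen endpoints."""
        err = 0.0
        for (p, q), (u, v) in zip(model, outline):
            for end3d, end2d in ((p, u), (q, v)):
                x, y = project(rotate_y(end3d, angle))
                err += math.hypot(x - end2d[0], y - end2d[1])
        return err

    err, deg = min((fit_error(math.radians(d)), d) for d in range(0, 360, 5))
    print("best rotation: %d degrees (error %.3f)" % (deg, err))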

> I think the main details are missing, such as how the successful
> categorization is accomplished......

Are we having a problem with the word "categorization"?  Is it the
process of picking discrete objects out of a pattern of light and
shade ("that's a thing"), or the process of naming the object ("that
thing is a chair")?

> ..... Your account also sounds as if it
> expects innate feature detectors to pick out objects for free, more or
> less nonproblematically.....

You left out the part where I referred to computer-aided-design
modules.  I think we can find outlines by looking for contiguous
contrasts.  If the outlines are straight we (the machine, maybe also
humans) can define the ends of the straight lines in the visual plane,
and hypothesize corresponding lines in space.  If hard-coding this
capability gives an "innate feature detector" then that's what I want.
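
That hard-coded detector is also easy to sketch (tolerance and
representation invented): collapse a run of contiguous contrast points
to its two endpoints when the run is straight; those 2D endpoints are
what a later stage would lift into hypothesized lines in space.

    def straight_segment(points, tol=1e-6):
        """Return the endpoints if the points are collinear, else None."""
        (x0, y0), (x1, y1) = points[0], points[-1]
        for x, y in points:
            # Cross product measures deviation from the chord.
            if abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) > tol:
                return None
        return ((x0, y0), (x1, y1))   # ends of the line in the visual plane

    print(straight_segment([(0, 0), (1, 1), (2, 2)]))  # ((0, 0), (2, 2))
    print(straight_segment([(0, 0), (1, 3), (2, 2)]))  # None: not straight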

> ...... and then serve as a front end for another
> device (possibly a conventional symbol-cruncher a la standard AI?)
> that will then do the cognitive heavy work. I think that the cognitive
> heavy work begins with picking out objects, i.e., with categorization.

I think I find objects with no conscious knowledge of how I do it (is
that what you call "categorization"?).  Saying what kind of object it is
more often involves conscious symbol-processing (sometimes one forgets
the word and calls a perfectly familiar object "that thing").

> I think this is done nonsymbolically, on the sensory traces, and that it
> involves learning and pattern recognition -- both sophisticated
> cognitive activities.

If you're talking about finding objects in a field of light and shade, I
agree that it is done nonsymbolically, and everything else you just said.

> .....  I also do not think this work ends, to be taken
> over by another kind of work: symbolic processing.....

That's where I have trouble.  Calling a penguin a bird seems to me
purely symbolic, just as calling a tomato a vegetable in one context,
and a fruit in another, is a symbolic process.

> ..... I think that ALL of
> cognition can be seen as categorization. It begins nonsymbolically,
> with sensory features used to sort objects according to their names on
> the basis of category learning; then further sorting proceeds by symbolic
> descriptions, based on combinations of those atomic names. This hybrid
> nonsymbolic/symbolic categorizer is what we are; not a pair of modules,
> one that picks out objects and the other that thinks and talks about them.

Now I don't understand what you said.  If it begins nonsymbolically,
and proceeds symbolically, why can't it be done by linking a
nonsymbolic module to a symbolic module?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 30 Jun 87 19:47:08 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <937@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> ....
> >     do you really want this machine to be so Totally Turing that it
> >     grows like a human, learns like a human, and not only learns new
> >     objects, but, like a human born at age zero, learns how to perceive
> >     objects?  How much of its abilities do you want to have wired in,
> >     and how much learned?
>
> That's an empirical question. All it needs to do is pass the Total
> Turing Test -- i.e., exhibit performance capacities that are
> indistinguishable from ours. If you can do it by building everything
> in a priori, go ahead. I'm betting it'll need to learn -- or be able to
> learn -- a lot.

To refine the question: how long do you imagine the Total Turing Test
will last?  Science fiction stories have robots or aliens living in
human society as humans for periods of years, as long as they live with
strangers, but failing after a few hours trying to supplant a human and
fool his or her spouse.

By "performance capabilities," do you mean the capability to adapt as a
human does to the experiences of a lifetime?  Or only enough learning
capability to pass a job interview?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
