From in%@vtcs1 Wed Jan 28 20:04:15 1987
Date: Wed, 28 Jan 87 20:04:08 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #13
Status: R


AIList Digest            Friday, 23 Jan 1987       Volume 5 : Issue 13

Today's Topics:
  Philosophy - Introspection & Consciousness

----------------------------------------------------------------------

Date: Tue, 20 Jan 87 21:17:02 CST
From: Girish Kumthekar <kumthek%lsu.csnet@RELAY.CS.NET>
Subject: Another Lengthy Philosophical ....

I can see a storm brewing on the horizon about minds, consciousness etc....
AILIST readers beware!!!! Remember Turing Machines,...... etc ?

Possible future contributors, PLEASE limit your contributions to an
unclutterable volume!!

Girish Kumthekar

kumthek%lsu@CSNET-RELAY.CSNET

------------------------------

Date: Thu, 22 Jan 87 15:51:42 PST
From: kube%cogsci.Berkeley.EDU@berkeley.edu (Paul Kube)
Subject: The sidetracking of introspection

>From: hester@ai.cel.fmc.com (Tom Hester):

>Finally, R.J. Faichney is absolutely correct.  It was not Freud that
>side tracked psychology from introspection.  Rather it was the "dust
>bowl empiricists" that rode behaviorism to fame and fortune that did it.

On the chance that it's worth arguing about the intellectual
history of psychology on AIList:

The behaviorists didn't just sidetrack introspection; they sidetracked
mentalism---engine, car, and caboose, so to speak.  Introspection was
already demoted from the position it had had as infallible source of
psychological truth by James (he called his Principles of Psychology
"little more than a collection of illustrations of the difficulty of
discovering by direct introspection exactly what our feelings and
their relations are").  But James believed there are not any unconscious
mental states; Freud should get some credit for further demoting
introspection by arguing so influentially that there are.

Mentalism is back on track now in the post-behaviorist era, but a
principled skepticism about introspection remains.
A fascinating contemporary survey on the topic is Nisbett & Wilson,
"Telling more than we can know: verbal reports on mental processes",
Psych. Rev. May 1977.  From the abstract:  "Evidence is reviewed
which suggests that there may be little or no direct introspective
access to higher order cognitive processes."

--Paul Kube
kube@cogsci.berkeley.edu,    ...!ucbvax!kube

------------------------------

Date: Thu, 22 Jan 87 10:38:21 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Objective vs. Subjective Inquiry

"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai:

>       ...so the toothache is "real" but "subjective"...
>       But...if both [subjective and objective phenomena] are real,
>       then we know why we need consciousness as a concept --
>       because without it we cannot explain/talk about the former class of
>       events - even if the latter class is entirely explicable in its own
>       terms. Ie, why should we demand of consciousness that it have
>       explanatory power for objective events?  It's like demanding that
>       magnetism be acoustically detectible before we accept it as a valid
>       concept.

Fortunately, there is a simple answer to this: Explanation itself is
(or should be) purely an objective matter. Magnetism, and all other
tractable physical phenomena are (in principle) objectively
explainable, so the above analogy simply does not work. Nagel has shown
that all of the other reductions in physics have always been
objective-to-objective. The mind/body problem is an exception
precisely because it resists subjective-to-objective reduction. Now if
there's something (subjectively) real and irreducible left over that
is left out of an objective account, we have to learn to live with
that explanatory incompleteness, rather than wishing it away by
hopeless mixing of categories and hopeful pumping of analogies, images
and interpretations. (In fact, I think that if all the objective
manifestations of consciousness -- performance capacity and neural
substrate -- are indeed "entirely explicable [in their own objective]
terms," as I believe and Cugini seems to concede, then why not get on
to explaining them thus, rather than indulging in subjective
overinterpretation and wishful thinking, which can only obscure or
even retard objective progress?)

[Please do not pounce on the parenthetic "subjectively" that preceded
"real," above. The problem of the objective status of consciousness
IS the mind/body problem, and to declare that subjectively-real =
objectively-real is just to state an empty obiter dictum. It happens
to be a correlative fact that all detectable physical phenomena --
i.e., all objective observables -- have subjective manifestations.
That's what we MEAN by observability, intersubjectivity, etc. But the
fact that the objective always piggy-backs on the subjective still
doesn't settle the objective status of the subjective itself. I'll go
even further. I'm not a solipsist. I'm as confident as I am of any
objective inference I have made that other people really exist and have
experiences like my own. But even THAT sense of the "reality" of the
subjective does not help when it comes to trying to give an objective
account of it. As to subjective accounts -- well, I don't go in much
for hermeneutics...]

>       I can well understand how those who deny the reality of experiences
>       (eg, toothaches) would then insist on the superfluousness of the
>       concept of consciousness - but Harnad clearly is not one such.
>       So...we need consciousness, not to explain public, objective events,
>       such as neural activity, but to explain, or at least discuss, private
>       subjective events.  If it be objected that the latter are outside the
>       proper realm of science, so be it, call it schmience or philosophy or
>       whatever you like. - but surely anything that is REAL, even if
>       subjective, can be the proper object for some sort of rational
>       study, no?

Some sort, no doubt. But not an objective sort, and that's the point.
Empirical psychology, neuroscience and artificial intelligence are
all, I presume, branches of objective inquiry. I know that this is
also the heyday of hermeneutics, but although I share with a vengeance
the belief that philosophy can make a substantive contribution to the
cognitive sciences today, I don't believe that that contribution will
be hermeneutic. Rather, I think it will be logical, methodological
and foundational, pointing out hidden complexities, incoherencies and
false-starts. Let's leave the subjective discussion of private events
to lit-crit, where it belongs.

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: 23 Jan 87 02:29:51 GMT
From: mcvax!cwi.nl!lambert@seismo.CSS.GOV (Lambert Meertens)
Subject: Submission for mod-ai

Path: mcvax!lambert
From: lambert@mcvax.cwi.nl (Lambert Meertens)
Newsgroups: mod.ai
Subject: Re: C-2 as C-1
Summary: Long again--please skip this article
Keywords: mind, consciousness, memory
Message-ID: <7259@boring.mcvax.cwi.nl>
Date: 23 Jan 87 02:29:51 GMT
References: <424@mind.UUCP> <12272599850.11.LAWS@SRI-STRIPE.ARPA>
Reply-To: lambert@boring.UUCP (Lambert Meertens)
Organization: CWI, Amsterdam
Lines: 104

In article <12272599850.11.LAWS@SRI-STRIPE.ARPA> Laws@SRI-STRIPE.ARPA
(Ken Laws) writes:

>> From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>:
>>
>> Worse than that, C-2 already presupposes C-1. You can't
>> have awareness-of-awareness without having awareness [...].
>
> A quibble: It would be possible [...]  that my entire conscious
> perception of a current toothache is an "illusory pain" [...].

I agree.

> These views do not solve the problem, of course; the C-2 consciousness
> must be explained even if the C-1 experience was an illusion.  My conscious
> memory of the event is more than just an uninterpreted memory of a memory
> of a memory ...

Here I am not so sure.  To start with, the only evidence we have of
organisms having C-1 is if they are on the C-2 level, that is, if they
*claim* they experience something.  Even granting them that they are not
lying in the sense of a conscious :-) misrepresentation, why should we (in
our capacity as scientific inquirers) take their word for it?  After
all, more than a few people truly believe they have the most incredible
psychic powers.

Now how can we know that the "awareness-of-awareness" is a conscious thing?
There seems to be a hidden assumption that if someone utters a statement
(like "It is getting late"), then the same organism is consciously aware
of the fact expressed in the statement.  Normally, I would grant you that,
because that is the normal everyday meaning of "conscious" and "aware", but
not in the current context, in which these words are isolated from their
original function and merely provide an expedient way to express certain things.

[You will find that people in general have no problem in saying that a fly
is aware of something, or experiences pain, even though for all we know
there is no higher (coordinating) neural centre in this organism that would
provide a physiological substrate.  Many people even have no problem in
ascribing consciousness to trees.  I claim that if people (but not young
children) do have qualms about saying that an automaton experiences something,
it is because they have been *taught* that consciousness is limited to
animate, organic, objects.]

So the mere speech act "It is getting late" does not by itself imply a
conscious awareness of it getting late.  Otherwise, we are forced to
ascribe consciousness of the occurrence of a syntax error to a compiler
mumbling "*** syntax error".  Likewise, not only does someone saying "I
have a toothache" not imply that the speaker is experiencing a toothache,
it also does not imply that the speaker is consciously aware of the
(possibly illusory) fact of experiencing one.  The only evidence of that
would be a C-3 act, someone saying: "I am aware of the fact that I am aware
of experiencing a toothache."  But again, why should we believe them?  (And
so on, ad nauseam.)

This is getting so complicated mainly because of the inadequacy of words.
Allow me to try again.  You, reader, are having a toothache.  You are
really having one.  I can tell, because you are visibly in pain, and,
moreover, I am your dentist, and you are in my chair with your mouth open
into which I am prodding and probing, and boy, you should have a toothache
if anyone ever had one.  At this point, I cannot know for sure if you are
consciously experiencing that pain.  Maybe neural pathways connect your
exposed pulp with the centre innervating your grimacing and squirming
muscles while bypassing the centre of consciousness.  I retract my
instruments from your mouth, giving you a chance to say "That really hurt,
doctor.  I'll pay all my bills in time from now on if only you won't do
that again."  Firmly brushing aside the empathy that threatens to
compromise my scientific attitude, I realize that this still does not mean
that you consciously experienced that pain just a minute ago.  All I know
is that you remember it (for if you did not, you wouldn't have said that).
So some symbolic representation, "@#$%@*!" say, may have been stored in
your memory--also bypassing your centre of consciousness--which is now
retrieved and interpreted (perhaps illusorily) as "conscious experience of
pain--just now".  This interpretation act need not mean that you experience
the pain now, after the fact.  So it is entirely possible that you did not
consciously experience the pain at any time.  Now were you conscious then,
while making that silly promise, of at least the memory of the--by itself
possibly unconscious--suffering of pain?  If you are still with me, then
you will probably agree that that is not necessarily the case.  Just like
P = <neural event of pain>, even though leaving a trace in memory, need not
imply consciousness of P, so R(P) = <neural event of remembering P> need
not imply consciousness of R(P) itself.  However, R(P) can again leave a
trace in memory--what with your Silly Promise and dentists' bills being as
they are, you are bound to trigger R(SP) and therefore, by association,
R(R(P)), many times in the future.

If we had two unconnected memory stores, and a switch that connected now to
one store, now to the other, we would become two personalities in one body
with two "consciousnesses".  If we could somehow censor either the storing
or the retrieval of pain events, we would truly, honestly believe that we
are incapable of consciously experiencing pain--notwithstanding the fact
that we would probably have the same *immediate* reaction to pain as other
people--and we wouldn't make such promises to our dentists anymore.

Wrapping it all up, I still maintain that "conscious experience" is a term
that is ascribed *in retrospect* to any neural event NE that has been
stored in memory, at the time R(NE) occurs.  Stronger still, R(NE) is the
*only*--and, as I hope I have shown, insufficient--evidence of "consciousness"
about NE in a more metaphysical or whatever sense.  For all we know and can
know, all consciousness in the sense of being conscious of something *while
it happens* is an "illusion", whether C-1, C-2 or C-17.

--

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

------------------------------

Date: Thu, 22 Jan 87 12:30:35 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Minsky on Mind(s)

MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU wrote in mod.ai (AIList Digest V5 #11):

>       unless a person IS in the grip of some "theoretical
>       position" - that is, some system of ideas, however inconsistent, they
>       can't "know" what anything "means"

I agree, of course. I thought it was obvious that I was referring to a
theoretical position on the mind/body problem, not on the conventions
of language and folk physics that are needed in order to discourse
intelligibly at all. There is of course no atheoretical talk. As to
atheoretical "knowledge," that's another matter. I don't think a dog
shares any of my theories, but we both know when we feel a toothache
(though he can't identify or describe it, nor does he have a theory of
nerve impulses, etc.). But we both share the same (atheoretical)
experience, and that's C-1. Now it's a THEORY that comes in and says:
"You can't have that C-1 without C-2." I happen to have a rival theory
on that. But never mind, let's just talk about the atheoretical
experience I, my rival and the dog share...

>       My point was that
>       you can't think about, talk about, or remember anything that leaves no
>       temporary trace in some part of your mind.  In other words, I agree
>       that you can't have C-2 without C-1 - but you can't have, think, say,
>       or remember that you have C-1 without C-2!  So, assuming that I know
>       EXACTLY what he means, I understand PERFECTLY that that meaning is
>       vacuous.

Fine. But until you've accounted for the C-1, your interpretation of
your processes as C-2 (rather than P-2, where P is just an unconscious
physical process that does the very same thing, physically and objectively)
has not been supported. It's hanging by a skyhook, and the label "C"
of ANY order is unwarranted.

I'll try another pass at it: I'll attempt to show how ducking or denying
the primacy of the C-1 problem gets one into infinite regress or
question-begging: There's something it's like to have the
experience of feeling a toothache. The experience may be an illusion.
You may have no tooth-injury, you may even have no tooth. You may be
feeling referred pain from your elbow. You may be hysterical,
delirious, hallucinating. You may be having a flashback to a year ago,
a minute ago, 30 milliseconds ago, when the physical and neural causes
actually occurred. But if at T-1 in real time you are feeling that
pain (let's make T-1 a smeared interval of Delta-T-1, which satisfies
both our introspective phenomenology AND the theory that there can be no
punctate, absolutely instantaneous experience), where does C-2 come into it?

Recall that C-2 is an experience that takes C-1 as its object, in the
same way C-1 takes its own phenomenal contents as object. To be
feeling-a-tooth-ache (C-1) is to have a certain direct experience; we all
know what that's like. To introspect on, reflect on, remember, think about
or describe feeling-a-toothache (all instances of C-2) is to have
ANOTHER direct experience -- say, remembering-feeling-a-toothache, or
contemplating-feeling-a-toothache. The subtle point is that this
2nd-order experience always has TWO aspects: (1) It takes a 1st order
experience (real or imagined) as object, and is for that reason
2nd-order, and (2) it is ITSELF an experience, which is of course
1st-order (call that C-1'). The intuition is that there is something
it is like to be aware of feeling pain (C-1), and there's ALSO something
it's like to be aware of being-aware-of-feeling-pain. Because a C-1 is
the object of the latter experience, the experience is 2nd order (C-2); but
because it's still an EXPERIENCE -- i.e., there's something it's LIKE to
feel that way -- every C-2 is always also a C-1' (which can in turn become
the object of a C-3, which is then also a C-1'', etc.).

I'm no phenomenologist, nor an advocate of doing phenomenology as we
just did above. I'm also painfully aware that the foregoing can hardly be
described as "atheoretical." It would seem that only direct experience
at the C-1-level can be called atheoretical; certainly formulating a
distinction between 1st and higher-order experience is a theoretical
enterprise, although I believe that the raw phenomenology bears me
out, if anyone has the patience to introspect it through. But the
point I'm making is simple:

It's EASY to tell a story in which certain physical processes play the
role of the contents of our experience -- toothaches, memories of
toothaches, responses to toothaches, etc. All this is fine, but
hopelessly 2nd-order. What it leaves out is why there should be any
EXPERIENCE for them to be contents OF! Why can't all these processes
just be unconscious processes -- doing the same objective job as our
conscious ones, but with no qualitative experience involved? This is
the question that Marvin keeps ignoring, restating instead his
conviction that it's taken care of (by some magical property of "memory
traces," as far as I can make out), and that my phenomenology is naive
in suggesting that there's still a problem, and that he hasn't even
addressed it in his proposal. But if you pull out the C-1
underpinnings, then all those processes that Marvin interprets as C-2
are hanging by a sky-hook. You no longer have conscious toothaches and
conscious memories of toothaches, you merely have tooth-damage, and
causal sequelae of tooth-damage, including symbolic code, storage,
retrieval, response, etc.. But where's the EXPERIENCE? Why should I
believe any of that is CONSCIOUS? There's the C-2 interpretation, of
course, but that's all it is: an interpretation. I can interpret a
thermostat (and, with some effort, even a rock) that way. What
justifies the interpretation?

Without a viable C-1 story, there can be no justification. And my
conjecture is that there can be no viable C-1 story. So back to
methodological epiphenomenalism, and forget about C of any order.

[Admonition to the ambitious: If you want to try to tell a C-1 story,
don't get too fancy. All the relevant constraints are there if you can
just answer the following question: When the dog's tooth is injured,
and it does the various things it does to remedy this -- inflammation
reaction, release of white blood cells, avoidance of chewing on that
side, seeking soft foods, giving signs of distress to his owner, etc. etc.
-- why do the processes that give rise to all these sequelae ALSO need to
give rise to any pain (or any conscious experience at all) rather
than doing the very same tissue-healing and protective-behavioral job
completely unconsciously? Why is the dog not a turing-indistinguishable
automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
does not? That's another variant of the mind/body problem, and it's what
you're up against when you're trying to justify interpreting physical
processes as conscious ones. Anything short of a convincing answer to
this amounts to mere hand-waving on behalf of the conscious interpretation
of your proposed processes.]


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: Thu 22 Jan 87 21:30:13-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: WHY of Pain


  From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
  -- why do the processes that give rise to all these sequelae ALSO need
  to give rise to any pain (or any conscious experience at all) rather
  than doing the very same tissue-healing and protective-behavioral job
  completely unconsciously?


I know what you mean, but ...  Given that the dog >>is<< conscious,
the evolutionary or teleological role of the pain stimulus seems
straightforward.  It is a way for bodily tissues to get the attention
of the reasoning centers.  Instead of just setting some "damaged
tooth" bit, the injured nerve grabs the brain by the lapels and says
"I'm going to make life miserable for you until you solve my problem."
Animals might have evolved to react in the same way without the
conscious pain (denying the "need to" in your "why" question), but
the current system does work adequately.

Why (or, more importantly, how) the dog is conscious in the first place,
and hence >>experiences<< the pain, is the problem you are pointing out.


Some time ago I posted an analogy between the brain and a corporation,
claiming that the natural tendency of everyone to view the CEO as the
center of corporate consciousness was evidence for emergent consciousness
in any sufficiently complex hierarchical system.  I would like to
refute that argument now by pointing out that it only works if the
CEO and perhaps the other processing elements in the hierarchy are
themselves conscious.  I still claim that such systems (which I can't
define ...) will appear to have centers of consciousness (and may well
pass Harnad's Total Turing Test), and that the >>system<< may even
>>be conscious<< in some way that I can't fathom, but if the CEO is
not itself conscious no amount of external consensus can make it so.

If it is true that a [minimal] system can be conscious without having
a conscious subsystem (i.e., without having a localized soul), we
must equate consciousness with some threshold level of functionality.
(This is similar to my previous argument that Searle's Chinese Room
understands Chinese even though neither the occupant nor his printed
instructions do.)  I believe that consciousness is a quantitative
phenomenon, so the difference between my consciousness and that of
one of my neurons is simply one of degree.  I am not willing to ascribe
consciousness to the atoms in the neuron, though, so there is a bottom
end to the scale.  What fraction of a neuron (or of its functionality)
is required for consciousness is below the resolving power of my
instruments, but I suggest that memory (influenced by external conditions)
or learning is required.  I will even grant a bit of consciousness
to a flip-flop :-).  The consciousness only exists in situ, however: a
bit of memory is only part of an entity's consciousness if it is used
to interpret the entity's environment.

Fortunately, I don't have my heart set on creating conscious systems.
I will settle for creating intelligent ones, or even systems that are
just a little less unintelligent than the current crop.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:01:50 1987
Date: Wed, 28 Jan 87 20:01:44 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #14
Status: R


AIList Digest            Monday, 26 Jan 1987       Volume 5 : Issue 14

Today's Topics:
  AI Tools - Scheme for Mac and IBM PCs &
    Expert System Shell on PCDOS and Unix,
  Mythology - Antiquity of AI,
  Seminars - Presenting Intuitive Deductions (UPenn) &
    New Themes in Data Structure Design (SU) &
    AI and Software Engineering (BTL) &
    Learning Internal Disjunctive Concepts (SRI),
  Conference - Mid-Atlantic Logic Seminar

----------------------------------------------------------------------

Date: Wed, 21 Jan 87 22:38:34 PST
From: larry@Jpl-VLSI.ARPA
Subject: Scheme for Mac & IBM PCs

Scheme is available for Apple Macs and IBM PCs.  MacScheme costs $125 from
Semantic Microsystems in Oregon, 503/643-4359.  The Texas Instruments
version costs $95; their phone # is 800/527-3500.  A review from last
Feb. is appended for those who did not see it before.  Larry @ jpl-vlsi.arpa
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: Rob Pettengill <CAD.PETTENGILL@MCC.ARPA>                    [69 lines]

I recently purchased an implementation of the Scheme dialect of lisp for my
PC.  I am familiar with GC Lisp, IQ Lisp, and Mu Lisp for the PC.  I use
Lambdas and 3600s with ZetaLisp at work.

TI PC Scheme is a very complete implementation of scheme for the IBM and TI
personal computers and compatibles.  It combines high speed code execution,
a good debugging and editing environment, and very low cost.

The Language:

    * Adheres faithfully to the Scheme standard.
    * Has true lexical scoping.
    * Procedures and environments are first-class data objects.
    * Is properly tail recursive - there is no penalty compared
      to iteration.
    * Includes window and graphics extensions.
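
The "properly tail recursive" point is worth a small illustration (my own
sketch, not taken from the TI documentation): in Scheme, a call in tail
position reuses the current stack frame, so deep recursions run in
constant space.

```scheme
;; A loop written as a tail recursion.  The recursive call is the last
;; act of the procedure, so a proper Scheme compiles it to a jump:
;; no stack growth, no penalty compared to iteration.
(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))

(count-down 1000000)   ; returns done without exhausting the stack
```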

The Environment:

    * An incremental optimizing compiler (not native 8086 code)
    * Top level read-compile-print loop.
    * Interactive debugger allows run time error recovery.
    * A minimal Emacs-like full screen editor with a scheme mode
      featuring parenthesis matching and auto-indenting of lisp code.
    * An execute DOS command or "push" to DOS capability - this is
      only practical with a hard disk because of the swap file PCS writes.
    * A DOS based Fast Load file format object file conversion utility.
    * A fast 2 stage garbage collector.

First Impressions:

Scheme seems to be much better suited to a PC-class machine than the other
standard dialects of lisp because of its simplicity.  The TI implementation
appears to be very solid and complete.  The compiled code that it produces
(with debugging switches off) is 2 to 5 times faster than the other PC lisps
that I have used.  With the full screen editor loaded (there is also a
structure editor) there seems to be plenty of room for my code in a 640k PC.
TI recommends 320k or 512k with the editor loaded.  The documentation is of
professional quality (about 390 pages), but not tutorial.  Abelson and
Sussman's "Structure and Interpretation of Computer Programs" is a very
good companion for learning scheme as well as the art and science of
programming in general.

My favorite quick benchmark -

(define (test n)
  (do
    ((i 0 (1+ i))
     (r () (cons i r)))
    ((>= i n) r)))

runs (test 10000) in less than 10 seconds with the editor loaded - of course
it takes a couple of minutes to print out the ten thousand element list that
results.

The main lack I find is that the source code for the system is not included --
one gets used to having that in good lisp environments.  I have hit only a
couple of minor glitches so far, and those are probably pilot error.  Since
the system is compiled with debugging switches off, it is hard to get much
useful information about the system from the debugger.

Based on my brief, but very positive experience with TI PC scheme and its
very low price of $95 - I recommend it to anyone interested in a PC based
lisp.  (Standard disclaimers about personal opinions and having no
commercial interest in the product ...)

------------------------------

Date: 22 Jan 87 17:21:04 GMT
From: felix!fritz!kumar@hplabs.hp.com  (John Kumar)
Subject: Expert System Shell on PCDOS and Unix

Several people asked me to post a summary of the responses to my inquiry.
The only useful one I received is included below.


The Shell offered by EXSYS (505)836-6676 will be up and running in unix by
mid February.  It already runs on VMS.  EXSYS has learned a lot about user
friendliness recently, and offers features superior to Insight 2+ in the new
release about to come out, including:  context sensitive help, backup to
last question, full blackboard, very fast operation, "fairly vanilla"
default display graphics, and powerful shell interface.  I know some of
those features also exist in Insight 2+, but it's 4:00 in the morning and
if I could back up in this file, I'd reword my paragraph.  EXSYS also offers
an online data dictionary and detects collisions between similar rules.
Since it assigns serial numbers to rules, objects, and attributes, you
don't have to make up "meaningful" names for all your rules.

John, we are working on several expert systems for software diagnosis.  We
are delivering networked expert systems on the PC now and started development
last month of a series of projects to run on 3B machines.

If you would like to talk, I'd be happy to share information both in my
capacity at Pacific Bell and representing my outside consulting/training
company.  RSVP to:

                John Girard
                AI Systems Engineer
                Pacific Bell
                (415)823-1961

                Meta (Inference) Services, Inc.
                P.O. Box 635
                San Ramon, CA 94583-0635
                (415)449-5745

                {dual,cbosgd,bellcore,ihnp4,qantel,pyramid}!ptsfa!jeg

------------------------------

Date: 22 Jan 87 10:26:00 EST
From: "*BROWN, MARK" <mbrown@ari-hq1.ARPA>
Reply-to: "*BROWN, MARK" <mbrown@ari-hq1.ARPA>
Subject: antiquity of AI

There may be a reference that relates to AI in the story of Gilgamesh
the King, a Sumerian legend from about 2500 B.C.

                        Neil Maclay
                        MACLAY@ARI-HQ1.ARPA

------------------------------

Date: Tue, 20 Jan 87 10:58:55 EST
From: dale@linc.cis.upenn.edu (Dale Miller)
Subject: Seminar - Presenting Intuitive Deductions (UPenn)


                      Penn Math/CS Logic Seminar
                              26 January

                   Presenting Intuitive Deductions
                            Frank Pfenning
                     (pfenning@theory.cs.cmu.edu)
                      Carnegie-Mellon University

A deduction of a theorem may be viewed as an explanation of why the theorem
holds.  Unfortunately the automated theorem proving community has
concentrated almost exclusively on determining whether a proposed theorem is
provable - the proofs themselves were secondary.  We will explore how
convincing explanations may be obtained from almost any kind of machine
proof.  This extends work by Dale Miller and Amy Felty (who present
deductions in the sequent calculus) to a natural deduction system.  Also,
our deductions will generally not be normal; that is, they may make use of
lemmas, which are so frequent in mathematical practice and everyday
reasoning.  We will also briefly discuss possible applications of the
methods in the field which may be called "Inferential Programming".

Math Seminar Room, 4th floor Math/Physics Building, 11:00am

------------------------------

Date: Wed 21 Jan 87 11:21:12-PST
From: Alejandro Schaffer <SCHAFFER@Sushi.Stanford.EDU>
Subject: Seminar - New Themes in Data Structure Design (SU)


The annual Forsythe Lecture (of general interest):

Robert Tarjan
Princeton University and AT&T Bell Laboratories

New Themes in Data Structure Design

Wednesday, January 28 at 7:30
Fairchild Auditorium
(just southwest of Stanford Medical Center off Campus Drive)

This talk will cover recent work by the speaker and his colleagues
concerning the design and analysis of data structures.  The talk will
focus on persistent data structures, which allow access to any
version of the structure, past or present.  Applications of such
structures in computational geometry and other areas will be
discussed.
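As a minimal illustration of the idea (our own sketch, not material from the talk): a linked stack becomes fully persistent for free once its nodes are immutable, since every push or pop returns a new version while all earlier versions remain accessible.

```python
# Sketch of a persistent stack: nodes are never mutated, so each update
# returns a new version and every old version stays valid.
# (Illustrative only; Node/push/pop are our names, not the talk's.)

class Node:
    __slots__ = ("value", "tail")
    def __init__(self, value, tail):
        self.value = value
        self.tail = tail

def push(stack, value):
    # Builds a new head node; the old version `stack` is untouched.
    return Node(value, stack)

def pop(stack):
    # Returns (top value, previous version); nothing is mutated.
    return stack.value, stack.tail

def to_list(stack):
    out = []
    while stack is not None:
        out.append(stack.value)
        stack = stack.tail
    return out

v0 = None              # empty version
v1 = push(v0, 1)
v2 = push(v1, 2)
v3 = push(v2, 3)
# to_list(v2) is still [2, 1] even after v3 was created
```

Structure sharing makes each version cost O(1) extra space, which is one reason persistent structures are attractive in computational geometry applications like the ones the talk mentions.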

(There will be a reception in the Fairchild Auditorium foyer immediately
following the lecture.)

------------------------------

Date: Tue 20 Jan 1987  18:20:20
From: dlm.allegra%btl.csnet@RELAY.CS.NET
Subject: Seminar - AI and Software Engineering (BTL)


Title:          Artificial Intelligence and Software Engineering
Speaker:        Dave Barstow
Affiliation:    Schlumberger-Doll Research
Date:           January 20, 1987
Location:       AT&T Bell Laboratories - Murray Hill
Sponsor:        Pamela Zave

Abstract:

Artificial Intelligence techniques ought to help us to manage the extensive
knowledge needed for software engineering, but two decades of research have
produced few demonstrations of utility.  This is due in part to the narrow
focus of previous research.  This talk discusses important issues that remain
to be addressed, describes a practical experiment, and suggests profound
implications for software engineering.

------------------------------

Date: Thu, 22 Jan 87 18:32:33 PST
From: lansky@sri-venice.ARPA (Amy Lansky)
Subject: Seminar - Learning Internal Disjunctive Concepts (SRI)

                 LEARNING INTERNAL DISJUNCTIVE CONCEPTS

              David Haussler (HAUSSLER%UCSC@CSNET-RELAY)
       Dept. of Computer and Information Sciences, UC Santa Cruz

                    11:00 AM, TUESDAY, January 27
               SRI International, Building E, Room EK242

Much of artificial intelligence research on concept learning from
examples has focussed on heuristic learning techniques that have not
been susceptible to rigorous analysis.  Here we present a simple
heuristic algorithm for learning a particular type of concept
identified by Michalski (internal disjunctive concepts) and analyze
its performance using the learning performance model recently proposed
by Valiant. This analysis shows that the algorithm will be effective
and efficient in a wide variety of real-world learning situations.
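For readers unfamiliar with the concept class: an internal disjunctive concept lets each attribute take any value from a per-attribute allowed set (e.g. color in {red, blue}). A toy learner in this spirit (our own sketch for illustration, not Haussler's algorithm) simply collects, per attribute, the values that occur in positive examples:

```python
# Toy sketch of learning an internal disjunctive concept from positive
# examples: for each attribute, allow exactly the values observed in
# the positives.  (Our illustration; not the algorithm from the talk.)

def learn(positives):
    """positives: list of equal-length attribute tuples labeled positive."""
    n = len(positives[0])
    allowed = [set() for _ in range(n)]
    for example in positives:
        for i, value in enumerate(example):
            allowed[i].add(value)
    return allowed

def predict(allowed, instance):
    # Positive iff every attribute value lies in its allowed set.
    return all(value in s for s, value in zip(allowed, instance))

positives = [("red", "round", "small"),
             ("blue", "round", "small"),
             ("red", "round", "large")]
h = learn(positives)
# predict(h, ("blue", "round", "large"))  -> True
# predict(h, ("green", "round", "small")) -> False
```

In Valiant's model one would then ask how many examples suffice for such a hypothesis to be approximately correct with high probability; the talk's analysis addresses that question for a more refined algorithm.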

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: Thu, 22 Jan 87 16:20:22 EST
From: dale@linc.cis.upenn.edu (Dale Miller)
Subject: Conference - Mid-Atlantic Logic Seminar

If you plan to attend the Mid Atlantic Mathematical Logic Seminar, you
might consider the following hotels in the Univ of Pennsylvania area.

o  Divine Tracy Hotel, 20 South 36th Street, $15/night, 0.5 miles from
meeting, 215/382-4310

o  Quality Inn, 22nd St (north of Parkway), $45/night, 2 miles from
meeting, 800/228-5151

o  Hilton Hotel, Civic Center Blvd & 34th, $60/night, 215/387-8333, 0.2
miles from the meetings

o  Sheraton Inn University City, 36th & Chestnut, $64/night, 0.5 miles
from meeting, 215/387-8000

The prices are approximate.  Notice:  there are no plans to publish
proceedings of this conference.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:02:14 1987
Date: Wed, 28 Jan 87 20:02:08 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #15
Status: R


AIList Digest            Monday, 26 Jan 1987       Volume 5 : Issue 15

Today's Topics:
  Philosophy - Consciousness

----------------------------------------------------------------------

Date: Fri, 23 Jan 87 03:59:40 cst
From: goldfain@uxe.cso.uiuc.edu (Mark Goldfain )
Subject: For the AIList "Consciousness" Discussion


   *************************************************************************
   *                                                                       *
   *   Consciousness is like a large ribosome, working its way along the   *
   *   messenger RNA of our perceptual inputs.   Or again, it  is like a   *
   *   multi-headed Turing machine, with the heads marching in lock step   *
   *   down the great input tape of life.                                  *
   *                                                                       *
   *************************************************************************

     Lest anyone think I am saying more than I actually am, please understand
that these are both meant as metaphors.  I am not making ANY claim that mRNA
is the chemical of brain activities, nor that we are finite-state machines, et
cetera ad nauseam.  I am only trying to get us off of square zero in our
characterization of how "being conscious" can be understood.

     It must be something which has a "window" of a finite time period, for we
can sense the "motion" of experiences "through" our consciousness.  It must be
more involved than a ribosome or a basic Turing  device, since  in addition to
being able to  access the  "present", it continually  spins off things that we
call "memories", and ties these things down into a place  that allows them  to
be pulled back  into the consciousness.  (Actually,  the  recall of long  term
memory is more like the process of going into a dark room with a tuning  fork,
giving it a whack, then listening for something that  resonates, going over to
the sound, and picking it up  ... so perhaps the memories  are not "tied down"
with pointers at all.)

------------------------------

Date: 23 Jan 87 21:15:22 GMT
From: ihnp4!cuae2!ltuxa!cuuxb!mwm@ucbvax.Berkeley.EDU  (Marc W.
      Mengel)
Subject: Re: More on Minsky on Mind(s)


  In article <460@mind.UUCP> Stevan Harnad (harnad@mind.UUCP) writes:
  > [ discussion of C-1 and C-2]

It seems to me that human consciousness is actually more
of a C-n;  C-1 being "capable of experiencing sensation",
C-2 being "capable of reasoning about being C-1", and C-n
being "capable of reasoning about C-1..C-(n-1)" for some
arbitrarily large n...  Or was that really the intent of
the Minsky C-2?

--
 Marc Mengel
 ...!ihnp4!cuuxb!mwm

------------------------------

Date: 23 Jan 87 16:10:53 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: Minsky on Mind(s)


Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

>       Given that the dog >>is<< conscious,
>       the evolutionary or teleological role of the pain stimulus seems
>       straightforward.  It is a way for bodily tissues to get the attention
>       of the reasoning centers.

Unfortunately, this is no reply at all. It is completely steeped in
the anthropomorphic interpretation to begin with, whereas the burden
is to JUSTIFY that interpretation: Why do tissues need to get the
"attention" of reasoning centers? Why can't this happen by brute
causality, like everything else, simple or complicated?

Nor is the problem of explaining the evolutionary function of consciousness
any easier to solve than justifying a conscious interpretation of machine
processes. For every natural-selectional scenario -- every
nondualistic one, that is, one that doesn't give consciousness
an independent, nonphysical causal force -- is faced with the problem
that the scenario is turing-indistinguishable from the exact same ecological
conditions, with the organisms only behaving AS IF they were
conscious, while in reality being insentient automata. The very same
survival/advantage story would apply to them (just as the very same
internal mechanistic story would apply to a conscious device and a
turing-indistinguishable as-if surrogate).

No, evolution won't help. (And "teleology" of course begs the
question.) Consciousness is just as much of an epiphenomenal
fellow-traveller in the Darwinian picture as in the cognitive one.
(And saying "it" was a chance mutation is again to beg the what/why
question.)

>       Why (or, more importantly, how) the dog is conscious in the first place,
>       and hence >>experiences<< the pain, is the problem you are pointing out.

That's right. And the two questions are intimately related. For when
one is attempting to justify a conscious interpretation of HOW a
device is working, one has to answer WHY the conscious interpretation
is justified, and why the device can't do exactly the same thing (objectively
speaking, i.e., behaviorally, functionally, physically) without the
conscious interpretation.

>       an analogy between the brain and a corporation,
>       ...the natural tendency of everyone to view the CEO as the
>       center of corporate consciousness was evidence for emergent consciousness
>       in any sufficiently complex hierarchical system.

I'm afraid that this is mere analogy. Everyone knows that there's no
AT&T to stick a pin into, and to correspondingly feel pain. You can do
that to the CEO, but we already know (modulo the TTT) that he's
conscious. You can speak figuratively, and even functionally, of a
corporation as if it were conscious, but that still doesn't make it so.

>       my previous argument that Searle's Chinese Room
>       understands Chinese even though neither the occupant nor his printed
>       instructions do.

Your argument is of course the familiar "Systems Reply." Unfortunately,
it is open to (likewise familiar) rebuttals -- rebuttals I consider
decisive, but that's another story. To telescope the intuitive sense
of the rebuttals: Do you believe rooms or corporations feel pain, as
we do?

>       I believe that consciousness is a quantitative
>       phenomenon, so the difference between my consciousness and that of
>       one of my neurons is simply one of degree.  I am not willing to ascribe
>       consciousness to the atoms in the neuron, though, so there is a bottom
>       end to the scale.

There are serious problems with the quantitative view of
consciousness. No doubt my alertness, my sensory capacity and my
knowledge admit of degrees. I may feel more pain or less pain, more or
less often, under more or fewer conditions. But THAT I feel pain, or
experience anything at all, seems an all-or-none matter, and that's
what's at issue in the mind/body problem.

It also seems arbitrary to be "willing" to ascribe consciousness to
neurons and not to atoms. Sure, neurons are alive. And they may even
be conscious. (So might atoms, for that matter.) But the issue here
is: what justifies interpreting something/someone as conscious? The
Total Turing Test has been proposed as our only criterion. What
criterion are you using with neurons? And even if single cells are
conscious -- do feel pain, etc. -- what evidence is there that this is
RELEVANT to their collective function in a superordinate organism?

Organs can be replaced by synthetic substances with the relevant
functional properties without disturbing the consciousness of the
superordinate organism. It's a matter of time before this can be done
with the nervous system. It can already be done with minor parts of
the nervous system. Why doesn't replacing conscious nerve cells with
synthetic molecules matter? (To reply that synthetic substances with the
same functional properties must be conscious under these conditions is
to beg the question.)

[If I sound like I'm calling an awful lot of gambits "question-begging,"
it's because the mind/body problem is devilishly subtle, and the
temptation to capitulate by slipping consciousness back into one's
premises is always there. I'm just trying to make these potential
pitfalls conscious... There have been postings in this discussion
to which I have given up on replying because they've fallen so deeply
into these pits.]

>       What fraction of a neuron (or of its functionality)
>       is required for consciousness is below the resolving power of my
>       instruments, but I suggest that memory (influenced by external
>       conditions) or learning is required.  I will even grant a bit of
>       consciousness to a flip-flop :-).
>       The consciousness only exists in situ, however: a
>       bit of memory is only part of an entity's consciousness if it is used
>       to interpret the entity's environment.

What instruments are you using? I know only the TTT. You (like Minsky
and others) are placing a lot of faith in "memory" and "learning." But
we already have systems that remember and learn, and the whole
point of this discussion concerns whether and why this is sufficient to
justify interpreting them as conscious. To reply that it's again a matter
of degree is again to obfuscate. [The only "natural" threshold is the
TTT, and that's not just a cognitive increment in learning/memory, but
complete functional robotics. And of course even that is merely a
functional goal for the theorist and an intuitive sop for the amateur
(who is doing informal turing testing). The philosopher knows that
it's no solution to the other-minds problem.]

What you say about flip-flops of course again prejudges or begs the
question.

>       Fortunately, I don't have my heart set on creating conscious systems.
>       I will settle for creating intelligent ones, or even systems that are
>       just a little less unintelligent than the current crop.

If I'm right, this is the ONLY way to converge on a system that passes
the TTT (and therefore might be conscious). The modeling must be ambitious,
taking on increasingly life-size chunks of organisms' performance
capacity (a more concrete and specific concept than "intelligence").
But attempting to model conscious phenomenology, or interpreting toy
performance and its underlying function as if it were doing so, can
only retard and mask progress. Methodological Epiphenomenalism.
--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 23 Jan 87 08:15:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: consciousness as a superfluous concept, but so what


> Stevan Harnad:
>
> ...When the dog's tooth is injured,
> and it does the various things it does to remedy this -- inflamation
> reaction, release of white blood cells, avoidance of chewing on that
> side, seeking soft foods, giving signs of distress to his owner, etc. etc.
> -- why do the processes that give rise to all these sequelae ALSO need to
> give rise to any pain (or any conscious experience at all) rather
> than doing the very same tissue-healing and protective-behavioral job
> completely unconsciously?  Why is the dog not a turing-indistinguishable
> automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
> does not?  That's another variant of the mind/body problem, and it's what
> you're up against when you're trying to justify interpreting physical
> processes as conscious ones.  Anything short of a convincing answer to
> this amounts to mere hand-waving on behalf of the conscious interpretation
> of your proposed processes.

This seems an odd way to put it - why does X "need" to produce Y ?
Why do spinning magnets "need" to generate electric currents?  I
don't think that's quite the right question to ask about causes and
events - it sounds vaguely anthropomorphic to me.  It's enough to say
that, in fact, certain types of events (spinning magnets, active
brains) do in fact cause, give rise to, certain other types of events
(electric currents, experiences).  Now, now, don't panic, I know that
the epistemological justification for believing in the existence and
causes of experiences (one's own and that of others) is quite
different from that for electric currents.  I tried to outline the
epistemology in the longish note I sent a month or so ago (the one
with events A1, B1, C1, which talked about brains as a more important
criterion for consciousness than performance, etc.).

Do I sense here the implicit premise that there must be an evolutionary
explanation for the existence of consciousness?  And that consciousness
is a rationally justified concept iff such an evolutionary role for it
can be found?  But sez who?  Consciousness may be as superfluous (wrt
evolution) as earlobes.  That hardly goes to show that it ain't there.

The point is, given a satisfactory justification for believing that
a) experiences exist and b) are (in the cases we know of) caused by
the brain, I don't see why a "pro-consciousness" person should feel
obligated to answer why this NEEDS to be so.  I don't think it does
NEED to be so.  It just is so.


John Cugini <Cugini@icst-ecf>

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:02:33 1987
Date: Wed, 28 Jan 87 20:02:25 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #16
Status: R


AIList Digest            Monday, 26 Jan 1987       Volume 5 : Issue 16

Today's Topics:
  Reviews - Spang Robinson Report Volume 3, No. 1, January 1987 &
    Canadian Artificial Intelligence,
  Conferences - AI at Upcoming Conferences

----------------------------------------------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Summary of Spang Robinson Report Volume 3, No. 1, January
         1987

The first Article: The AI Winter?

This is a discussion of whether the "AI industry" has "lost its momentum"
or whether it is a changing market.  There is also a discussion of the
possibilities of AI being done in conventional programming languages.

____________________________________________________________________________

Investors Viewpoint

Interview with Montgomery Venture on the possibilities of venture capital
for AI companies.

___________________________________________________

New Products

Borland has introduced a Prolog toolbox to go with its Prolog.  It includes
features to assist with user interfaces, importing data from various
other microcomputer programs such as 1-2-3, parser generators, and serial
communications.

________________________________________________________________________
Japan Watch

The Japanese manufacturing industry is now using 500 AI workstations.
These include 153 Symbolics 3600 series, 15 Lambda series, 50 Fujitsu
Facom systems, 300 Xerox 1121's and seven TI Explorers.

The Japanese AI Market is 3.125 million dollars.

The Japanese hosted the Sixth Medical Information Study Conference.
Japanese Hospitals are using SURGIST-AI from Fujitsu and EXCORE from NEC.

Nihon Electronics Technology Institute of Tokyo has a two-year
course in "knowledge engineering."

NEC will be releasing a new AI system called the Co-operative High
performance Inference machine (CHI).

_____________________________________________________________________

Japanese Construction Applications

Fudo Construction is developing a design support system covering all
phases of building construction.  Tokyo Construction has
developed the Land Development Provisions Consultation Expert System.
Mitsui is planning to develop several AI systems.  Asahi Glass is developing
a production planning system for plate glass processing.  Nihon Cement
will develop a cement manufacturing expert system.
_______________________________________________________________________
Shorts:

One third of the nation's largest insurance companies are using or are
in the process of gearing up to use expert systems.  However, only
two percent have actually put expert systems into use.

Lucid has completed a 4.5 million dollar second round of financing.
______________________________________________________________________
Reviews of Applying Expert Systems in Business by Dimitris Chorafas;
Expert Systems: Techniques, Tools and Applications by Philip Klahr and
Donald A. Waterman; Artificial Intelligence and Expert Systems by
V. Daniel Hunt; Advances in Cognitive Science 1 edited by N. E.
Sharkey; and Explanation Patterns: Understanding Mechanically and
Creatively by Roger C. Schank.

------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Summary of Canadian Artificial Intelligence

January 1987 Canadian Artificial Intelligence No. 10

Letter about progress on Japanese Fifth Generation.

The DELTA database machine, which earlier was described
as being non-functional and abandoned, was in fact fast
but had ease of use problems.  A new version is under development.
The operating system/programming environment for the Personal Sequential
Inference machine by Mitsubishi is 160,000 lines and took 80 man-years to develop.
19 Japanese companies have formed the Artificial Intelligence Joint
Research Society to train researchers and undertake joint development efforts.

__________________________________________________

Review of AI research at Queen's University.

They have developed a language called Nial and a tool kit.  They
are working on database models and interfaces, fuzzy logic, inference
engines, natural language parsing, window packages, rule systems,
built in editing, and educational environments.

__________________________________________________

A review discussing the issue of "What is AI?", with a list
of those efforts that could be considered artificial intelligence.
__________________________________________________
Reports on the Seventh European Conference on Artificial Intelligence (ECAI-86)
and the International Workshop on User Modelling.

Some of the expert systems demonstrated at ECAI-86 include
 - a system to monitor the operation of a steam condenser at a thermal power
   plant
 - a system to process alarms in the vacuum distillation tower of a refinery

Some of the papers covered
  - a system to keep a consulting session with an expert system on target
  - a review of machine translation research.  Pre- and post-editing
    translation aids are quite successful.  Three of the more
    advanced systems are METAL (University of Texas), MU (Kyoto University)
    and Eurotra (European Community).  The latter is supposed to
    cover nine European languages.
 -  The UCLA system has an intelligent tutor for UNIX.  After several
    minutes of processing on an Apollo, the system was able to have the
    following dialogue:
    User: I tried to remove a file with the "rm" command.  The file was
    not removed, and the error message was "permission denied".  I
    checked, and I own the file.
    Acqua: To remove a file, you need to be able to write into the directory
    containing it.  To remove a file, you do not need to own it.
 -  Paul Jacobs developed a new natural language generator called KING.
 -  Professor Prini predicted that Europe's stable growth rate presents a good
    opportunity for Artificial Intelligence, particularly in dealing with
    a software production capacity shortage that is due soon.
 -  Harold Kahn demonstrated computer generated pictures.  His later works
    demonstrated quite a bit of realism, including one of the Statue of
    Liberty complete with Rococo festivity of the 18th century.
 -  Clive Sinclair predicted that intelligent androids would be widely used
    by the year 2010.

__________________________________________________
Reviews of "Implementing Mathematics with the Nuprl Proof Development
System" by R. L. Constable.  This text is aimed at mathematics and
computer science undergraduates.
Also "Readings in Artificial Intelligence and Software Engineering"
by Charles Rich and Richard C. Waters.

------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: AI at upcoming conferences

1987 Society for Computer Simulation Multiconference 1987

Modeling and Simulation on Microcomputers

Individual Face Classification by Computer Vision
  Robert A. Campbell, Scott Cannon, Greg Jones, Neil Morgan, Utah State University

AI and Simulation

Preliminary Screening of Wastewater Treatment Alternatives Using Personal Consultant Plus
  Giles G. Patry, Bruce Gall, McMaster University
The impact of embedding AI tools in a control system simulator
  Norman R. Nielson, SRI International
An Expert System for the Controller
  James A. Sena, L. Murphy Smith, Texas A&M University
Application of Artificial Intelligence Techniques to Simulation
  Pauline A. Langen, Carrier Corporation
The Expert System Applicability Question
  Louis R. Gieszi
An Intelligent Interface for Continuous System Simulation
  Wanda M. Austin, The Aerospace Corporation; Behrokh Khoshnevis, University of Southern California
Logic Programming and Discrete Event Simulation
  Robert G. Sargent, Ashvin Radiya, Syracuse University
Expert Systems for Interactive Simulation of Computer System Dynamics
  Axel Lehmann, University of Karlsruhe
An Automated Simulation Modeling System Based on AI Techniques
  Behrokh Khoshnevis, An-Pin Chen, University of Southern California
Design of a Flexible Extendible Modeling Environment
  Robert J. Pooley University of Edinburgh


Prolog for Simulation

Expert System Shell with System Simulation Capabilities
  Ivan Futo, Computer Research Institute
Languages for Distributed Simulation
  Brian Unger, Xining Li, University of Calgary
Process Oriented Simulation in Prolog
  Jean Vaucher, University of Montreal
Application of Artificial Intelligence Techniques to Simulation
  Pauline A. Langen, Carrier Corporation

Computer Integrated Manufacturing Systems and robotics

A Data Modeling Approach to Improve System's Intelligence in Automated Manufacturing
  Lee-Eng Shirley Lin, Yun-Baw Lin, Tamkang University
KARMA - A Knowledge-Based Robot Manipulation Graphics Simulation
  Richard H. Kirschbrown, Consultant
Development of questions-answers simulator for real-time scheduling and control in flexible manufacturing system using Prolog
  Lee-Eng Shirley Lin, Tamkang University; Chang Yung Lui, National Sun Yat-Sen University
Simulation of uncertainty and product structure in MRP
  Louis Brennan, Surendra Mohan Gupta, Northeastern University

__________________________________________________________________________

The University of Arizona Fourth Symposium on Modeling and Simulation Methodology
  January 19-23, 1987

AI and Simulation I, R. V. Reddy
AI and Simulation II, B. P. Zeigler
  (Object Oriented/AI Programming, Combining Discrete Event and Symbolic Models,
   Hierarchical, Modular Modelling/Multiprocessor Simulation)
AI and Simulation III, T. I. Oren
  Cognizant Simulation Systems, AI and Quality Assurance Methodology
AI and Simulation IV
  Environments for AI and Simulation, Interfacing Lisp Machines and Simulation Engines
Special Sessions on Model-based Diagnosis and Expert Systems Training, Inductive Modelling,
  Goal Directed, Variable-Structure Models, AI and Simulation in Education


__________________________________________________

Compcon 87 Cathedral Hill Hotel, San Francisco, February 23-27

Tutorial Number 3 on AI Machines: Instructor David Elliot Shaw of the
Columbia University Non-Von Project
Tutorial Number 7: Managing Knowledge Systems Development: Instructor
Avron Barr

10:30-12:00 February 24
Use of an Advanced Expert Systems Tool for Fault Tree Analysis in Nuclear
Power Plants - B. Frogner: Expert-Easy Systems
Expert System Tool with Fact/Model Representation Environment on PSI
H. Kubono: ICOT
Towards an Expert System for Logic Circuits Synthesis A. DiStefano
Universita Di Catania

1:30-3:00 February 24
Expert Systems Development Environments - Issues and Future Directions
  (Titles and Authors TBA)

3:30 - 5:00 Tuesday
The Xenologic X1: A Coprocessor for AI, LISP, Prolog and Databases - T. Dobry
Integration of the Xenologic X1 AI Coprocessor With General Purpose
Computers R. Ribler
System Level Performance Using AI Coprocessors A. Despain
  (all authors with Xenologic, Inc.)

Intelligent Systems for Management Decision Support, Joseph Fiksel Chair
  of Panel

8:30-10:00 Wednesday February 25

Changing the Nature of CAD/CAM with AI C. Kempf: FMC Corp.
Knowledge-based Engineering for Production Planning and Control
  I. Johnson, Garegie Group (sic)
Digital's Knowledge Network - F. Lynch: DEC.

1:30 - 3:00 February 25
A Neural Based Knowledge Processor - J. Vovodsky, Neuro Logic Inc.
Connectionist Symbol Processing in Neural Based Architectures
  D. Touretzky, Carnegie-Mellon Univ.
Drawbacks with Neural Based Architectures - D. Partridge New Mexico State
  University
Timing Dependencies in Sentence Comprehension - H. Gigly, University of
  New Hampshire

3:30 - 5:00 Wednesday February 25
Plenary Talk - "Trends in Knowledge Processing: From Expert Systems to
Intelligent Systems Engineering" - Dr. Frederick Hayes-Roth

8:30 - 10:00 Thursday February 26
Intelligent Assistance Without Artificial Intelligence - G. Caiser

3:30 - 5:00 Thursday February 26
Lisp Machine Architecture Issues - R. Lim NASA Ames Research Center
High Level Language LISP Processor - S. Krueger: TI
Kyoto Common LISP - F. Giunchiglia IBUKI Inc.

Optical Neural Networks D. Psaltis: California Institute of Technology

Attendee's Open Mike - Mim Warren
  (Ten Minutes to present proposals, ideas, etc.)

__________________________________________________

Third International Conference on Data Engineering February 2-6, 1987
Pacifica Hotel, Los Angeles, California

February 4, 1987
2 - 3:30 Panel on Symbolic Processing
H. Barsamian, UC Irvine, A. Cardenas, UCLA, D. Kibler, UC Irvine,
B. Wah, University of Illinois, T. Welch, International Software Systems

4:00 -6:00 February 4, 1987

M. Stonebraker, E. Hanson, C. Hong
The Design of the Postgres Rules System
M. Kifer, E. L. Lozinskii
Implementing Logic Programs as a Database System
M. Lenzerini
Covering and Disjointness Constraints in Type Networks

11:00 - 12:30 February 5, 1987
P. Crews
tbt Expert: A Case Study in Integrating Expert System Technology with
Computer Assisted Instruction

__________________________________________________

ACM SIGCSE, February 19-20 1987, St. Louis Missouri

9:54 Friday February 20

A Course on "Expert Systems" for Electrical Engineering Students

___________________________________________________

Principles of Database Systems March 22-25, 1987 San Diego, California

Monday March 23, 1987 9:00 - 10:35 AM

Logic Programming with Sets G. M. Kuper, IBM T. J. Watson Research Center
Sets and Negation in a Logic Database Language LDL1 C. Beeri (Hebrew
  University,) S. Naqvi (MCC), R. Ramakrishnan (University of Texas at
  Austin and MCC), O. Shmueli, and S. Tsur (MCC)

Monday March 23, 1987, 10:35 AM - 11:00 AM

Logical Design of Relational Database Schemes
  L. Y. Yuan, University of Southern Louisiana
  Z. M. Ozsoyoglu, Case Western Reserve University

Monday March 23, 1987 3:45 PM - 5:25 PM

A Knowledge-Theoretic Analysis of Atomic Commitment Protocols
  V. Hadzilacos, University of Toronto

Tuesday March 24, 1987, 9:00 AM - 10:35 AM

Perspectives in Deductive Databases
  J. Minker, University of Maryland
Maintenance of Stratified Databases Viewed as a Belief Revision System
  K. Apt (Ecole Normale Superieure and Universite Paris 7)
  J. M. Pugin (BULL Research Center)

Tuesday March 24, 1987, 3:15 PM - 3:45 PM

Bounds on the Propagation of Selection into Logic Programs
  C. Beeri (Hebrew University)
  P. Kanellakis (Brown University)
  F. Bancilhon (INRIA and MCC)
  R. Ramakrishnan (University of Texas at Austin and MCC)
Decidability and Expressiveness Aspects of Logic Queries
  O. Shmueli (Technion and MCC)

Wednesday March 25, 1987 11:00 - 12:15

Worst Case Complexity Analysis of Methods for Logic Query Implementation
  A. Marchetti-Spaccamella, A. Pelaggi (Universita "La Sapienza" di Roma) and
  D. Sacca (CRAI, Italy)

Wednesday March 25, 1987 2:00 PM - 4:35 PM

Safety of Recursive Horn Clauses with Infinite Relations
  R. Ramakrishnan (University of Texas at Austin and MCC)
  F. Bancilhon (INRIA and MCC)
  A. Silberschatz (University of Texas at Austin)
Optimizing Datalog Programs
  Y. Sagiv (Hebrew University)

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:03:15 1987
Date: Wed, 28 Jan 87 20:03:11 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #17
Status: R


AIList Digest            Tuesday, 27 Jan 1987      Volume 5 : Issue 17

Today's Topics:
  Psychology - Objective Measurement of Subjective Variables,
  Philosophy - Quantitative Consciousness

----------------------------------------------------------------------

Date: 25 Jan 87 15:15:06 GMT
From: clyde!burl!codas!mtune!mtund!adam@rutgers.rutgers.edu  (Adam V. Reed)
Subject: Objective measurement of subjective variables

John Cugini:
>>                              .... to explain, or at least discuss, private
>>      subjective events.  If it be objected that the latter are outside the
>>      proper realm of science, so be it, call it schmience or philosophy or
>>      whatever you like. - but surely anything that is REAL, even if
>>      subjective, can be the proper object for some sort of rational
>>      study, no?
Stevan Harnad:
>   Some sort, no doubt. But not an objective sort, and that's the point.
>   Empirical psychology, neuroscience and artificial intelligence are
>   all, I presume, branches of objective inquiry.
>          ....   Let's leave the subjective discussion of private events
>   to lit-crit, where it belongs.

Stevan Harnad makes an unstated assumption here, namely, that subjective
variables are not amenable to objective measurement. But if by
"objective" Steve means, as I think he does, "observer-invariant", then
this assumption is demonstrably false. I shall proceed to demonstrate
this in two parts: (1) private events are amenable to parametric
measurement; and (2) relevant results of such measurement can be
observer-invariant.

(1) Whether or not a stimulus is experienced as belonging to some target
category is clearly a private event. Now data for the measurement of d',
the detection-theoretic measure of discriminability, are usually
gathered using overt behavior, such as pressing "target" and
"non-target" buttons. But in principle, d' can be measured without any
resort to externally observable behavior. Suppose I program a computer
to present a sequence of stimuli and, following enough time after
each stimulus to allow the observer to mentally classify the experience
as target or non-target, display the actual category of the preceding
stimulus. The observer would use this information to maintain a mental
count of hits and false alarms. The category feedback for the last
stimulus could be followed by a display of a table for the conversion of
hit and false alarm rates into d'. Thus, the observer would be able to
mentally compute d' without engaging in any externally observable
behavior whatever.
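Reed's mental computation can be made concrete. The sketch below uses the
standard yes/no signal-detection formula, d' = z(hit rate) - z(false-alarm
rate), where z is the inverse of the standard normal CDF; the trial counts
are hypothetical and not taken from the post:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), z = inverse standard normal CDF."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# An observer's mental tally over 20 target and 20 non-target trials:
print(round(d_prime(hits=16, misses=4, false_alarms=4, correct_rejections=16), 3))
# -> 1.683
```

The observer in Reed's thought experiment would perform exactly this
arithmetic mentally, using the feedback-driven counts and a z-table.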

(2) In some well-defined contexts, the variation of d' with an
independent variable is as lawful as anything in the "known to be
objective" sciences such as physics (see Reed, Memory and Cognition
1976, 4(4), 453-458, equation 5 and bottom panel of figure 1, for an
example of this). The parameters of such lawful relationships will
differ from observer to observer, but their form is observer-invariant.
In principle, two investigators could perform the experiment as in (1)
above, and obtain objective (in the sense of observer-independent)
results as to the form of the resulting lawful relationships between,
for example, d' and memory retention time, *without engaging in any
externally observable behavior until it came time to compare results*.
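The "same form, different parameters" claim can be illustrated numerically.
The functional form below (d' falling linearly in the logarithm of retention
time) is invented for illustration and is not Reed's equation 5:

```python
import math

def d_prime_model(t, a, b):
    # Hypothetical lawful form: d'(t) = a - b*ln(t); a, b vary per observer
    return a - b * math.log(t)

times = [1, 2, 4, 8]  # retention intervals, doubling each step
obs_A = [d_prime_model(t, a=3.0, b=0.5) for t in times]
obs_B = [d_prime_model(t, a=2.2, b=0.8) for t in times]

def equal_log_steps(ds):
    # Under the shared form, d' drops by a constant b*ln(2) per doubling,
    # so successive differences must all be equal.
    steps = [ds[i] - ds[i + 1] for i in range(len(ds) - 1)]
    return all(abs(s - steps[0]) < 1e-9 for s in steps)

print(equal_log_steps(obs_A), equal_log_steps(obs_B), obs_A == obs_B)
# -> True True False: both observers obey the same form, with different parameters
```

Two investigators comparing only the recovered form, not the raw values,
would agree: that agreement is the observer-invariant result.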

The following analogy (proposed, if I remember correctly, by Robert
Efron) may illuminate what is happening here. Two physicists, A and B,
live in countries with closed borders, so that they may never visit each
other's laboratories and personally observe each other's experiments.
Relative to each other's personal perception, their experiments are
as private as the conscious experiences of different observers. But, by
replicating each other's experiments in their respective laboratories,
they are capable of arriving at objective knowledge. This is also true,
I submit, of the psychological study of private, "subjective"
experience.
                                                   Adam Reed
                                                   mtund!adam,attmail!adamreed

------------------------------

Date: 24 Jan 87 15:34:27 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)


mwm@cuuxb.UUCP (Marc W. Mengel) of AT&T-IS, Software Support, Lisle IL
writes:

>       It seems to me that the human consciousness is actually more
>       of a C-n;  C-1 being "capable of experiencing sensation",
>       C-2 being "capable of reasoning about being C-1", and C-n
>       being "capable of reasoning about C-1..C-(n-1)" for some
>       arbitrarily large n...  Or was that really the intent of
>       the Minsky C-2?

It's precisely this sort of overhasty overinterpretation that my critique
of the excerpts from Minsky's forthcoming book was meant to counteract. You
can't help yourself to higher-order C's until you've handled 1st-order C
-- unless you're satisfied with hanging them on a hermeneutic sky-hook.
--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: Sun 25 Jan 87 22:08:34-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Quantitative Consciousness

  Stevan Harnad:
  Everyone knows that there's no
  AT&T to stick a pin into, and to correspondingly feel pain. You can do
  that to the CEO, but we already know (modulo the TTT) that he's
  conscious. You can speak figuratively, and even functionally, of a
  corporation as if it were conscious, but that still doesn't make it so.
  [...]   Do you believe [...] corporations feel pain, as we do?

They sure act like it when someone puts arsenic in their capsules.
I'm inclined to grant a limited amount of consciousness to corporations
and even to ant colonies.  To do so, though, requires rethinking the
nature of pain and pleasure (to something related to homeostasis).
I don't know of any purely mechanical systems that approach consciousness,
but computer operating systems and adaptive communications networks are
close.  The issue is partly one of complexity, partly of structure,
partly of function.  I am assuming that neurons and other "simple"
systems are C-1 but not C-2  -- and C-2 is the kind of consciousness
that people are really interested in.  C-2 consciousness seems to
require that at least one subsystem be "wired" to reason about its
own existence, although I gather that this may be denied in the
theory of situated automata.  The mystery for me is why only >>one<<
subsystem in my brain seems to have that introspective property -- but
multiple personalities or split-brain subjects may show that this is
not a necessary condition.


  There are serious problems with the quantitative view of
  consciousness. No doubt my alertness, my sensory capacity and my
  knowledge admit of degrees. I may feel more pain or less pain, more or
  less often, under more or fewer conditions. But THAT I feel pain, or
  experience anything at all, seems an all-or-none matter, and that's
  what's at issue in the mind/body problem.

An airplane either can fly or it can't.  (And there's no way half a
B-52 can fly, no matter how you choose your half.)  Yet there are
simpler forms of flight used by other entities -- kites, frisbees,
paper airplanes, butterflies, dandelion seeds, ...   My own opinion
is that insects and fish feel pain, but often do so in a generalized,
nonlocalized way that is similar to a feeling of illness in humans.
Octopi seem to be conscious, but with a psychology like that of spiders
(i.e., if hungry, conserve energy and wait for food to come along).
I assume that lower forms experience lower forms of consciousness
along with lower levels of intelligence.  Such continua seem natural
to me.  If you wish to say that only humans and TTT-equivalents are
conscious, you should bear the burden of establishing the existence
and nature of the discontinuity.


  It also seems arbitrary to be "willing" to ascribe consciousness to
  neurons and not to atoms.

When someone demonstrates that atoms can learn, I'll reconsider.
(Incidentally, this raises the metaphysical question of whether God
can be conscious if He already knows everything.)  You are questioning
my choice of discontinuity, but mine is easy to defend (or give up)
because I assume that the scale of consciousness tapers off into
meaninglessness.  Asking whether atoms are conscious is like asking
whether aircraft bolts can fly.


  The issue here is: what justifies interpreting something/someone as
  conscious?  The Total Turing Test has been proposed as our only criterion.
  What criterion are you using with neurons?

Your TTT has been put forward as the only justifiable means of deciding
that an entity is conscious.  I can't force myself to believe that,
although you have already punched holes in arguments far more cogent
than I could have raised.  Still, I hope you're not insisting that
no entity can be conscious without passing the TTT.  Even a rock could
be conscious without our having any justifiable means of deciding so.


  And even if single cells are
  conscious -- do feel pain, etc. -- what evidence is there that this is
  RELEVANT to their collective function in a superordinate organism?

What evidence is there that it isn't?  Evolved and engineered systems
generally support the "form follows function" dictum.  Aircraft parts
have to be airworthy whether or not they can fly on their own.


  Why doesn't replacing conscious nerve cells with
  synthetic molecules matter? (To reply that synthetic substances with the
  same functional properties must be conscious under these conditions is
  to beg the question.)

I beg your pardon?  Or rather, I beg to beg your question.  I presume
that a synthetic replica of myself, or any number of such replicas,
would continue my consciousness.


  If I sound like I'm calling an awful lot of gambits "question-begging,"
  it's because the mind/body problem is devilishly subtle, and the
  temptation to capitulate by slipping consciousness back into one's
  premises is always there.

Perhaps professional philosophers are able to strive for a totally
consistent world view.  We armchair amateurs have to settle for
tackling one problem at a time.  A standard approach is to open
back doors and try to push the problem through; if no one pushes back,
the problem is [temporarily] solved.  (Another approach is to duck
out the back way ourselves, leaving the problem unsolved:  Why is
there Being instead of Nothingness?  Who cares?)  I'm glad you've
been guarding the back doors and I appreciate your valiant efforts
to clarify the issues.  I have to live with my gut feelings, though,
and they remain unconvinced that the TTT is of any use.  If I had to
build an aircraft, I would not begin by refuting theological arguments
about Man being given dominion over the Earth rather than the Heavens.
I would start from a premise that flight was possible and would
try to derive enabling conditions.  Perhaps the attempt would be
futile.  Perhaps I would invent only the automobile and the rocket,
and fail to combine them into an aircraft.  But I would still try.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:06:34 1987
Date: Wed, 28 Jan 87 20:06:21 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #18
Status: R


AIList Digest           Wednesday, 28 Jan 1987     Volume 5 : Issue 18

Today's Topics:
  Code - AI Expert Magazine Sources (Part 1 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 1 of 22)


here it is and rather lengthy, cat all nine parts together and shar
it, don't forget to remove my .signature at the end of the file.

  [I had to reformat this.  The file lengths may have been
  altered, and I stripped out initial tabs.  -- KIL]


#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create:
#       AIAPP.JAN
#       CONTNT.JAN
#       EXPERT.JAN
#       FILES.JAN
#       OPSNET.JAN
#       PERCEP.JAN
# This archive created: Sun Jan 18 19:24:39 1987
# By:   D'arc Angel (The Houses of the Holy)
export PATH; PATH=/bin:/usr/bin:$PATH
echo shar: "extracting 'AIAPP.JAN'" '(29884 characters)'
if test -f 'AIAPP.JAN'
then
        echo shar: "will not over-write existing file 'AIAPP.JAN'"
else
sed 's/^X//' << \SHAR_EOF > 'AIAPP.JAN'
X
X
X                          AI Apprentice
X                by Bill Thompson and Bev Thompson
X             "Creating Expert Systems from Examples"
X                     January 1987 AI EXPERT
X
X
X
XFigure 1.
X
X        batch#      part#       power       symptom      Problem
X
X        b           312         ac          no power     powersupply
X        a           312         ac          weak         gear bad
X        c           412         dc          sparking     powersupply
X        d           412         ac          no power     wiring
X        c           212         dc          sparking     powersupply
X        c           412         ac          weak         wiring
X        a           212         ac          no power     gear bad
X        b           412         dc          weak         wiring
X        b           212         ac          weak         gear bad
X
X
X
XFigure 2 - A decision tree produced from the data in Table 1.
X
X
X        batch#      part#       power       symptom     Result
X
X        b           412         ac          weak         gear bad
X        a           212         dc          weak         powersupply
X        d           212         dc          sparking     wiring
X        d           412         ac          no power     powersupply
X
X
X
XTable 1 - A training set of data for a repair problem.
X
X       If batch# is a
X        then result is gear bad.
X
X        If batch# is b
X        and part# is 212
X        then result is gear bad.
X
X        If batch# is b
X        and part# is 312
X        then result is powersupply.
X
X        If batch# is b
X        and part# is 412
X        then result is wiring.
X
X        If batch# is c
Xand power is ac
X        then result is wiring.
X
X        If batch# is c
X        and power is dc
X        then result is powersupply.
X
X        If batch# is d
X        then result is wiring.
X
X
X batch# ?
X  a:  ---------------------------------------------gear bad
X  b:part# ?
X     212: ---------------------------------------- gear bad
X     312: ---------------------------------------- powersupply
X     412: ---------------------------------------- wiring
X  c:power??
X     ac: ----------------------------------------- wiring
X     dc: ----------------------------------------- powersupply
X  d: --------------------------------------------- wiring
X
X
X
XTable 2 - A new set of data collected for the repair problem.  This data
X          is used for validation of the solution.
X
X
Xclinical    descript      distribution  group         Result
X
Xfever       upper resp.   epidemic      respiratory   parainfluenza
Xchills      lower resp.   local         enteric       adenovirus
Xrash        mid resp.     children      exanthems     mumps
Xswelling                  hospital      latent        rhinovirus
Xmalaise                   youngadults                 echo
Xheadache                  universal                   coxasackie
Xcough                                                 varicella
X
X
X                                                      rubella
X
XTable 3 - Definitions of results and attributes for identifying viruses.
X
Xlevel      type of  subject  programming  cover type  basic      Author
X           software matter      covered               language
Xintro/adv  gen/spec gen/spec    no/yes    soft/hard   no/yes
X1.         1.          4.          3.        soft     5.       Jones
X2.         5.          5.          4.        soft     1.       Smith
X1.         1.          1.          3.        soft     1.       Fisher
X1.         1.          1.          3.        hard     5.       Mitchell
X1.         1.          1.          1.        soft     1.       Argyle
X5.         1.          5.          5.        hard     1.       Chang
X
X
X
XTable 4 - An example set for selecting a textbook.  This set was produced
X          using the Flexigrid program.
X
Xsubject matter ?  (gen/spec)
X < 2.50: programming covered ? (no/yes)
X        < 2.00: ---------------------------------- Concepts
X        >=2.00: cover ?
X                  hard: -------------------------- Today's
X                  soft: -------------------------- Information
X  >=2.50: level ? (intro/adv)
X          < 1.50: -------------------------------- Society
X          >=1.50: level ? (intro/adv)
X                  < 3.50: ------------------------ Applications
X                  >=3.50: ------------------------ Data_structures
X
X
Xupply of serotinous cones
X.
X
Xprompt 10/acre adequate
XAre 10 trees per acre adequate to seed the area ?
X.
X
Xtrans 10/acre adequate
X10 per acre is /not/ adequate
X.
X
Xprompt burning planned
XHas a prescribed burning been planned ?
X.
X
Xtrans burning planned
Xburning is /not/ planned
X.
X
Xtrans use seed tree
XYou should /not/ use seed trees to seed the area
X.
X
X15
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is yes
Xand serotinous cones is yes
Xand 10/acre adequate is yes
Xand burning planned is no
Xthen silviculture method is clearcut
Xand branch 17 is yes .
X
Xtrans silviculture method
Xthe best silviculture method to use
X.
X
X16
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is yes
Xand serotinous cones is yes
Xand 10/acre adequate is no
Xthen silviculture method is clearcut
Xand branch 17 is yes .
X
X17
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is yes
Xand serotinous cones is no
Xand two harvests wanted is yes
Xand two harvests possible is yes
Xthen silviculture method is shelterwood
Xand branch 17 is yes .
X
Xprompt two harvests wanted
XDo you want to do two commercial harvests on this area ?
X.
X
Xtrans two harvests wanted
Xtwo commercial harvests are /not/ wanted
X.
X
Xprompt two harvests possible
XIs it possible to get two harvests from this area ?
X.
X
Xtrans two harvests possible
Xtwo harvests can /not/ be done on this area
X.
X
X18
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is yes
Xand serotinous cones is no
Xand two harvests wanted is yes
Xand two harvests possible is no
Xthen silviculture method is clearcut
Xand branch 17 is yes .
X
X19
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is yes
Xand serotinous cones is no
Xand two harvests wanted is no
Xthen silviculture method is clearcut
Xand branch 17 is yes .
X
X20
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is yes
Xand desirable seed is no
Xthen silviculture method is clearcut
Xand branch 17 is yes .
X
X21
Xif branch 11 is yes
Xand pine desired is yes
Xand pine suited is no
Xthen convert is yes
Xand recommend is convert .
X
Xtrans convert
Xyou should /not/ convert the area to some more desirable kind of tree
X.
X
X22
Xif branch 11 is yes
Xand pine desired is no
Xthen convert is yes
Xand recommend is convert .
X
X
X26
Xif branch 17 is yes
Xand adequate seedbed is yes
Xthen branch 18 is yes .
X
Xprompt adequate seedbed
XIs there an adequate seedbed for planting ?
X.
X
Xtrans adequate seedbed
Xthere is /not/ an adequate seedbed for planting
X.
X
X27
Xif branch 17 is yes
Xand adequate seedbed is no
Xthen prepare site is yes
Xand branch 18 is yes .
X
Xtrans prepare site
Xthe site should /not/ be prepared before planting
X.
X
X28
Xif branch 18 is yes
Xand silviculture method is shelterwood
Xthen use natural seeding is yes
Xand recommend is use natural seeding .
X
Xtrans use natural seeding
Xnatural seeding techniques should /not/ be used
X.
X
X29
Xif branch 18 is yes
Xand silviculture method is clearcut
Xand improved stock is yes
Xthen plant is yes
Xand recommend is plant .
X
Xprompt improved stock
XIs there improved planting stock available ?
X.
X
Xtrans improved stock
Xthere is /not/ improved stock available
X.
X
Xtrans plant
Xsince there is better stock available you can /not/ plant using that stock
X.
X
X30
Xif branch 18 is yes
Xand silviculture method is clearcut
Xand improved stock is no
Xand good cone supply is yes
Xthen scatter cones is yes
Xand recommend is scatter cones .
X
Xprompt good cone supply
XIs there a good supply of serotinous cones on the area ?
X.
X
Xtrans good cone supply
Xthere is /not/ a good cone supply
X.
X
Xtrans scatter cones
Xyou should /not/ scatter the supply of serotinous cones over the area
X.
X
X31
Xif branch 18 is yes
Xand silviculture method is clearcut
Xand improved stock is no
Xand good cone supply is no
Xthen direct seed is yes
Xand recommend is direct seed .
X
Xtrans direct seed
XSince the cone supply is inadequate, you should /not/ directly seed the
Xarea
X.
X
X
X-------------------------------------------------------------------------
X
XThe following comments are not a part of the knowledge base.  If you
Xtry to run the knowledge base this part of the file should be removed
X
X
XAbbreviated KEY
X
X1.  stocking good is yes ............................. 2
X1.  stocking good is no  ............................. 10
X    2. avg < 5 is yes ................................ 3
X    2. avg < 5 is no ................................. 4
X3.  2000 + per acre is yes ..........WEED OR CLEAN.... 8
X3.  2000 + per acre is no ............................ 8
X    4. age is mature ................................. 11
X    4. age is immature ............................... 5
X5.  site index > 60 is yes ........................... 6
X5.  site index > 60 is no ............................ 9
X    6. product size is large ......................... 7
X    6. product size is small ......................... 9
X7.  120 + basal area is yes .........THIN............. 9
X7.  120 + basal area is no ........................... 9
X    8. severe competition is yes ....RELEASE.......... 9
X    8. severe competition is no ...................... 9
X9.  high risk is yes ................................. CONTROL IF FEASIBLE
X9.  high risk is no .................................. WAIT
X    10. other resources is yes ....................... MAINTAIN
X    10. other resources is no ........................ 11
X11. pine suitable is yes ............................. 12
X11. pine suitable is no .............................. CONVERT
X    12. desirable seed is yes ........................ 13
X    12. desirable seed is no ........USE CLEARCUT..... 17
X13. serotinous cones is yes .......................... 14
X13. serotinous cones is no ........................... 16
X    14. 10/acre adequate is yes ...................... 15
X    14. 10/acre adequate is no ......USE CLEARCUT..... 17
X15. burning planned is yes ........................... USE SEED TREE
X15. burning planned is no ...........USE CLEARCUT..... 17
X    16. two harvests wanted is yes ..USE SHELTERWOOD.. 17
X    16. two harvests wanted is no ...USE CLEARCUT..... 17
X17. adequate seedbeds is yes ......................... 18
X17. adequate seedbeds is no .........PREPARE SITE..... 18
X    18. silviculture method is shelterwood ........... USE NATURAL SEEDING
X    18. silviculture method is clearcut .............. 19
X19. improved stock is yes ............................ PLANT
X19. improved stock is no ............................. 20
X    20. good cone supply is yes ...................... SCATTER CONES
X    20. good cone supply is no ....................... DIRECT SEED
X
X
X
XThe purpose of this exercise is to show how a knowledge base can be
Xdesigned to directly follow a key.  There are several places where the
XKB could have been made more efficient, but this would have meant
Xdeparting from the order of the key.  You might find it an interesting
Xexercise to explore other ways this same information could have been
Xrepresented in the KB.
X
XThe key appears in the Managers Handbook for Jack Pine in the North Central
XStates.  The Handbook was produced by the North Central Forest Experiment
XStation of the Forest Service of the U.S. Dept. of Agriculture.  Our
Xintention in writing this knowledge base is to show the structure of a
Xknowledge base written for a backward chaining inference engine directly
Xfrom an existing document.  If this KB were to be actually used, it would
Xneed to have clearer questions and more explanations to the user.  These
Xexplanations are provided in the handbook and could be easily incorporated
Xinto the knowledge base.
X
XThe knowledge base will run on the expert system shell MicroExpert which is
Xan example of a backward chaining inference engine. MicroExpert is
Xavailable from McGraw-Hill for $49.95 and can be ordered by calling 1-800-
X628-0004 or, in NY, 212-512-2999 . The knowledge base is described in the
Xcolumn AI Apprentice which appears in the November issue of AI Expert
Xmagazine.  The design details of the inference engine which runs the KB are
Xdescribed in the article "Inside an Expert System" in the April 1985
Xissue of BYTE magazine.
X
XMicroExpert, AI Apprentice and "Inside an Expert System" are all written
Xby Bev and Bill Thompson . We're always happy to hear about your thoughts
Xand comments, good or bad on any of our work.  Contact us at the address
Xbelow, on Compuserve or BIX. Our Compuserve id is 76703,4324 and we can be
Xreached by Easyplex or in the AI Expert Forum.  Our BIX id is bbt and  we
Xmay  be  contacted via BIXmail or by leaving comments in the  MicroExpert
Xconference.
X
XBill and Bev Thompson
XR.D. 2 Box 430
XNassau, N.Y.  12123
X
X
X                            TREES.PRO
X                         PROLOG program
X
X
X/* This PDPROLOG program implements a knowledge base based upon the
X   following key:
X
X   To run the program type "go."
X   Caution - This program can be very S L O W.
X
XAbbreviated KEY
X
X1.  stocking good is yes ............................. 2
X1.  stocking good is no  ............................. 10
X    2. avg < 5 is yes ................................ 3
X    2. avg < 5 is no ................................. 4
X3.  2000 + per acre is yes ..........WEED OR CLEAN.... 8
X3.  2000 + per acre is no ............................ 8
X    4. age is mature ................................. 11
X    4. age is immature ............................... 5
X5.  site index > 60 is yes ........................... 6
X5.  site index > 60 is no ............................ 9
X    6. product size is large ......................... 7
X    6. product size is small ......................... 9
X7.  120 + basal area is yes .........THIN............. 9
X7.  120 + basal area is no ........................... 9
X    8. severe competition is yes ....RELEASE.......... 9
X    8. severe competition is no ...................... 9
X9.  high risk is yes ................................. CONTROL IF FEASIBLE
X9.  high risk is no .................................. WAIT
X    10. other resources is yes ....................... MAINTAIN
X    10. other resources is no ........................ 11
X11. pine suitable is yes ............................. 12
X11. pine suitable is no .............................. CONVERT
X    12. desirable seed is yes ........................ 13
X    12. desirable seed is no ........USE CLEARCUT..... 17
X13. serotinous cones is yes .......................... 14
X13. serotinous cones is no ........................... 16
X    14. 10/acre adequate is yes ...................... 15
X    14. 10/acre adequate is no ......USE CLEARCUT..... 17
X15. burning planned is yes ........................... USE SEED TREE
X15. burning planned is no ...........USE CLEARCUT..... 17
X    16. two harvests wanted is yes ..USE SHELTERWOOD.. 17
X    16. two harvests wanted is no ...USE CLEARCUT..... 17
X17. adequate seedbeds is yes ......................... 18
X17. adequate seedbeds is no .........PREPARE SITE..... 18
X    18. silviculture method is shelterwood ........... USE NATURAL SEEDING
X    18. silviculture method is clearcut .............. 19
X19. improved stock is yes ............................ PLANT
X19. improved stock is no ............................. 20
X    20. good cone supply is yes ...................... SCATTER CONES
X    20. good cone supply is no ....................... DIRECT SEED
X
X
X
XThe purpose of this exercise is to show how an expert system can be
Xdesigned to directly follow a key.  There are several places where the
Xprogram could have been made more efficient, but this would have meant
Xdeparting from the order of the key.  You might find it an interesting
Xexercise to explore other ways this same information could have been
Xrepresented in the program.
X
XThe key appears in the Managers Handbook for Jack Pine in the North Central
XStates.  The Handbook was produced by the North Central Forest Experiment
XStation of the Forest Service of the U.S. Dept. of Agriculture.  Our
Xintention in writing this knowledge base is to show the structure of a
Xknowledge base written for a backward chaining inference engine directly
Xfrom an existing document.  If this KB were to be actually used, it would
Xneed to have clearer questions and more explanations to the user.  These
Xexplanations are provided in the handbook and could be easily incorporated
Xinto the knowledge base.
X
XThis program is similar to the KB for the expert system shell
XMicroExpert which is an example of a backward chaining inference engine.
XMicroExpert is available from McGraw-Hill for $49.95 and can be ordered
Xby calling 1-800-628-0004 or, in NY, 212-512-2999 .
XThe knowledge base is described in the AI Apprentice column which appears
Xin the November issue of AI Expert magazine.
XThe design details of the inference engine which runs the KB are
Xdescribed in the article "Inside an Expert System" in the April 1985
Xissue of BYTE magazine.
X
XMicroExpert, AI Apprentice and "Inside an Expert System" are all written
Xby Bev and Bill Thompson . We're always happy to hear about your thoughts
Xand comments, good or bad on any of our work.  Contact us at the address
Xbelow, on Compuserve or BIX. Our Compuserve id is 76703,4324 and we can be
Xreached by Easyplex or in the AI Expert Forum.  Our BIX id is bbt and  we
Xmay  be  contacted via BIXmail or by leaving comments in the  MicroExpert
Xconference.
X
XBill and Bev Thompson
XR.D. 2 Box 430
XNassau, N.Y.  12123      */
X
X/* Control - In MicroExpert terms, the goal of the consultation is
X   recommendation */
X
Xgo :- clear_kb,
X      give_advice.
Xgive_advice :- recommendation(X),
X               fail.
Xgive_advice :- print_advice.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:06:58 1987
Date: Wed, 28 Jan 87 20:06:43 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #19
Status: R


AIList Digest           Wednesday, 28 Jan 1987     Volume 5 : Issue 19

Today's Topics:
  AI Expert Magazine Sources (Part 2 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 2 of 22)

X
X/* The rules -
X   These are implemented this way to mimic the MicroExpert rule set.
X   Looking at them side by side should show the similarities. */
X
Xfact(branch8,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',yes),
X                     fact('2000+ per acre',yes),
X              recommend('The stand of jack pine must be weeded and cleaned.').
Xfact(branch8,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',yes),
X                     fact('2000+ per acre',no).
Xfact(branch9,no) :- fact('stocking good',yes),
X                    fact('avg < 5',no),
X                    fact(age,mature),
X                    assertz(fact(branch11,yes)).
Xfact(branch11,yes) :- fact('stocking good',yes),
X                      fact('avg < 5',no),
X                      fact(age,mature),
X                      assertz(fact(branch9,no)).
Xfact(branch9,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',no),
X                     fact(age,immature),
X                     fact('site index > 60',yes),
X                     fact('product size',large),
X                     fact('120+ basal area',yes),
X                     recommend('It is important to thin the area').
Xfact(branch9,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',no),
X                     fact(age,immature),
X                     fact('site index > 60',yes),
X                     fact('product size',large),
X                     fact('120+ basal area',no).
Xfact(branch9,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',no),
X                     fact(age,immature),
X                     fact('site index > 60',yes),
X                     fact('product size',large).
Xfact(branch9,yes) :- fact('stocking good',yes),
X                     fact('avg < 5',no),
X                     fact(age,immature),
X                     fact('site index > 60',yes).
Xrecommendation(maintain) :-
X       fact('stocking good',no),
X       fact('other resources',yes),
X       recommend('You should maintain the stand in its present condition').
Xfact(branch11,yes) :- fact('stocking good',no),
X                      fact('other resources',no).
Xfact(branch9,yes) :- fact(branch8,yes),
X                     fact('severe competition',yes),
X                     recommend('Competing trees should be eliminated.').
Xfact(branch9,yes) :- fact(branch8,yes),
X                     fact('severe competition',no).
Xrecommendation(control) :-
X        fact(branch9,yes),
X        fact('high risk',yes),
X      recommend('The current area should be controlled, if at all feasible.').
Xrecommendation(wait) :-
X        fact(branch9,yes),
X        fact('high risk',no),
X       recommend('You should wait before doing anything else to this stand.').
Xrecommendation('use seed tree') :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',yes),
X        fact('serotinous cones',yes),
X        fact('10/acres adequate',yes),
X        fact('burning planned',yes),
X        recommend('You should use seed trees to seed the area.').
Xfact(branch17,yes) :-
X            fact(branch11,yes),
X            fact('pine desired',yes),
X            fact('pine suited',yes),
X            fact('desirable seed',yes),
X            fact('serotinous cones',yes),
X            fact('10/acres adequate',yes),
X            fact('burning planned',no),
X            add_fact(silvaculture,clearcut),
X            recommend('The best silvaculture method to use is clearcut.').
Xfact(branch17,yes) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',yes),
X        fact('serotinous cones',yes),
X        fact('10/acres adequate',no),
X        add_fact(silvaculture,clearcut),
X        recommend('The best silvaculture method to use is clearcut.').
Xfact(branch17,yes) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',yes),
X        fact('serotinous cones',no),
X        fact('two harvests wanted',yes),
X        fact('two harvests possible',yes),
X        add_fact(silvaculture,shelterwood),
X   recommend('The best silvaculture method to use is the shelterwood method.').
Xfact(branch17,yes) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',yes),
X        fact('serotinous cones',no),
X        fact('two harvests wanted',yes),
X        fact('two harvests possible',no),
X        add_fact(silvaculture,clearcut),
X        recommend('The best silvaculture method to use is clearcut.').
Xfact(branch17,yes) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',yes),
X        fact('serotinous cones',no),
X        fact('two harvests wanted',no),
X        add_fact(silvaculture,clearcut),
X        recommend('The best silvaculture method to use is clearcut.').
Xfact(branch17,yes) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',yes),
X        fact('desirable seed',no),
X        add_fact(silvaculture,clearcut),
X        recommend('The best silvaculture method to use is clearcut.').
Xrecommendation(convert) :-
X        fact(branch11,yes),
X        fact('pine desired',yes),
X        fact('pine suited',no),
X        recommend(
X          'You should convert the area to some more desirable kind of tree.').
Xrecommendation(convert) :-
X        fact(branch11,yes),
X        fact('pine desired',no),
X        recommend(
X          'You should convert the area to some more desirable kind of tree.').
Xfact(branch18,yes) :-
X        fact(branch17,yes),
X        fact('adequate seedbed',yes).
Xfact(branch18,yes) :-
X        fact(branch17,yes),
X        fact('adequate seedbed',no),
X        recommend('The site should be prepared before planting.').
Xrecommendation('natural seeding') :-
X        fact(branch18,yes),
X        fact(silvaculture,shelterwood),
X        recommend('The natural seeding technique should be used.').
Xrecommendation(plant) :-
X        fact(branch18,yes),
X        fact(silvaculture,clearcut),
X        fact('improved stock',yes),
X        recommend(
X    'Since there is better stock available, you can plant using that stock.').
Xrecommendation('scatter cones') :-
X        fact(branch18,yes),
X        fact(silvaculture,clearcut),
X        fact('improved stock',no),
X        fact('good cone supply',yes),
X        recommend('You should scatter the serotinous cones over the area.').
Xrecommendation('direct seed') :-
X        fact(branch18,yes),
X        fact(silvaculture,clearcut),
X        fact('improved stock',no),
X        fact('good cone supply',no),
X        recommend('You should directly seed the area.').
X
X/* These routines add new facts to the internal knowledge base - kb */
X
Xfact(X,Y) :- kb(X,Y),! .
Xfact(X,Y) :- not(kb(X,Anything)),
X             question(X,Answer),
X             assertz(kb(X,Answer)),
X             Y = Answer.
X
Xadd_fact(X,Y) :- kb(X,Y),!.
Xadd_fact(X,Y) :- assertz(kb(X,Y)).
X
Xrecommend(X) :- add_fact(advice,X).
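[Editorial sketch] `fact/2` above implements ask-once memoization: a proposition already recorded in `kb` is answered from the cache, otherwise `question/2` is put to the user and the answer asserted with `assertz`. A minimal Python analogue, with a hypothetical `ask_user` callback standing in for `question/2`:

```python
# Ask-once caching in the spirit of fact/2 (a sketch, not the shell's API).

kb = {}

def fact(name, ask_user):
    """Return the stored value for name, consulting the user only on
    the first request (mirrors the assertz(kb(X, Answer)) clause)."""
    if name not in kb:
        kb[name] = ask_user(name)
    return kb[name]

asked = []

def ask_user(name):
    asked.append(name)      # track how often the user is actually queried
    return 'yes'

fact('stocking good', ask_user)
fact('stocking good', ask_user)   # second call is served from the cache
```

After the two calls, `asked` holds a single entry: the user was questioned once, exactly as the cut in the first `fact/2` clause guarantees.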
X/* Questions to ask the user */
X
Xquestion('stocking good',Ans) :-
X        print('Is the stocking of the jack pine stand currently'),nl,
X        print('at least minimum ? '),nl,nl,
X        print('If you are unsure of how to determine stocking,'),nl,
X        print('see page 4 in the Managers Handbook for Jack Pine'),
X        nl,
X        ask('',Ans,[yes,no]).
Xquestion('avg < 5',Ans) :-
X        ask('Is the average diameter of the trees less than 5 inches ?',
X             Ans,[yes,no]).
Xquestion('2000+ per acre',Ans) :-
X        ask('Are there 2000 or more trees per acre ?',Ans,[yes,no]).
Xquestion(age,Ans) :-
X        ask('Is the age of the stand mature or immature ?',
X             Ans,[mature,immature]).
Xquestion('site index > 60',Ans) :-
X        ask('Is the site index greater than 60 ?',Ans,[yes,no]).
Xquestion('product size',Ans) :-
X        ask('Do you want to manage the timber for large or small products ?',
X            Ans,[large,small]).
Xquestion('120+ basal area',Ans) :-
X        ask('Is the basal area per acre at least 120 square feet ?',
X            Ans,[yes,no]).
Xquestion('other resources',Ans) :-
X     ask('Do you want to maintain this condition to support other resources?',
X             Ans,[yes,no]).
Xquestion('severe competition',Ans) :-
X        ask('Is there severe overstory competition ?',Ans,[yes,no]).
Xquestion('high risk',Ans) :-
X        ask('Is there a high risk of loss or injury ?',Ans,[yes,no]).
Xquestion('pine desired',Ans) :-
X        ask('Do you want to keep jack pine in this area ?',Ans,[yes,no]).
Xquestion('pine suited',Ans) :-
X        ask('Is jack pine well suited to this site ?',Ans,[yes,no]).
Xquestion('desirable seed',Ans) :-
X        ask('Is there a desirable jack pine seed source on the area ?',
X             Ans,[yes,no]).
Xquestion('serotinous cones',Ans) :-
X        ask('Do the trees on the site have serotinous cones ?',Ans,[yes,no]).
Xquestion('10/acres adequate',Ans) :-
X        ask('Are 10 trees per acre adequate to seed the area ?',Ans,[yes,no]).
Xquestion('burning planned',Ans) :-
X        ask('Has a prescribed burning been planned ?',Ans,[yes,no]).
Xquestion('two harvests wanted',Ans) :-
X       ask('Do you want two commercial harvests on this area ?',Ans,[yes,no]).
Xquestion('two harvests possible',Ans) :-
X      ask('Is it possible to get two harvests from this area ?',Ans,[yes,no]).
Xquestion('adequate seedbed',Ans) :-
X        ask('Is there an adequate seedbed for planting ?',Ans,[yes,no]).
Xquestion('improved stock',Ans) :-
X        ask('Is there an improved planting stock available ?',Ans,[yes,no]).
Xquestion('good cone supply',Ans) :-
X        ask('Is there a good supply of serotinous cones in the area ?',
X             Ans,[yes,no]).
X
X/* Utility Routines - to be useful, we should add some routines to allow
X                       the user to ask "How" and "Why" */
X
Xdisplay_kb :- kb(X,Y),
X              print(X,' is ',Y),
X              nl,
X              fail.
Xdisplay_kb.
X
X
Xprint_advice :-
X    nl,nl,
X    print('Based upon your responses, the following is recommended :'),nl,nl,
X    show_advice.
Xshow_advice :-
X    kb(advice,X),
X    print(X),
X    nl,
X    fail.
Xshow_advice :-
X    nl,print('To see the complete set of derived facts,'),
X    print('type "display_kb."').
X
X
Xclear_kb :- retract(kb(_,_)),
X            fail .
Xclear_kb.
X
Xmember(X,[X|_]).
Xmember(X,[_|Y]) :- member(X,Y).
X
Xask(Ques,Ans,LegalResponses) :-
X    nl,print(Ques,' '),
X    read(Ans),
X    member(Ans,LegalResponses),!.
Xask(Ques,Ans,LegalResponses) :-
X   nl,nl,nl,
X   print('Please respond with : ',LegalResponses),nl,nl,
X   ask(Ques,Ans,LegalResponses).
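[Editorial sketch] `ask/3` re-prompts until the reply is a `member/2` of the legal-response list. The same retry loop in Python, with a sequence of canned replies standing in for `read/1` at the terminal:

```python
def ask(question, replies, legal):
    """Keep taking replies until one is in the legal-response list, as
    ask/3 does by recursing after printing 'Please respond with : ...'."""
    it = iter(replies)
    while True:
        ans = next(it)       # read(Ans) in the Prolog version
        if ans in legal:     # the member/2 check, cut on success
            return ans

# An invalid first reply is rejected and the question is asked again.
answer = ask('Is the age of the stand mature or immature ?',
             ['perhaps', 'immature'], ['mature', 'immature'])
```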
X
X
X
X
X
X
X
X                      Listings and Figures
X                  printed in AI EXPERT magazine
X
X
X1.  Jack pine stand with minimum or higher stocking .................. 2
X1.  Jack pine stand with less than minimum stocking .................. 10
X
X    2.  Average tree diameter less than 5 inches ..................... 3
X    2.  Average tree diameter 5 inches or more ....................... 4
X
X3.  2,000 or more trees per acre ..................WEED OR CLEAN ..... 8
X3.  Less than 2,000 trees per acre ................................... 8
X
X    4.  Stand is mature .............................................. 11
X    4.  Stand is not mature .......................................... 5
X
XFigure 1 - Key for forest management taken from USDA Forest Service
X           Handbook
X
X
X
X
X                                                        |-- yes ===> weed or clean
X                                                        |            and do # 8
X                             |-- yes --- 2000+ per acre-|
X                             |                          |-- no ===> do # 8
X         |-- yes -- diameter-|
X         |          < 5 in.  |             |-- mature ===> do # 11
Xminimum  |                   |-- no -- age-|
Xstocking-|                                 |-- young ===> do # 5
X         |
X         |
echo shar: "a missing newline was added to 'AIAPP.JAN'"
echo shar: "18 control characters may be missing from 'AIAPP.JAN'"
SHAR_EOF
if test 29884 -ne "`wc -c < 'AIAPP.JAN'`"
then
echo shar: "error transmitting 'AIAPP.JAN'"
 '(should have been 29884 characters)'
fi
fi
echo shar: "extracting 'CONTNT.JAN'" '(2351 characters)'
if test -f 'CONTNT.JAN'
then
echo shar: "will not over-write existing file 'CONTNT.JAN'"
else
sed 's/^X//' << \SHAR_EOF > 'CONTNT.JAN'
X
X                            Contents -- AI EXPERT
X                                 January 1987
X
X
XARTICLES
X--------
X
XPlanning with TWEAK
Xby Jonathan Amsterdam
X
XLike all exploratory work in the sciences, AI research proceeds
Xin cycles of 'scruffy' exploration and 'neat' consolidation.
XAfter years of exploration into different planning algorithm
Xdesign strategies, M.I.T.'s David Chapman may have created a new
Xera in planning research with his neat summary of more than a
Xdecade of scruffy work on an algorithm called TWEAK.
X
X
XRete Match Algorithm
Xby Charles L. Forgy and Susan Shepard
X
XThe Rete Match algorithm is a fast method for comparing a set of
Xpatterns to a set of objects to determine all possible matches.
XIt may be the most efficient algorithm for performing the match
Xoperation on a single processor.  Developed by Charles L. Forgy in
X1974, it has been implemented in several languages in both
Xresearch and commercial grade systems.
X
X
XImperative Pattern Matching in OPS5
Xby Dan Neiman
X
XSurely the Rete Match algorithm is an efficient data structure
Xfor implementing production systems.  But what else can it be
Xused for?  Let's look at the OPS5 language as a case study of
Xthe Rete net as an experimental tool kit.  Then we'll present a
Xtechnique that will show the programmer how to use Rete Match as
Xa general purpose pattern matching tool.
X
X
XPerceptrons and Neural Nets
Xby Peter Reece
X
XThere are at least ten billion neurons handling over one million
Xinput messages per second in the human brain.  With many of the
Xearlier hardware and software obstacles now overcome, let's look
Xback to one of the most successful pattern classification
Xcomputers---the Perceptron---and show how you can implement a
Xsimple Perceptron on your home computer.
X
X
XDEPARTMENTS
X-----------
X
XBrain Waves
X"AI for Competitive Advantage"
Xby Eugene Wang, Gold Hill Computers
X
XAI INSIDER
X
XEXPERT'S TOOLBOX
X"Using Smalltalk to Implement Frames"
Xby Marc Rettig
X
XAI APPRENTICE
X"Creating Expert Systems from Examples"
Xby Beverly and Bill Thompson
X
XIN PRACTICE
X"Air Traffic Control:  A Challenge for AI"
Xby Nicholas Findler
X
XHARDWARE REVIEW
X"A LISP Machine Profile:  Symbolics 3650"
Xby Douglas Schuler, et al.
X
XSOFTWARE REVIEW
X"Expertelligence's PROLOG for the Mac:
XExperPROLOG II"
X
X
echo shar: "a missing newline was added to 'CONTNT.JAN'"
echo shar: "159 control characters may be missing from 'CONTNT.JAN'"
SHAR_EOF
if test 2351 -ne "`wc -c < 'CONTNT.JAN'`"
then
echo shar: "error transmitting 'CONTNT.JAN'"
 '(should have been 2351 characters)'
fi
fi
echo shar: "extracting 'EXPERT.JAN'" '(7019 characters)'
if test -f 'EXPERT.JAN'
then
echo shar: "will not over-write existing file 'EXPERT.JAN'"
else
sed 's/^X//' << \SHAR_EOF > 'EXPERT.JAN'
X
X                         Expert's Toolbox
X                           January 1987
X               "Using Smalltalk to Implement Frames"
X                          by Marc Rettig
X
X
X
XListing 1
X
XDEFINITION OF CLASS SLOT
X
XDictionary variableSubclass: #Slot
X  instanceVariableNames: ''
X  classVariableNames: ''
X  poolDictionaries: ''
X
XMETHODS FOR CLASS SLOT
X
XsetFacet:facetName with:aValue
X   self at:facetName put:aValue
X   ^aValue
X
XgetFacet: facetName
X   ^self at:facetName ifAbsent:[nil]
X
XsetValue:aValue
X   self setFacet:'value' with:aValue
X
XgetValue
X   ^self getFacet:'value'
X
X_________________________________________
XDEFINITION OF CLASS FRAME
X
XDictionary variableSubclass: #Frame
X  instanceVariableNames: ''
X  classVariableNames: ''
X  poolDictionaries: ''
X
XMETHODS FOR CLASS FRAME
X
XsetSlot:slotName facet:facetName contents:aValue
X   | tempSlot |
X   tempSlot := self at:slotName
X                    ifAbsent:[self at:slotName put: Slot new].
X   tempSlot setFacet:facetName with:aValue.
X   ^aValue
X
XgetSlot:slotName facet:facetName
X   ^(self includesKey:slotName)
X      ifTrue: [(self at:slotName) getFacet:facetName]
X      ifFalse:[nil]
X
XsetSlot:slotName value:aValue
X   ^self setSlot:slotName facet:'value' contents:aValue
X
XgetSlotValue:slotName
X   "Get the value facet of a slot.  If no such slot, look up the AKO
X    inheritance chain.  If that's no good, run a demon to get the value."
X   | temp |
X   ((temp := self getSlot:slotName) isNil)
X      ifTrue: [((temp := self lookUpAkoChain:slotName) isNil)
X         ifTrue: [^self runDemonForValue:slotName]
X         ifFalse:[^temp getValue]]
X      ifFalse:[(temp includesKey:'value')
X         ifTrue: [^temp getValue]
X         ifFalse:[^self runDemonForValue:slotName]]
X
XgetSlot:slotName
X   ^self at:slotName ifAbsent:[nil]
X
XsetSlot:slotName with:aSlot
X   ^self at:slotName put:aSlot
X
XlookUpAkoChain:slotName
X   "Look up the inheritance chain for a slot with the name in slotName.
X    If you find it, return the Slot."
X   ^(self includesKey:'AKO')
X      ifTrue: [((self isAKO) includesKey:slotName)
X         ifTrue: [^(self isAKO) getSlot:slotName]
X         ifFalse:[^(self isAKO) lookUpAkoChain:slotName]]
X      ifFalse:[nil]
X
XisAKO
X   ^self getSlot:'AKO' facet:'value'
X
XisAKO:aFrame
X   self setSlot:'AKO' value:aFrame
X
X____________________________________
XSOME SAMPLE METHODS FOR DEMONS
X
XaddDemon:aBlock slot:slotName type:demonType
X   (#('ifNeeded' 'ifAdded' 'ifRemoved') includes:demonType)
X      ifTrue: [self setSlot:slotName facet:demonType with:aBlock]
X      ifFalse:[self error:'Invalid Demon Type']
X
XrunDemonForValue:slotName
X   | aBlock |
X   aBlock := self getSlot:slotName facet:'ifNeeded'.
X   (aBlock isNil)
X     ifTrue: [^nil]
X     ifFalse:[^self setSlot:slotName value:(aBlock value)]
X
X
X
XListing 2
X
XA SAMPLE HIERARCHY OF FRAMES, SHOWING USE OF DEMONS
X
X| mammal dog firstDog askDemon |
Xmammal := Frame new.
Xmammal setSlot:'hide' value:'hairy'.
Xmammal setSlot:'blood' value:'warm'.
X
Xdog := Frame new.
Xdog isAKO:mammal.
Xdog setSlot:'numberOfLegs' value:4.
X
X" Here is a simple if-needed demon, which will ask the
X  user for a value, while suggesting a default value."
XaskDemon := [Prompter prompt:'What is this doggie''s name?'
X                      default:'Phydeaux'].
X
XfirstDog := Frame new.
XfirstDog addDemon:askDemon slot:'name' type:'ifNeeded'.
XfirstDog isAKO:dog.
XfirstDog setSlot:'color' value:'brown'.
X
X"This message would cause the demon to be fired off..."
XfirstDog getSlotValue:'name'
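[Editorial sketch] The sample hierarchy exercises the whole lookup order: a frame's own value facet first, then the AKO inheritance chain, and only then an if-needed demon. A compact Python sketch of that order, using plain dictionaries and hypothetical frame names (not the Smalltalk API):

```python
class Frame(dict):
    """Toy frame: each slot is a dictionary of facets, as in class Slot."""

    def set_slot(self, slot, value, facet='value'):
        self.setdefault(slot, {})[facet] = value

    def get_slot_value(self, slot):
        # 1. the frame's own value facet
        facets = self.get(slot, {})
        if 'value' in facets:
            return facets['value']
        # 2. the AKO inheritance chain
        parent = self.get('AKO', {}).get('value')
        if parent is not None:
            inherited = parent.get_slot_value(slot)
            if inherited is not None:
                return inherited
        # 3. an if-needed demon, when one is attached
        demon = facets.get('ifNeeded')
        return demon(self) if demon else None

mammal = Frame(); mammal.set_slot('blood', 'warm')
dog = Frame();    dog.set_slot('AKO', mammal)
dog.set_slot('name', lambda frame: 'Phydeaux', facet='ifNeeded')
```

Asking `dog` for `'blood'` climbs the AKO chain to the parent frame, while asking for `'name'` finds no stored value anywhere and falls through to the demon.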
X
X
XFRAME.CLS
X
XDictionary variableSubclass: #Frame
X  instanceVariableNames: ''
X  classVariableNames: ''
X  poolDictionaries: '' !
X
X!Frame class methods ! !
X
X
X!Frame methods !
X
XaddDemon:aBlock slot:slotName type:demonType
X    (#('ifNeeded' 'ifAdded' 'ifRemoved') includes:demonType)
X        ifTrue: [self setSlot:slotName facet:demonType with:aBlock]
X        ifFalse:[self error:'Invalid Demon Type']!
X
XgetSlot:slotName
X    "return the slot object corresponding to slotName."
X
X    ^self at: slotName ifAbsent: [nil]!
X
XgetSlot: slotName facet: facetName
X
X    ^(self includesKey: slotName)
X        ifTrue: [(self at:slotName) getFacet:facetName]
X        ifFalse: [nil]!
X
XgetSlotValue:slotName
X    "get the value facet of a slot.  If no such slot, look up AKO chain.
X     If that's no good, run a demon to get the value."
X
X    | temp |
X    ((temp := self getSlot: slotName) isNil)
X        ifTrue: [((temp := self lookUpAkoChain: slotName) isNil)
X            ifTrue: [^self runDemonForValue:slotName]
X            ifFalse:[^temp getValue]]
X        ifFalse:[(temp includesKey: 'value')
X            ifTrue: [^temp getValue]
X            ifFalse:[^self runDemonForValue:slotName]]!

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Jan 28 20:07:23 1987
Date: Wed, 28 Jan 87 20:07:13 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #20
Status: R


AIList Digest           Wednesday, 28 Jan 1987     Volume 5 : Issue 20

Today's Topics:
  AI Expert Magazine Sources (Part 3 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 3 of 22)

X
XisAKO
X    ^self getSlot: 'AKO' facet:'value'!
X
XisAKO: aFrame
X    "set the AKO slot of a frame"
X
X    self setSlot:'AKO' value:aFrame!
X
XlookUpAkoChain: slotName
X    "Look up the inheritance chain for a slot with the name in slotName.
X     If you find it, return the Slot"
X
X    ^(self includesKey: 'AKO')
X        ifTrue:[((self isAKO) includesKey:slotName)
X                    ifTrue: [^(self isAKO) getSlot: slotName]
X                    ifFalse:[^(self isAKO) lookUpAkoChain: slotName]]
X        ifFalse:[nil]!
X
XremoveSlot: slotName
X    ^self removeKey:slotName ifAbsent:[nil]!
X
XrunDemonForValue: slotName
X
X    | aBlock |
X    aBlock := self getSlot: slotName facet: 'ifNeeded'.
X    (aBlock isNil)
X        ifTrue: [^nil]
X        ifFalse:[^self setSlot:slotName value:(aBlock value)]!
X
XsetSlot: slotName facet: facetName with: value
X
X    | tempSlot |
X    tempSlot := self at:slotName
X                     ifAbsent: [self at:slotName put: Slot new].
X    tempSlot setFacet: facetName with: value.
X    ^value!
X
XsetSlot:slotName value:aValue
X    "set the value facet of a slot"
X
X    ^self setSlot:slotName facet:'value' with:aValue.!
X
XsetSlot:slotName with: aSlot
X    "associate the slot aSlot with the name slotName. "
X
X    ^self at: slotName put: aSlot! !
X
X
XFRMTRM.TXT
X
X| mammal dog fido s askDemon t |
X" Examples of frame and slot classes in use.
X  Select and DOIT."
X
Xmammal := Frame new.
Xmammal setSlot: 'hide' value: 'hairy'.
Xmammal setSlot: 'bloodType' value: 'warm'.
X
Xdog := Frame new.
Xdog isAKO: mammal.
Xdog setSlot: 'numberLegs' value: 4.
X
XaskDemon := [Prompter prompt:'What is this dog''s name?' default: 'Bruno'].
Xdog addDemon:askDemon slot:'name' type:'ifNeeded'.
X
Xfido := Frame new.
Xfido addDemon:askDemon slot:'name' type:'ifNeeded'.
Xfido isAKO:dog.
Xfido setSlot:'color' value:'brown'.
X
X" Let's see the demon fire "
Xfido getSlotValue:'name'.
X
X
XSLOT.CLS
X
XDictionary variableSubclass: #Slot
X  instanceVariableNames: ''
X  classVariableNames: ''
X  poolDictionaries: '' !
X
X!Slot class methods ! !
X
X
X!Slot methods !
X
XgetFacet: facetName
X    ^self at: facetName ifAbsent: [nil]!
X
XgetValue
X    ^self getFacet: 'value'!
X
XremoveFacet: facetName
X    ^self removeKey:facetName ifAbsent:[nil]!
X
XsetFacet: facetName with: aValue
X
X    self at: facetName put: aValue.
X    ^aValue!
X
XsetValue: aValue
X    self setFacet: 'value' with: aValue! !
echo shar: "a missing newline was added to 'EXPERT.JAN'"
echo shar: "55 control characters may be missing from 'EXPERT.JAN'"
SHAR_EOF
if test 7019 -ne "`wc -c < 'EXPERT.JAN'`"
then
echo shar: "error transmitting 'EXPERT.JAN'"
 '(should have been 7019 characters)'
fi
fi
echo shar: "extracting 'FILES.JAN'" '(837 characters)'
if test -f 'FILES.JAN'
then
echo shar: "will not over-write existing file 'FILES.JAN'"
else
sed 's/^X//' << \SHAR_EOF > 'FILES.JAN'
X
X
X               Articles and Departments that have
X                    Additional On-Line Files
X
X                            AI EXPERT
X                          January 1987
X          (Note:  Contents page is in file CONTNT.JAN)
X
X
X
X
XARTICLES                                        RELEVANT FILES
X--------                                        --------------
X
XJanuary Table of Contents                         CONTNT.JAN
X
XAdding Rete Net to Your OPS5 Toolbox              OPSNET.JAN
Xby Dan Neiman
X
XPerceptrons & Neural Nets                         PERCEP.JAN
Xby Peter Reece
X
X
XDEPARTMENTS
X
XExpert's Toolbox                                  EXPERT.JAN
X"Using Smalltalk to Implement Frames"
Xby Marc Rettig
X
XAI Apprentice                                     AIAPP.JAN
X"Creating Expert Systems from Examples"
Xby Beverly and Bill Thompson
X
SHAR_EOF
if test 837 -ne "`wc -c < 'FILES.JAN'`"
then
echo shar: "error transmitting 'FILES.JAN'"
 '(should have been 837 characters)'
fi
fi
echo shar: "extracting 'OPSNET.JAN'" '(359936 characters)'
if test -f 'OPSNET.JAN'
then
echo shar: "will not over-write existing file 'OPSNET.JAN'"
else
sed 's/^X//' << \SHAR_EOF > 'OPSNET.JAN'
X
X
X                Adding the Rete Net to Your OPS5 Toolbox
X           (Supplemental files arranged by filename headings)
X                        January 1987 AI EXPERT
X                             by Dan Neiman
X
X
X
XEditor's Note:
X
XAdditional notes and clarifications for Imperative Pattern Match code,
Xas described in January '87 issue of AI/Expert.
X
XThe code described in AI/Expert is still evolving (i.e. the more I use it, the
Xmore features I add), and there was not sufficient space to give complete
Xinstructions in the magazine, so the following notes should be used as a
Xsupplement to the article.
X
XTo use the Rete net modifications, load the code into an existing Common Lisp
XOPS5 image.  Then use the pmatch and map-pmatch functions as described in the
Xarticle.
X
X
XIt was probably not made clear in the article, but both pmatch and map-pmatch
Xreturn the values of the last expression evaluated in the righthand side.  So,
Xfor example, to get the names of all employees making 30K a year, you might
Xuse the code:
X?(map-pmatch (employees ^name <emp> ^salary > 30000)
X       -->
X       ?<emp> )
X
XThe RHS of the above function just evaluates and returns the binding of <emp>.
XBecause the function used was map-pmatch, a list of *all* employees satisfying
Xthe given constraints is returned.
X
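[Editorial sketch] That collect-all-matches behaviour can be illustrated without any Rete machinery. The Python fragment below is a stand-in only: a toy working memory of attribute maps and per-attribute predicate tests replace the compiled patterns:

```python
# Working memory elements as (class, attributes) pairs -- a stand-in for
# OPS5 WMEs; tests maps attribute names to predicates on their values.

wm = [
    ('employees', {'name': 'ann',  'salary': 42000}),
    ('employees', {'name': 'bob',  'salary': 25000}),
    ('employees', {'name': 'carl', 'salary': 31000}),
]

def map_pmatch(cls, tests, rhs):
    """Evaluate rhs once per matching element and collect the results,
    as map-pmatch returns the RHS value for *all* consistent matches."""
    return [rhs(attrs) for c, attrs in wm
            if c == cls and all(t(attrs[a]) for a, t in tests.items())]

# names of all employees making more than 30K, as in the example above
names = map_pmatch('employees',
                   {'salary': lambda s: s > 30000},
                   lambda attrs: attrs['name'])
```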
XThe syntax of the pmatch and map-pmatch commands has been modified slightly
Xsince the article went to press.  The method described for passing Lisp
Xvariables to a pattern match function proved to be inexpressibly awkward for
Xlexically bound Lisps (the system was originally written in Franz).  The
Xfollowing modification makes it considerably easier to pass arguments to the
Xpattern match routines.
X
XBecause the pattern match is compiled, the only way to interactively match a
Xparticular value is to write that value into working memory, and include that
Xworking memory element in the pattern match.  This is fairly awkward to do by
Xhand, so I've incorporated a macro into the pmatch and map-pmatch commands
Xwhich does it automagically.  The arguments are passed by following the pmatch
Xfunction with an argument list.  The argument list is distinguished from a
Xpattern by the "args" keyword.  The syntax is:
X
X(pmatch (args arg1 arg2 ... argN)
X        (condition element 1)
X        (condition element 2)
X        -->
X        RHS)
X
XAfter macro expansion, the result is effectively
X      (let ((tt (make ipm$data arg1 arg2 ... argN)))
X       (query1  (ipm$data <arg1> <arg2> <arg3>)
X                (condition element 1)
X                (condition element 2)
X                   :      :       :
X                -->
X                RHS)
X       (oremove tt))
X
XNote that the working memory element is added and deleted automatically.
X
XAs an example, the code to locate all children of a couple might look like
Xthis:
X(defun children(mother father)
X   ?(map-pmatch (args mother father)
X           (mother ^name <mother> ^child <child>)
X           (father ^name <father> ^child <child>)
X          -->
X          (make parents ^name <child> ^father <father> ^mother <mother>)
X          ?<child>))
X
Xand given the working memory:
X(mother ^name ann ^child bob)
X(father ^name fred ^child bob)
X(mother ^name sue ^child alex)
X(father ^name fred ^child john)
X(mother ^name ann ^child john)
X(father ^name fred ^child cheryl)
X
X(children 'ann 'fred) would return (bob john)
X
Xand create the working memory elements
X
X(parents ^name bob ^father fred ^mother ann)
X(parents ^name john ^father fred ^mother ann)
X
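[Editorial sketch] The two condition elements share the `<child>` variable, and that consistency test between patterns is what a Rete beta (join) node computes incrementally. For intuition, a brute-force Python join over the same working memory (names hypothetical):

```python
# (mother ^name M ^child C) and (father ^name F ^child C) as tuple lists.
mothers = [('ann', 'bob'), ('sue', 'alex'), ('ann', 'john')]
fathers = [('fred', 'bob'), ('fred', 'john'), ('fred', 'cheryl')]

def children(mother, father):
    """Join the two relations on the shared child value -- the test the
    compiled <child> binding performs inside the Rete net."""
    return [mc for m, mc in mothers
               for f, fc in fathers
               if m == mother and f == father and mc == fc]
```

With the working memory shown above, `children('ann', 'fred')` yields bob and john, matching the article's `(bob john)` result.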
XDebugging code:  As is the case with OPS5 productions, if you recompile a
Xpmatch or map-pmatch function, you must remove working memory and replace it.
XA pattern match will only work on data which has been added after compilation.
XThis does tend to make debugging tedious.
X
XEvery time a pmatch operation is recompiled, it generates a new body bound to
Xa variable of the form queryN.  Because queries are not explicitly named, it's
Xdifficult to excise them automatically, so the net will tend to fill with
Xsuperfluous nodes during debugging.  The function exquery will excise all
Xexisting queries.  Executing the sequence,
X(oremove *)
X(exquery)
X(i-g-v)
X
Xwill remove all working memory and queries and reset all global variables.
X
XIf a pmatch or map-pmatch function blows up while evaluating its RHS, reset
Xthe global variable *in-rhs* to nil before proceeding.
X
XQuestions about this code can be directed to:
XDan Neiman
XCompuServe 72277,2604
XCSNET dann@UMASS-CS.csnet
X
Xor c/o COINS Dept.
X       Lederle Graduate Research Center
X       University of Massachusetts
X       Amherst, MA 01003
X
X
XIndex to software:
X
XCLSUP.LSP : Common Lisp support functions to define some canonical
X             functions missing in Common Lisp.
X
XOPSMODS.L  : The OPS5 modifications described in the article.
X
XCOMMON.OPS : OPS5 for Common Lisp
XTI.OPS     : OPS5 for TI Explorers
XFRANZ.OPS  : OPS5 for Franz Lisp
X
XMONK.OPS : Test file for OPS5
XPRTOWER.OPS : Test file for OPS5
X
X
XNEWOPS.L
X
X;OPS5 modifications for Common Lisp
X; by: Dan Neiman
X; Original idea conceived at  ITT ATC   May, 1986
X; Converted to Common Lisp and expanded at COINS Dept., UMASS Fall 1986
X
X;Copyright notice:  Much of this code is modified or original OPS5 code which
X;is copyrighted by C. Lanny Forgy of CMU, and is used with his permission.
X;The rest is Copyright (c) Daniel Neiman, COINS Dept. UMass.  Permission is
X;given to use this code freely for personal, educational, or research
X;applications.  It is not to be sold, or incorporated into a for-profit
X;product without permission of the author.  The purpose of this code is to
X;illustrate alternative uses of the Rete net and alternative control
X;structures in OPS5.  No guarantees are made about its fitness for any
X;particular application, and no claim is made about the presence or absence
X;of bugs.
X;Version of 12/12/86
X
X;This file contains the necessary OPS5 modifications to perform
X;the RHS pattern matching/control function described in the
X;accompanying January '87 AI/Expert article.  The code is a supplement to
X;OPS5 and is intended to be loaded into a Common Lisp OPS5 image.
X
X;Note:  The idea behind this modification is to add memory to the &p node and
X;create functions to interrogate that memory at will.  Sort of an elegant
X;idea.  But, because it has to be patched into an implementation which was not
X;designed to do so, there's a lot of fairly nasty looking code here.  Take
X;heart, most of it is just slightly modified OPS5 code and can be pretty much
X;ignored.
X
;This variable is used to determine if we encountered a pmatch
;or map-pmatch in top-level Lisp code or while compiling an
;OPS5 production.

(proclaim '(special *compiling-rhs* *qnames* *cmp-p-context-stack*
                    *system-state-stack* *NMATCHES* *ipm-data-stack*))
(setq *qnames* nil)
(setq *system-state-stack* nil)
(setq *cmp-p-context-stack* nil)
(setq *compiling-rhs* nil)
(setq *ipm-data-stack* nil)

;Read macro for variable evaluation on the "RHS" of a pattern match.
;All &whatever macros on the right-hand side must be preceded by
;a ?.  This will expand to ($varbind '&whatever).
;To avoid having a plethora of read macros, ? does double duty:
;if ? precedes a (pmatch ....) form, then the expression is evaluated
;and the appropriate match structures are placed in the Rete net.  The
;code is replaced by (query queryN pattern-body).

;Read macro ? executes the following function.
(defun $$ipm$$dofunc$$ (strm chr)
  (let ((inp (read strm t nil t)))
    (cond ((atom inp)
           (if (eq '#\< (char (string inp) 0))  ;is it an OPS variable?
               `($varbind ',inp)
               (intern (concatenate 'string "?" (princ-to-string inp)))))
          ((member (car inp) '(map-pmatch pmatch) :test #'eq)
           (eval inp))
          (t
           inp))))


;make ? a (non-terminating) read macro
(set-macro-character #\? #'$$ipm$$dofunc$$ t)

(defun &query (rating name var-dope ce-var-dope rhs frhs)
  (prog (fp dp)
        (cond (*sendtocall*
               (setq fp *flag-part*)
               (setq dp *data-part*))
              (t
               (setq fp *alpha-flag-part*)
               (setq dp *alpha-data-part*)))
        (and (member fp '(nil old))
             (ipm-removepm name dp))
        (and fp (ipm-insertpm name dp))))


;Each conflict set element is a list of the following form:
;((p-name . data-part) (sorted wm-recency) special-case-number)

;I'm storing the results of the pattern matches on a property list, pmatches.

;modified OPS5 removecs:
;remove results of the pattern match
(defun ipm-removepm (name cr-data)
  (prog (inst cs pmtchs)
        (setq pmtchs (setq cs (get name 'pmatches)))
   l    (cond ((null cs)
               (return nil)))
        (setq inst (car cs))
        (setq cs (cdr cs))
        (and (not (top-levels-eq inst cr-data)) (go l))
        (putprop name (remove inst pmtchs) 'pmatches)))

;modified OPS5 insertcs:
;store the results of the pattern match.
;Stored as (data) rather than the original conflict set format
;of ((name . data) (order tags) rating).

(defun ipm-insertpm (name data)
  (let ((pmtch (get name 'pmatches)))
    (and (atom pmtch) (setq pmtch nil))
    (setq pmtch (cons data pmtch))
    (putprop name pmtch 'pmatches)
    pmtch))

;PMATCH is the RHS/Lisp equivalent of the (p rule) macro.  When used from Lisp,
;it should always be preceded by the ? read macro, so as to force evaluation
;at read time.  Otherwise, the Rete net won't be set up correctly.

(defmacro pmatch (&rest z)
  `(let ((pname (newsym query))
         (level (newsym level)))
     (finish-literalize)
     (princ '*)
     (cond ((and (listp (car ',z)) (eq (caar ',z) 'args))
            (ipm-compile-production pname (add-data-to-prod pname ',z))
            `(let ((tt  (make-ipm-data ',pname ,@(cdar ',z)))
                   (ans (query ',pname)))
               (restore-ipm-data tt)
               ans))
           (t
            (ipm-compile-production pname ',z)
            `(query ',pname)))))

(defun restore-ipm-data (current)
  (let ((inrhsflg *in-rhs*)
        (old (pop *ipm-data-stack*)))
    (setq *in-rhs* nil)
    (eval (list 'oremove current))  ;needs *in-rhs* to be nil
    (setq *in-rhs* inrhsflg)
    (if old
        (add-to-wm (car old) (cdr old)))))

;Note: the only way to pass input to the pattern matcher is to create a
;working memory element containing that input.  The following utility
;functions automagically create the ipm$data working memory element and
;modify the production to use it.

;MAKE-IPM-DATA takes a query name and a list of values and creates a
;working memory element of the form (ipm$data val1 val2 val3 ...).
;Saves old ipm$data elements on a stack so that no interference results.
(defun make-ipm-data (&rest arglst)
  (let ((inrhsflg *in-rhs*)
        (old (car (get 'ipm$data 'wmpart*))))
    (if old (push old *ipm-data-stack*))
    (setq *in-rhs* nil)
    (eval (list 'oremove (cdr old)))  ;needs *in-rhs* to be nil
    (setq *in-rhs* inrhsflg)
    ($reset)
    ($change 'ipm$data)
    (mapc #'(lambda (tab val)
              ($tab tab)
              ($change val))
          '(a b c d e f g h i j k l) (cdr arglst))
    ($tab 'for)  ;target data for particular query
    ($change (car arglst))
    ($assert)))

;Modify the production so that it accesses the data passed by the ipm$data wme.
(defun add-data-to-prod (pname prod)
  (let ((args (cdar prod))
        (body (cdr prod)))
    (cons
      `(ipm$data ,@(mapcan #'(lambda (slot arg)
                               (list '^ slot (concat '\< arg '\>)))
                           '(a b c d e f g h i j k l) args)
                 ^for ,pname)
      body)))

;FINISH-LITERALIZE: modified to define the special wme type ipm$data, which
;is used to transfer Lisp arguments to working memory.
(defun finish-literalize nil
  (cond ((not (null *class-list*))
         (cond ((not (member 'ipm$data *class-list*))
                (literalize ipm$data a b c d e f g h i j k l for)))
         (mapc (function note-user-assigns) *class-list*)
         (mapc (function assign-scalars) *class-list*)
         (mapc (function assign-vectors) *class-list*)
         (mapc (function put-ppdat) *class-list*)
         (mapc (function erase-literal-info) *class-list*)
         (setq *class-list* nil)
         (setq *buckets* nil))))


;Map the RHS across all matching data.
(defmacro map-pmatch (&rest z)
  `(let ((pname (newsym query))
         (level (newsym level)))
     (finish-literalize)
     (princ '*)
     (cond ((and (listp (car ',z)) (eq (caar ',z) 'args))
            (ipm-compile-production pname (add-data-to-prod pname ',z))
            `(let ((tt  (make-ipm-data ',pname ,@(cdar ',z)))
                   (ans (map-query ',pname)))
               (restore-ipm-data tt)
               ans))
           (t
            (ipm-compile-production pname ',z)
            `(map-query ',pname)))))


(defun ipm-compile-production (name matrix)
  (prog (erm)
        (setq *p-name* name)
        (cond (*compiling-rhs*
               (setq erm (catch '!error! (ipm-cmp-p-recursive name matrix))))
              (t
               (setq erm (catch '!error! (ipm-cmp-p name matrix)))))
        ;following line is modified to save production name on *qnames*
        (pushnew name *qnames*)
        (return erm)))

;Save the globals *feature-count* *ce-count* *vars* *ce-vars* *rhs-bound-vars*
;*rhs-bound-ce-vars* *last-branch* on a push-down stack.

;PUSH-GLOBAL-VARIABLES takes a stack name and a list of global variables,
;creates a list of pairs of the form ((varname . value) (varname . value) ...),
;and pushes it onto the indicated stack.

(defun push-global-variables (stack &rest vars)
  (push
    (mapcar #'(lambda (var)
                (cons var (eval var)))  ;copy may not be needed, but better safe....
            vars)
    (symbol-value stack)))

;POP-GLOBAL-VARIABLES takes a stack name, pops the most recent entry off the
;stack, and resets the values of the variables.
(defun pop-global-variables (stack)
  (mapcar #'(lambda (varbinding)
              (set (car varbinding) (cdr varbinding)))
          (pop (symbol-value stack))))  ;pop the global stack, not the local binding

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Fri Jan 30 00:49:12 1987
Date: Fri, 30 Jan 87 00:48:59 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #21
Status: R


AIList Digest           Thursday, 29 Jan 1987      Volume 5 : Issue 21

Today's Topics:
  Queries - Learning Programs & CMU's "GRAPES" &
    1987 Society for Computer Simulation Multiconference,
  Seminars - Circumscriptive Query Answering (SU) &
    Automation in Seismic Interpretation (SMU) &
    A Four-Valued Semantics for Terminological Logics (AT&T) &
    Learning when Irrelevant Variables Abound (IBM),
  Conference - AI and Law

----------------------------------------------------------------------

Date: 26 Jan 87 17:36:23 GMT
From: carlson@lll-tis-b.arpa  (John Carlson)
Subject: Learning programs wanted [Public Domain preferred]


Can anyone give me pointers to programs that learn?  In
particular, does anyone have a copy of the "Marvin"
program that appeared in Byte a couple of months ago?

John Carlson
--
INTERNET  carlson@lll-tis-b.ARPA
UUCP      ...lll-crg!styx!carlson

------------------------------

Date: 27 Jan 87 20:12:07 GMT
From: gatech!mcnc!rti-sel!hlw@hplabs.hp.com  (Hal Waters)
Subject: Info wanted on CMU's "GRAPES"

I wish to get information on CMU's "GRAPES".

Specifically, who wrote the software?
Is it a tool/shell for creating Intelligent Tutoring Systems?
Is it a Cognitive Simulation Model?
Is it Public Domain?
If so, or if not, how can I get a copy of the software?

Please mail responses to me at hlw@rti-sel.

Thanks in advance!
Hal Waters

------------------------------

Date: 27 Jan 87 11:48:00 EST
From: "MATHER, MICHAEL" <mather@ari-hq1.ARPA>
Reply-to: "MATHER, MICHAEL" <mather@ari-hq1.ARPA>
Subject: 1987 Society for Computer Simulation Multiconference

Information on the 1987 Society for Computer Simulation Multiconference was
published in the last AIList.  Does anyone know if this conference has already
taken place or, if not, when and where will it take place?

------------------------------

Date: 26 Jan 87  1434 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Circumscriptive Query Answering (SU)

    Commonsense and Nonmonotonic Reasoning Seminar


            A QUERY ANSWERING ALGORITHM
    FOR CIRCUMSCRIPTIVE AND CLOSED-WORLD THEORIES

              Teodor C. Przymusinski
          University of Texas at El Paso
                <ft00@utep.bitnet>

            Thursday, January 29, 4pm
              Bldg. 160, Room 161K

McCarthy's theory of circumscription appears to be the most powerful
among various non-monotonic logics designed to handle incomplete and
negative information in knowledge representation systems. In this
presentation we will describe a query answering algorithm for
circumscriptive theories.

The algorithm is based on a modified version of ordered linear resolution
(OL-resolution), which we call a MInimal model Linear Ordered resolution
(MILO-resolution). MILO-resolution constitutes a sound and complete procedure
to determine the existence of minimal models satisfying a given formula.

Our algorithm is the first query evaluation algorithm for general
circumscriptive theories. The Closed-World Assumption (CWA) and its
generalizations, the Generalized Closed-World Assumption (GCWA) and
the Extended Closed-World Assumption (ECWA), can be considered as
special forms of circumscription. Consequently, our algorithm also
applies to answering queries in theories using the Closed-World Assumption
and its generalizations. Similarly, since prioritized circumscription
is equivalent to a conjunction of (parallel) circumscriptions, the
algorithm can be used to answer queries in theories circumscribed by
prioritized circumscription.

------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Automation in Seismic Interpretation (SMU)

January 28, 1987, 1:30PM, 315SIC Computer Science Department,
Southern Methodist University, Dallas, Texas



                 AUTOMATION IN SEISMIC INTERPRETATION

                           Bruce Flinchbaugh
                           Texas Instruments


ABSTRACT

Interpreting three-dimensional seismic data is important for oil and
gas exploration.  Part of the problem is perception-intensive (experts
spend much of their time looking at the data), and part of the problem
is more cognition-intensive (experts reconcile perceived structures
with knowledge of plausible geology and other sources of information).
This talk will present a simple overview of the seismic data
acquisition and processing required to produce three-dimensional
seismic data volumes.  Then a variety of tools for assisting in the
interpretation of the data will be discussed.  For the most part
today's useful tools are aimed at solving the perception-intensive
problems.  Finally some open problems in seismic interpretation will
be described.


BIOGRAPHY

Dr. Flinchbaugh is a Senior Member of Technical Staff at T.I. in the
Computer Science Center Artificial Intelligence Laboratory, where he
is currently tackling semiconductor manufacturing automation problems.
Also at T.I. he has invented techniques assisting in the processing
and structural interpretation of three-dimensional seismic data.
Previous research in artificial intelligence, at M.I.T. and The Ohio
State University, addressed computational vision problems involving
the interpretation of motion and color.  Dr. Flinchbaugh received his
Ph.D. in computer and information science from The Ohio State
University in 1980.

------------------------------

Date: Thu, 22 Jan 87 10:46:34 est
From: allegra!dlm
Subject: Seminar - A Four-Valued Semantics for Terminological Logics
         (AT&T)

                  [Forwarded from the NL-KR Digest.]


Title:          A Four-Valued Semantics for Terminological Logics
Speaker:        Peter F. Patel-Schneider
Affiliation:    Schlumberger Palo Alto Research
Date:           Monday, February 2, 1987
Location:       AT&T Bell Laboratories - Murray Hill 3D-473
Sponsor:        Ron Brachman


Terminological logics (also called frame-based description languages)
are a clarification and formalization of some of the ideas underlying
semantic networks and frame-based systems.  The fundamental
relationship in these logics is whether one concept (frame, class) is
more general than (subsumes) another.  This relationship forms the
basis for important operations, including recognition, classification,
and realization, in knowledge representation systems incorporating
terminological logics.

However, determining subsumption is computationally intractable under
the standard semantics for terminological logics, even for languages
of very limited expressive power.  Several partial solutions to this
problem are used in knowledge representation systems, such as NIKL,
that incorporate terminological logics, but none of these solutions
are satisfactory if the system is to be of general use in representing
knowledge.

A new solution to this problem is to use a weaker, four-valued
semantics for terminological logics, thus legitimizing a smaller set
of subsumption relationships.  In this way a computationally tractable
knowledge representation system incorporating a more expressively
powerful terminological logic can be built.

------------------------------

Date: Tue 27 Jan 87 20:05:30-PST
From: Ramsey Haddad <HADDAD@Sushi.Stanford.EDU>
Subject: Seminar - Learning when Irrelevant Variables Abound (IBM)

Next BATS will be at IBM Almaden Research Center on Friday, February 13.

Following is a preliminary schedule:

 9:45 - 10:00   Coffee + +

10:00 - 11:00  "Algebraic Methods in the Theory of Lower Bounds
               for Boolean Circuit Complexity"
               Roman Smolensky, U.C. Berkeley.

11:00 - 12:00  "The Decision Problem for the Probabilities
                of Higher-Order Properties"
                Phokion Kolaitis, IBM Almaden.

 1:00 -  2:00   "Learning When Irrelevant Features Abound"
                 Nick Littlestone, U.C. Santa Cruz.

 2:00 - 3:00     "Fast Parallel Algorithms for Chordal Graphs"
                 Alejandro A. Schaffer, Stanford University.

===================================================================

              "Learning When Irrelevant Features Abound"

                         Nick Littlestone
                         U.C. Santa Cruz

Valiant and others have studied the problem of learning various classes
of Boolean functions from examples.  Here we discuss on-line learning of
these functions.  In on-line learning, the learner responds to each
example according to a current hypothesis.  Then the learner updates the
hypothesis, if necessary, based on the correct classification of the
example.  This is the form of the Perceptron learning algorithms, in
which updates to the weights occur after each mistake.  One natural
measure of the quality of learning in the on-line setting is the number
of mistakes the learner makes.  For suitable classes of functions,
on-line learning algorithms are available which make a bounded number of
mistakes, with the bound independent of the number of examples seen by
the learner.  We present one such algorithm, which learns disjunctive
Boolean functions.  The algorithm can be expressed as a linear-threshold
algorithm.  If the examples include a large number of irrelevant
variables, the algorithm does very well, the number of mistakes
depending only logarithmically on the number of irrelevant variables.
More specifically, if the function being learned is of the form
f(x_1, ..., x_n) = x_{i_1} OR ... OR x_{i_k}, then the mistake bound is
O(k log n).  If k = O(log n) then this bound is significantly better
than that given by the Perceptron convergence theorem.
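[The multiplicative-update, linear-threshold scheme the abstract describes can
be sketched concretely.  The code below is the standard Winnow1 formulation
for monotone disjunctions as it appears in the literature, not necessarily the
talk's exact algorithm; the function names are ours, and Python is used purely
for illustration:]

```python
# Sketch of a Winnow-style on-line learner for monotone disjunctions.
# Weights start at 1; predict 1 iff the summed weight of the active
# variables reaches the threshold.  Updates happen only on mistakes:
# double the active weights on a false negative, eliminate them on a
# false positive.

def winnow_predict(weights, x, threshold):
    # Linear-threshold prediction rule over the active (x_i = 1) variables.
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= threshold else 0

def winnow_learn(examples, n, weights=None):
    # Process (x, label) examples on-line; return final weights and the
    # total number of mistakes made along the way.
    if weights is None:
        weights = [1.0] * n
    threshold = n
    mistakes = 0
    for x, label in examples:
        guess = winnow_predict(weights, x, threshold)
        if guess != label:
            mistakes += 1
            if label == 1:   # false negative: promote the active weights
                weights = [w * 2 if xi else w for w, xi in zip(weights, x)]
            else:            # false positive: eliminate the active weights
                weights = [0.0 if xi else w for w, xi in zip(weights, x)]
    return weights, mistakes
```

[With threshold n and promotion factor 2, a promotion/elimination argument
bounds false negatives by k(log2 n + 1) and false positives by one more than
that, so the total is at most 2k(log2 n + 1) + 1 mistakes, however many
irrelevant variables are present: the O(k log n) bound quoted above.]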

------------------------------

Date: 8 Jan 87 14:30:33 EST
From: MCCARTY@RED.RUTGERS.EDU
Subject: Conference - AI and Law


                            FINAL CALL FOR PAPERS:

                       First International Conference on
                        ARTIFICIAL INTELLIGENCE AND LAW

                                May 27-29, 1987
                            Northeastern University
                          Boston, Massachusetts, USA

In recent years there has been an increased interest in the applications of
artificial intelligence to law.  Some of this interest is due to the potential
practical applications:  a number of researchers are developing legal expert
systems, intended as an aid to lawyers and judges; other researchers are
developing conceptual legal retrieval systems, intended as a complement to the
existing full-text legal retrieval systems.  But the problems in this field
are very difficult.  The natural language of the law is exceedingly complex,
and it is grounded in the fundamental patterns of human common sense
reasoning.  Thus, many researchers have also adopted the law as an ideal
problem domain in which to tackle some of the basic theoretical issues in AI:
the representation of common sense concepts; the process of reasoning with
concrete examples; the construction and use of analogies; etc.  There is
reason to believe that a thorough interdisciplinary approach to these problems
will have significance for both fields, with both practical and theoretical
benefits.

The purpose of this First International Conference on Artificial Intelligence
and Law is to stimulate further collaboration between AI researchers and
lawyers, and to provide a forum for the latest research results in the field.
The conference is sponsored by the Center for Law and Computer Science at
Northeastern University.  The General Chair is: Carole D. Hafner, College of
Computer Science, Northeastern University, 360 Huntington Avenue, Boston MA
02115, USA; (617) 437-5116 or (617) 437-2462; hafner.northeastern@csnet-relay.

Authors are invited to contribute papers on the following topics:

   - Legal Expert Systems
   - Conceptual Legal Retrieval Systems
   - Automatic Processing of Natural Legal Texts
   - Computational Models of Legal Reasoning

In addition, papers on the relevant theoretical issues in AI are also invited,
if the relationship to the law can be clearly demonstrated.  It is important
that authors identify the original contributions presented in their papers,
and that they include a comparison with previous work.  Each submission will
be reviewed by at least three members of the Program Committee (listed below),
and judged as to its originality, quality and significance.

Authors should submit six (6) copies of an Extended Abstract (6 to 8 pages) by
January 15, 1987, to the Program Chair: L. Thorne McCarty, Department of
Computer Science, Rutgers University, New Brunswick NJ 08903, USA; (201)
932-2657; mccarty@rutgers.arpa.  Notification of acceptance or rejection will
be sent out by March 1, 1987.  Final camera-ready copy of the complete paper
(up to 15 pages) will be due by April 15, 1987.

Conference Chair:        Carole D. Hafner         Northeastern University

Program Chair:           L. Thorne McCarty        Rutgers University

Program Committee:       Donald H. Berman         Northeastern University
                         Michael G. Dyer          UCLA
                         Edwina L. Rissland       University of Massachusetts
                         Marek J. Sergot          Imperial College, London
                         Donald A. Waterman       The RAND Corporation

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Fri Jan 30 00:48:51 1987
Date: Fri, 30 Jan 87 00:48:43 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #22
Status: R


AIList Digest           Thursday, 29 Jan 1987      Volume 5 : Issue 22

Today's Topics:
  Policy - AI Magazine Code,
  Philosophy - Methodological Epiphenomenalism & Consciousness,
  Psychology - Objective Measurement of Subjective Variables

----------------------------------------------------------------------

Date: Wed 28 Jan 87 10:26:52-PST
From: PAT <HAYES@SPAR-20.ARPA>
Reply-to: HAYES@[128.58.1.2]
Subject: Re: AIList Digest   V5 #18

We've had some bitches about too much philosophy, but I never expected
to be sent CODE to read.
Pat Hayes
PS Especially with price lists in the comments.  Anyone who is willing to pay
$50.00 for a backward chaining program shouldn't be reading AIList Digest.

------------------------------

Date: 28 Jan 87 14:57:08 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: AIList Digest   V5 #20

   AIList Digest           Wednesday, 28 Jan 1987     Volume 5 : Issue 20
   Today's Topics:
     AI Expert Magazine Sources (Part 3 of 22)


I can't believe you're really sending 22 of these moby messages to the
entire AIList.  Surely you could have collected requests from
interested individuals and then sent it only to them.

------------------------------

Date: Wed 28 Jan 87 22:12:59-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Code Policy


The bulk of this code mailing does bother me, but there seems to be
at least as much interest in it as in the seminar notices, bibliographies,
and philosophy discussions.  AIList reaches thousands of students, and
a fair proportion are no doubt interested in examining the code.  The
initial offer of the code drew only positive feedback, so far as I
know.  Even the mailing of the entire stream (in nine 50-K files) on
the comp.ai distribution drew no public protest.  I'm still open to
discussion, but I'll continue the series unless there is substantial
protest.  Keeping up with current issues of AI Magazine will be much
less disruptive once this backlog is cleared up.

The mailing process is much more efficient for batched addresses
than for individual mailings (which send multiple copies through
many intermediate and destination hosts), so individual replies
seem out of the question -- and I can't afford to condense hundreds
of requests into a batch distribution list.  (Can't somebody invent
an AI program to do that?)

It would be nice if the code could be distributed by FTP, but that
only works for the Arpanet readership.  Most of the people signing
on in the last year or two are on BITNET.  I still haven't gotten
around to finding BITNET relay sites, so there is no convenient way
to split the mailing.  Anyway, that would still force hundreds of
Arpanet readers to go through the FTP process, and it is probably
more cost effective to just mail out the code and let uninterested
readers ignore it.

Suggestions are welcome.

                                        -- Ken Laws

------------------------------

Date: 26 Jan 87 18:55:54 GMT
From: clyde!watmath!sunybcs!colonel@rutgers.rutgers.edu  (Col. G. L.
      Sicherman)
Subject: Re: Minsky on Mind(s)

>               ... It is a way for bodily tissues to get the attention
> of the reasoning centers.  Instead of just setting some "damaged
> tooth" bit, the injured nerve grabs the brain by the lapels and says
> "I'm going to make life miserable for you until you solve my problem."

This metaphor seems to suggest that consciousness wars with itself.  I
would prefer to say that the body grabs the brain by the handles, like
a hedge clipper or a Geiger counter.  In other words, just treat the
mind as a tool, without any personality of its own.  After all, it's the
body that is real; the mind is only an abstraction.

By the way, it's well known that if the brain has a twist in it, it
needs only one handle.  Ask any topologist!
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

------------------------------

Date: Mon, 26 Jan 87 23:57:40 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Methodological Epiphenomenalism

"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai:

>       Subject: Consciousness as a Superfluous Concept, But So What?

So methodological epiphenomenalism.

>       Consciousness may be as superfluous (wrt evolution) as earlobes.
>       That hardly goes to show that it ain't there.

Agreed. It only goes to show that methodological epiphenomenalism may
indeed be the right research strategy. (The "why" is a methodological
and logical question, not an evolutionary one. I'm arguing that no
evolutionary scenario will help. And it was never suggested that
consciousness "ain't there." If it weren't, there would be no
mind/body problem.)

>       I don't think it does NEED to be so. It just is so.

Fine. Now what are you going to do about it, methodologically speaking?

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: 27 Jan 87 19:44:16 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: Objective measurement of subjective variables


adam@mtund.UUCP (Adam V. Reed), of AT&T ISL Middletown NJ USA, wrote:

>       Stevan Harnad makes an unstated assumption... that subjective
>       variables are not amenable to objective measurement. But if by
>       "objective" Steve means, as I think he does, "observer-invariant", then
>       this assumption is demonstrably false.

I do make the assumption (let me state it boldly) that subjective
variables are not objectively measurable (nor are they objectively
explainable) and that that's the mind/body problem. I don't know what
"observer-invariant" means, but if it means the same thing as in
physics -- which is that the very same physical phenomenon can
occur independently of any particular observation, and can in
principle be measured by any observer, then individuals' private events
certainly are not such, since the only eligible observer is the
subject of the experience himself (and without an observer there is no
experience -- I'll return to this below). I can't observe yours and you
can't observe mine. That's one of the definitive features of the
subjective/objective distinction itself, and it's intimately related to
the nature of experience, i.e., of subjectivity, of consciousness.

>       Whether or not a stimulus is experienced as belonging to some target
>       category is clearly a private event...[This is followed by an
>       interesting thought-experiment in which the signal detection parameter
>       d' could be calculated for himself by a subject after an appropriate
>       series of trials with feedback and no overt response.]... the observer
>       would be able to mentally compute d' without engaging in any externally
>       observable behavior whatever.

Unfortunately, this in no way refutes the claim that subjective experience
cannot be objectively measured or explained. Not only is there (1) no way
of objectively testing whether the subject's covert calculations on
that series of trials were correct, not only is there (2) no way of
getting any data AT ALL without his overt mega-response at the end
(unless, of course, the subject is the experimenter, which makes the
whole exercise solipsistic), but, worst of all, (3) the very same
performance data could be generated by presenting inputs to a
computer's transducer, and no matter how accurately it reported its
d', we presumably wouldn't want to conclude that it had experienced anything
at all. So what's OBJECTIVELY different about the human case?
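
[For concreteness, the d' at issue is the standard signal-detection
sensitivity index: d' = z(hit rate) - z(false-alarm rate).  A minimal sketch
of the overt computation; Python is used purely as illustration, and the
function name is ours:]

```python
# d' (d-prime): signal-detection sensitivity, computed from the hit rate
# (responding "signal" when the signal is present) and the false-alarm
# rate (responding "signal" when it is absent).
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # z-transform each rate through the inverse standard-normal CDF
    # and take the difference.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

[A hit rate of .8413 and a false-alarm rate of .1587 place the criterion one
standard deviation from each distribution mean, giving a d' of about 2.]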

At best, what's being objectively measured happens to correlate
reliably with subjective experience (as we can each confirm in our own
cases only -- privately and subjectively). What we are actually measuring
objectively is merely behavior (and, if we know what to look for, also
its neural substrate). By the usual objective techniques of scientific
inference on these data we can then go on to formulate (again objective)
hypotheses about underlying functional (causal) mechanisms. These should
be testable and may even be valid (all likewise objectively). But the
testability and validity of these hypotheses will always be objectively
independent of any experiential correlations (i.e., the presence or
absence of consciousness).

To put it my standard stark way: The psychophysics of a conscious
organism (or device) will always be objectively identical to that
of a turing-indistinguishable unconscious organism (or device) that
merely BEHAVES EXACTLY AS IF it were conscious. (It is irrelevant whether
there are or could be such organisms or devices; what's at issue here is
objectivity. Moreover, the "reliability" of the correlations is of
course objectively untestable.) This leaves subjective experience a
mere "nomological dangler" (as the old identity theorists used to call
it) in a lawful psychophysical account. We each (presumably) know it's
there from our respective subjective observations. But, objectively speaking,
psychophysics is only the study of, say, the detecting and discriminating
capacity (i.e., behavior) of our transducer systems, NOT the qualities of our
conscious experience, no matter how tight the subjective correlation.
That's no limit on psychophysics. We can do it as if it were the study
of our conscious experience, and the correlations may all be real,
even causal. But the mind/body problem and the problem of objective
measurement and explanation remain completely untouched by our findings,
both in practice and in principle.

So even in psychophysics, the appropriate research strategy seems to
be methodological epiphenomenalism. If you disagree, answer this: What
MORE is added to our empirical mission in doing psychophysics if we
insist that we are not "merely" trying to account for the underlying
regularities and causal mechanisms of detection, discrimination,
categorization (etc.) PERFORMANCE, but of the qualitative experience
accompanying and "mediating" it? How would someone who wanted to
undertake the latter rather than merely the former go about things any
differently, and how would his methods and findings differ (apart from
being embellished with a subjective interpretation)? Would there be any
OBJECTIVE difference?

I have no lack of respect for psychophysics, and what it can tell us
about the functional basis of categorization. (I've just edited and
contributed to a book on it.) But I have no illusions about its being
in any better a position to make objective inroads on the mind/body
problem than neuroscience, cognitive psychology, artificial
intelligence or evolutionary biology -- and they're in no position at all.

>       In principle, two investigators could perform the [above] experiment
>       ...and obtain objective (in the sense of observer-independent)
>       results as to the form of the resulting lawful relationships between,
>       for example, d' and memory retention time, *without engaging in any
>       externally observable behavior until it came time to compare results*.

I'd be interested in knowing how, if I were one of the experimenters
and Adam Reed were the other, he could get "objective
(observer-independent) results" on my experience and I on his. Of
course, if we make some (question-begging) assumptions about the fact
that the experience of our respective alter egos (a) exists, (b) is
similar to our own, and (c) is veridically reflected by the "form" of the
overt outcome of our respective covert calculations, then we'd have some
agreement, but I'd hardly dare to say we had objectivity.

(What, by the way, is the difference in principle between overt behavior
on every trial and overt behavior after a complex-series-of-trials?
Whether I'm detecting individual signals or calculating cumulating d's
or even more complex psychophysical functions, I'm just an
organism/device that's behaving in a certain way under certain
conditions. And you're just a theorist making inferences about the
regularities underlying my performance. Where does "experience" come
into it, objectively speaking? -- And you're surely not suggesting that
psychophysics be practiced as a solipsistic science, each experimenter
serving as his own sole subject: for from solipsistic methods you can
only arrive at solipsistic conclusions, trivially observer-invariant,
but hardly objective.)
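
[For readers unfamiliar with the d' statistic mentioned in the exchange above: it is the standard signal-detection measure of sensitivity, computed from an observer's hit rate and false-alarm rate. A minimal sketch, assuming nothing about the actual experiment under discussion (the function name and the example counts are invented for illustration):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# An observer with 80% hits and 10% false alarms:
sensitivity = d_prime(80, 20, 10, 90)
```

Nothing in this calculation refers to experience; it is a function of response counts alone, which is just Harnad's point about psychophysical PERFORMANCE.]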

>       The following analogy (proposed, if I remember correctly, by Robert
>       Efron) may illuminate what is happening here. Two physicists, A and B,
>       live in countries with closed borders, so that they may never visit each
>       other's laboratories and personally observe each other's experiments.
>       Relative to each other's personal perception, their experiments are
>       as private as the conscious experiences of different observers. But, by
>       replicating each other's experiments in their respective laboratories,
>       they are capable of arriving at objective knowledge. This is also true,
>       I submit, of the psychological study of private, "subjective"
>       experience.

As far as I can see, Efron's analogy casts no light at all.
It merely reminds us that even normal objectivity in science (intersubjective
repeatability) happens to be piggy-backing on the existence of
subjective experience. We are not, after all, unconscious automata. When we
perform an "observation," it is not ONLY objective, in the sense that
anyone in principle can perform the same observation and arrive at the
same result. There is also something it is "like" to observe
something -- observations are also conscious experiences.

But apart from some voodoo in certain quantum mechanical meta-theories,
the subjective aspect of objective observations in physics seems to be
nothing but an innocent fellow-traveller: The outcome of the
Michelson-Morley Experiment would presumably be the same if it were
performed by an unconscious automaton, or even if WE were unconscious automata.
This is decidedly NOT true of the (untouched) subjective aspect of a
psychophysical experiment. Observer-independent "experience" is a
contradiction in terms.

(Most scientists, by the way, do not construe repeatability to require
travelling directly to one another's labs; rather, it's a matter of
recreating the same objective conditions. Unfortunately, this does not
generalize to the replication of anyone else's private events, or even
to the EXISTENCE of any private events other than one's own.)

Note that I am not denying that objective knowledge can be derived
from psychophysics; I'm only denying that this can amount to objective
knowledge about anything MORE than psychophysical performance and its
underlying causal substrate. The accompanying subjective phenomenology is
simply not part of the objective story science can tell, no matter how, and
how tightly, it happens to be coupled to it in reality. That's the
mind/body problem, and a fundamental limit on objective inquiry.
Methodological epiphenomenalism recommends we face it and live with
it, since not that much is lost. The "incompleteness" of an objective
account is, after all, just a subjective problem. But supposing away
the incompleteness -- by wishful thinking, hopeful over-interpretation,
hidden (subjective) premises or blurring of the objective/subjective
distinction -- is a logical problem.
--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: Mon, 26 Jan 87 23:25:17 est
From: mnetor!dciem!mmt@seismo.CSS.GOV
Subject: Necessity of consciousness

Newsgroups: mod.ai
Subject: Re: Minsky on Mind(s)
Summary:
Expires:
References: <8701221730.AA04257@seismo.CSS.GOV>
Sender:
Reply-To: mmt@dciem.UUCP (Martin Taylor)
Followup-To:
Distribution:
Organization: D.C.I.E.M., Toronto, Canada
Keywords:

I tried to send this direct to Steve Harnad, but his signature is
incorrect: seismo thinks princeton is an "unknown host".  Also mail
to him through allegra bounced.
===============
>just answer the following question: When the dog's tooth is injured,
>and it does the various things it does to remedy this -- inflammation
>reaction, release of white blood cells, avoidance of chewing on that
>side, seeking soft foods, giving signs of distress to his owner, etc. etc.
>-- why do the processes that give rise to all these sequelae ALSO need to
>give rise to any pain (or any conscious experience at all) rather
>than doing the very same tissue-healing and protective-behavioral job
>completely unconsciously? Why is the dog not a turing-indistinguishable
>automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
>does not? That's another variant of the mind/body problem, and it's what
>you're up against when you're trying to justify interpreting physical
>processes as conscious ones. Anything short of a convincing answer to
>this amounts to mere hand-waving on behalf of the conscious interpretation
>of your proposed processes.]

I'm not taking up your challenge, but I think you have overstated
the requirements for a challenge.  Ockham's razor demands only that
the simplest explanation be accepted, and I take this to mean inclusive
of boundary conditions AND preconceptions.  The acceptability of a
hypothesis must be relative to the observer (say, scientist), since
we have no access to absolute truth.  Hence, the challenge should be
to show that the concept of consciousness in the {dog|other person|automaton}
provides a simpler description of the world than the elimination of
the concept of consciousness does.

The whole-world description includes your preconceptions, and a hypothesis
that requires you to change those preconceptions is FROM YOUR VIEWPOINT
more complex than one that does not.  Since you start from the
preconception that consciousness need not (or perhaps should not)
be invoked, you need stronger proof than would, say, an animist.

Your challenge should ask for a demonstration that the facts of observable
behaviour can be more succinctly described using consciousness than not
using it.  Obviously, there can be no demonstration of the necessity of
consciousness, since ALL observable behaviour could be the result of
remotely controlled puppetry (except your own, of course).  But this
hypothesis is markedly more complex than a hypothesis derived from
psychological principles, since every item of behaviour must be separately
described as part of the boundary conditions.

I have a mathematization of this argument, if you are interested.  It is
about 15 years old, but it still seems to hold up pretty well.  Ockham's
razor isn't just a good idea, it is informationally the correct means
of selecting hypotheses.  However, like any other razor, it must
be used correctly, and that means that one cannot ignore the boundary
conditions that must be stated when using the hypothesis to make specific
predictions or descriptions.  Personally, I think that hypotheses that
allow other people (and perhaps some animals) to have consciousness are
simpler than hypotheses that require me to describe myself as a special
case.  Hence, Ockham's razor forces me to prefer the hypothesis that other
beings have consciousness.  The same does not hold true for silicon-based
behaving entities, because I already have hypotheses that explain their
behaviour without invoking consciousness, and these hypotheses already
include the statement that silicon-based beings are different from me.
Any question of silicon-based consciousness must be argued on a different
basis, and I think such arguments are likely to turn on personal preference
rather than on the facts of behaviour.
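
[Taylor's claim that Ockham's razor is "informationally the correct means of selecting hypotheses" can be read as a minimum-description-length argument: prefer the hypothesis minimizing the bits needed to state it plus the bits needed to encode the observations given it. The puppetry hypothesis, which must list every item of behaviour in its boundary conditions, pays for each item in the first term. A toy sketch of that bookkeeping -- all names, probabilities, and bit counts below are invented for illustration:

```python
import math

def description_length(hypothesis_bits, observations, predict):
    """Two-part code length: bits to state the hypothesis itself,
    plus bits to encode each observation under the probability the
    hypothesis assigns to it (-log2 p bits per observation)."""
    data_bits = sum(-math.log2(predict(obs)) for obs in observations)
    return hypothesis_bits + data_bits

# 100 repeated observations of the same behaviour.
behaviour = ["seeks-soft-food"] * 100

# A compact psychological hypothesis that predicts the behaviour well...
compact = description_length(20, behaviour, lambda obs: 0.9)

# ...versus a "puppetry" hypothesis that must enumerate every item as a
# boundary condition (say 8 bits each) and predicts nothing (p = 0.5).
puppetry = description_length(8 * 100, behaviour, lambda obs: 0.5)
```

Under this accounting `compact` is far shorter than `puppetry`, which is the shape of Taylor's argument, though of course not his 15-year-old mathematization itself.]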

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jan 31 00:54:54 1987
Date: Sat, 31 Jan 87 00:54:43 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #23
Status: R


AIList Digest            Friday, 30 Jan 1987       Volume 5 : Issue 23

Today's Topics:
  Code - AI Expert Magazine Sources (Part 4 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 4 of 22)

X
X
X;This version of cmp-p is used when compiling patterns on the
X;righthand side in which we want variable bindings consistent
X;with variable bindings on the LHS.  Effectively, the RHS
X;pattern is just treated as a continuation of the LHS
X;pattern, except, of course, that the results of the RHS
X;pattern match will not affect the firing of the production.
X(defun ipm-cmp-p-recursive (name matrix)
X  (prog (m bakptrs srhs frhs)
X  (push-global-variables '*cmp-p-context-stack* '*matrix*
X      '*feature-count* '*ce-count*
X      '*vars* '*ce-vars*
X      '*rhs-bound-vars* '*rhs-bound-ce-vars*
X      '*last-branch* '*last-node*)
X        (prepare-lex matrix)
X(setq *rhs-bound-vars* nil)
X(setq *rhs-bound-ce-vars* nil)
X        (setq m (rest-of-p))
X   l1   (and (end-of-p) (\%error '|no '-->' in production| m))
X        (cmp-prin)
X        (setq bakptrs (cons *last-branch* bakptrs))
X        (or (eq '--> (peek-lex)) (go l1))
X        (lex)
X(setq srhs (rest-of-p)) ; get righthand side
X(if (setq frhs (cdr (memq '<-- srhs)))
X    (setq srhs (remove-frhs srhs)))
X(ipm-check-rhs srhs)
X;note, we change the structure of the &query node to have a tail
X;component.  This is the action to take on a failed pattern match
X        (link-new-node (list '&query
X                             *feature-count*
X     name
X                             (encode-dope)
X                             (encode-ce-dope)
X                             (cons 'progn srhs)
X     (cons 'progn frhs)))
X        (putprop name (cdr (nreverse bakptrs)) 'backpointers)
X(putprop name matrix 'production)
X        (putprop name *last-node* 'topnode)
X(pop-global-variables *cmp-p-context-stack*)
X))
X
X;Extract failed pattern match rhs actions from production.
X(defun remove-frhs(rhs)
X   (do ((lis nil (append lis (list inp)))
X(inp (car rhs) (car rhs)))
X       ((eq inp '<--)
X(return lis))
X      (setq rhs (cdr rhs))
X       ))
X
X;;Modified version of OPS5 cmp-p, compiles pattern match and links
X;&query node into Rete net. If pmatch occurs in the righthand side of the
X; rule, then
X;nodes are linked to tree generated by rule's LHS.
X(defun ipm-cmp-p (name matrix)
X  (prog (m bakptrs srhs frhs)
X        (prepare-lex matrix)
X        (excise-p name)
X        (setq bakptrs nil)
X        (setq *pcount* (1+ *pcount*))
X        (setq *feature-count* 0.)
X(setq *ce-count* 0)
X        (setq *vars* nil)
X        (setq *ce-vars* nil)
X(setq *rhs-bound-vars* nil)
X(setq *rhs-bound-ce-vars* nil)
X        (setq *last-branch* nil)
X        (setq m (rest-of-p))
X   l1   (and (end-of-p) (\%error '|no '-->' in production| m))
X        (cmp-prin)
X        (setq bakptrs (cons *last-branch* bakptrs))
X        (or (eq '--> (peek-lex)) (go l1))
X        (lex)
X(setq srhs (rest-of-p)) ; get righthand side
X(if (setq frhs (cdr (memq '<-- srhs)))
X    (setq srhs (remove-frhs srhs)))
X(ipm-check-rhs srhs)
X;note, we change the structure of the &query node to have a tail
X;component.  This is the action to take on a failed pattern match
X        (link-new-node (list '&query
X                             *feature-count*
X                             name
X                             (encode-dope)
X                             (encode-ce-dope)
X                             (cons 'progn srhs)
X     (cons 'progn frhs)))
X(terpri)
X        (putprop name (cdr (nreverse bakptrs)) 'backpointers)
X(putprop name matrix 'production)
X        (putprop name *last-node* 'topnode)))
X
X;Modified OPS5 code, sets *compiling-rhs* variable.
X(defun check-rhs (rhs)
X    (setq *compiling-rhs* t)
X    (mapc (function check-action) rhs)
X    (setq *compiling-rhs* nil))
X
X
X;rhs part to be evaluated upon pattern match failure
X
X(defun frhs-part (pnode) (car (last pnode)))
X
X;;returns value of last expression in RHS
X(defun query (qname)
X  (ipm-eval-query qname (car (get qname 'pmatches))))
X
X;IPM-EVAL-QUERY: Given a pointer to a query and the associated data,
X; this function
X;sets up the appropriate environment to evaluate the RHS of the pattern match.
X;This is a modified eval-rhs from OPS5.
X
X(defun ipm-eval-query (pname data)
X  (let ((node (get pname 'topnode))
X        (ans nil)
X        (saved nil))
X   (if (setq saved *in-rhs*) ;in case of recursive call,save system state and
X       (save-system-state))  ;set saved flag
X    (setq *data-matched* data)
X    (setq *p-name* pname)
X    (setq *last* nil)
X    (setq node (get pname 'topnode))
X    (ipm-init-var-mem (var-part node))
X    (ipm-init-var-nmatches pname)
X    (ipm-init-ce-var-mem (ce-var-part node))
X    (setq *in-rhs* t)
X    (setq ans
X      (if (neq *NMATCHES* 0) ;if match failed, execute failpart, if any
X(eval (rhs-part node))
X(eval (frhs-part node)) ))
X    (setq *in-rhs* nil)
X   (if saved
X      (restore-system-state))
X    ans
X))
X
X;map-query is just like query, except that we are performing the
X;eval operation for each match.  Therefore, some of the initialization
X;must be factored out of ipm-eval-map-query.
X(defun map-query(qname)
X   (let* ((node (get qname 'topnode))
X          (ans nil)
X          (saved nil))
X   (if (setq saved *in-rhs*) ;in case of recursive call,save system state and
X       (save-system-state))  ;set saved flag
X      (setq *p-name* qname)
X      (setq *last* nil)
X  (setq ans
X   (if (> (length (get qname 'pmatches)) 0)
X      (mapcar '(lambda(qinstance)
X            (ipm-eval-map-query qname qinstance node))
X          (get qname 'pmatches))
X      (eval (frhs-part node)) ))
X   (if saved
X      (restore-system-state))
X   ans))
X
X(defun ipm-eval-map-query (qname data node)
X  (let ((ans))
X    (setq *data-matched* data)
X    (setq node (get qname 'topnode))
X    (ipm-init-var-mem (var-part node))
X    (ipm-init-var-nmatches qname)
X    (ipm-init-ce-var-mem (ce-var-part node))
X    (setq *in-rhs* t)
X    (setq ans (eval (rhs-part node)))
X    (setq *in-rhs* nil)
X    ans
X ))
X
X
X;the variable &nmatches is bound to the number of production
X;matches in each query.  Useful for counting applications and
X;deciding if any matches succeeded.
X
X(defun ipm-init-var-nmatches(pname)
X    (setq *NMATCHES* (length (get pname 'pmatches)))
X    (setq *variable-memory* ;remove previous number of matches
X  (remove (assoc '\<NMATCHES\> *variable-memory*) *variable-memory*))
X    (setq *variable-memory*  ;set up &NMATCHES environ. variable
X          (cons (cons '\<NMATCHES\> *NMATCHES*)
X*variable-memory*)))
X
X;More modified OPS5 code.  Initializes the variable and ce-variable bindings
X;to be consistent with the results of the pattern match.
X(defun ipm-init-var-mem (vlist)
X  (prog (v ind r)
X(or *in-rhs* ;if we're in rhs, then global is already set
X          (setq *variable-memory* nil))
X   top  (and (atom vlist) (return nil))
X        (setq v (car vlist))
X        (setq ind (cadr vlist))
X        (setq vlist (cddr vlist))
X        (setq r (gelm *data-matched* ind))
X        (setq *variable-memory* (cons (cons v r) *variable-memory*))
X        (go top)))
X
X(defun ipm-init-ce-var-mem (vlist)
X  (prog (v ind r)
X(or *in-rhs* ;if we're in rhs, then global is already set
X          (setq *ce-variable-memory* nil))
X   top  (and (atom vlist) (return nil))
X        (setq v (car vlist))
X        (setq ind (cadr vlist))
X        (setq vlist (cddr vlist))
X        (setq r (ce-gelm *data-matched* ind))
X        (setq *ce-variable-memory*
X              (cons (cons v r) *ce-variable-memory*))
X        (go top)))
X
X(defun save-system-state()
X   (push-global-variables '*system-state-stack* '*ce-variable-memory*
X                          '*data-matched*
X                          '*variable-memory* '*NMATCHES* '*p-name* '*in-rhs*))
X
X(defun restore-system-state()
X   (pop-global-variables *system-state-stack*))
X
X;changed OPS5 code to accept &query
X(defun link-new-node (r)
X  (cond ((not (member (car r) '(&query &p &mem &two &and &not)))
X (setq *feature-count* (1+ *feature-count*))))
X  (setq *virtual-cnt* (1+ *virtual-cnt*))
X  (setq *last-node* (link-left *last-node* r)))
X
X(defun ipm-check-rhs (rhs)
X    (setq *compiling-rhs* t)
X    (mapc (function ipm-check-action) rhs)
X    (setq *compiling-rhs* nil))
X
X(defun myreplace(x y)
X   (rplaca x (car y))
X   (rplacd x (cdr y)))
X
X;This check-action is called by pmatch or map-pmatch macros
X(defun ipm-check-action (x)
X  (prog (a)
X    (cond ((atom x)
X           (%warn '|atomic action| x)
X   (return nil)))
X    (setq a (setq *action-type* (car x)))
X   (cond ((eq a 'bind) (check-bind x))
X          ((eq a 'query) nil) ;never happens?
X          ((eq a 'map-query) nil) ;never happens?
X  ;if we come across an unexpanded pmatch, expand and compile it.
X  ;replace with result
X          ((eq a 'pmatch) (myreplace x (eval x)))
X          ((eq a 'map-pmatch) (myreplace x (eval x)))
X          ((eq a 'cbind) (check-cbind x))
X          ((eq a 'make) (check-make x))
X          ((eq a 'modify) (check-modify x))
X          ((eq a 'remove) (check-remove x))
X          ((eq a 'write) (check-write x))
X          ((eq a 'call) (check-call x))
X          ((eq a 'halt) (check-halt x))
X          ((eq a 'openfile) (check-openfile x))
X          ((eq a 'closefile) (check-closefile x))
X          ((eq a 'default) (check-default x))
X          ((eq a 'build) (check-build x))
X          (t nil) ;in a pmatch rhs, code is not restricted to OPS rhs actions.
X  )))
X
X;This check action is just modified so that pmatch or map-pmatch
X;are acceptable right-hand sides.
X(defun check-action (x)
X  (prog (a)
X    (cond ((atom x)
X           (%warn '|atomic action| x)
X   (return nil)))
X    (setq a (setq *action-type* (car x)))
X    (cond ((eq a 'bind) (check-bind x))
X          ((eq a 'query) nil) ;never happens
X          ((eq a 'map-query) nil) ;never happens
X  ;if we come across an unexpanded pmatch, expand and compile it.
X  ;replace with result
X          ((eq a 'pmatch) (myreplace x (eval x)))
X          ((eq a 'map-pmatch) (myreplace x (eval x)))
X          ((eq a 'cbind) (check-cbind x))
X          ((eq a 'make) (check-make x))
X          ((eq a 'modify) (check-modify x))
X          ((eq a 'remove) (check-remove x))
X          ((eq a 'write) (check-write x))
X          ((eq a 'call) (check-call x))
X          ((eq a 'halt) (check-halt x))
X          ((eq a 'openfile) (check-openfile x))
X          ((eq a 'closefile) (check-closefile x))
X          ((eq a 'default) (check-default x))
X          ((eq a 'build) (check-build x))
X          (t (%warn '|undefined rhs action| a)))))
X
X
X;add-to-wm: modified to return timetag number of item added
X(defun add-to-wm (wme override)
X  (prog (fa z part timetag port)
X    (setq *critical* t)
X    (setq *current-wm* (1+ *current-wm*))
X    (and (> *current-wm* *max-wm*) (setq *max-wm* *current-wm*))
X    (setq *action-count* (1+ *action-count*))
X    (setq fa (wm-hash wme))
X    (or (memq fa *wmpart-list*)
X        (setq *wmpart-list* (cons fa *wmpart-list*)))
X    (setq part (get fa 'wmpart*))
X    (cond (override (setq timetag override))
X   (t (setq timetag *action-count*)))
X    (setq z (cons wme timetag))
X    (putprop fa (cons z part) 'wmpart*)
X    (record-change '=>wm *action-count* wme)
X    (match 'new wme)
X    (setq *critical* nil)
X    (cond ((and *in-rhs* *wtrace*)
X           (setq port (trace-file))
X           (terpri port)
X           (princ '|=>wm: | port)
X           (ppelm wme port)))
X    (and *in-rhs* *mtrace* (setq *madeby*
X                                 (cons (cons wme *p-name*) *madeby*)))
X    (return timetag)))
X
X(defun &old (&rest a) nil) ;a null function used for deleting node
X
X
X;MAKESYM: Does the same thing as gensym, but allows a symbol to be passed, so
X;         the resulting symbol is meaningful.
X(defun makesym(x)
X  (prog(numb)
X       (and (not (setq numb (get x '$cntr)))
X    (setq numb 0))
X       (putprop x (add1 numb) '$cntr)
X       (return (concat x numb))))
X
X;CONCAT: Make a symbol from a number of symbols
X(defun concat(&rest x)
X   (do ((lst x (cdr lst))
X        (strng nil))
X       ((null lst)
X        (intern strng))
X       (setq strng (concatenate 'string strng (princ-to-string (car lst))))
X    ))
X
X;A general purpose gensym function.  Input is
X; [atom], output is [atom]N, where N is a unique integer.
X; ie. (newsym baz) ==> baz1
X;     (newsym baz) ==> baz2, etc.
X
X(defmacro newsym(x)
X  `(makesym ',x))
X
X
X(defun exquery()
X  (mapc #'(lambda(q) (eval `(excise ,q))) *qnames*)
X  (setq *qnames* nil))
X
X;The following is a minimal test for the opsmods programs.
X;To use it, uncomment it, and load it. The code should load without
X;blowing up. Complaints about atomic actions in RHS are OK, ignore them.
X;Type
X;(setup)
X;(cs) -- foo and baz should be in the conflict set.
X;Type (run 1), the program should print out a list of blocks.
X;(run) should continue until only chartreuse blocks are left.
X;While simple, this code tests for nested use of pattern matches,
; recursive calls,
X;and use of pmatch in the rhs of OPS productions.
X;(i-g-v)
X;(literalize block a b c)
X
X
X;(p baz
X;   { <a> (block ^a <colour> ) }
X;         (block ^a <> <colour>)
X;  -->
X;   (pmatch  (block ^a <> <colour> )
X;       -->
X;           (find-block-colors ?<colour> )
X;   (oremove <a> ))
X;   (make block ^a chartreuse))
X
X;Test for recursive use of pmatch. (find-block-colors uses map-pmatch and
X;appears in a RHS of another pmatch)
X;(defun rtest(a )
X;   ?(pmatch (args a )
X;    (block ^a <a> <numb>)
X;        -->
X;          (find-block-colors 'green)
X;          (format t "Block color ~a is ~a~%" ?<a> ?<numb>)))
X
X;(defun find-block-colors (color)
X;  ?(map-pmatch (args color)
X;       (block ^a <color> <numb>)
X;     -->
X;   (format t "~%Find-block-colors ~a ~a~%" ?<color> ?<numb>)))
X
X;(defun setup()
X;  (setq *in-rhs* nil)
X;  (oremove *)
X;  (make block ^a green 1)
X;  (make block ^a green 2)
X;  (make block ^a green 3)
X;  (make block ^a green 4)
X;  (make block ^a green 5)
X;  (make block ^a red 6)
X;  (make block ^a red 7)
X;  (make block ^a yellow 8)
X;  (make block ^a blue 9)
X;  )
X
X
XCLSUP.LIS
X
X;Common Lisp Support Functions:
X;These functions are not defined in vanilla Common Lisp, but are used
X;in the OPSMODS.l code and in OPS5.
X
X(defun putprop(name val att)
X   (setf (get name att) val))
X
X(defun memq(obj lis)
X    (member obj lis :test #'eq))
X
X(defun fix(num)
X    (round num))
X
X
X(defun assq(item alist)
X     (assoc item alist :test #'eq))
X
X(defun ncons(x) (cons x nil))
X
X(defun neq(x y) (not (eq x y)))
X
X(defun delq(obj list)
X   (delete obj list :test #'eq))
X
X(defmacro comment(&optional &rest x) nil) ;comment is a noop
X
X(defun plus(x y)
X   (+ x y))
X
X(defun quotient(x y)
X   (/ x y))
X
X(defun flatc(x)
X   (length (princ-to-string x)))
X
X
X
XCOMMON.OPS
X
X;      VPS2 -- Interpreter for OPS5
X;
X;      Copyright (C) 1979, 1980, 1981
X;      Charles L. Forgy,  Pittsburgh, Pennsylvania
X
X
X
X; Users of this interpreter are requested to contact
X
X;
X;      Charles Forgy
X;      Computer Science Department
X;      Carnegie-Mellon University
X;      Pittsburgh, PA  15213
X; or
X;      Forgy@CMUA
X;
X; so that they can be added to the mailing list for OPS5.  The mailing list
X; is needed when new versions of the interpreter or manual are released.
X
X
X
X;;; Definitions
X
X#+ vax (defun putprop(name val att)
X   (setf (get name att) val))
X
X
X
X(proclaim '(special *matrix* *feature-count* *pcount* *vars* *cur-vars*
X          *curcond* *subnum* *last-node* *last-branch* *first-node*
X          *sendtocall* *flag-part* *alpha-flag-part* *data-part*
X          *alpha-data-part* *ce-vars* *virtual-cnt* *real-cnt*
X          *current-token* *c1* *c2* *c3* *c4* *c5* *c6* *c7* *c8* *c9*
X          *c10* *c11* *c12* *c13* *c14* *c15* *c16* *c17* *c18* *c19*
X          *c20* *c21* *c22* *c23* *c24* *c25* *c26* *c27* *c28* *c29*
X          *c30* *c31* *c32* *c33* *c34* *c35* *c36* *c37* *c38* *c39*
X          *c40* *c41* *c42* *c43* *c44* *c45* *c46* *c47* *c48* *c49*
X          *c50* *c51* *c52* *c53* *c54* *c55* *c56* *c57* *c58* *c59*
X          *c60* *c61* *c62* *c63* *c64* *record-array* *result-array*
X          *max-cs* *total-cs* *limit-cs* *cr-temp* *side*
X          *conflict-set* *halt-flag* *phase* *critical*
X          *cycle-count* *total-token* *max-token* *refracts*
X          *limit-token* *total-wm* *current-wm* *max-wm*
X          *action-count* *wmpart-list* *wm* *data-matched* *p-name*
X          *variable-memory* *ce-variable-memory*
X          *max-index* ; number of right-most field in wm element
X          *next-index* *size-result-array* *rest* *build-trace* *last*
X          *ptrace* *wtrace* *in-rhs* *recording* *accept-file* *trace-file*
X          *mtrace* *madeby* ; used to trace and record makers of elements
X          *write-file* *record-index* *max-record-index* *old-wm*
X          *record* *filters* *break-flag* *strategy* *remaining-cycles*
X         *wm-filter* *rhs-bound-vars* *rhs-bound-ce-vars* *ppline*
X         *ce-count* *brkpts* *class-list* *buckets* *action-type*
X          *literals*   ;stores literal definitions
X          *pnames*     ;stores production names
X         *externals*  ;tracks external declarations
X          *vector-attributes*  ;list of vector-attributes
X         ))
X
X;(declare (localf ce-gelm gelm peek-sublex sublex
X;          eval-nodelist sendto and-left and-right not-left not-right
X;          top-levels-eq add-token real-add-token remove-old
X;          remove-old-num remove-old-no-num removecs insertcs dsort
X;          best-of best-of* conflict-set-compare =alg ))
X
X
X;;; Functions that were revised so that they would compile efficiently
X
X
X;* The function == is machine dependent\!
X;* This function compares small integers for equality.  It uses EQ
X;* so that it will be fast, and it will consequently not work on all
X;* Lisps.  It works in Franz Lisp for integers in [-128, 127]
X
X
X;(defun == (&rest z) (= (cadr z) (caddr z)))
X(defun == (x y) (= x y))
X
X; =ALG returns T if A and B are algebraically equal.
X
X(defun =alg (a b) (= a b))
X
X(defmacro fast-symeval (&rest z)
X        `(cond ((eq ,(car z) '*c1*) *c1*)
X               ((eq ,(car z) '*c2*) *c2*)
X               ((eq ,(car z) '*c3*) *c3*)
X               ((eq ,(car z) '*c4*) *c4*)
X               ((eq ,(car z) '*c5*) *c5*)
X               ((eq ,(car z) '*c6*) *c6*)
X               ((eq ,(car z) '*c7*) *c7*)
X               (t (eval ,(car z)))  ))
X
X; getvector and putvector are fast routines for using one-dimensional
X; arrays.  these routines do no checking; they assume
X;      1. the array is a vector with 0 being the index of the first
X;         element
X;      2. the vector holds arbitrary list values
X;defun versions are useful for tracing
X
X; Example call: (putvector array index value)
X
X(defmacro putvector (array_ref ind var)
X      `(setf (aref ,array_ref ,ind) ,var))
X
X;(defun putvector (array_ref ind var)
X;      (setf (aref array_ref ind) var))
X
X; Example call: (getvector name index)
X
X;(defmacro getvector(&rest z)
X;     (list 'cxr (caddr z) (cadr z)))
X
X(defmacro getvector(array_ref ind)
X      `(aref ,array_ref ,ind))
X
X;(defun getvector (array_ref ind)
X ;       (aref array_ref ind))
X
X(defun ce-gelm (x k)
X  (prog nil
X   loop (and (== k 1.) (return (car x)))
X        (setq k (1- k))
X        (setq x (cdr x))
X        (go loop)))
X
X; The loops in gelm were unwound so that fewer calls on DIFFERENCE
X; would be needed
X
X(defun gelm (x k)
X  (prog (ce sub)
X        (setq ce  (floor (/ k 10000)))
X        (setq sub (- k (* ce 10000)))
X celoop (and (== ce 0) (go ph2))
X        (setq x (cdr x))
X        (and (== ce 1) (go ph2))
X        (setq x (cdr x))
X        (and (== ce 2) (go ph2))
X        (setq x (cdr x))
X        (and (== ce 3) (go ph2))
X        (setq x (cdr x))
X        (and (== ce 4) (go ph2))
X        (setq ce (- ce 4))
X        (go celoop)
X   ph2  (setq x (car x))
X   subloop (and (== sub 0) (go finis))
X        (setq x (cdr x))
X        (and (== sub 1) (go finis))
X        (setq x (cdr x))
X        (and (== sub 2) (go finis))
X        (setq x (cdr x))
X        (and (== sub 3) (go finis))
X        (setq x (cdr x))
X        (and (== sub 4) (go finis))
X        (setq x (cdr x))
X        (and (== sub 5) (go finis))
X        (setq x (cdr x))
X        (and (== sub 6) (go finis))
X        (setq x (cdr x))
X        (and (== sub 7) (go finis))
X        (setq x (cdr x))
X        (and (== sub 8) (go finis))
X        (setq sub (- sub 8))
X        (go subloop)
X   finis (return (car x))))
X
X

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jan 31 00:55:15 1987
Date: Sat, 31 Jan 87 00:54:58 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #24
Status: R


AIList Digest            Friday, 30 Jan 1987       Volume 5 : Issue 24

Today's Topics:
  Code - AI Expert Magazine Sources (Part 5 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 5 of 22)

X;;; Utility functions
X
X
X
X(defun printline (x) (mapc (function printline*) x))
X
X(defun printline* (y) (princ '| |) (print y))
X
X(defun printlinec (x) (mapc (function printlinec*) x))
X
X(defun printlinec* (y) (princ '| |) (princ y))
X
X; intersect two lists using eq for the equality test
X
X(defun interq (x y)
X  (intersection x y :test #'eq))
X
X(defun enter (x ll)
X   (and (not (member x ll :test #'equal))
X       (push x ll)))
X
X
X;Hack read-macro tables to accept single characters -- right out of CL book.
X(defun single-macro-character (stream char)
X   (declare (ignore stream))
X   (character char))
X
X(defun i-g-v nil
X (prog (x)
X        (set-macro-character #\{ #'single-macro-character )
X        (set-macro-character #\} #'single-macro-character )
X        (set-macro-character #\^ #'single-macro-character )
X;      (setsyntax '\{ 66.) ;These are already normal characters in CL
X;      (setsyntax '\} 66.)
X;      (setsyntax '^ 66.)
X       (setq *buckets* 64.)            ; OPS5 allows 64 named slots
X       (setq *accept-file* nil)
X       (setq *write-file* nil)
X       (setq *trace-file* nil)
X        (and (boundp '*class-list*)
X          (mapc #'(lambda(class) (putprop class nil 'att-list)) *class-list*))
X       (setq *class-list* nil)
X       (setq *brkpts* nil)
X       (setq *strategy* 'lex)
X       (setq *in-rhs* nil)
X       (setq *ptrace* t)
X       (setq *wtrace* nil)
X       (setq *mtrace* t)            ; turn on made-by tracing
X       (setq *madeby* nil)          ; record makers of wm elements
X       (setq *recording* nil)
X        (setq *refracts* nil)
X       (setq *real-cnt* (setq *virtual-cnt* 0.))
X       (setq *max-cs* (setq *total-cs* 0.))
X       (setq *limit-token* 1000000.)
X       (setq *limit-cs* 1000000.)
X       (setq *critical* nil)
X       (setq *build-trace* nil)
X       (setq *wmpart-list* nil)
X        (setq *pnames* nil)
X        (setq *literals* nil) ; records literal definitions
X       (setq *externals* nil) ; records external definitions
X       (setq *vector-attributes* nil) ;records vector attributes
X       (setq *size-result-array* 127.)
X       (setq *result-array* (make-array 128))
X       (setq *record-array* (make-array 128))
X       (setq x 0)
X        (setq *pnames* nil)     ; list of production names
X  loop (putvector *result-array* x nil)
X       (setq x (1+ x))
X       (and (not (> x *size-result-array*)) (go loop))
X       (make-bottom-node)
X       (setq *pcount* 0.)
X       (initialize-record)
X       (setq *cycle-count* (setq *action-count* 0.))
X       (setq *total-token*
X              (setq *max-token* (setq *current-token* 0.)))
X       (setq *total-cs* (setq *max-cs* 0.))
X       (setq *total-wm* (setq *max-wm* (setq *current-wm* 0.)))
X       (setq *conflict-set* nil)
X       (setq *wmpart-list* nil)
X       (setq *p-name* nil)
X       (setq *remaining-cycles* 1000000)
X))
X
X; if the size of result-array changes, change the line in i-g-v which
X; sets the value of *size-result-array*
X
X(defun %warn (what where)
X  (prog nil
X    (terpri)
X    (princ '?)
X    (and *p-name* (princ *p-name*))
X    (princ '|..|)
X    (princ where)
X    (princ '|..|)
X    (princ what)
X    (return where)))
X
X(defun %error (what where)
X    (%warn what where)
X    (throw '!error! nil))
X
X
X(defun top-levels-eq (la lb)
X  (prog nil
X   lx   (cond ((eq la lb) (return t))
X              ((null la) (return nil))
X              ((null lb) (return nil))
X              ((not (eq (car la) (car lb))) (return nil)))
X        (setq la (cdr la))
X        (setq lb (cdr lb))
X        (go lx)))
X
X
X;;; LITERAL and LITERALIZE
X
X(defmacro literal (&rest z)
X  `(prog (atm val old args)
X        (setq args ',z)
X   top  (and (atom args) (return 'bound))
X        (or (eq (cadr args) '=) (return (%warn '|wrong format| args)))
X        (setq atm (car args))
X        (setq val (caddr args))
X        (setq args (cdddr args))
X        (cond ((not (numberp val))
X               (%warn '|can bind only to numbers| val))
X              ((or (not (symbolp atm)) (variablep atm))
X                (%warn '|can bind only constant atoms| atm))
X              ((and (setq old (literal-binding-of atm)) (not (equal old val)))
X               (%warn '|attempt to rebind attribute| atm))
X              (t (putprop atm val 'ops-bind )))
X        (go top)))
X
X(defmacro literalize (&rest l)
X  `(prog (class-name atts)
X    (setq class-name (car ',l))
X    (cond ((have-compiled-production)
X           (%warn '|literalize called after p| class-name)
X           (return nil))
X          ((get class-name 'att-list)
X           (%warn '|attempt to redefine class| class-name)
X          (return nil)))
X    (setq *class-list* (cons class-name *class-list*))
X    (setq atts (remove-duplicates (cdr ',l)))
X    (test-attribute-names atts)
X    (mark-conflicts atts atts)
X    (putprop class-name  atts 'att-list)))
X
X(defmacro vector-attribute  (&rest l)
X  `(cond ((have-compiled-production)
X         (%warn '|vector-attribute called after p| ',l))
X        (t
X         (test-attribute-names ',l)
X        (mapc (function vector-attribute2) ',l))))
X
X(defun vector-attribute2 (att) (putprop att t 'vector-attribute)
X                              (setq  *vector-attributes*
X                                  (enter att *vector-attributes*)))
X
X(defun is-vector-attribute (att) (get att 'vector-attribute))
X
X(defun test-attribute-names (l)
X  (mapc (function test-attribute-names2) l))
X
X(defun test-attribute-names2 (atm)
X  (cond ((or (not (symbolp atm)) (variablep atm))
X         (%warn '|can bind only constant atoms| atm))))
X
X(defun finish-literalize nil
X  (cond ((not (null *class-list*))
X         (mapc (function note-user-assigns) *class-list*)
X         (mapc (function assign-scalars) *class-list*)
X         (mapc (function assign-vectors) *class-list*)
X         (mapc (function put-ppdat) *class-list*)
X         (mapc (function erase-literal-info) *class-list*)
X         (setq *class-list* nil)
X         (setq *buckets* nil))))
X
X(defun have-compiled-production nil (not (zerop *pcount*)))
X
X(defun put-ppdat (class)
X  (prog (al att ppdat)
X        (setq ppdat nil)
X        (setq al (get class 'att-list))
X   top  (cond ((not (atom al))
X               (setq att (car al))
X               (setq al (cdr al))
X               (setq ppdat
X                     (cons (cons (literal-binding-of att) att)
X                           ppdat))
X               (go top)))
X        (putprop class ppdat 'ppdat)))
X
X; note-user-assigns and note-user-vector-assigns are needed only when
X; literal and literalize are both used in a program.  They make sure that
X; the assignments that are made explicitly with literal do not cause problems
X; for the literalized classes.
X
X(defun note-user-assigns (class)
X  (mapc (function note-user-assigns2) (get class 'att-list)))
X
X(defun note-user-assigns2 (att)
X  (prog (num conf buck clash)
X        (setq num (literal-binding-of att))
X       (and (null num) (return nil))
X       (setq conf (get att 'conflicts))
X       (setq buck (store-binding att num))
X       (setq clash (find-common-atom buck conf))
X       (and clash
X            (%warn '|attributes in a class assigned the same number|
X                   (cons att clash)))
X        (return nil)))
X
X(defun note-user-vector-assigns (att given needed)
X  (and (> needed given)
X       (%warn '|vector attribute assigned too small a value in literal| att)))
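The clash check described in the comment above is easier to see outside Lisp. Below is an illustrative Python transliteration, not part of the OPS5 sources; the `buckets` and `conflicts` tables are simplified stand-ins for the property-list machinery the real code uses.

```python
# Illustrative sketch (not from the OPS5 sources) of the check
# note-user-assigns2 performs: if `literal` bound two attributes of the
# same literalized class to one slot number, they would collide in
# working-memory elements, so the compiler warns.
buckets = {}                                  # slot number -> attributes bound there
conflicts = {"color": ["size", "shape"]}      # attributes sharing a class with `color`

def store_binding(att, num):
    """Record att in bucket num and return the whole bucket (cf. store-binding)."""
    buckets.setdefault(num, []).append(att)
    return buckets[num]

def clash_for(att, num):
    """Return an attribute of att's own class already bound to num, if any."""
    bucket = store_binding(att, num)
    return next((a for a in bucket if a in conflicts.get(att, [])), None)

store_binding("size", 7)                      # explicit (literal size = 7)
print(clash_for("color", 7))                  # -> size  (same class, same slot)
```

The real code reports such a clash through %warn with the pair of offending attributes.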
X
X(defun assign-scalars (class)
X  (mapc (function assign-scalars2) (get class 'att-list)))
X
X(defun assign-scalars2 (att)
X  (prog (tlist num bucket conf)
X        (and (literal-binding-of att) (return nil))
X        (and (is-vector-attribute att) (return nil))
X        (setq tlist (buckets))
X        (setq conf (get att 'conflicts))
X   top  (cond ((atom tlist)
X               (%warn '|could not generate a binding| att)
X               (store-binding att -1.)
X               (return nil)))
X        (setq num (caar tlist))
X        (setq bucket (cdar tlist))
X        (setq tlist (cdr tlist))
X        (cond ((disjoint bucket conf) (store-binding att num))
X        (t (go top)))))
X
X(defun assign-vectors (class)
X  (mapc (function assign-vectors2) (get class 'att-list)))
X
X(defun assign-vectors2 (att)
X  (prog (big conf new old need)
X        (and (not (is-vector-attribute att)) (return nil))
X        (setq big 1.)
X        (setq conf (get att 'conflicts))
X   top  (cond ((not (atom conf))
X               (setq new (car conf))
X               (setq conf (cdr conf))
X               (cond ((is-vector-attribute new)
X                      (%warn '|class has two vector attributes|
X                             (list att new)))
X                     (t (setq big (max (literal-binding-of new) big))))
X               (go top)))
X        (setq need (1+ big))
X       (setq old (literal-binding-of att))
X       (cond (old (note-user-vector-assigns att old need))
X             (t (store-binding att need)))
X        (return nil)))
X
X(defun disjoint (la lb) (not (find-common-atom la lb)))
X
X(defun find-common-atom (la lb)
X  (prog nil
X   top  (cond ((null la) (return nil))
X              ((member (car la) lb :test #'eq) (return (car la)))
X              (t (setq la (cdr la)) (go top)))))
X
X(defun mark-conflicts (rem all)
X  (cond ((not (null rem))
X         (mark-conflicts2 (car rem) all)
X         (mark-conflicts (cdr rem) all))))
X
X(defun mark-conflicts2 (atm lst)
X  (prog (l)
X        (setq l lst)
X   top  (and (atom l) (return nil))
X        (conflict atm (car l))
X        (setq l (cdr l))
X        (go top)))
X
X(defun conflict (a b)
X  (prog (old)
X    (setq old (get a 'conflicts))
X    (and (not (eq a b))
X         (not (member b old :test #'eq))
X         (putprop a (cons b old) 'conflicts ))))
X
X;(defun remove-duplicates (lst)
X;  (cond ((atom lst) nil)
X;        ((member (car lst) (cdr lst) :test #'eq)
X;         (remove-duplicates (cdr lst)))
X;        (t (cons (car lst) (remove-duplicates (cdr lst))))))
X
X(defun literal-binding-of (name) (get name 'ops-bind))
X
X(defun store-binding (name lit)
X  (putprop name lit 'ops-bind)
X  (add-bucket name lit))
X
X(defun add-bucket (name num)
X  (prog (buc)
X    (setq buc (assoc num (buckets)))
X    (and (not (member name buc :test #'eq))
X         (rplacd buc (cons name (cdr buc))))
X    (return buc)))
X
X(defun buckets nil
X  (and (atom *buckets*) (setq *buckets* (make-nums *buckets*)))
X  *buckets*)
X
X(defun make-nums (k)
X  (prog (nums)
X        (setq nums nil)
X   l    (and (< k 2.) (return nums))
X        (setq nums (cons (cons k nil) nums))
X        (setq k (1- k))
X        (go l)))
X
X;(defun erase-literal-info (class)
X;  (mapc (function erase-literal-info2) (get class 'att-list))
X;  (remprop class 'att-list))
X
X; modified to record literal info in the variable *literals*
X(defun erase-literal-info (class)
X      (setq *literals*
X            (cons (cons class (get class 'att-list)) *literals*))
X      (mapc (function erase-literal-info2) (get class 'att-list))
X      (remprop class 'att-list))
X
X
X(defun erase-literal-info2 (att) (remprop att 'conflicts))
X
X
X;;; LHS Compiler
X
X(defmacro p (&rest z)
X `(progn
X   (finish-literalize)
X   (princ '*)
X  ;(drain);drain probably drains a line feed
X   (compile-production (car ',z) (cdr ',z))))
X
X(defun compile-production (name matrix)
X  (prog (erm)
X        (setq *p-name* name)
X        (setq erm (catch '!error! (cmp-p name matrix) ))
X       ; following line is modified to save production name on *pnames*
X        (and (null erm) (setq *pnames* (enter name *pnames*)))
X       (setq *p-name* nil)
X       (return erm)))
X
X(defun peek-lex nil (car *matrix*))
X
X(defun lex nil
X  (prog2 nil (car *matrix*) (setq *matrix* (cdr *matrix*))))
X
X(defun end-of-p nil (atom *matrix*))
X
X(defun rest-of-p nil *matrix*)
X
X(defun prepare-lex (prod) (setq *matrix* prod))
X
X
X(defun peek-sublex nil (car *curcond*))
X
X(defun sublex nil
X  (prog2 nil (car *curcond*) (setq *curcond* (cdr *curcond*))))
X
X(defun end-of-ce nil (atom *curcond*))
X
X(defun rest-of-ce nil *curcond*)
X
X(defun prepare-sublex (ce) (setq *curcond* ce))
X
X(defun make-bottom-node nil (setq *first-node* (list '&bus nil)))
X
X(defun cmp-p (name matrix)
X  (prog (m bakptrs)
X        (cond ((or (null name) (listp name))
X               (%error '|illegal production name| name))
X              ((equal (get name 'production) matrix)
X              (return nil)))
X        (prepare-lex matrix)
X        (excise-p name)
X        (setq bakptrs nil)
X        (setq *pcount* (1+ *pcount*))
X        (setq *feature-count* 0.)
X       (setq *ce-count* 0)
X        (setq *vars* nil)
X        (setq *ce-vars* nil)
X       (setq *rhs-bound-vars* nil)
X       (setq *rhs-bound-ce-vars* nil)
X        (setq *last-branch* nil)
X        (setq m (rest-of-p))
X   l1   (and (end-of-p) (%error '|no '-->' in production| m))
X        (cmp-prin)
X        (setq bakptrs (cons *last-branch* bakptrs))
X        (or (eq '--> (peek-lex)) (go l1))
X        (lex)
X       (check-rhs (rest-of-p))
X        (link-new-node (list '&p
X                             *feature-count*
X                             name
X                             (encode-dope)
X                             (encode-ce-dope)
X                             (cons 'progn (rest-of-p))))
X        (putprop name (cdr (nreverse bakptrs)) 'backpointers )
X       (putprop name matrix 'production)
X        (putprop name *last-node* 'topnode)))
X
X(defun rating-part (pnode) (cadr pnode))
X
X(defun var-part (pnode) (car (cdddr pnode)))
X
X(defun ce-var-part (pnode) (cadr (cdddr pnode)))
X
X(defun rhs-part (pnode) (caddr (cdddr pnode)))
X
X(defun excise-p (name)
X  (cond ((and (symbolp name) (get name 'topnode))
X        (printline (list name 'is 'excised))
X         (setq *pcount* (1- *pcount*))
X         (remove-from-conflict-set name)
X         (kill-node (get name 'topnode))
X         (setq *pnames* (delete name *pnames* :test #'eq))
X        (remprop name 'production)
X        (remprop name 'backpointers)
X         (remprop name 'topnode))))
X
X(defun kill-node (node)
X  (prog nil
X   top  (and (atom node) (return nil))
X        (rplaca node '&old)
X        (setq node (cdr node))
X        (go top)))
X
X(defun cmp-prin nil
X  (prog nil
X        (setq *last-node* *first-node*)
X        (cond ((null *last-branch*) (cmp-posce) (cmp-nobeta))
X              ((eq (peek-lex) '-) (cmp-negce) (cmp-not))
X              (t (cmp-posce) (cmp-and)))))
X
X(defun cmp-negce nil (lex) (cmp-ce))
X
X(defun cmp-posce nil
X  (setq *ce-count* (1+ *ce-count*))
X  (cond ((eq (peek-lex) #\{) (cmp-ce+cevar))
X        (t (cmp-ce))))
X
X(defun cmp-ce+cevar nil
X  (prog (z)
X        (lex)
X        (cond ((atom (peek-lex)) (cmp-cevar) (cmp-ce))
X              (t (cmp-ce) (cmp-cevar)))
X        (setq z (lex))
X        (or (eq z #\}) (%error '|missing '}'| z))))
X
X(defun new-subnum (k)
X  (or (numberp k) (%error '|tab must be a number| k))
X  (setq *subnum* (round k)))
X
X(defun incr-subnum nil (setq *subnum* (1+ *subnum*)))
X
X(defun cmp-ce nil
X  (prog (z)
X        (new-subnum 0.)
X        (setq *cur-vars* nil)
X        (setq z (lex))
X        (and (atom z)
X             (%error '|atomic conditions are not allowed| z))
X        (prepare-sublex z)
X   la   (and (end-of-ce) (return nil))
X        (incr-subnum)
X        (cmp-element)
X        (go la)))
X
X(defun cmp-element nil
X        (and (eq (peek-sublex) #\^) (cmp-tab))
X        (cond ((eq (peek-sublex) #\{) (cmp-product))
X              (t (cmp-atomic-or-any))))
X
X(defun cmp-atomic-or-any nil
X        (cond ((eq (peek-sublex) '<<) (cmp-any))
X              (t (cmp-atomic))))
X
X(defun cmp-any nil
X  (prog (a z)
X        (sublex)
X        (setq z nil)
X   la   (cond ((end-of-ce) (%error '|missing '>>'| a)))
X        (setq a (sublex))
X        (cond ((not (eq '>> a)) (setq z (cons a z)) (go la)))
X        (link-new-node (list '&any nil (current-field) z))))
X
X
X(defun cmp-tab nil
X  (prog (r)
X        (sublex)
X        (setq r (sublex))
X        (setq r ($litbind r))
X        (new-subnum r)))
X
X(defun $litbind (x)
X  (prog (r)
X        (cond ((and (symbolp x) (setq r (literal-binding-of x)))
X               (return r))
X              (t (return x)))))
X
X(defun get-bind (x)
X  (prog (r)
X        (cond ((and (symbolp x) (setq r (literal-binding-of x)))
X               (return r))
X              (t (return nil)))))
X
X(defun cmp-atomic nil
X  (prog (test x)
X        (setq x (peek-sublex))
X        (cond ((eq x '=) (setq test 'eq) (sublex))
X              ((eq x '<>) (setq test 'ne) (sublex))
X              ((eq x '<) (setq test 'lt) (sublex))
X              ((eq x '<=) (setq test 'le) (sublex))
X              ((eq x '>) (setq test 'gt) (sublex))
X              ((eq x '>=) (setq test 'ge) (sublex))
X              ((eq x '<=>) (setq test 'xx) (sublex))
X              (t (setq test 'eq)))
X        (cmp-symbol test)))
X
X(defun cmp-product nil
X  (prog (save)
X        (setq save (rest-of-ce))
X        (sublex)
X   la   (cond ((end-of-ce)
X               (cond ((member #\} save)
X                     (%error '|wrong context for '}'| save))
X                    (t (%error '|missing '}'| save))))
X              ((eq (peek-sublex) #\}) (sublex) (return nil)))
X        (cmp-atomic-or-any)
X        (go la)))
X
X(defun variablep (x)
X  (and (symbolp x)
X       (char-equal (char (symbol-name x) 0) #\<)))
X
X(defun cmp-symbol (test)
X  (prog (flag)
X        (setq flag t)
X        (cond ((eq (peek-sublex) '//) (sublex) (setq flag nil)))
X        (cond ((and flag (variablep (peek-sublex)))
X               (cmp-var test))
X              ((numberp (peek-sublex)) (cmp-number test))
X              ((symbolp (peek-sublex)) (cmp-constant test))
X              (t (%error '|unrecognized symbol| (sublex))))))
X
X(defun concat3(x y z)
X   (intern (format nil "~s~s~s" x y z)))
X
X(defun cmp-constant (test)
X  (or (member test '(eq ne xx) )
X      (%error '|non-numeric constant after numeric predicate| (sublex)))
X  (link-new-node (list (concat3 't test 'a)
X                       nil
X                       (current-field)
X                       (sublex))))
X
X
X(defun cmp-number (test)
X  (link-new-node (list (concat3 't test 'n)
X                       nil
X                       (current-field)
X                       (sublex))))
X
X(defun current-field nil (field-name *subnum*))

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jan 31 00:55:42 1987
Date: Sat, 31 Jan 87 00:55:15 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #25
Status: R


AIList Digest            Friday, 30 Jan 1987       Volume 5 : Issue 25

Today's Topics:
  Code - AI Expert Magazine Sources (Part 6 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 6 of 22)

X
X(defun field-name (num)
X  (cond ((= num 1.) '*c1*)
X        ((= num 2.) '*c2*)
X        ((= num 3.) '*c3*)
X        ((= num 4.) '*c4*)
X        ((= num 5.) '*c5*)
X        ((= num 6.) '*c6*)
X        ((= num 7.) '*c7*)
X        ((= num 8.) '*c8*)
X        ((= num 9.) '*c9*)
X        ((= num 10.) '*c10*)
X        ((= num 11.) '*c11*)
X        ((= num 12.) '*c12*)
X        ((= num 13.) '*c13*)
X        ((= num 14.) '*c14*)
X        ((= num 15.) '*c15*)
X        ((= num 16.) '*c16*)
X        ((= num 17.) '*c17*)
X        ((= num 18.) '*c18*)
X        ((= num 19.) '*c19*)
X        ((= num 20.) '*c20*)
X        ((= num 21.) '*c21*)
X        ((= num 22.) '*c22*)
X        ((= num 23.) '*c23*)
X        ((= num 24.) '*c24*)
X        ((= num 25.) '*c25*)
X        ((= num 26.) '*c26*)
X        ((= num 27.) '*c27*)
X        ((= num 28.) '*c28*)
X        ((= num 29.) '*c29*)
X        ((= num 30.) '*c30*)
X        ((= num 31.) '*c31*)
X        ((= num 32.) '*c32*)
X        ((= num 33.) '*c33*)
X        ((= num 34.) '*c34*)
X        ((= num 35.) '*c35*)
X        ((= num 36.) '*c36*)
X        ((= num 37.) '*c37*)
X        ((= num 38.) '*c38*)
X        ((= num 39.) '*c39*)
X        ((= num 40.) '*c40*)
X        ((= num 41.) '*c41*)
X        ((= num 42.) '*c42*)
X        ((= num 43.) '*c43*)
X        ((= num 44.) '*c44*)
X        ((= num 45.) '*c45*)
X        ((= num 46.) '*c46*)
X        ((= num 47.) '*c47*)
X        ((= num 48.) '*c48*)
X        ((= num 49.) '*c49*)
X        ((= num 50.) '*c50*)
X        ((= num 51.) '*c51*)
X        ((= num 52.) '*c52*)
X        ((= num 53.) '*c53*)
X        ((= num 54.) '*c54*)
X        ((= num 55.) '*c55*)
X        ((= num 56.) '*c56*)
X        ((= num 57.) '*c57*)
X        ((= num 58.) '*c58*)
X        ((= num 59.) '*c59*)
X        ((= num 60.) '*c60*)
X        ((= num 61.) '*c61*)
X        ((= num 62.) '*c62*)
X        ((= num 63.) '*c63*)
X        ((= num 64.) '*c64*)
X        (t (%error '|condition is too long| (rest-of-ce)))))
X
X
X;;; Compiling variables
X;
X;
X;
X; *cur-vars* are the variables in the condition element currently
X; being compiled.  *vars* are the variables in the earlier condition
X; elements.  *ce-vars* are the condition element variables.  note
X; that the interpreter will not confuse condition element and regular
X; variables even if they have the same name.
X;
X; *cur-vars* is a list of triples: (name predicate subelement-number)
X; eg:          ( (<x> eq 3)
X;                (<y> ne 1)
X;                . . . )
X;
X; *vars* is a list of triples: (name ce-number subelement-number)
X; eg:          ( (<x> 3 3)
X;                (<y> 1 1)
X;                . . . )
X;
X; *ce-vars* is a list of pairs: (name ce-number)
X; eg:          ( (ce1 1)
X;                (<c3> 3)
X;                . . . )
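The three compiler tables described in the comment above can be mocked up concretely. This is an illustrative Python sketch, not part of the sources, using the comment's own example entries.

```python
# Mock-up (not from the OPS5 sources) of the LHS compiler's variable tables.
cur_vars = [("<x>", "eq", 3),     # *cur-vars*: (name predicate subelement-number)
            ("<y>", "ne", 1)]
all_vars = [("<x>", 3, 3),        # *vars*: (name ce-number subelement-number)
            ("<y>", 1, 1)]
ce_vars  = [("ce1", 1),           # *ce-vars*: (name ce-number)
            ("<c3>", 3)]

def var_dope(var):
    """Analogue of var-dope below: assoc on the first field of *vars*."""
    return next((e for e in all_vars if e[0] == var), None)

print(var_dope("<y>"))            # -> ('<y>', 1, 1)
```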
X
X(defun var-dope (var) (assoc var *vars* :test #'eq))
X
X(defun ce-var-dope (var) (assoc var *ce-vars* :test #'eq))
X
X(defun cmp-var (test)
X  (prog (old name)
X        (setq name (sublex))
X        (setq old (assoc name *cur-vars* :test #'eq))
X        (cond ((and old (eq (cadr old) 'eq))
X               (cmp-old-eq-var test old))
X              ((and old (eq test 'eq)) (cmp-new-eq-var name old))
X              (t (cmp-new-var name test)))))
X
X(defun cmp-new-var (name test)
X  (setq *cur-vars* (cons (list name test *subnum*) *cur-vars*)))
X
X(defun cmp-old-eq-var (test old)
X  (link-new-node (list (concat3 't test 's)
X                       nil
X                       (current-field)
X                       (field-name (caddr old)))))
X
X(defun cmp-new-eq-var (name old)
X  (prog (pred next)
X        (setq *cur-vars* (delete old *cur-vars* :test #'eq))
X        (setq next (assoc name *cur-vars* :test #'eq))
X        (cond (next (cmp-new-eq-var name next))
X              (t (cmp-new-var name 'eq)))
X        (setq pred (cadr old))
X        (link-new-node (list (concat3 't pred 's)
X                             nil
X                             (field-name (caddr old))
X                             (current-field)))))
X
X(defun cmp-cevar nil
X  (prog (name old)
X        (setq name (lex))
X        (setq old (assoc name *ce-vars* :test #'eq))
X        (and old
X             (%error '|condition element variable used twice| name))
X        (setq *ce-vars* (cons (list name 0.) *ce-vars*))))
X
X(defun cmp-not nil (cmp-beta '&not))
X
X(defun cmp-nobeta nil (cmp-beta nil))
X
X(defun cmp-and nil (cmp-beta '&and))
X
X(defun cmp-beta (kind)
X  (prog (tlist vdope vname vpred vpos old)
X        (setq tlist nil)
X   la   (and (atom *cur-vars*) (go lb))
X        (setq vdope (car *cur-vars*))
X        (setq *cur-vars* (cdr *cur-vars*))
X        (setq vname (car vdope))
X        (setq vpred (cadr vdope))
X        (setq vpos (caddr vdope))
X        (setq old (assoc vname *vars* :test #'eq))
X        (cond (old (setq tlist (add-test tlist vdope old)))
X              ((not (eq kind '&not)) (promote-var vdope)))
X        (go la)
X   lb   (and kind (build-beta kind tlist))
X        (or (eq kind '&not) (fudge))
X        (setq *last-branch* *last-node*)))
X
X(defun add-test (list new old)
X  (prog (ttype lloc rloc)
X       (setq *feature-count* (1+ *feature-count*))
X        (setq ttype (concat3 't (cadr new) 'b))
X        (setq rloc (encode-singleton (caddr new)))
X        (setq lloc (encode-pair (cadr old) (caddr old)))
X        (return (cons ttype (cons lloc (cons rloc list))))))
X
X; the following two functions encode indices so that gelm can
X; decode them as fast as possible
X
X(defun encode-pair (a b) (+ (* 10000. (1- a)) (1- b)))
X
X(defun encode-singleton (a) (1- a))
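The index packing above is small enough to show with concrete numbers. Here is a Python transliteration for illustration only; the `divmod` decoder is inferred from the encoding, since gelm's actual decoder appears in another part of these sources.

```python
# Transliteration (illustration only) of encode-pair/encode-singleton:
# a (ce-number, subelement-number) pair becomes 10000*(a-1) + (b-1), so
# the matcher can recover both indices with a single division.
def encode_pair(a, b):
    return 10000 * (a - 1) + (b - 1)

def encode_singleton(a):
    return a - 1

def decode_pair(k):               # inferred inverse, not from the sources
    q, r = divmod(k, 10000)
    return (q + 1, r + 1)

print(encode_pair(3, 3))          # -> 20002
print(decode_pair(20002))         # -> (3, 3)
```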
X
X(defun promote-var (dope)
X  (prog (vname vpred vpos new)
X        (setq vname (car dope))
X        (setq vpred (cadr dope))
X        (setq vpos (caddr dope))
X        (or (eq 'eq vpred)
X            (%error '|illegal predicate for first occurrence|
X                   (list vname vpred)))
X        (setq new (list vname 0. vpos))
X        (setq *vars* (cons new *vars*))))
X
X(defun fudge nil
X  (mapc (function fudge*) *vars*)
X  (mapc (function fudge*) *ce-vars*))
X
X(defun fudge* (z)
X  (prog (a) (setq a (cdr z)) (rplaca a (1+ (car a)))))
X
X(defun build-beta (type tests)
X  (prog (rpred lpred lnode lef)
X        (link-new-node (list '&mem nil nil (protomem)))
X        (setq rpred *last-node*)
X        (cond ((eq type '&and)
X               (setq lnode (list '&mem nil nil (protomem))))
X              (t (setq lnode (list '&two nil nil))))
X        (setq lpred (link-to-branch lnode))
X        (cond ((eq type '&and) (setq lef lpred))
X              (t (setq lef (protomem))))
X        (link-new-beta-node (list type nil lef rpred tests))))
X
X(defun protomem nil (list nil))
X
X(defun memory-part (mem-node) (car (cadddr mem-node)))
X
X(defun encode-dope nil
X  (prog (r all z k)
X        (setq r nil)
X        (setq all *vars*)
X   la   (and (atom all) (return r))
X        (setq z (car all))
X        (setq all (cdr all))
X        (setq k (encode-pair (cadr z) (caddr z)))
X        (setq r (cons (car z) (cons k r)))
X        (go la)))
X
X(defun encode-ce-dope nil
X  (prog (r all z k)
X        (setq r nil)
X        (setq all *ce-vars*)
X   la   (and (atom all) (return r))
X        (setq z (car all))
X        (setq all (cdr all))
X        (setq k (cadr z))
X        (setq r (cons (car z) (cons k r)))
X        (go la)))
X
X
X
X;;; Linking the nodes
X
X(defun link-new-node (r)
X  (cond ((not (member (car r) '(&p &mem &two &and &not)))
X        (setq *feature-count* (1+ *feature-count*))))
X  (setq *virtual-cnt* (1+ *virtual-cnt*))
X  (setq *last-node* (link-left *last-node* r)))
X
X(defun link-to-branch (r)
X  (setq *virtual-cnt* (1+ *virtual-cnt*))
X  (setq *last-branch* (link-left *last-branch* r)))
X
X(defun link-new-beta-node (r)
X  (setq *virtual-cnt* (1+ *virtual-cnt*))
X  (setq *last-node* (link-both *last-branch* *last-node* r))
X  (setq *last-branch* *last-node*))
X
X(defun link-left (pred succ)
X  (prog (a r)
X        (setq a (left-outs pred))
X        (setq r (find-equiv-node succ a))
X        (and r (return r))
X        (setq *real-cnt* (1+ *real-cnt*))
X        (attach-left pred succ)
X        (return succ)))
X
X(defun link-both (left right succ)
X  (prog (a r)
X        (setq a (interq (left-outs left) (right-outs right)))
X        (setq r (find-equiv-beta-node succ a))
X        (and r (return r))
X        (setq *real-cnt* (1+ *real-cnt*))
X        (attach-left left succ)
X        (attach-right right succ)
X        (return succ)))
X
X(defun attach-right (old new)
X  (rplaca (cddr old) (cons new (caddr old))))
X
X(defun attach-left (old new)
X  (rplaca (cdr old) (cons new (cadr old))))
X
X(defun right-outs (node) (caddr node))
X
X(defun left-outs (node) (cadr node))
X
X(defun find-equiv-node (node list)
X  (prog (a)
X        (setq a list)
X   l1   (cond ((atom a) (return nil))
X              ((equiv node (car a)) (return (car a))))
X        (setq a (cdr a))
X        (go l1)))
X
X(defun find-equiv-beta-node (node list)
X  (prog (a)
X        (setq a list)
X   l1   (cond ((atom a) (return nil))
X              ((beta-equiv node (car a)) (return (car a))))
X        (setq a (cdr a))
X        (go l1)))
X
X; do not look at the predecessor fields of beta nodes; they have to be
X; identical because of the way the candidate nodes were found
X
X(defun equiv (a b)
X  (and (eq (car a) (car b))
X       (or (eq (car a) '&mem)
X           (eq (car a) '&two)
X           (equal (caddr a) (caddr b)))
X       (equal (cdddr a) (cdddr b))))
X
X(defun beta-equiv (a b)
X  (and (eq (car a) (car b))
X       (equal (cddddr a) (cddddr b))
X       (or (eq (car a) '&and) (equal (caddr a) (caddr b)))))
X
X; the equivalence tests are set up to consider the contents of
X; node memories, so they are ready for the build action
X
X;;; Network interpreter
X
X(defun match (flag wme)
X  (sendto flag (list wme) 'left (list *first-node*)))
X
X; note that eval-nodelist is not set up to handle building
X; productions.  would have to add something like ops4's build-flag
X
X(defun eval-nodelist (nl)
X  (prog nil
X   top  (and (not nl) (return nil))
X        (setq *sendtocall* nil)
X       (setq *last-node* (car nl))
X        (apply (caar nl) (cdar nl))
X        (setq nl (cdr nl))
X        (go top)))
X
X(defun sendto (flag data side nl)
X  (prog nil
X   top  (and (not nl) (return nil))
X        (setq *side* side)
X        (setq *flag-part* flag)
X        (setq *data-part* data)
X        (setq *sendtocall* t)
X       (setq *last-node* (car nl))
X        (apply (caar nl) (cdar nl))
X        (setq nl (cdr nl))
X        (go top)))
X
X; &bus sets up the registers for the one-input nodes
X(defun &bus (outs)
X  (prog (dp)
X        (setq *alpha-flag-part* *flag-part*)
X        (setq *alpha-data-part* *data-part*)
X        (setq dp (car *data-part*))
X        (setq *c1* (car dp))
X        (setq dp (cdr dp))
X        (setq *c2* (car dp))
X        (setq dp (cdr dp))
X        (setq *c3* (car dp))
X        (setq dp (cdr dp))
X        (setq *c4* (car dp))
X        (setq dp (cdr dp))
X        (setq *c5* (car dp))
X        (setq dp (cdr dp))
X        (setq *c6* (car dp))
X        (setq dp (cdr dp))
X        (setq *c7* (car dp))
X        (setq dp (cdr dp))
X        (setq *c8* (car dp))
X        (setq dp (cdr dp))
X        (setq *c9* (car dp))
X        (setq dp (cdr dp))
X        (setq *c10* (car dp))
X        (setq dp (cdr dp))
X        (setq *c11* (car dp))
X        (setq dp (cdr dp))
X        (setq *c12* (car dp))
X        (setq dp (cdr dp))
X        (setq *c13* (car dp))
X        (setq dp (cdr dp))
X        (setq *c14* (car dp))
X        (setq dp (cdr dp))
X        (setq *c15* (car dp))
X        (setq dp (cdr dp))
X        (setq *c16* (car dp))
X        (setq dp (cdr dp))
X        (setq *c17* (car dp))
X        (setq dp (cdr dp))
X        (setq *c18* (car dp))
X        (setq dp (cdr dp))
X        (setq *c19* (car dp))
X        (setq dp (cdr dp))
X        (setq *c20* (car dp))
X        (setq dp (cdr dp))
X        (setq *c21* (car dp))
X        (setq dp (cdr dp))
X        (setq *c22* (car dp))
X        (setq dp (cdr dp))
X        (setq *c23* (car dp))
X        (setq dp (cdr dp))
X        (setq *c24* (car dp))
X        (setq dp (cdr dp))
X        (setq *c25* (car dp))
X        (setq dp (cdr dp))
X        (setq *c26* (car dp))
X        (setq dp (cdr dp))
X        (setq *c27* (car dp))
X        (setq dp (cdr dp))
X        (setq *c28* (car dp))
X        (setq dp (cdr dp))
X        (setq *c29* (car dp))
X        (setq dp (cdr dp))
X        (setq *c30* (car dp))
X        (setq dp (cdr dp))
X        (setq *c31* (car dp))
X        (setq dp (cdr dp))
X        (setq *c32* (car dp))
X        (setq dp (cdr dp))
X        (setq *c33* (car dp))
X        (setq dp (cdr dp))
X        (setq *c34* (car dp))
X        (setq dp (cdr dp))
X        (setq *c35* (car dp))
X        (setq dp (cdr dp))
X        (setq *c36* (car dp))
X        (setq dp (cdr dp))
X        (setq *c37* (car dp))
X        (setq dp (cdr dp))
X        (setq *c38* (car dp))
X        (setq dp (cdr dp))
X        (setq *c39* (car dp))
X        (setq dp (cdr dp))
X        (setq *c40* (car dp))
X        (setq dp (cdr dp))
X        (setq *c41* (car dp))
X        (setq dp (cdr dp))
X        (setq *c42* (car dp))
X        (setq dp (cdr dp))
X        (setq *c43* (car dp))
X        (setq dp (cdr dp))
X        (setq *c44* (car dp))
X        (setq dp (cdr dp))
X        (setq *c45* (car dp))
X        (setq dp (cdr dp))
X        (setq *c46* (car dp))
X        (setq dp (cdr dp))
X        (setq *c47* (car dp))
X        (setq dp (cdr dp))
X        (setq *c48* (car dp))
X        (setq dp (cdr dp))
X        (setq *c49* (car dp))
X        (setq dp (cdr dp))
X        (setq *c50* (car dp))
X        (setq dp (cdr dp))
X        (setq *c51* (car dp))
X        (setq dp (cdr dp))
X        (setq *c52* (car dp))
X        (setq dp (cdr dp))
X        (setq *c53* (car dp))
X        (setq dp (cdr dp))
X        (setq *c54* (car dp))
X        (setq dp (cdr dp))
X        (setq *c55* (car dp))
X        (setq dp (cdr dp))
X        (setq *c56* (car dp))
X        (setq dp (cdr dp))
X        (setq *c57* (car dp))
X        (setq dp (cdr dp))
X        (setq *c58* (car dp))
X        (setq dp (cdr dp))
X        (setq *c59* (car dp))
X        (setq dp (cdr dp))
X        (setq *c60* (car dp))
X        (setq dp (cdr dp))
X        (setq *c61* (car dp))
X        (setq dp (cdr dp))
X        (setq *c62* (car dp))
X        (setq dp (cdr dp))
X        (setq *c63* (car dp))
X        (setq dp (cdr dp))
X        (setq *c64* (car dp))
X        (eval-nodelist outs)))
X
X(defun &any (outs register const-list)
X  (prog (z c)
X        (setq z (fast-symeval register))
X        (cond ((numberp z) (go number)))
X   symbol (cond ((null const-list) (return nil))
X                ((eq (car const-list) z) (go ok))
X                (t (setq const-list (cdr const-list)) (go symbol)))
X   number (cond ((null const-list) (return nil))
X                ((and (numberp (setq c (car const-list)))
X                      (=alg c z))
X                 (go ok))
X                (t (setq const-list (cdr const-list)) (go number)))
X   ok   (eval-nodelist outs)))
X
X(defun teqa (outs register constant)
X  (and (eq (fast-symeval register) constant) (eval-nodelist outs)))
X
X(defun tnea (outs register constant)
X  (and (not (eq (fast-symeval register) constant)) (eval-nodelist outs)))
X
X(defun txxa (outs register constant)
X  (and (symbolp (fast-symeval register)) (eval-nodelist outs)))
X
X(defun teqn (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z)
X             (=alg z constant)
X             (eval-nodelist outs))))
X
X(defun tnen (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (or (not (numberp z))
X                 (not (=alg z constant)))
X             (eval-nodelist outs))))
X
X(defun txxn (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z) (eval-nodelist outs))))
X
X(defun tltn (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z)
X             (greaterp constant z)
X             (eval-nodelist outs))))
X
X(defun tgtn (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z)
X             (greaterp z constant)
X             (eval-nodelist outs))))
X
X(defun tgen (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z)
X             (not (greaterp constant z))
X             (eval-nodelist outs))))
X
X(defun tlen (outs register constant)
X  (prog (z)
X        (setq z (fast-symeval register))
X        (and (numberp z)
X             (not (greaterp z constant))
X             (eval-nodelist outs))))
X
X(defun teqs (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (cond ((eq a b) (eval-nodelist outs))
X              ((and (numberp a)
X                    (numberp b)
X                    (=alg a b))
X               (eval-nodelist outs)))))
X
X(defun tnes (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (cond ((eq a b) (return nil))
X              ((and (numberp a)
X                    (numberp b)
X                    (=alg a b))
X               (return nil))
X              (t (eval-nodelist outs)))))
X
X(defun txxs (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (cond ((and (numberp a) (numberp b)) (eval-nodelist outs))
X              ((and (not (numberp a)) (not (numberp b)))
X               (eval-nodelist outs)))))
X
X(defun tlts (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (and (numberp a)
X             (numberp b)
X             (greaterp b a)
X             (eval-nodelist outs))))
X
X(defun tgts (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (and (numberp a)
X             (numberp b)
X             (greaterp a b)
X             (eval-nodelist outs))))
X
X(defun tges (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (and (numberp a)
X             (numberp b)
X             (not (greaterp b a))
X             (eval-nodelist outs))))
X
X(defun tles (outs vara varb)
X  (prog (a b)
X        (setq a (fast-symeval vara))
X        (setq b (fast-symeval varb))
X        (and (numberp a)
X             (numberp b)
X             (not (greaterp a b))
X             (eval-nodelist outs))))
X
X(defun &two (left-outs right-outs)
X  (prog (fp dp)
X        (cond (*sendtocall*
X               (setq fp *flag-part*)
X               (setq dp *data-part*))
X              (t
X               (setq fp *alpha-flag-part*)
X               (setq dp *alpha-data-part*)))
X        (sendto fp dp 'left left-outs)
X        (sendto fp dp 'right right-outs)))
X
X(defun &mem (left-outs right-outs memory-list)
X  (prog (fp dp)
X        (cond (*sendtocall*
X               (setq fp *flag-part*)
X               (setq dp *data-part*))
X              (t
X               (setq fp *alpha-flag-part*)
X               (setq dp *alpha-data-part*)))
X        (sendto fp dp 'left left-outs)
X        (add-token memory-list fp dp nil)
X        (sendto fp dp 'right right-outs)))
X
X(defun &and (outs lpred rpred tests)
X  (prog (mem)
X        (cond ((eq *side* 'right) (setq mem (memory-part lpred)))
X              (t (setq mem (memory-part rpred))))
X        (cond ((not mem) (return nil))
X              ((eq *side* 'right) (and-right outs mem tests))
X              (t (and-left outs mem tests)))))

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jan 31 00:55:56 1987
Date: Sat, 31 Jan 87 00:55:32 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #26
Status: R


AIList Digest            Friday, 30 Jan 1987       Volume 5 : Issue 26

Today's Topics:
  Code - AI Expert Magazine Sources (Part 7 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 7 of 22)

X
X(defun and-left (outs mem tests)
X  (prog (fp dp memdp tlist tst lind rind res)
X        (setq fp *flag-part*)
X        (setq dp *data-part*)
X   fail (and (null mem) (return nil))
X        (setq memdp (car mem))
X        (setq mem (cdr mem))
X        (setq tlist tests)
X   tloop (and (null tlist) (go succ))
X        (setq tst (car tlist))
X        (setq tlist (cdr tlist))
X        (setq lind (car tlist))
X        (setq tlist (cdr tlist))
X        (setq rind (car tlist))
X        (setq tlist (cdr tlist))
X        ;the next line differs in and-left & -right
X        (setq res (funcall tst (gelm memdp rind) (gelm dp lind)))
X        (cond (res (go tloop))
X              (t (go fail)))
X   succ ;the next line differs in and-left & -right
X        (sendto fp (cons (car memdp) dp) 'left outs)
X        (go fail)))
X
X(defun and-right (outs mem tests)
X  (prog (fp dp memdp tlist tst lind rind res)
X        (setq fp *flag-part*)
X        (setq dp *data-part*)
X   fail (and (null mem) (return nil))
X        (setq memdp (car mem))
X        (setq mem (cdr mem))
X        (setq tlist tests)
X   tloop (and (null tlist) (go succ))
X        (setq tst (car tlist))
X        (setq tlist (cdr tlist))
X        (setq lind (car tlist))
X        (setq tlist (cdr tlist))
X        (setq rind (car tlist))
X        (setq tlist (cdr tlist))
X        ;the next line differs in and-left & -right
X        (setq res (funcall tst (gelm dp rind) (gelm memdp lind)))
X        (cond (res (go tloop))
X              (t (go fail)))
X   succ ;the next line differs in and-left & -right
X        (sendto fp (cons (car dp) memdp) 'right outs)
X        (go fail)))
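The comments inside and-left and and-right flag the only lines where the two functions differ. As an illustrative sketch of that asymmetry (Python, with the per-field gelm/index tests abstracted into whole-token predicates; the name and_join and the list-based tokens are ours, not from the OPS5 source), the two activations run the same tests but swap which token feeds which operand, and cons the joined token from opposite ends:

```python
def and_join(mem, dp, tests, side):
    """Two-input join node, per the and-left/and-right pair above.
    `mem` holds stored partial tokens, `dp` is the incoming token;
    `tests` are variable-consistency predicates taking (memory-token,
    incoming-token) on the left activation and the reverse on the right."""
    results = []
    for memdp in mem:
        if side == 'left':
            if all(test(memdp, dp) for test in tests):
                results.append([memdp[0]] + dp)   # (cons (car memdp) dp)
        else:
            if all(test(dp, memdp) for test in tests):
                results.append([dp[0]] + memdp)   # (cons (car dp) memdp)
    return results
```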
X
X
X(defun teqb (new eqvar)
X  (cond ((eq new eqvar) t)
X        ((not (numberp new)) nil)
X        ((not (numberp eqvar)) nil)
X        ((=alg new eqvar) t)
X        (t nil)))
X
X(defun tneb (new eqvar)
X  (cond ((eq new eqvar) nil)
X        ((not (numberp new)) t)
X        ((not (numberp eqvar)) t)
X        ((=alg new eqvar) nil)
X        (t t)))
X
X(defun tltb (new eqvar)
X  (cond ((not (numberp new)) nil)
X        ((not (numberp eqvar)) nil)
X        ((greaterp eqvar new) t)
X        (t nil)))
X
X(defun tgtb (new eqvar)
X  (cond ((not (numberp new)) nil)
X        ((not (numberp eqvar)) nil)
X        ((greaterp new eqvar) t)
X        (t nil)))
X
X(defun tgeb (new eqvar)
X  (cond ((not (numberp new)) nil)
X        ((not (numberp eqvar)) nil)
X        ((not (greaterp eqvar new)) t)
X        (t nil)))
X
X(defun tleb (new eqvar)
X  (cond ((not (numberp new)) nil)
X        ((not (numberp eqvar)) nil)
X        ((not (greaterp new eqvar)) t)
X        (t nil)))
X
X(defun txxb (new eqvar)
X  (cond ((numberp new)
X         (cond ((numberp eqvar) t)
X               (t nil)))
X        (t
X         (cond ((numberp eqvar) nil)
X               (t t)))))
X
X
X(defun &p (rating name var-dope ce-var-dope rhs)
X  (prog (fp dp)
X        (cond (*sendtocall*
X               (setq fp *flag-part*)
X               (setq dp *data-part*))
X              (t
X               (setq fp *alpha-flag-part*)
X               (setq dp *alpha-data-part*)))
X        (and (member fp '(nil old)) (removecs name dp))
X        (and fp (insertcs name dp rating))))
X
X(defun &old (a b c d e) nil) ;a null function used when deleting nodes
X
X(defun &not (outs lmem rpred tests)
X  (cond ((and (eq *side* 'right) (eq *flag-part* 'old)) nil)
X        ((eq *side* 'right) (not-right outs (car lmem) tests))
X        (t (not-left outs (memory-part rpred) tests lmem))))
X
X(defun not-left (outs mem tests own-mem)
X  (prog (fp dp memdp tlist tst lind rind res c)
X        (setq fp *flag-part*)
X        (setq dp *data-part*)
X        (setq c 0.)
X   fail (and (null mem) (go fin))
X        (setq memdp (car mem))
X        (setq mem (cdr mem))
X        (setq tlist tests)
X   tloop (and (null tlist) (setq c (1+ c)) (go fail))
X        (setq tst (car tlist))
X        (setq tlist (cdr tlist))
X        (setq lind (car tlist))
X        (setq tlist (cdr tlist))
X        (setq rind (car tlist))
X        (setq tlist (cdr tlist))
X        ;the next line differs in not-left & -right
X        (setq res (funcall tst (gelm memdp rind) (gelm dp lind)))
X        (cond (res (go tloop))
X              (t (go fail)))
X   fin  (add-token own-mem fp dp c)
X        (and (== c 0.) (sendto fp dp 'left outs))))
X
X(defun not-right (outs mem tests)
X  (prog (fp dp memdp tlist tst lind rind res newfp inc newc)
X        (setq fp *flag-part*)
X        (setq dp *data-part*)
X        (cond ((not fp) (setq inc -1.) (setq newfp 'new))
X              ((eq fp 'new) (setq inc 1.) (setq newfp nil))
X              (t (return nil)))
X   fail (and (null mem) (return nil))
X        (setq memdp (car mem))
X        (setq newc (cadr mem))
X        (setq tlist tests)
X   tloop (and (null tlist) (go succ))
X        (setq tst (car tlist))
X        (setq tlist (cdr tlist))
X        (setq lind (car tlist))
X        (setq tlist (cdr tlist))
X        (setq rind (car tlist))
X        (setq tlist (cdr tlist))
X        ;the next line differs in not-left & -right
X        (setq res (funcall tst (gelm dp rind) (gelm memdp lind)))
X        (cond (res (go tloop))
X              (t (setq mem (cddr mem)) (go fail)))
X   succ (setq newc (+ inc newc))
X        (rplaca (cdr mem) newc)
X        (cond ((or (and (== inc -1.) (== newc 0.))
X                   (and (== inc 1.) (== newc 1.)))
X               (sendto newfp memdp 'right outs)))
X        (setq mem (cddr mem))
X        (go fail)))
X
X
X
X;;; Node memories
X
X
X(defun add-token (memlis flag data-part num)
X  (prog (was-present)
X        (cond ((eq flag 'new)
X               (setq was-present nil)
X               (real-add-token memlis data-part num))
X              ((not flag)
X               (setq was-present (remove-old memlis data-part num)))
X              ((eq flag 'old) (setq was-present t)))
X        (return was-present)))
X
X(defun real-add-token (lis data-part num)
X  (setq *current-token* (1+ *current-token*))
X  (cond (num (rplaca lis (cons num (car lis)))))
X  (rplaca lis (cons data-part (car lis))))
X
X(defun remove-old (lis data num)
X  (cond (num (remove-old-num lis data))
X        (t (remove-old-no-num lis data))))
X
X(defun remove-old-num (lis data)
X  (prog (m next last)
X        (setq m (car lis))
X        (cond ((atom m) (return nil))
X              ((top-levels-eq data (car m))
X               (setq *current-token* (1- *current-token*))
X               (rplaca lis (cddr m))
X               (return (car m))))
X        (setq next m)
X   loop (setq last next)
X        (setq next (cddr next))
X        (cond ((atom next) (return nil))
X              ((top-levels-eq data (car next))
X               (rplacd (cdr last) (cddr next))
X               (setq *current-token* (1- *current-token*))
X               (return (car next)))
X              (t (go loop)))))
X
X(defun remove-old-no-num (lis data)
X  (prog (m next last)
X        (setq m (car lis))
X        (cond ((atom m) (return nil))
X              ((top-levels-eq data (car m))
X               (setq *current-token* (1- *current-token*))
X               (rplaca lis (cdr m))
X               (return (car m))))
X        (setq next m)
X   loop (setq last next)
X        (setq next (cdr next))
X        (cond ((atom next) (return nil))
X              ((top-levels-eq data (car next))
X               (rplacd last (cdr next))
X               (setq *current-token* (1- *current-token*))
X               (return (car next)))
X              (t (go loop)))))
X
X
X
X;;; Conflict Resolution
X;
X;
X; each conflict set element is a list of the following form:
X; ((p-name . data-part) (sorted wm-recency) special-case-number)
X
X(defun removecs (name data)
X  (prog (cr-data inst cs)
X        (setq cr-data (cons name data))
X        (setq cs *conflict-set*)
X   loop1 (cond ((null cs)
X                (record-refract name data)
X                (return nil)))
X        (setq inst (car cs))
X        (setq cs (cdr cs))
X        (and (not (top-levels-eq (car inst) cr-data)) (go loop1))
X        (setq *conflict-set* (delete inst *conflict-set* :test #'eq))))
X
X(defun insertcs (name data rating)
X  (prog (instan)
X    (and (refracted name data) (return nil))
X    (setq instan (list (cons name data) (order-tags data) rating))
X    (and (atom *conflict-set*) (setq *conflict-set* nil))
X    (return (setq *conflict-set* (cons instan *conflict-set*)))))
X
X(defun order-tags (dat)
X  (prog (tags)
X        (setq tags nil)
X   l1  (and (atom dat) (go l2))
X        (setq tags (cons (creation-time (car dat)) tags))
X        (setq dat (cdr dat))
X        (go l1)
X   l2  (cond ((eq *strategy* 'mea)
X               (return (cons (car tags) (dsort (cdr tags)))))
X              (t (return (dsort tags))))))
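A sketch of what order-tags computes under the two strategies (Python; assumes `times` is already in the order the Lisp loop builds its tags list, and the function name is ours, an illustration rather than the source's code):

```python
def order_tags(times, strategy='lex'):
    """LEX: sort all recency tags descending.
    MEA: hold the first tag out front, sort only the remainder descending."""
    if strategy == 'mea':
        return [times[0]] + sorted(times[1:], reverse=True)
    return sorted(times, reverse=True)
```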
X
X; destructively sort x into descending order
X
X(defun dsort (x)
X  (prog (sorted cur next cval nval)
X        (and (atom (cdr x)) (return x))
X   loop (setq sorted t)
X        (setq cur x)
X        (setq next (cdr x))
X   chek (setq cval (car cur))
X        (setq nval (car next))
X        (cond ((> nval cval)
X               (setq sorted nil)
X               (rplaca cur nval)
X               (rplaca next cval)))
X        (setq cur next)
X        (setq next (cdr cur))
X        (cond ((not (null next)) (go chek))
X              (sorted (return x))
X              (t (go loop)))))
X
X(defun conflict-resolution nil
X  (prog (best len)
X        (setq len (length *conflict-set*))
X        (cond ((> len *max-cs*) (setq *max-cs* len)))
X        (setq *total-cs* (+ *total-cs* len))
X        (cond (*conflict-set*
X               (setq best (best-of *conflict-set*))
X               (setq *conflict-set* (delete best *conflict-set* :test #'eq))
X               (return (pname-instantiation best)))
X              (t (return nil)))))
X
X(defun best-of (set) (best-of* (car set) (cdr set)))
X
X(defun best-of* (best rem)
X  (cond ((not rem) best)
X        ((conflict-set-compare best (car rem))
X         (best-of* best (cdr rem)))
X        (t (best-of* (car rem) (cdr rem)))))
X
X(defun remove-from-conflict-set (name)
X  (prog (cs entry)
X   l1   (setq cs *conflict-set*)
X   l2   (cond ((atom cs) (return nil)))
X        (setq entry (car cs))
X        (setq cs (cdr cs))
X        (cond ((eq name (caar entry))
X               (setq *conflict-set* (delete entry *conflict-set* :test #'eq))
X               (go l1))
X              (t (go l2)))))
X
X(defun pname-instantiation (conflict-elem) (car conflict-elem))
X
X(defun order-part (conflict-elem) (cdr conflict-elem))
X
X(defun instantiation (conflict-elem)
X  (cdr (pname-instantiation conflict-elem)))
X
X
X(defun conflict-set-compare (x y)
X  (prog (x-order y-order xl yl xv yv)
X        (setq x-order (order-part x))
X        (setq y-order (order-part y))
X        (setq xl (car x-order))
X        (setq yl (car y-order))
X   data (cond ((and (null xl) (null yl)) (go ps))
X              ((null yl) (return t))
X              ((null xl) (return nil)))
X        (setq xv (car xl))
X        (setq yv (car yl))
X        (cond ((> xv yv) (return t))
X              ((> yv xv) (return nil)))
X        (setq xl (cdr xl))
X        (setq yl (cdr yl))
X        (go data)
X   ps   (setq xl (cdr x-order))
X        (setq yl (cdr y-order))
X   psl  (cond ((null xl) (return t)))
X        (setq xv (car xl))
X        (setq yv (car yl))
X        (cond ((> xv yv) (return t))
X              ((> yv xv) (return nil)))
X        (setq xl (cdr xl))
X        (setq yl (cdr yl))
X        (go psl)))
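The comment at the top of this section gives each conflict-set element as ((p-name . data-part) (sorted wm-recency) special-case-number), and conflict-set-compare orders elements on exactly those last two parts. A hedged Python translation of that ordering (the function and parameter names are ours; ratings default to 0 for illustration):

```python
def conflict_set_compare(x_tags, y_tags, x_rating=0, y_rating=0):
    """True when x dominates y: compare recency tags pairwise (higher
    wins); if one tag list is a prefix of the other, the longer wins;
    on a complete tie, fall back to the special-case rating."""
    for xv, yv in zip(x_tags, y_tags):
        if xv != yv:
            return xv > yv
    if len(x_tags) != len(y_tags):
        return len(x_tags) > len(y_tags)
    return x_rating >= y_rating
```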
X
X
X(defun conflict-set nil
X  (prog (cnts cs p z best)
X        (setq cnts nil)
X        (setq cs *conflict-set*)
X   l1  (and (atom cs) (go l2))
X        (setq p (caaar cs))
X        (setq cs (cdr cs))
X        (setq z (assoc p cnts :test #'eq))
X        (cond ((null z) (setq cnts (cons (cons p 1.) cnts)))
X              (t (rplacd z (1+ (cdr z)))))
X        (go l1)
X   l2  (cond ((atom cnts)
X               (setq best (best-of *conflict-set*))
X               (terpri)
X               (return (list (caar best) 'dominates))))
X        (terpri)
X        (princ (caar cnts))
X        (cond ((> (cdar cnts) 1.)
X               (princ '|       (|)
X               (princ (cdar cnts))
X               (princ '| occurrences)|)))
X        (setq cnts (cdr cnts))
X        (go l2)))
X
X
X
X;;; WM maintaining functions
X;
X; The order of operations in the following two functions is critical.
X; add-to-wm order: (1) change wm (2) record change (3) match
X; remove-from-wm order: (1) record change (2) match (3) change wm
X; (back will not restore state properly unless wm changes are recorded
X; before the cs changes that they cause)  (match will give errors if
X; the thing matched is not in wm at the time)
X
X
X(defun add-to-wm (wme override)
X  (prog (fa z part timetag port)
X    (setq *critical* t)
X    (setq *current-wm* (1+ *current-wm*))
X    (and (> *current-wm* *max-wm*) (setq *max-wm* *current-wm*))
X    (setq *action-count* (1+ *action-count*))
X    (setq fa (wm-hash wme))
X    (or (member fa *wmpart-list* :test #'eq)
X        (setq *wmpart-list* (cons fa *wmpart-list*)))
X    (setq part (get fa 'wmpart*))
X    (cond (override (setq timetag override))
X          (t (setq timetag *action-count*)))
X    (setq z (cons wme timetag))
X    (putprop fa (cons z part) 'wmpart*)
X    (record-change '=>wm *action-count* wme)
X    (match 'new wme)
X    (setq *critical* nil)
X    (cond ((and *in-rhs* *wtrace*)
X           (setq port (trace-file))
X           (terpri port)
X           (princ '|=>wm: | port)
X           (ppelm wme port)))
X    (and *in-rhs* *mtrace* (setq *madeby*
X                                 (cons (cons wme *p-name*) *madeby*)))))
X
X; remove-from-wm uses eq, not equal, to determine if wme is present
X
X(defun remove-from-wm (wme)
X  (prog (fa z part timetag port)
X    (setq fa (wm-hash wme))
X    (setq part (get fa 'wmpart*))
X    (setq z (assoc wme part :test #'eq))
X    (or z (return nil))
X    (setq timetag (cdr z))
X    (cond ((and *wtrace* *in-rhs*)
X           (setq port (trace-file))
X           (terpri port)
X           (princ '|<=wm: | port)
X           (ppelm wme port)))
X    (setq *action-count* (1+ *action-count*))
X    (setq *critical* t)
X    (setq *current-wm* (1- *current-wm*))
X    (record-change '<=wm timetag wme)
X    (match nil wme)
X    (putprop fa (delete z part :test #'eq) 'wmpart* )
X    (setq *critical* nil)))
X
X; mapwm maps down the elements of wm, applying fn to each element
X; each element is of form (datum . creation-time)
X
X(defun mapwm (fn)
X  (prog (wmpl part)
X        (setq wmpl *wmpart-list*)
X   lab1 (cond ((atom wmpl) (return nil)))
X        (setq part (get (car wmpl) 'wmpart*))
X        (setq wmpl (cdr wmpl))
X        (mapc fn part)
X        (go lab1)))
X
X(defmacro wm (&rest a)
X  `(progn
X     (mapc (function (lambda (z) (terpri) (ppelm z t)))
X           (get-wm ',a))
X     nil))
X
X(defun get-wm (z)
X  (setq *wm-filter* z)
X  (setq *wm* nil)
X  (mapwm (function get-wm2))
X  (prog2 nil *wm* (setq *wm* nil)))
X
X(defun get-wm2 (elem)
X (cond ((or (null *wm-filter*) (member (cdr elem) *wm-filter*))
X       (setq *wm* (cons (car elem) *wm*)))))
X
X(defun wm-hash (x)
X  (cond ((not x) '<default>)
X        ((not (car x)) (wm-hash (cdr x)))
X        ((symbolp (car x)) (car x))
X        (t (wm-hash (cdr x)))))
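wm-hash picks the partition key under which a working-memory element is stored: the element's first non-nil symbolic field, or the default bucket when there is none. A minimal Python sketch of that rule (symbols modeled as strings, nil as None; the modeling choices are ours):

```python
def wm_hash(wme):
    """Partition key for a working-memory element: the first non-nil
    symbolic field, or '<default>' if there is none."""
    for field in wme:
        if field is not None and isinstance(field, str):
            return field          # first symbol wins
    return '<default>'            # nothing symbolic in the element
```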
X
X(defun creation-time (wme)
X  (cdr (assoc wme (get (wm-hash wme) 'wmpart*) :test #'eq)))
X
X(defun rehearse nil
X  (prog nil
X    (setq *old-wm* nil)
X    (mapwm (function refresh-collect))
X    (mapc (function refresh-del) *old-wm*)
X    (mapc (function refresh-add) *old-wm*)
X    (setq *old-wm* nil)))
X
X(defun refresh-collect (x) (setq *old-wm* (cons x *old-wm*)))
X
X(defun refresh-del (x) (remove-from-wm (car x)))
X
X(defun refresh-add (x) (add-to-wm (car x) (cdr x)))
X
X(defun trace-file ()
X  (prog (port)
X        (setq port t)
X       (cond (*trace-file*
X              (setq port ($ofile *trace-file*))
X              (cond ((null port)
X                     (%warn '|trace: file has been closed| *trace-file*)
X                     (setq port t)))))
X        (return port)))
X
X
X;;; Basic functions for RHS evaluation
X
X(defun eval-rhs (pname data)
X  (prog (node port)
X    (cond (*ptrace*
X           (setq port (trace-file))
X           (terpri port)
X           (princ *cycle-count* port)
X           (princ '|. | port)
X           (princ pname port)
X           (time-tag-print data port)))
X    (setq *data-matched* data)
X    (setq *p-name* pname)
X    (setq *last* nil)
X    (setq node (get pname 'topnode))
X    (init-var-mem (var-part node))
X    (init-ce-var-mem (ce-var-part node))
X    (begin-record pname data)
X    (setq *in-rhs* t)
X    (eval (rhs-part node))
X    (setq *in-rhs* nil)
X    (end-record)))
X
X(defun time-tag-print (data port)
X  (cond ((not (null data))
X         (time-tag-print (cdr data) port)
X         (princ '| | port)
X         (princ (creation-time (car data)) port))))
X
X(defun init-var-mem (vlist)
X  (prog (v ind r)
X        (setq *variable-memory* nil)
X   top  (and (atom vlist) (return nil))
X        (setq v (car vlist))
X        (setq ind (cadr vlist))
X        (setq vlist (cddr vlist))
X        (setq r (gelm *data-matched* ind))
X        (setq *variable-memory* (cons (cons v r) *variable-memory*))
X        (go top)))
X

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Jan 31 00:56:23 1987
Date: Sat, 31 Jan 87 00:55:51 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #27
Status: R


AIList Digest            Friday, 30 Jan 1987       Volume 5 : Issue 27

Today's Topics:
  Code - AI Expert Magazine Sources (Part 8 of 22)

----------------------------------------------------------------------

Date: 19 Jan 87 03:36:40 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: AI Expert Magazine Sources (Part 8 of 22)

X(defun init-ce-var-mem (vlist)
X  (prog (v ind r)
X        (setq *ce-variable-memory* nil)
X   top  (and (atom vlist) (return nil))
X        (setq v (car vlist))
X        (setq ind (cadr vlist))
X        (setq vlist (cddr vlist))
X        (setq r (ce-gelm *data-matched* ind))
X        (setq *ce-variable-memory*
X              (cons (cons v r) *ce-variable-memory*))
X        (go top)))
X
X(defun make-ce-var-bind (var elem)
X  (setq *ce-variable-memory*
X        (cons (cons var elem) *ce-variable-memory*)))
X
X(defun make-var-bind (var elem)
X  (setq *variable-memory* (cons (cons var elem) *variable-memory*)))
X
X(defun $varbind (x)
X  (prog (r)
X       (and (not *in-rhs*) (return x))
X        (setq r (assoc x *variable-memory* :test #'eq))
X        (cond (r (return (cdr r)))
X              (t (return x)))))
X
X(defun get-ce-var-bind (x)
X  (prog (r)
X        (cond ((numberp x) (return (get-num-ce x))))
X        (setq r (assoc x *ce-variable-memory* :test #'eq))
X        (cond (r (return (cdr r)))
X              (t (return nil)))))
X
X(defun get-num-ce (x)
X  (prog (r l d)
X        (setq r *data-matched*)
X        (setq l (length r))
X        (setq d (- l x))
X        (and (> 0. d) (return nil))
X   la   (cond ((null r) (return nil))
X              ((> 1. d) (return (car r))))
X        (setq d (1- d))
X        (setq r (cdr r))
X        (go la)))
X
X
X(defun build-collect (z)
X  (prog (r)
X   la   (and (atom z) (return nil))
X        (setq r (car z))
X        (setq z (cdr z))
X        (cond ((and r (listp r))
X               ($value '\()
X               (build-collect r)
X               ($value '\)))
X              ((eq r '\\) ($change (car z)) (setq z (cdr z)))
X              (t ($value r)))
X        (go la)))
X
X(defun unflat (x) (setq *rest* x) (unflat*))
X
X(defun unflat* nil
X  (prog (c)
X        (cond ((atom *rest*) (return nil)))
X        (setq c (car *rest*))
X        (setq *rest* (cdr *rest*))
X        (cond ((eq c '\() (return (cons (unflat*) (unflat*))))
X              ((eq c '\)) (return nil))
X              (t (return (cons c (unflat*)))))))
X
X
X(defun $change (x)
X  (prog nil
X        (cond ((and x (listp x)) (eval-function x)) ;modified to check for nil
X              (t ($value ($varbind x))))))
X
X(defun eval-args (z)
X  (prog (r)
X        (rhs-tab 1.)
X   la   (and (atom z) (return nil))
X        (setq r (car z))
X        (setq z (cdr z))
X        (cond ((eq r #\^)
X               (rhs-tab (car z))
X               (setq r (cadr z))
X               (setq z (cddr z))))
X        (cond ((eq r '//) ($value (car z)) (setq z (cdr z)))
X              (t ($change r)))
X        (go la)))
X
X
X(defun eval-function (form)
X  (cond ((not *in-rhs*)
X        (%warn '|functions cannot be used at top level| (car form)))
X       (t (eval form))))
X
X
X;;; Functions to manipulate the result array
X
X
X(defun $reset nil
X  (setq *max-index* 0)
X  (setq *next-index* 1))
X
X; rhs-tab implements the tab ('^') function in the rhs.  it has
X; four responsibilities:
X;      - to move the array pointers
X;      - to watch for tabbing off the left end of the array
X;        (ie, to watch for pointers less than 1)
X;      - to watch for tabbing off the right end of the array
X;      - to write nil in all the slots that are skipped
X; the last is necessary if the result array is not to be cleared
X; after each use; if rhs-tab did not do this, $reset
X; would be much slower.
X
X(defun rhs-tab (z) ($tab ($varbind z)))
X
X(defun $tab (z)
X  (prog (edge next)
X        (setq next ($litbind z))
X        (and (floatp next) (setq next (round next)))
X        (cond ((or (not (numberp next))
X                  (> next *size-result-array*)
X                  (> 1. next))
X               (%warn '|illegal index after ^| next)
X               (return *next-index*)))
X        (setq edge (- next 1.))
X        (cond ((> *max-index* edge) (go ok)))
X   clear (cond ((== *max-index* edge) (go ok)))
X        (putvector *result-array* edge nil)
X        (setq edge (1- edge))
X        (go clear)
X   ok   (setq *next-index* next)
X        (return next)))
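The comment block before rhs-tab lists its responsibilities: move the pointer, range-check it, and nil-fill skipped slots so the result array need not be cleared between uses. A sketch of that logic under our own modeling (Python dict as the 1-based result array; names are ours, and the Lisp version warns rather than returning None on a bad index):

```python
def tab(result, max_index, target, size):
    """Move the result-array write pointer to `target`, nil-filling
    every slot between the current high-water mark and the target so
    stale values from a previous action cannot leak through."""
    if not isinstance(target, int) or not (1 <= target <= size):
        return None               # Lisp warns |illegal index after ^|
    for slot in range(max_index + 1, target):
        result[slot] = None       # clear skipped slots
    return target                 # the new *next-index*
```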
X
X(defun $value (v)
X  (cond ((> *next-index* *size-result-array*)
X         (%warn '|index too large| *next-index*))
X        (t
X         (and (> *next-index* *max-index*)
X              (setq *max-index* *next-index*))
X         (putvector *result-array* *next-index* v)
X         (setq *next-index* (1+ *next-index*)))))
X
X(defun use-result-array nil
X  (prog (k r)
X        (setq k *max-index*)
X        (setq r nil)
X   top  (and (== k 0.) (return r))
X        (setq r (cons (getvector *result-array* k) r))
X        (setq k (1- k))
X        (go top)))
X
X(defun $assert nil
X  (setq *last* (use-result-array))
X  (add-to-wm *last* nil))
X
X(defun $parametercount nil *max-index*)
X
X(defun $parameter (k)
X  (cond ((or (not (numberp k)) (> k *size-result-array*) (< k 1.))
X        (%warn '|illegal parameter number | k)
X         nil)
X        ((> k *max-index*) nil)
X       (t (getvector *result-array* k))))
X
X
X;;; RHS actions
X
X
X(defmacro make(&rest z)
X  `(prog nil
X        ($reset)
X        (eval-args ',z)
X        ($assert)))
X
X(defmacro modify (&rest z)
X  `(prog (old args)
X        (setq args ',z)
X       (cond ((not *in-rhs*)
X              (%warn '|cannot be called at top level| 'modify)
X              (return nil)))
X        (setq old (get-ce-var-bind (car args)))
X        (cond ((null old)
X               (%warn '|modify: first argument must be an element variable|
X                        (car args))
X               (return nil)))
X        (remove-from-wm old)
X        (setq args (cdr args))
X        ($reset)
X   copy (and (atom old) (go fin))
X        ($change (car old))
X        (setq old (cdr old))
X        (go copy)
X   fin  (eval-args args)
X        ($assert)))
X
X(defmacro bind (&rest z)
X  `(prog (val)
X       (cond ((not *in-rhs*)
X              (%warn '|cannot be called at top level| 'bind)
X              (return nil)))
X    (cond ((< (length ',z) 1.)
X           (%warn '|bind: wrong number of arguments to| ',z)
X           (return nil))
X          ((not (symbolp (car ',z)))
X           (%warn '|bind: illegal argument| (car ',z))
X           (return nil))
X          ((= (length ',z) 1.) (setq val (gensym)))
X          (t ($reset)
X             (eval-args (cdr ',z))
X             (setq val ($parameter 1.))))
X    (make-var-bind (car ',z) val)))
X
X(defmacro cbind (&rest z)
X  `(cond ((not *in-rhs*)
X        (%warn '|cannot be called at top level| 'cbind))
X       ((not (= (length ',z) 1.))
X        (%warn '|cbind: wrong number of arguments| ',z))
X       ((not (symbolp (car ',z)))
X        (%warn '|cbind: illegal argument| (car ',z)))
X       ((null *last*)
X        (%warn '|cbind: nothing added yet| (car ',z)))
X       (t (make-ce-var-bind (car ',z) *last*))))
X
X(defmacro oremove (&rest z)
X  `(prog (old args)
X        (setq args ',z)
X       (and (not *in-rhs*)(return (top-level-remove args)))
X   top  (and (atom args) (return nil))
X        (setq old (get-ce-var-bind (car args)))
X        (cond ((null old)
X               (%warn '|remove: argument not an element variable| (car args))
X               (return nil)))
X        (remove-from-wm old)
X        (setq args (cdr args))
X        (go top)))
X
X(defmacro ocall (&rest z)
X  `(prog (f)
X       (setq f (car ',z))
X        ($reset)
X        (eval-args (cdr ',z))
X        (funcall f)))
X
X(defmacro owrite (&rest z)
X `(prog (port max k x needspace)
X       (cond ((not *in-rhs*)
X              (%warn '|cannot be called at top level| 'write)
X              (return nil)))
X       ($reset)
X       (eval-args ',z)
X       (setq k 1.)
X       (setq max ($parametercount))
X       (cond ((< max 1.)
X              (%warn '|write: nothing to print| ',z)
X              (return nil)))
X       (setq port (default-write-file))
X       (setq x ($parameter 1.))
X       (cond ((and (symbolp x) ($ofile x))
X              (setq port ($ofile x))
X              (setq k 2.)))
X        (setq needspace t)
X   la   (and (> k max) (return nil))
X       (setq x ($parameter k))
X       (cond ((eq x '|=== C R L F ===|)
X              (setq needspace nil)
X               (terpri port))
X              ((eq x '|=== R J U S T ===|)
X              (setq k (+ 2 k))
X              (do-rjust ($parameter (1- k)) ($parameter k) port))
X             ((eq x '|=== T A B T O ===|)
X              (setq needspace nil)
X              (setq k (1+ k))
X              (do-tabto ($parameter k) port))
X             (t
X              (and needspace (princ '| | port))
X              (setq needspace t)
X              (princ x port)))
X       (setq k (1+ k))
X       (go la)))
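X
X; The marker atoms above are pushed by the crlf, tabto, and rjust RHS
X; functions and consumed here.  A hypothetical usage sketch (not from
X; the original listing):
X;
X;   (write (crlf) |count is| <n> (tabto 20.) <label>)
X;
X; prints a newline, then "count is" and the binding of <n>, then moves
X; to column 20 before printing the binding of <label>.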
X
X(defun default-write-file ()
X  (prog (port)
X       (setq port t)
X       (cond (*write-file*
X              (setq port ($ofile *write-file*))
X              (cond ((null port)
X                     (%warn '|write: file has been closed| *write-file*)
X                     (setq port t)))))
X        (return port)))
X
X
X(defun do-rjust (width value port)
X  (prog (size)
X       (cond ((eq value '|=== T A B T O ===|)
X              (%warn '|rjust cannot precede this function| 'tabto)
X               (return nil))
X             ((eq value '|=== C R L F ===|)
X              (%warn '|rjust cannot precede this function| 'crlf)
X               (return nil))
X             ((eq value '|=== R J U S T ===|)
X              (%warn '|rjust cannot precede this function| 'rjust)
X               (return nil)))
X        (setq size (length (princ-to-string value )))
X       (cond ((> size width)
X              (princ '| | port)
X              (princ value port)
X              (return nil)))
X        (do ((k (- width size) (1- k))) ((not (> k 0))) (princ '| | port))
X       (princ value port)))
X
X(defun do-tabto (col port)
X  ;; ~vT takes the column number from the next format argument, so the
X  ;; EVAL and string consing of the original are unnecessary
X  (format port "~vT" col))
X
X;  (prog (pos)
X;      (setq pos (1+ (nwritn port)))
X;      (cond ((> pos col)
X;             (terpri port)
X;             (setq pos 1)))
X;      (do k (- col pos) (1- k) (not (> k 0)) (princ '| | port))
X;      (return nil)))
X
X
X(defun halt nil
X  (cond ((not *in-rhs*)
X        (%warn '|cannot be called at top level| 'halt))
X       (t (setq *halt-flag* t))))
X
X(defmacro build (&rest z)
X  `(prog (r)
X       (cond ((not *in-rhs*)
X              (%warn '|cannot be called at top level| 'build)
X              (return nil)))
X        ($reset)
X        (build-collect ',z)
X        (setq r (unflat (use-result-array)))
X        (and *build-trace* (funcall *build-trace* r))
X        (compile-production (car r) (cdr r))))
X
X(defun infile(file)
X   (open file :direction :input))
X
X(defun outfile(file)
X   (open file :direction :output))
X
X(defmacro openfile (&rest z)
X  `(prog (file mode id)
X       ($reset)
X       (eval-args ',z)
X       (cond ((not (equal ($parametercount) 3.))
X              (%warn '|openfile: wrong number of arguments| ',z)
X              (return nil)))
X       (setq id ($parameter 1))
X       (setq file ($parameter 2))
X       (setq mode ($parameter 3))
X       (cond ((not (symbolp id))
X              (%warn '|openfile: file id must be a symbolic atom| id)
X              (return nil))
X              ((null id)
X               (%warn '|openfile: 'nil' is reserved for the terminal| nil)
X               (return nil))
X             ((or ($ifile id)($ofile id))
X              (%warn '|openfile: name already in use| id)
X              (return nil)))
X       (cond ((eq mode 'in) (putprop id  (infile file) 'inputfile))
X             ((eq mode 'out) (putprop id  (outfile file) 'outputfile))
X             (t (%warn '|openfile: illegal mode| mode)
X                (return nil)))
X       (return nil)))
X
X(defun $ifile (x)
X  (cond ((and x (symbolp x)) (get x 'inputfile))
X        (t *standard-input*)))
X
X(defun $ofile (x)
X  (cond ((and x (symbolp x)) (get x 'outputfile))
X        (t *standard-output*)))
X
X
X(defmacro closefile (&rest z)
X  `(progn
X    ($reset)
X    (eval-args ',z)
X    (mapc (function closefile2) (use-result-array))))
X
X(defun closefile2 (file)
X  (prog (port)
X       (cond ((not (symbolp file))
X              (%warn '|closefile: illegal file identifier| file))
X             ((setq port ($ifile file))
X              (close port)
X              (remprop file 'inputfile))
X             ((setq port ($ofile file))
X              (close port)
X              (remprop file 'outputfile)))
X       (return nil)))
X
X(defmacro default (&rest z)
X  `(prog (file use)
X       ($reset)
X       (eval-args ',z)
X       (cond ((not (equal ($parametercount) 2.))
X              (%warn '|default: wrong number of arguments| ',z)
X              (return nil)))
X       (setq file ($parameter 1))
X       (setq use ($parameter 2))
X       (cond ((not (symbolp file))
X              (%warn '|default: illegal file identifier| file)
X              (return nil))
X             ((not (member use '(write accept trace)))
X              (%warn '|default: illegal use for a file| use)
X              (return nil))
X             ((and (member use '(write trace))
X                   (not (null file))
X                   (not ($ofile file)))
X              (%warn '|default: file has not been opened for output| file)
X              (return nil))
X             ((and (eq use 'accept)
X                   (not (null file))
X                   (not ($ifile file)))
X              (%warn '|default: file has not been opened for input| file)
X              (return nil))
X             ((eq use 'write) (setq *write-file* file))
X             ((eq use 'accept) (setq *accept-file* file))
X             ((eq use 'trace) (setq *trace-file* file)))
X       (return nil)))
X
X
X
X;;; RHS Functions
X
X(defmacro accept (&rest z)
X  `(prog (port arg)
X       (cond ((> (length ',z) 1.)
X              (%warn '|accept: wrong number of arguments| ',z)
X              (return nil)))
X       (setq port t)
X       (cond (*accept-file*
X              (setq port ($ifile *accept-file*))
X              (cond ((null port)
X                     (%warn '|accept: file has been closed| *accept-file*)
X                     (return nil)))))
X       (cond ((= (length ',z) 1)
X              (setq arg ($varbind (car ',z)))
X              (cond ((not (symbolp arg))
X                     (%warn '|accept: illegal file name| arg)
X                     (return nil)))
X              (setq port ($ifile arg))
X              (cond ((null port)
X                     (%warn '|accept: file not open for input| arg)
X                     (return nil)))))
X        (cond ((= (tyipeek port) -1.)
X              ($value 'end-of-file)
X              (return nil)))
X       (flat-value (read port))))
X
X(defun flat-value (x)
X  (cond ((atom x) ($value x))
X        (t (mapc (function flat-value) x))))
X
X(defun span-chars (x prt)
X  ;; ch is a character code here (tyipeek returns -1. at end of file),
X  ;; so the default EQL test of MEMBER is the right comparison
X  (do ((ch (tyipeek prt) (tyipeek prt))) ((not (member ch x)))
X      (read-char prt)))
X
X(defmacro acceptline (&rest z)
X  `(prog ( def arg port)
X       (setq port t)
X       (setq def ',z)
X       (cond (*accept-file*
X              (setq port ($ifile *accept-file*))
X              (cond ((null port)
X                     (%warn '|acceptline: file has been closed|
X                            *accept-file*)
X                     (return nil)))))
X       (cond ((> (length def) 0)
X              (setq arg ($varbind (car def)))
X              (cond ((and (symbolp arg) ($ifile arg))
X                     (setq port ($ifile arg))
X                     (setq def (cdr def))))))
X        (span-chars '(9. 41.) port)
X       (cond ((member (tyipeek port) '(-1. 10.))
X              (mapc (function $change) def)
X              (return nil)))
X   lp1 (flat-value (read port))
X        (span-chars '(9. 41.) port)
X       (cond ((not (member (tyipeek port) '(-1. 10.))) (go lp1)))))
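X
X; A hypothetical usage sketch (not from the original listing): on the
X; RHS, (acceptline no-answer) reads the next input line and pushes each
X; field as a result element; if the line is empty or at end of file,
X; the default atom no-answer is used instead.  A first argument bound
X; to an open input file id redirects the read to that file.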
X
X(defmacro substr (&rest l)
X  `(prog (k elm start end)
X        (cond ((not (= (length ',l) 3.))
X               (%warn '|substr: wrong number of arguments| ',l)
X               (return nil)))
X        (setq elm (get-ce-var-bind (car ',l)))
X        (cond ((null elm)
X               (%warn '|first argument to substr must be a ce var|
X                        ',l)
X               (return nil)))
X        (setq start ($varbind (cadr ',l)))
X       (setq start ($litbind start))
X        (cond ((not (numberp start))
X               (%warn '|second argument to substr must be a number|
X                        ',l)
X               (return nil)))
X       ;if a variable is bound to INF, the following
X       ;will get the binding and treat it as INF is
X       ;always treated.  that may not be good
X        (setq end ($varbind (caddr ',l)))
X        (cond ((eq end 'inf) (setq end (length elm))))
X       (setq end ($litbind end))
X        (cond ((not (numberp end))
X               (%warn '|third argument to substr must be a number|
X                        ',l)
X               (return nil)))
X        ;this loop does not check for the end of elm
X        ;instead it relies on cdr of nil being nil
X        ;this may not work in all versions of lisp
X        (setq k 1.)
X   la   (cond ((> k end) (return nil))
X              ((not (< k start)) ($value (car elm))))
X        (setq elm (cdr elm))
X        (setq k (1+ k))
X        (go la)))
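X
X; A hypothetical usage sketch (not from the original listing): if <c>
X; is an element variable bound to a working-memory element, then
X;
X;   (substr <c> 2 inf)
X;
X; pushes fields 2 through the last field of that element; inf always
X; denotes the length of the element, as the comment above notes.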
X
X
X(defmacro compute (&rest z) `($value (ari ',z)))
X
X; arith is the obsolete form of compute
X(defmacro arith (&rest z) `($value (ari ',z)))
X
X(defun ari (x)
X  (cond ((atom x)
X         (%warn '|bad syntax in arithmetic expression | x)
X        0.)
X        ((atom (cdr x)) (ari-unit (car x)))
X        ((eq (cadr x) '+)
X         (+ (ari-unit (car x)) (ari (cddr x))))
X        ((eq (cadr x) '-)
X         (- (ari-unit (car x)) (ari (cddr x))))
X        ((eq (cadr x) '*)
X         (* (ari-unit (car x)) (ari (cddr x))))
X        ((eq (cadr x) '//)
X         (/ (ari-unit (car x)) (ari (cddr x))))
X        ((eq (cadr x) '\\)
X         (mod (round (ari-unit (car x))) (round (ari (cddr x)))))
X        (t (%warn '|bad syntax in arithmetic expression | x) 0.)))
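X
X; Note that ari parses its infix argument strictly right-to-left with
X; no operator precedence, so for example
X;
X;   (compute 2 - 3 + 4)
X;
X; evaluates as 2 - (3 + 4) = -5, not as (2 - 3) + 4 = 3.  This is
X; standard OPS5 compute behavior, not a quirk of this port.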
X
X(defun ari-unit (a)
X  (prog (r)
X        (cond ((listp a) (setq r (ari a)))
X              (t (setq r ($varbind a))))
X        (cond ((not (numberp r))
X               (%warn '|bad value in arithmetic expression| a)
X               (return 0.))
X              (t (return r)))))
X
X(defun genatom nil ($value (gensym)))
X
X(defmacro litval (&rest z)
X  `(prog (r)
X       (cond ((not (= (length ',z) 1.))
X              (%warn '|litval: wrong number of arguments| ',z)
X              ($value 0)
X              (return nil))
X             ((numberp (car ',z)) ($value (car ',z)) (return nil)))
X       (setq r ($litbind ($varbind (car ',z))))
X       (cond ((numberp r) ($value r) (return nil)))
X       (%warn '|litval: argument has no literal binding| (car ',z))
X       ($value 0)))
X
X
X(defmacro rjust (&rest z)
X  `(prog (val)
X        (cond ((not (= (length ',z) 1.))
X              (%warn '|rjust: wrong number of arguments| ',z)
X               (return nil)))
X        (setq val ($varbind (car ',z)))
X       (cond ((or (not (numberp val)) (< val 1.) (> val 127.))
X              (%warn '|rjust: illegal value for field width| val)
X              (return nil)))
X        ($value '|=== R J U S T ===|)
X       ($value val)))
X
X
X(defmacro crlf ()
X  ;; like tabto and rjust, this must expand into a call to $value,
X  ;; not invoke it at macroexpansion time
X  `($value '|=== C R L F ===|))
X
X(defmacro tabto (&rest z)
X  `(prog (val)
X        (cond ((not (= (length ',z) 1.))
X              (%warn '|tabto: wrong number of arguments| ',z)
X              (return nil)))
X        (setq val ($varbind (car ',z)))
X       (cond ((or (not (numberp val)) (< val 1.) (> val 127.))
X              (%warn '|tabto: illegal column number| ',z)
X              (return nil)))
X        ($value '|=== T A B T O ===|)
X       ($value val)))
X
X
X
X;;; Printing WM
X
X(defmacro ppwm (&rest z)
X  `(prog (next a avlist)
X        (setq avlist ',z)
X        (setq *filters* nil)
X        (setq next 1.)
X   l   (and (atom avlist) (go print))
X        (setq a (car avlist))
X        (setq avlist (cdr avlist))
X        (cond ((eq a #\^)
X               (setq next (car avlist))
X               (setq avlist (cdr avlist))
X               (setq next ($litbind next))
X               (and (floatp next) (setq next (round next)))
X               (cond ((or (not (numberp next))
X                          (> next *size-result-array*)
X                          (> 1. next))
X                      (%warn '|illegal index after ^| next)
X                      (return nil))))
X              ((variablep a)
X               (%warn '|ppwm does not take variables| a)
X               (return nil))
X              (t (setq *filters* (cons next (cons a *filters*)))
X                 (setq next (1+ next))))
X        (go l)
X   print (mapwm (function ppwm2))
X        (terpri)
X        (return nil)))
X
X(defun ppwm2 (elm-tag)
X  (cond ((filter (car elm-tag)) (terpri) (ppelm (car elm-tag) t))))
X
X(defun filter (elm)
X  (prog (fl indx val)
X        (setq fl *filters*)
X   top  (and (atom fl) (return t))
X        (setq indx (car fl))
X        (setq val (cadr fl))
X        (setq fl (cddr fl))
X        (and (ident (nth (1- indx) elm) val) (go top))
X        (return nil)))
X
X(defun ident (x y)
X  (cond ((eq x y) t)
X        ((not (numberp x)) nil)
X        ((not (numberp y)) nil)
X        ((=alg x y) t)
X        (t nil)))
X
X; the new ppelm is designed especially to handle literalize format
X; however, it will do as well as the old ppelm on other formats
X

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Feb  3 00:43:13 1987
Date: Tue, 3 Feb 87 00:43:05 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #28
Status: R


AIList Digest             Monday, 2 Feb 1987       Volume 5 : Issue 28

Today's Topics:
  Queries- OPS5 for 4.2BSD & Tutorial References & Common Lisp Code,
  Policy - Revised Policy on AI Expert Sources,
  Description - Telesophy Project,
  Seminars - Knowledge-Based Reasoning Toolkit (CMU) &
    Understanding How Devices Work (CMU),
  Conference - Conceptual Information Processing

----------------------------------------------------------------------

Date: 29 Jan 87 20:02:25 GMT
From: Bill Roberts <bill%ncar.csnet@RELAY.CS.NET>
Subject: OPS5 for 4.2BSD?

Does anyone know of a public domain version of OPS5 (with Lisp as its
implementation language) that runs under UNIX 4.2/4.3BSD?  We have 4.2BSD with
Franz Lisp and I would like to port some "stuff" from my Mac over to the VAX.
Thanks in advance for information on this.

                                                        Bill Roberts
                                                        NCAR/HAO
                                                        Boulder, CO
                                                        UUCP:...!hao!bill

------------------------------

Date: 29 Jan 87 19:52:14 GMT
From: atux01!jlc@rutgers.rutgers.edu  (J. Collymore)
Subject: Need References to VERY BASIC Concepts of AI & Preferred
         Comp. Langs.


I am interested in being pointed in the right direction to some VERY BASIC
concepts of how AI is used, what has gone before, which computer languages are
used for AI development, which aren't and why, and the basic concepts of the
mathematical models used for simulating cognitive judgements and appropriate
responses (e.g. I feel bad vs. I don't feel too bad).

If you know of some good books in this area, please send me e-mail.  If
anyone else is interested, I'll post my responses.

Thanks.


                                                Jim Collymore

------------------------------

Date: Sat 31 Jan 87 18:31:04-EST
From: John C. Akbari <AKBARI@CS.COLUMBIA.EDU>
Subject: common lisp code

anyone have common lisp or zetalisp versions of any (or all) of the
code from the yale group, a la SAM, PAM, CA, etc... from
  Schank,R. & Riesbeck,C.  _Inside Computer Understanding: Five programs
Plus Miniatures_.  Erlbaum 1981.

also, a cl or zetalisp implementation of a reasonable ATN implementation
would be appreciated.

thanks in advance.

john c akbari
akbari@cs.columbia.edu

------------------------------

Date: Fri 30 Jan 87 08:17:26-PST
From: Stephen Barnard <BARNARD@SRI-IU.ARPA>
Subject: enough already

These listings really are outrageous.  Is this a plot to make
philosophical tracts seem amusing?

------------------------------

Date: 31 Jan 87  16:40 EST (Sat)
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Code on AIList


>The bulk of this code mailing does bother me, but there seems to be
>at least as much interest in it as in the seminar notices, bibliographies,
>and philosophy discussions.  AIList reaches thousands of students, and
>a fair proportion are no doubt interested in examining the code.  The
>initial offer of the code drew only positive feedback, so far as I
>know.
> ...
>
>Suggestions are welcome.
>                                        -- Ken Laws

OK, here's one - resurrect the idea of splitting the AIList.
One list for code and philosophy, another for seminar notices and real
discussion.

Ironically, the people who like the endless discussions about consciousness
are probably the same people who would be interested in this vast amount of
code.

-Tom Fawcett

------------------------------

Date: 28 Jan 87 18:42:31 GMT
From: pyramid!amdahl!meccts!meccsd!mecc!sewilco@decwrl.dec.com  (Scot
      E. Wilcoxon)
Subject: Re: posting of AI Expert magazine sources

Unfortunately, putting those interesting (to me) sources in comp.ai required
that I save them manually.  The source groups are archived automatically here
and at many other sites.  Scattering sources makes them harder to keep.

If these sources are going to be posted regularly, a comp.ai.sources group
would help with the problem.
--
Scot E. Wilcoxon   Minn Ed Comp Corp  {quest,dayton,meccts}!mecc!sewilco
(612)481-3507           sewilco@MECC.COM       ihnp4!meccts!mecc!sewilco

  "Who's that lurking over there?  Is that Merv Griffin?"

------------------------------

Date: Sun 1 Feb 87 17:50:48-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Revised Code Policy

OK, I give up.  I've only received about ten comments, and the
negative ones are balanced by ones like this:

  By the way, discussions of consciousness, code, lengthy
  rebuttals, bibliographies, etc.: I love it all.

but the volume of the code messages has started to offend even
my sensibilities.  I'll halt distribution through the Arpanet
mail channels unless I get too many requests for copies of the
full text.  Arpanetters who still want the code can FTP the
files <AILIST>AIE*.TXT from SRI-STRIPE (using ANONYMOUS login)
where * ranges from 1 through 22.  (1 through 8 have been sent.)
Others who want the original nine 50K-char message files can send a
request to AIList-Request@SRI-STRIPE.ARPA.  Try not to make multiple
requests from one site, although I realize that there's no good
coordination mechanism.

The lesson here seems to be that the AIList is a discussion
list rather than a distribution list.  The code met my previous
criteria for inclusion -- it was a noncommercial submission,
relevant to AI, and of interest to a reasonable proportion of
the list membership.  I had thought that the bulk was acceptable
for a one-shot event; this seems to have been the case on the
Usenet half of AIList, but not on the Arpanet half.  There really
should be separate Arpanet lists for discussion and for seminar
and conference notices, bibliographies, code, and the like.  (I'm
still waiting for volunteers ...)

I apologize for the awkwardness of this resolution.  Having
started to provide the material, I find myself in the situation
of the man with the donkey who learned he couldn't please
everyone.  There won't be an easy remedy for these situations
until someone develops netwide fileservers and FTP, or at
least a coordinated list system that allows people to register
their interest profiles without human intervention.

I should also point out that Usenet has its comp.sources distribution,
but that the Arpanet lacks any broadcast channel for sharing code.
Perhaps it shouldn't have one, given the current U.S. paranoia about
technology export, but there are definite advantages for shared
subroutine libraries over having each student, researcher, or
engineer reinvent from scratch.  This exposure to "real code" may
also have had the beneficial effect of popping some illusions about
the nature of expert systems code, permitting the advantages of other
approaches (C, ADA, software engineering, sharable libraries, etc.)
to compete against the AI mystique.

                                        -- Ken Laws

------------------------------

Date: 30 Jan 87 00:47:31 GMT
From: imagen!turner@ucbvax.Berkeley.EDU  (D'arc Angel)
Subject: posting of AI expert sources


When I offered to post the AI Expert source listings, I received a
few weeks of "please post" or "please email" or both; as a result I am
unsure who asked for mailings and got the subsequent postings and
who did not receive them.  So... could the people who did not get
them off of comp.ai, or who were missing parts, please send me email so
I can make sure everybody got what they wanted.

Also, due to my unfamiliarity with the IBM PC format (that's where they
came from), I included trailing CR's (^M) in the shar file; this
caused unshar to complain about missing control codes.  To the best
of my knowledge this had no effect on the sources, and future
postings will have the CR's stripped.


C'est la vie, C'est le guerre, C'est la pomme de terre
Mail:   Imagen Corp. 2650 San Tomas Expressway Santa Clara, CA 95052-8101
UUCP:   ...{decvax,ucbvax}!decwrl!imagen!turner      AT&T: (408) 986-9400

------------------------------

Date: Mon, 26 Jan 87 14:40 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: source postings from AI EXPERT magazine


If anyone is interested in the source code which goes with my AI EXPERT
articles on frame-based representation languages (Nov. and Dec. '87), it
can be FTP'd from linc.cis.upenn.edu.  The file ~tim/pfl/pfltar contains
a tar tape of all of the necessary files.

Tim

------------------------------

Date: Thu, 29 Jan 87 11:55:12 est
From: schatz@thumper.bellcore.com (Bruce R. Schatz at
      thumper.bellcore.com)
Subject: Telesophy Project


Readers of this newsgroup may be interested in the following:

  The Telesophy Project at Bell Communications Research is a
research effort to understand how to provide uniform access
to AnyThing AnyWhere and thus permit browsing the WorldNet.
A telesophy system transparently stores and retrieves
information of different types from different locations.
  We have built a prototype on Sun workstation hardware, which
accesses multiple datatypes from multiple databases on multiple machines.
A set of databases has been obtained, ranging from Netnews
to journal citations to full-text magazines to color pictures,
and we are beginning to use the system on a daily basis.
  The prototype attempts to achieve the full potential of networks of
bitmapped workstations.  It provides a content-addressable distributed file
system coupled with local multi-media editing.  Building such an
end-to-end system requires finding some workable solution to a myriad of
unsolved research problems.
  We are seeking new colleagues to help build the telesophy prototype.
[...]

        Bruce Schatz
        schatz@bellcore.com
        (decvax,ihnp4,ucbvax)!bellcore!schatz

------------------------------

Date: 30 Jan 87 10:39:15 EST
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Knowledge-Based Reasoning Toolkit (CMU)


                        AI SEMINAR

TOPIC:    Knowledge-Based Reasoning at the Right Level of Abstraction:
          A Generic Task Toolkit

SPEAKER:  B. Chandrasekaran
          Laboratory for Artificial Intelligence Research
          Department of Computer and Information Science
          The Ohio State University
          Columbus, Ohio 43210

PLACE:    Wean Hall 5409

DATE:     Tuesday, February 3, 1987

TIME:     3:30 pm

                        ABSTRACT:

The first part of the talk is a critique of the level of abstraction of much
of the current  discussion on knowledge-based systems.  It will be argued
that the discussion at the level of rules-logic-frames-networks is the
"civil engineering" level, and there is a need for a level of abstraction
that corresponds to what the discipline of architecture does for
construction of buildings.  The constructs in architecture, viewed as a
language of habitable spaces, can be @i(implemented) using the constructs
of civil engineering, but are not reducible to them.  Similarly, the level
of abstraction that we advocate is the language of generic tasks, types of
knowledge and control regimes.

In the second  part of the talk, I will outline the elements of a framework
at this level of abstraction for expert system design that we have been
developing in our research group over the last several years.  Complex
knowledge-based reasoning tasks can often be decomposed into a number of
@i(generic tasks each with associated types of knowledge and family of
control regimes). At different stages in reasoning, the system will
typically engage in one of the tasks, depending upon the knowledge available
and the state of problem solving.  The advantages of this point of view are
manifold:  (i) Since typically the generic tasks are at a much higher level
of abstraction than those associated with first generation expert system
languages, knowledge can be represented directly at the level appropriate to
the information processing task.  (ii) Since each of the generic tasks has
an appropriate control regime, problem solving behavior may be more
perspicuously encoded.  (iii) Because of a richer generic vocabulary in
terms of which knowledge and control are represented, explanation of problem
solving behavior is also more perspicuous.  We briefly describe six generic
tasks that we have found very useful in our work on knowledge-based
reasoning: classification, state abstraction, knowledge-directed retrieval,
object synthesis by plan selection and refinement, hypothesis matching, and
assembly of compound hypotheses for abduction.

Finally, we will describe how the above approach leads naturally to
a new technology: a toolbox which helps one to build expert systems
by using higher level building blocks.  We will review the toolbox,
outline what sorts of systems can be built using it, and describe
what advantages accrue from this approach.

------------------------------

Date: 30 Jan 87 10:45:09 EST
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Understanding How Devices Work (CMU)


                        AI SEMINAR

TOPIC:    Understanding How Devices Work: Functional Representation
                of Devices and Compilation of Diagnostic Knowledge


SPEAKER:  B. Chandrasekaran
                Department of Computer & Information Science
                The Ohio State University
                Columbus, OH 43210


PLACE:    Wean Hall 4605

DATE:     Wednesday, February 4, 1987

TIME:     10:00 a.m.

                        ABSTRACT:

Where does diagnostic knowledge -- knowledge about  malfunctions and their
relation to observations -- come from?  One source of it is an agent's
understanding of how devices work, what has been called a ``deep model.''
We distinguish between deep models in the sense of scientific first
principles and deep cognitive models where the problem solver has a
qualitative symbolic representation of the system or device that accounts
qualitatively for how the system ``works.''  We provide a typology of
different knowledge structures and reasoning processes that play a role in
qualitative or functional reasoning.  We indicate where the work of Kuipers,
de Kleer and Brown, Davis, Forbus, Bylander, Sembugamoorthy and
Chandrasekaran fit in this typology and what types of information each of
them can produce.  We elaborate on functional representations as deep
cognitive models for some aspects of causal reasoning in medicine.

Causal reasoning about devices or physical systems involves multiple types
of knowledge structures and reasoning mechanisms.  Two broad types of
approaches can be distinguished.  In one, causal reasoning is viewed mainly
as an ability to reason at different levels of detail: the work of Weiss and
Kulikowski, Patil and Pople comes to mind.  Any hierarchies in this line of
work have as organizing principle different levels of detail.  In the other
strand of work, causal reasoning is viewed as reasoning from @i(structure)
of a device to its @i(behavior), from behavior to its @i(function), and from
all this to diagnostic conclusions.  In this approach, the hierarchical
organization of the device or system naturally results in an ability to move
into more or less levels of detail.  We discuss the primitives of such a
functional representation and show how it organizes an agent's understanding
of how a system's functions result from the behavior of the device, and how
such behavior results from the functions of the components and the structure
of the device.  We also indicate how device-independent compilers can
process this representation and produce diagnostic knowledge organized in a
hierarchy that mirrors the functional hierarchy.  Sticklen, Chandrasekaran
and Smith have work in progress that applies these notions to the medical
domain.

If you wish to meet with Dr. Chandrasekaran, please contact Marce at
x8818, or send mail to mlz@d.

------------------------------

Date: Fri, 30 Jan 87 14:28:03 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Conference - Conceptual Information Processing


                        Call for Participation


                        Fourth Annual Workshop

                                on

        Theoretical Issues in Conceptual Information Processing


                        Washington, D.C.
                         June 4-5, 1987

                         Sponsored by

        American Association for Artificial Intelligence

                              and

                University of Maryland Institute for
                     Advanced Computer Studies

Objectives:

The goal of the investigations under the title "conceptual information
processing" has been understanding intelligence and cognition
computationally, rather than merely the construction of performance programs
or formalization per se.  Thus, this workshop will focus on an exploration
of issues common to representation and organization of knowledge and memory
for natural language understanding, planning, problem solving, explanation,
learning and other cognitive tasks.  The approaches to be covered are united
by a concern with representation, organization and processing of conceptual
knowledge with an emphasis on empirical investigation of these phenomena by
experimentation and implementation of computer programs.

Format:

The TICIP workshop will consist of a combination of panels, invited
paper presentations, and "debates" designed to encourage lively and active
discussion.  Not all participants will be invited to present, but all will
be expected to interact.

Attendance:

In order to maximize the interactive nature of this workshop, attendance
will be limited.  Those interested in participating, either as speakers or
audience, are asked to submit a one-page summary of work in this area.  A
small number of invitations will be extended to those who are interested in
the area but have not yet contributed.  Those interested in such an
invitation should contact the Program Chair.  A limited amount of financial
assistance will be available to graduate students invited to participate.


Review Process:

Invitation will be based on an informal review of submissions by the Program
Committee.


Workshop Information:

The conference chair is Prof. B. Chandrasekaran (Ohio State University).  The
program committee consists of Profs. R. Alterman (Brandeis), J. Carbonell
(CMU), M. Dyer (UCLA), and J. Hendler (U of Maryland, Chair).


Submission:

A one page abstract of recent work in the area should be submitted to the
Program Chair.  The deadline for these submissions is April 15, 1987.
Applicants will be informed of their status soon thereafter.  Send abstracts
(but please, no papers) to:

James Hendler
Computer Science Department
University of Maryland
College Park, Md. 20742.
hendler@brillig.umd.edu
hendler@maryland

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Feb  4 00:39:03 1987
Date: Wed, 4 Feb 87 00:38:47 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #29
Status: R


AIList Digest             Monday, 2 Feb 1987       Volume 5 : Issue 29

Today's Topics:
  Philosophy - Consciousness & Methodological Epiphenomenalism

----------------------------------------------------------------------

Date: Thu, 29 Jan 87 09:27 EST
From: Seth Steinberg <sas@bfly-vax.bbn.com>
Subject: Consciousness?

I always thought that a scientific theory had to undergo a number of
tests to determine how "good" it is.  Needless to say, a perfect score
on one test may be balanced by a mediocre score on another test.  Some
useful tests are:

- Does the theory account for the data?
- Is the theory simple?  Are there unnecessary superfluities?
- Is the theory useful?  Does it provide the basis for a fruitful
        program of research?

There are theories of the mind which include consciousness and those
arguing that it is secondary - a side effect of thought.  It seems
quite probable that the bulk of artificial intelligence work (machine
reasoning, qualitative physics, theorem proving ... ) can be performed
without considering this thorny issue.  While I frequently accuse my
computers of malice, I doubt they are consciously malicious when they
flake out on me.

While the study of consciousness is fascinating and lies at the base of
numerous religions, it doesn't seem to be scientifically useful.  Do I
rewrite my code because the machine is conscious or because it is
getting the wrong answer?  Is there a program of experimentation
suggested by the search for consciousness?  (Don't confuse this with
using conscious introspection to build unconscious intelligence as I
would to guide a toy tank from my office to the men's room).  Does
consciousness change the way artificial intelligence must be
programmed?  The evidence so far says NO.  [How is that for a baldfaced
assertion?  Send me your code with comments showing how consciousness
is taken into account and I'll see if I can rewrite it without
consciousness].

I don't think scientific theories of consciousness are incorrect, I
think they are barren.

                                        Seth

P.S. For an excellent example of a nifty but otherwise barren theory
read the essay "Adam's Navel" in Stephen Jay Gould's book The Flamingo's
Smile.

------------------------------

Date: 28 Jan 87 17:36:10 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)


Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

>       I'm inclined to grant a limited amount of consciousness to corporations
>       and even to ant colonies.  To do so, though, requires rethinking the
>       nature of pain and pleasure (to something related to homeostasis).

Unfortunately, the problem can't be resolved by mere magnanimity. Nor
by simply reinterpreting experience as something else -- at least not
without a VERY persuasive argument -- one no one in the history of the M/B
problem has managed to come up with so far. This history is just one of
hand-waving. Do you think "rethinking" pain as homeostasis does the trick?

>       computer operating systems and adaptive communications networks are
>       close [to conscious]. The issue is partly one of complexity, partly
>       of structure, partly of function.

I'll get back to the question of whether experiencing is an
all-or-none phenomenon or a matter of degree below. For now, I just
wonder what kind and degree of structural/functional "complexity" you
believe adds up to EXPERIENCING pain as opposed to merely behaving as
if experiencing pain.

>       I am assuming that neurons and other "simple" systems are C-1 but
>       not C-2  -- and C-2 is the kind of consciousness that people are
>       really interested in.

Yes, but do you really think that hard questions like these can be
settled by assumption? The question is: What justifies the inference
that an organism or device is experiencing ANYTHING AT ALL (C-1), and
what justifies interpreting internal functions as conscious ones?
Assumption does not seem like a very strong justification for an
inference or interpretation. What is the basis for your assumption?

I have proposed the TTT as the only justifiable basis, and I've given
arguments in support of that proposal. The default assumptions in the
AI/Cog-Sci community seem to be that sufficiently "complex" function
and performance capacity, preferably with "memory" and "learning," can be
dubbed "conscious," especially with the help of the subsidiary
assumption that consciousness admits of degrees. The thrust of my
critique is that this position is rather weak and arbitrary, and open
to telling counter-examples (like Searle's). But, more important, it
is not an issue on which the Cog-sci community even needs to take a
stand! For Cog-sci's objective goal -- of giving a causal explanation
of organisms' and devices' functional properties -- can be achieved
without embellishing any of its functional constructs with a conscious
interpretation. This is what I've called "methodological
epiphenomenalism." Moreover, the TTT (as an asymptotic goal) even
captures the intuitions about "sufficient functional complexity and
performance capacity," in a nonarbitrary way.

It is the resolution of these issues by unsupportable assumption, circularity,
arbitrary fiat and obiter dicta that I think is not doing the field
any good. And this is not at all because (1) it simply makes cog-sci look
silly to philosophers, but because, as I've repeatedly suggested, (2) the
unjustified embellishment of (otherwise trivial, toy-like) function
or performance as "conscious" can actually side-track cog-sci from its
objective, empirical goals, masking performance weaknesses by
anthropomorphically over-interpreting them. Finally (3), the
unrealizable goal of objectively capturing conscious phenomenology,
being illogical, threatens to derail cog-sci altogether, heading it in
the direction of hermeneutics (i.e., subjective interpretation of
mental states, i.e., C-2) rather than objective empirical explanation of
behavioral capacity. [If C-2 is "what people are really interested
in," then maybe they should turn to lit-crit instead of cog-sci.]

>       The mystery for me is why only >>one<< subsystem in my brain
>       seems to have that introspective property -- but
>       multiple personalities or split-brain subjects may be examples that
>       this is not a necessary condition.

Again, we'd probably be better off tackling the mystery of what the
brain can DO in the world, rather than what subjective states it can
generate. But, for the record, there is hardly agreement in clinical
psychology and neuropsychology about whether split-brain subjects or
multiple-personality patients really have more than one "mind," rather
than merely somewhat dissociated functions -- some conscious, some not --
that are not fully integrated, either temporally or experientially.
Inferring that someone has TWO minds seems to be an even trickier
problem than the usual problem ("solved" by the TTT) of inferring that
someone has ONE (a variant of the mind/body problem called the "other-minds"
problem). At least in the case of the latter we have our own, normal unitary
experience to generalize from...

>       [Regarding the question of whether consciousness admits of degrees:]
>       An airplane either can fly or it can't. Yet there are
>       simpler forms of flight used by other entities-- kites, frisbees,
>       paper airplanes, butterflies, dandelion seeds... My own opinion
>       is that insects and fish feel pain, but often do so in a generalized,
>       nonlocalized way that is similar to a feeling of illness in humans.

Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)?

I too believe that lower animals feel pain. I don't want to conjecture
what it feels like to them; but having conceded that it feels like
anything at all, you seem to have conceded that they are conscious.
Now where does the question of degree come into it?

The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.

>       I assume that lower forms experience lower forms of consciousness
>       along with lower levels of intelligence.  Such continua seem natural
>       to me. If you wish to say that only humans and TTT-equivalents are
>       conscious, you should bear the burden of establishing the existence
>       and nature of the discontinuity.

I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.

(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or less things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think that this is what is behind our
incoherent intuition that consciousness admits of degrees: experiencing
the conscious/unconscious discontinuity is logically impossible and
hence, a fortiori, experientially impossible.)

>       [About why neurons are conscious and atoms are not:]
>       When someone demonstrates that atoms can learn, I'll reconsider.

You're showing your assumptions here. What can be more evident about
the gratuitousness of mentalistic interpretation (in place of which I'm
recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?

>       You are questioning my choice of discontinuity, but mine is easy
>       to defend (or give up) because I assume that the scale of
>       consciousness tapers off into meaninglessness. Asking whether
>       atoms are conscious is like asking whether aircraft bolts can fly.

So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.

>       I hope you're not insisting that no entity can be conscious without
>       passing the TTT. Even a rock could be conscious without our having
>       any justifiable means of deciding so.

Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.

I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.

(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)

>>SH:  "(To reply that synthetic substances with the same functional properties
>>      must be conscious under these conditions is to beg the question.)"
>KL:    I presume that a synthetic replica of myself, or any number of such
>       replicas, would continue my consciousness.

I agree completely. The problem was justifying attributing consciousness
to neurons and denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?

>       Perhaps professional philosophers are able to strive for a totally
>       consistent world view.

The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.

>       Why is there Being instead of Nothingness?  Who cares?

These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).

But in this case consider the actual ironic state of affairs:
It is cog-sci that is hopefully opening up and taking an ambitious
position on the problems that normally only concern philosophers,
such as the mind/body problem. NONphilosophers are claiming : "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?

Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"

>       If I had to build an aircraft, I would not begin by refuting
>       theological arguments about Man being given dominion over the
>       Earth rather than the Heavens. I would start from a premise that
>       flight was possible and would try to derive enabling conditions.

Building aircraft and devices that (attempt to) pass the TTT are objective,
do-able empirical tasks. Trying to model conscious phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communion wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?

--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 30 Jan 87 07:39:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: Why I am not a Methodological Epiphenomenalist


> >  Me: Consciousness may be as superfluous (wrt evolution) as earlobes.
> >  That hardly goes to show that it ain't there.
>
> Harnad: Agreed. It only goes to show that methodological epiphenomenalism may
> indeed be the right research strategy.
>
> >  I don't think [the existence of consciousness] does NEED to be so.
> >  It just is so.
>
> Fine. Now what are you going to do about it, methodologically speaking?
>
> ... Methodological epiphenomenalism recommends we face it [the inability
> to objectively measure subjective phenomena] and live
> with it, since not that much is lost. The "incompleteness" of an
> objective account is, after all, just a subjective problem. But
> supposing away the incompleteness -- by wishful thinking, hopeful
> over-interpretation, hidden (subjective) premises or blurring of the
> objective/subjective distinction -- is a logical problem.

A few points:

1.  Insofar as meth.. ep.. (ME) is simply the following kind of counsel:
"when trying to get a computer to play chess, don't worry about the
subjective feelings which accompany human chess-playing, just get the
machine to make the right moves", I have no particular quarrel with it.

2.  It is the claim that the TTT is the only relevant criterion (or,
by far, the major criterion) for the presence of consciousness that
strikes me as unnecessarily provocative and, almost as bad, false.
It is not clear to me whether this claim is an integral part of ME,
or an independent thesis.  At any rate, such a claim is clearly
a philosophical one, having to do mainly with the epistemology of
consciousness, and as such is fair game for philosophically-based
(rather than AI-research-based) debate.  If the claim instead were
that the TTT is the major criterion for the presence of intelligence
(defined in a perhaps somewhat austere way, as the ability to
perform certain kinds of tasks...) then, again, I would have no
serious disagreement.

3.  Is the incompleteness of objective accounts of the world "just
a subjective problem" ?  Is it true that "not that much is lost"?
Well, I guess each of us can decide how much to be bothered by this
incompleteness.  I agree it's no argument against AI, psychophysics
or anything else that they "leave consciousness out" any more than it
is that they leave astronomy out.  But there are astronomers around to
cover that ground (metaphorical ground, of course). It does bother me
(more than it does you?) that consciousness, of all things,
consciousness, which may be subjective, but, we agree, is real,
consciousness, without which my day would be so boring, is simply not
addressed by any systematic rational inquiry.

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 26 Jan 87 23:43:37 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin
      Taylor)
Subject: Re: Minsky on Mind(s)


>      To telescope the intuitive sense
>of the rebuttals: Do you believe rooms or corporations feel pain, as
>we do?

That final comma is crucial.  Of course they do not feel pain as we do,
but they might feel pain, as we do.

On what grounds do you require proof that something has consciousness,
rather than proof that it has not?  Can there be grounds other than
prejudice (i.e. prior judgment that consciousness in non-humans is
overwhelmingly unlikely?).  As I understand the Total Turing Test,
the objective is to find whether something can be distinguished from
human, but this again prejudges the issue.  I don't think one CAN use
the TTT to assess whether another entity is conscious.

As I have tried to say in a posting that may or may not get to mod.ai,
Occam's razor demands that we describe the world using the simplest
possible hypotheses, INCLUDING the boundary conditions, which involve
our prior conceptions.  It seems to me simpler to ascribe consciousness
to an entity that resembles me in many ways than not to ascribe
consciousness to that entity.  Humans have very many points of resemblance;
comatose humans fewer.  Silicon-based entities have few overt points
of resemblance, so their behaviour has to be convincingly like mine
before I will grant them a consciousness like mine.  I don't really
care whether their behaviour is like yours, if you don't have
consciousness, and as Steve Harnad has so often said, mine is the
only consciousness I can be sure of.

The problem splits in two ways: (1) Define consciousness so that it does
not involve a reference to me, or (2) Find a way of describing behaviour
that is simpler than ascribing consciousness to me alone.  Only if you
can fulfil one of these conditions can there be a sensible argument about
the consciousness of some entity other than ME.
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Wed Feb  4 00:39:14 1987
Date: Wed, 4 Feb 87 00:39:00 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #30
Status: R


AIList Digest             Monday, 2 Feb 1987       Volume 5 : Issue 30

Today's Topics:
  Philosophy - Consciousness

----------------------------------------------------------------------

Date: 30 Jan 87 01:51:19 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: Minsky on Mind(s)


mmt@dciem.UUCP (Martin Taylor) of D.C.I.E.M., Toronto, Canada,
writes:

>       Of course [rooms and corporations] do not feel pain as we do,
>       but they might feel pain, as we do.

The solution is not in the punctuation, I'm afraid. Pain is just an
example standing in for whether the candidate experiences anything AT
ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
it to be conscious.

>       On what grounds do you require proof that something has consciousness,
>       rather than proof that it has not?  Can there be grounds other than
>       prejudice (i.e. prior judgment that consciousness in non-humans is
>       overwhelmingly unlikely?).

First, none of this has anything to do with proof. We're trying to
make empirical inferences here, not mathematical deductions. Second,
even as empirical evidence, the Total Turing Test (TTT) is not evidential
in the usual way, because of the mind/body problem (private vs. public
events; objective vs. subjective inferences). Third, the natural null
hypothesis seems to be that an object is NOT conscious, pending
evidence to the contrary, just as the natural null hypothesis is that
an object is, say, not alive, radioactive or massless until shown
otherwise. -- Yes, the grounds for the null hypothesis are that the
absence of consciousness is more likely than its presence; the
alternative is animism. But no, the complement to the set of
probably-conscious entities is not "non-human," because animals are
(at least to me) just about as likely to be conscious as other humans
are (although one's intuitions get weaker down the phylogenetic scale);
the complement is "inanimate." All of these are quite natural and
readily defensible default assumptions rather than prejudices.

>       [i] Occam's razor demands that we describe the world using the simplest
>       possible hypotheses.
>       [ii] It seems to me simpler to ascribe consciousness to an entity that
>       resembles me in many ways than not to ascribe consciousness to that
>       entity.
>       [iii] I don't think one CAN use the TTT to assess whether another
>       entity is conscious.
>       [iv] Silicon-based entities have few overt points of resemblance,
>       so their behaviour has to be convincingly like mine before I will
>       grant them a consciousness like mine.

{i} Why do you think animism is simpler than its alternative?
{ii} Everything resembles everything else in an infinite number of
ways; the problem is sorting out which of the similarities is relevant.
{iii} The Total Turing Test (a variant of my own devising, not to be
confused with the classical turing test -- see prior chapters in these
discussions) is the only relevant criterion that has so far been
proposed and defended. Similarities of appearance are obvious
nonstarters, including the "appearance" of the nervous system to
untutored inspection. Similarities of "function," on the other hand,
are moot, pending the empirical outcome of the investigation of what
functions will successfully generate what performances (the TTT).
{iv} [iv] seems to be in contradiction with [iii].

>       The problem splits in two ways: (1) Define consciousness so that it does
>       not involve a reference to me, or (2) Find a way of describing behaviour
>       that is simpler than ascribing consciousness to me alone.  Only if you
>       can fulfil one of these conditions can there be a sensible argument
>       about the consciousness of some entity other than ME.

It never ceases to amaze me how many people think this problem is one
that is to be solved by "definition." To redefine consciousness as
something non-subjective is not to solve the problem but to beg the
question.

[The TTT, by the way, I proposed as logically the strongest (objective) evidence
for inferring consciousness in entities other than oneself; it also seems to be
the only methodologically defensible evidence; it's what all other
(objective) evidence must ultimately be validated against; moreover, it's
already what we use in contending with the other-minds problem intuitively
every day. Yet the TTT remains more fallible than conventional inferential
hypotheses (let alone proof) because it is really only a pragmatic conjecture
rather than a "solution." It's only good up to turing-indistinguishability,
which is good enough for the rest of objective empirical science, but not
good enough to handle the problem of subjectivity -- otherwise known as the
mind/body problem.]

--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 30 Jan 87 23:35:23 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin
      Taylor)
Subject: Re: More on Minsky on Mind(s)


>     More ironic still, in arguing for the TTT and methodological
>epiphenomenalism, I am actually saying: "Why do you care? Worrying about
>consciousness will get you nowhere, and there's objective empirical
>work to do!"
>
That's a highly prejudiced, anti-empirical point of view: "Ignore Theory A.
It'll never help you.  Theory B will explain the data better, whatever
they may prove to be!"

Sure, there's all sorts of objective empirical work to do.  There's lots
of experimental work to do as well.  But there is also theoretical work
to be done, to find out how best to describe our world.  If the descriptions
are simpler using a theory that embodies consciousness than using one that
does not, then we SHOULD assume consciousness.  Whether this is the case
is itself an empirical question, which cannot be begged by asserting
(correctly) that all behaviour can be explained without resort to
consciousness.
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

------------------------------

experience to generalize from...

>       [Regarding the question of whether consciousness admits of degrees:]
>       An airplane either can fly or it can't. Yet there are
>       simpler forms of flight used by other entities-- kites, frisbees,
>       paper airplanes, butterflies, dandelion seeds... My own opinion
>       is that insects and fish feel pain, but often do so in a generalized,
>       nonlocalized way that is similar to a feeling of illness in humans.

Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)?

I too believe that lower animals feel pain. I don't want to conjecture
what it feels like to them; but having conceded that it feels like
anything at all, you seem to have conceded that they are conscious.
Now where does the question of degree come into it?

The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.

>       I assume that lower forms experience lower forms of consciousness
>       along with lower levels of intelligence.  Such continua seem natural
>       to me. If you wish to say that only humans and TTT-equivalents are
>       conscious, you should bear the burden of establishing the existence
>       and nature of the discontinuity.

I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.

(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or less things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think that this is what is behind our
incoherent intuition that consciousness admits of degrees: Because to
experience the conscious/unconscious discontinuity is logically
impossible, hence, a fortiori, experientially impossible.)

>       [About why neurons are conscious and atoms are not:]
>       When someone demonstrates that atoms can learn, I'll reconsider.

You're showing your assumptions here. What can be more evident about
the gratuitousness of mentalistic interpretation (in place of which I'm
recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?

>       You are questioning my choice of discontinuity, but mine is easy
>       to defend (or give up) because I assume that the scale of
>       consciousness tapers off into meaninglessness. Asking whether
>       atoms are conscious is like asking whether aircraft bolts can fly.

So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.

>       I hope you're not insisting that no entity can be conscious without
>       passing the TTT. Even a rock could be conscious without our having
>       any justifiable means of deciding so.

Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.

I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.

(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)

>>SH:  "(To reply that synthetic substances with the same functional properties
>>      must be conscious under these conditions is to beg the question.)"
>KL:    I presume that a synthetic replica of myself, or any number of such
>       replicas, would continue my consciousness.

I agree completely. The problem was justifying attributing consciousness
to neurons and denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?

>       Perhaps professional philosophers are able to strive for a totally
>       consistent world view.

The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.

>       Why is there Being instead of Nothingness?  Who cares?

These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).

But in this case consider the actual ironic state of affairs:
It is cog-sci that is hopefully opening up and taking an ambitious
position on the problems that normally only concern philosophers,
such as the mind/body problem. NONphilosophers are claiming: "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?

Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"

>       If I had to build an aircraft, I would not begin by refuting
>       theological arguments about Man being given dominion over the
>       Earth rather than the Heavens. I would start from a premise that
>       flight was possible and would try to derive enabling conditions.

Building aircraft and devices that (attempt to) pass the TTT are objective,
do-able empirical tasks. Trying to model conscious phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communion wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Feb  5 00:42:38 1987
Date: Thu, 5 Feb 87 00:42:30 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #31
Status: R


AIList Digest             Monday, 2 Feb 1987       Volume 5 : Issue 31

Today's Topics:
  Seminar - Logic Programming: The Japanese Were Right (TI) &
    A Logic of Knowledge, Action, and Communication (Rutgers) &
    An Intelligent Modeling Environment (Rutgers) &
    Knowledge-Based Inductive Inference (Rutgers) &
    Spatial Objects in Database Systems (IBM) &
    Induction in Model-Based Systems (SU) &
    Influence Diagrams (CMU) &
    The ISIS Project (CMU)

----------------------------------------------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Logic Programming: The Japanese Were Right (TI)


           TI Computer Science Center Lecture Series

            LOGIC PROGRAMMING:  A TOOL FOR THINKING
           (OR WHY THE JAPANESE WERE RIGHT)

               Dr. Leon Sterling
            Case Western Reserve University

           10:00 am, Friday, 6 February 1987
        Semiconductor Building Main Auditorium


Logic programming, or the design, study and implementation of logic
programs, will be significant in software developments of the future.
Logic programming links the traditional uses of logic in program
specification and database query languages with newer uses of logic as
a knowledge representation language for artificial intelligence and as
a general-purpose programming language.  A logic program is a set of
axioms, or truths about the world.  A computation of a logic program
is the use of axioms to make logical deductions.  This talk will
discuss the value of logic programming for artificial intelligence
applications.  It will demonstrate how a well-written logic program
can clearly reflect the problem solving knowledge of a human expert.
Examples will be given of AI programs in Prolog, the most developed of
the languages based on logic programming.
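
The abstract's definition -- a logic program as a set of axioms, with
computation as the use of axioms to make logical deductions -- can be
sketched with a toy forward-chaining interpreter. This is an
illustrative Python sketch with invented facts and rules, not Prolog
and not anything from the talk itself:

```python
# Toy illustration of "computation as deduction": axioms are facts
# plus ground Horn rules (premises -> conclusion), and a computation
# derives the closure of all deducible facts. All names are invented.

def deduce(facts, rules):
    """Apply rules until no new facts can be derived; return the
    deductive closure of the initial fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Axioms: parenthood facts, plus ground instances of "parents are
# ancestors" and one transitivity step.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}
rules = [
    ([("parent", "tom", "bob")], ("ancestor", "tom", "bob")),
    ([("parent", "bob", "ann")], ("ancestor", "bob", "ann")),
    ([("ancestor", "tom", "bob"), ("ancestor", "bob", "ann")],
     ("ancestor", "tom", "ann")),
]
closure = deduce(facts, rules)
print(("ancestor", "tom", "ann") in closure)  # True
```

A real Prolog system instead works backward from a query, with
unification over variables; the forward-chaining toy above only shows
the "axioms plus deduction" reading of a logic program.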

BIOGRAPHY

Leon Sterling received his Ph.D. in computational group theory from
the Australian National University in 1981.  After three years as a
research fellow in the Department of Artificial Intelligence at the
University of Edinburgh, and one year as the Dov Biegun Postdoctoral
Fellow in the Computer Science Department at the Weizmann Institute of
Science, he joined the faculty at Case Western Reserve University in
1985.  In 1986 he became Associate Director of the Center for
Automation and Intelligent Systems Research at Case Western.  He is
co-author, with Ehud Shapiro, of the recent textbook on Prolog,
"The Art of Prolog."


Visitors to TI should contact Dr. Bruce Flinchbaugh (214-995-0349) in
advance and meet at the north lobby of the SC Building by 9:45 am.

------------------------------

Date: 26 Jan 87 23:03:47 EST
From: KALANTARI@RED.RUTGERS.EDU
Subject: Seminar - A Logic of Knowledge, Action, and Communication
         (Rutgers)

RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987

Computer Science Department Colloquium :

DATE: Thursday, January 29, 1987

SPEAKER:          Leora Morgenstern
AFFILIATION:      New York University

TITLE:     Foundations of a Logic of Knowledge, Action, and Communication

TIME: 9:50 (Coffee and Cookies will be set up at 9:30)
PLACE:  Hill Center, Room 705


Most AI planners work on the assumption that they have complete knowledge
of their problem domain and situation, so that formulating a plan consists
of searching through some pre-packaged list of action operators for an
action sequence that achieves some desired goal.  Real life planning rarely
works this way because we usually don't have enough information to map out
a detailed plan of action when we start out.  Instead, we initially draw up
a sketchy plan and fill in details as we proceed and gain more exact
information about the world.

This talk will present a formalism that is expressive enough to describe this
flexible planning process.  We begin by discussing the various requirements
that such a formalism must meet, and present a syntactic theory of knowledge
that meets these requirements.  We discuss the paradoxes, such as the Knower
Paradox, that arise from syntactic treatments of knowledge, and propose a
solution to these paradoxes based on Kripke's solution to the Liar Paradox.
Next, we present a theory of action that is powerful enough to describe
partial plans and joint-effort plans.  We demonstrate that we can integrate
this theory with an Austinian and Searlian theory of communicative acts.
Finally, we give solutions to the Knowledge Preconditions and Ignorant Agent
Problems as part of our integrated theory of planning.

The talk will include comparisons of our theory with other syntactic and
modal theories such as Konolige's and Moore's.  We will demonstrate
that our theory is powerful enough to solve classes of problems that these
theories cannot handle.

------------------------------

Date: 26 Jan 87 23:03:47 EST
From: KALANTARI@RED.RUTGERS.EDU
Subject: Seminar - An Intelligent Modeling Environment (Rutgers)

RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987

Computer Science Department Colloquium :

DATE: Friday, January 30, 1987

SPEAKER:      Dr. Axel Lehmann
AFFILIATION:  University of Karlsruhe, Institute fur Informatik IV,
              F.R. Germany

TITLE:           An Interactive, Intelligent and Integrated
                       Modeling Environment
TIME: 11:00 AM
PLACE:  Hill Center, Room 423


This paper describes an approach to interactive assistance of users
in the different phases of the modeling process for the analysis of
system dynamics, especially regarding performance, reliability, or
cost-benefit predictions of computer systems. The conceptual approach
is based on the assumption that more and more experts from various
domains, who are not familiar in detail with modeling techniques,
require supporting tools available on their PC or workstation for
quantitative analysis of system dynamics as a basis for making
decisions.

Considering this situation, the global objective of the INT3 project
and the research involved is to provide system experts as well as
users supporting tools for problem specification, for interactive
selection and (graphical) construction of a problem-adapted
(simulation) model, for validation, experiment planning and for
interpretation of modeling results. Besides a detailed concept, we have
already implemented some graphical supporting tools for semi-automatic
model synthesis and for result interpretation, as well as prototypes
of expert systems as advisory systems for the selection of
problem-adapted modeling methods and of efficient solution techniques.

This paper summarizes the goals and our basic concept of INT3, an
interactive and knowledge-based modelling environment, including
current restrictions and its initial implementation on the IBM PC/XT
or AT. In addition, it focuses on the description and stepwise
solution of a typical computer performance analysis problem and a
manufacturing problem by means of these supporting tools. These
examples will demonstrate the applicability of this concept and of
INT3, its current state of realization, the experience and problems
encountered, and future plans.

------------------------------

Date: 26 Jan 87 23:03:47 EST
From: KALANTARI@RED.RUTGERS.EDU
Subject: Seminar - Knowledge-Based Inductive Inference (Rutgers)

RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987

Computer Science Department Colloquium :

DATE: Friday, January 30, 1987

SPEAKER:                 Thomas G. Dietterich
AFFILIATION:    Department of Computer Science, Oregon State University

TITLE:              KNOWLEDGE-BASED INDUCTIVE INFERENCE
                         (or EBG: The wrong view)

TIME: 2:50 (Coffee and Cookies will be set up at 2:30)
PLACE: HILL 705

Explanation-based generalization (EBG) began as a reaction to such weak
syntactic inductive inference methods as AQ11, ID3, and the version space
approach.  However, in its pursuit of "justifiable generalization", EBG has
been shown to be too strong--the system already knows (in Newell's knowledge
level sense) the knowledge it is trying to "learn."  Despite this
shortcoming, the methods employed in EBG suggest ways that knowledge might
be incorporated into the inductive learning process.  Using examples from
Meta-DENDRAL (Buchanan, et al.), Sierra (VanLehn), and WYL (Flann), it will
be argued that the process of forming "explanations" in EBG should be viewed
as knowledge-based representation change.  Each of these systems can be
viewed as shifting the learning problem to an "explanation space" where
syntactic inductive inference methods are then applied.  The conclusion is
that the "knowledge revolution," which has transformed most of the rest of
AI, has finally begun to affect machine learning research.





RUTCOR Colloquium : (Discrete Mathematics Seminar)

--------------------------------------
DATE: Tuesday, January 27, 1987

SPEAKER:   Professor P.P. Palfy

AFFILIATION:  Dept. of Mathematics, University of Hawaii at Manoa
TITLE:  Applications of finite simple groups in combinatorics

TIME: 1:30

PLACE:  Hill Center, Room 705

------------------------------

Date: Wed, 28 Jan 87 17:30:16 PST
From: IBM Almaden Research Center Calendar <CALENDAR@IBM.COM>
Subject: Seminar - Spatial Objects in Database Systems (IBM)


                     IBM Almaden Research Center
                           650 Harry Road
                       San Jose, CA 95120-6099

                          February 2-6, 1987


ACCESS STRUCTURES FOR SPATIAL OBJECTS IN NONTRADITIONAL DATABASE SYSTEMS
H.-P. Kriegel, University of Wuerzburg, West Germany

Computer Science Seminar    Monday, Feb. 2    1:00 P.M.   Room:  B3-247

Database systems must offer storage and access structures for spatial
objects to meet the needs of nontraditional applications such as
computer-aided design and manufacturing (CAD/CAM), image processing
and geographic information processing.  First, we will show that
access methods for spatial objects should be based on multidimensional
dynamic hashing schemes.  However, even for uniform object
distributions, previous schemes of this type do not exhibit ideal
performance; for nonuniform object distributions which are common in
the above mentioned applications, the retrieval performance of all
known schemes is rather poor.  In this talk, we will present new
schemes which exhibit practically optimal retrieval performance for
uniform and nonuniform object distributions.  We will underline this
fact by the results of experimental runs with implementations of our
schemes.
Host: D. Ruland
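
To make the idea of hashing spatial objects into cells concrete, here
is a static uniform-grid toy in Python -- emphatically not the dynamic
schemes of the talk (which adapt to nonuniform distributions); all
names, cell sizes, and rectangles below are invented:

```python
# Toy spatial hash: axis-parallel rectangles are registered in every
# fixed-size grid cell they overlap; a range query collects candidate
# ids from the query's cells, then filters by exact intersection.
from collections import defaultdict

CELL = 10.0  # grid cell size (arbitrary choice for this sketch)

def cells(x1, y1, x2, y2):
    """Yield all grid cells a rectangle overlaps."""
    for cx in range(int(x1 // CELL), int(x2 // CELL) + 1):
        for cy in range(int(y1 // CELL), int(y2 // CELL) + 1):
            yield (cx, cy)

class SpatialHash:
    def __init__(self):
        self.buckets = defaultdict(set)  # cell -> object ids
        self.rects = {}                  # id -> rectangle

    def insert(self, oid, rect):
        self.rects[oid] = rect
        for c in cells(*rect):
            self.buckets[c].add(oid)

    def query(self, rect):
        """Ids of stored rectangles intersecting the query rect."""
        cand = set()
        for c in cells(*rect):
            cand |= self.buckets[c]
        qx1, qy1, qx2, qy2 = rect
        return {o for o in cand
                if not (self.rects[o][2] < qx1 or qx2 < self.rects[o][0] or
                        self.rects[o][3] < qy1 or qy2 < self.rects[o][1])}

sh = SpatialHash()
sh.insert("a", (0, 0, 5, 5))
sh.insert("b", (50, 50, 60, 60))
print(sh.query((2, 2, 8, 8)))  # {'a'}
```

The dynamic hashing schemes discussed in the talk differ precisely in
that bucket structure grows and reorganizes with the data, rather than
being fixed in advance as here.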


Visitors, please arrive 15 minutes early.  IBM's new Almaden Research
Center (ARC) is located adjacent to Santa Teresa County Park, between
Almaden Expressway and U.S. 101, about 10 miles south of Interstate
280.  From U.S. 101, exit at Bernal Road, and follow Bernal Road west
past Santa Teresa Blvd.  into the hills (ignoring the left turn for
Santa Teresa Park).  Alternatively, follow Almaden Expressway to its
southern terminus, turn left onto Harry Road, then go right at the ARC
entrance (about a quarter of a mile later) and go up the hill.  For
more detailed directions, please phone the ARC receptionist at (408)
927-1080.

------------------------------

Date: 29 Jan 1987 1206-PST (Thursday)
From: Valerie Ross <ross@pescadero.stanford.edu>
Subject: Seminar - Induction in Model-Based Systems (SU)

                CS 500 Computer Science Colloquium
                Feb. 3, 4:15 pm, Skilling Auditorium

        THE PROVISION OF INDUCTION AS A PROBLEM SOLVING METHOD
                     IN MODEL BASED SYSTEMS

                      DAVID HARTZBAND, D.Sc.
             Artificial Intelligence Technology Group
             Digital Equipment Corporation, Hudson, MA

Much research in artificial intelligence and cognitive science has focused on
mental modeling and the mapping of mental models to machine systems. This is
especially critical in systems which provide inference capabilities in order to
enhance people's problem-solving abilities.  Such a system should present a
machine model that is homomorphic with a human perception of knowledge
representation and problem solving.  An approach to the development of such a
model has allowed a model-theoretic approach to be taken toward machine
representation and problem solving.  Considerable work done in psychology,
cognitive science and decision analysis in the past 20 years has indicated that
human problem solving methods are primarily comparative (that is analogic) and
proceed by successive refinement of comparisons among known and unknown
entities (e.g. Carbonell, 1985; Rumelhart and Abrahamson, 1973; Simon, 1985;
Tversky, 1977).

A series of algorithms has been developed to provide analogic (Hartzband et al.
1986) and symmetric comparative induction methods (Hartzband and Holly, in
preparation) in the context of the homomorphic machine model previously
referred to.  These general methods can be combined with heuristics and
structural information in a specific domain to provide a powerful problem
solving paradigm which could enhance human problem solving capabilities.

This paper will:

a. describe the characteristics of this model-theoretic approach,
b. describe (in part) the model used in this work,
c. develop both the theory and algorithms for comparative induction in
   this context, and
d. discuss the use of these inductive methods in the provision of effective
   problem solving paradigms.

------------------------------

Date: 26 Jan 87 17:54:08 EST
From: Charles.Wiecha@isl1.ri.cmu.edu
Subject: Seminar - Influence Diagrams (CMU)


         Influence Diagrams: Graphical Representations for Uncertainty
                               Ross D. Shachter
                  Department of Engineering-Economic Systems
                              Stanford University

                             Wednesday, January 28
                                 2:30-4:00 PM
                               Porter Hall 223D

The influence diagram is a network for structuring Bayesian decision
analysis problems.  The nodes represent uncertain quantities, goals,
and decisions, and the arcs indicate probabilistic dependence and the
observability of information.  The graphical hierarchy promotes
discussion by emphasizing the structure of a problem and the
relationships among variables, while allowing the details of
assessment to be completed later.  Because the components have a basic
mathematical interpretation, even a qualitative diagram has a precise
meaning.  When the quantitative information is complete, the influence
diagram can be evaluated in a generalization of decision tree solving.
Examples using influence diagrams will be drawn from decision
analysis, information theory, dynamic programming, Kalman filtering,
and expert systems.  In the latter, we ask the question "Why do
probabilists insist on looking at everything backwards?"
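
The evaluation that influence diagrams generalize can be illustrated
with a single decision node, a single chance node, and a value node,
solved by expected value. The weather/umbrella numbers below are
invented for illustration and are not from the talk:

```python
# One-decision, one-chance-node toy: pick the decision maximizing
# expected payoff over an uncertain outcome. Influence-diagram
# evaluation generalizes this enumeration to whole networks.

def expected_value(payoff, dist):
    """Expected payoff of one decision over an uncertain outcome."""
    return sum(p * payoff[outcome] for outcome, p in dist.items())

# Chance node: weather. Decision node: carry an umbrella or not.
# Value node: payoff depending on both (invented numbers).
weather = {"rain": 0.3, "sun": 0.7}
payoffs = {
    "umbrella":    {"rain": 0, "sun": -1},   # mild nuisance if sunny
    "no_umbrella": {"rain": -10, "sun": 0},  # soaked if it rains
}
best = max(payoffs, key=lambda d: expected_value(payoffs[d], weather))
print(best)  # umbrella
```

In a full influence diagram the arcs would also record which chance
nodes are observed before each decision, which is what distinguishes
it from a flat payoff table like this one.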

------------------------------

Date: 27 Jan 87 10:39:13 EST
From: Patty.Hodgson@isl1.ri.cmu.edu
Subject: Seminar - The ISIS Project (CMU)


                        AI SEMINAR

TOPIC:   THE ISIS PROJECT:  AN HISTORICAL PERSPECTIVE OR LESSONS LEARNED
         AND RESEARCH RESULTS

SPEAKER:  MARK S. FOX, CMU Robotics Institute

PLACE:    Wean Hall 5409

DATE:     Tuesday, January 27, 1987

TIME:     3:30 pm

ABSTRACT:

ISIS is a knowledge-based system designed to provide intelligent
support in the domain of job shop production management and control.
Job-shop scheduling is an "uncooperative" multi-agent planning
problem (i.e., each order is to be "optimized" separately) in which
activities must be selected, sequenced, and assigned resources and
times of execution.  Resource contention is high, so decisions are
closely coupled.  Search is combinatorially explosive; for example,
85 orders moving through eight operations without alternatives, with
a single machine substitution for each and no machine idle time, has
over 10^880 possible schedules, many of which may be discarded given
knowledge of shop constraints.  At
the core of ISIS is an approach to automatic scheduling that provides
a framework for incorporating the full range of real world
constraints that typically influence the decisions made by human
schedulers. This results in an ability to generate detailed schedules
for production that accurately reflect the current status of the shop
floor, and distinguishes ISIS from traditional scheduling systems
based on more restrictive management science models.  ISIS is capable
of incrementally scheduling orders as they are received by the shop
as well as reactively rescheduling orders in response to unexpected
events (e.g. machine breakdowns) that might occur.

The construction of job shop schedules is a complex constraint-directed
activity influenced by such diverse factors as due date requirements, cost
restrictions, production levels, machine capabilities and substitutability,
alternative production processes, order characteristics, resource
requirements, and resource availability.  The problem is a prime candidate
for application of AI technology, as human schedulers are overburdened by
its complexity and existing computer-based approaches provide little more
than a high level predictive capability.  It also raises some interesting
research issues.  Given the conflicting nature of the domain's constraints,
the problem differs from typical constraint satisfaction problems. One
cannot rely solely on propagation techniques to arrive at an acceptable
solution. Rather, constraints must be selectively relaxed, in which case
the problem solving strategy becomes one of finding a solution that best
satisfies the constraints. This implies that constraints must serve to
discriminate among alternative hypotheses as well as to restrict the number
of hypotheses generated. Thus, the design of ISIS has focused on

      o constructing a knowledge representation that captures the requisite
        knowledge of the job shop environment and its constraints to support
        constraint-directed search, and

      o developing a search architecture capable of exploiting this
        constraint knowledge to effectively control the combinatorics of
        the underlying search space.
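
The notion of a solution that "best satisfies" conflicting
constraints, rather than one satisfying all of them, can be sketched
as weighted constraint scoring. This is a hypothetical Python toy with
invented weights and candidates, not ISIS's actual constraint
representation or search architecture:

```python
# Toy constraint relaxation: each constraint is a (weight, predicate)
# pair; candidates are scored by the total weight of the constraints
# they satisfy, and the best-scoring candidate wins even if no
# candidate satisfies everything. All numbers are invented.

def best_candidate(candidates, constraints):
    """Return the candidate with the highest weighted-satisfaction
    score; constraints discriminate among alternatives rather than
    merely filtering them."""
    def score(c):
        return sum(w for w, pred in constraints if pred(c))
    return max(candidates, key=score)

# Candidate schedules represented as (finish_day, cost).
candidates = [(4, 120), (6, 90), (9, 60)]
constraints = [
    (5.0, lambda c: c[0] <= 7),    # due-date: finish within a week
    (2.0, lambda c: c[1] <= 100),  # cost restriction
]
print(best_candidate(candidates, constraints))  # (6, 90)
```

Note how (4, 120) meets the due date but busts the budget, and
(9, 60) is cheap but late; the weighted score prefers the compromise,
which is the discriminating (not merely restricting) role of
constraints described above.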


This presentation will provide an historical perspective on the development
of the ISIS family of systems.  It will focus on the evolution of its
representation of knowledge and search techniques.  Performance data for
each version will be presented.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb  9 17:07:29 1987
Date: Mon, 9 Feb 87 17:07:23 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #32
Status: RO


AIList Digest            Thursday, 5 Feb 1987      Volume 5 : Issue 32

Today's Topics:
  Queries - Graphics for Frames and Semantic Networks & Learning Programs,
  AI Tools - Expert Shell for VAX & PCs &  OPS5 for 4.2BSD,
  Discussion List - Color and Vision Network,
  Seminars - Dynamic Belief Revision System (CMU) &
    The Synthesis of Dirty Lisp Programs (SU) &
    Why Software Cannot be Property (UTexas) &
    Expert Systems in Manufacturing (UCB)

----------------------------------------------------------------------

Date: 29 Jan 87 13:34:34 GMT
From: mcvax!ukc!hrc63!hughes@seismo.css.gov  (Andrew C. Hughes)
Subject: Graphics for frames and semantic networks

We have some people developing a knowledge representation system who wish to
implement a graphics based user interface which will support frame
editing/displaying, taxonomic hierarchy and semantic network editing/displaying, etc.
The system is currently written in Franz Lisp Opus 42.16 on a Sun 2,
but we hope to port to Common Lisp on a Sun 3 in the near future. The Lisp
should have an adequate interface to other languages such as 'c'.
Does anyone know of a package (preferably in the public domain) which would
ease the writing of such a UI, in particular allowing displaying/editing
of hierarchies, networks and frames.

Andrew Hughes (GEC Research, Chelmsford, UK)

Tel: +44 245 73331 Ext. 3247
Email: ..!mcvax!ukc!a.gec-mrc.co.uk!hughes
ARPA: hughes%a.gec-mrc.co.uk@ucl-cs

------------------------------

Date: 2 Feb 87 02:34:05 GMT
From: uwslh!lishka@rsch.wisc.edu  (Christopher Lishka)
Subject: Re: Learning programs wanted [Public Domain preferred]

I would also be interested in any learning programs...maybe someone (I would
be willing) could collect replies and post a listing of NAMES of good
learning programs to comp.ai after everyone has sent in their info.  [By
the way, what is this Marvin program?]

--
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
                                \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

------------------------------

Date: Tue, 3 Feb 87 13:07:12 EST
From: "Fred J. Shaw" (IBD) <fshaw@BRL.ARPA>
Subject: expert shell for vax & pc's

        In response to your request for an expert system shell that runs
on both unix and pc's:  TIMM and TIMM-PC (General Research Corp 703-893-5900,
McLean, VA) will run on vax and pc respectively.  I have used TIMM a little
and am not impressed with its capabilities.  I currently use Insight 2+ on
a pc.
                Fred

------------------------------

Date: 3 Feb 87 11:31 PST
From: Ghenis.pasa@Xerox.COM
Subject: Re: OPS5 for 4.2BSD?

The Franz and Common Lisp versions of the OPS-5 source code are
available on CompuServe from the AI Expert Data Library. Some sample
programs are posted there as well.

Pablo Ghenis
Xerox Artificial Intelligence Systems
Educational Services

------------------------------

Date: 29-JAN-1987
From: CVNET%YORKVM1.BITNET@WISCVM.WISC.EDU
Subject: COLOR & VISION NETWORK

                 [Forwarded from the Neuron Digest.]


                          COLOR AND VISION NETWORK

     The Color and Vision Network is for scientists working in color and
vision research. At present the Network has three major activities.

               1. Members' E-mail addresses are maintained and sent to all
                    those in the Network.
               2. A key word list that associates scientists and their
                    interests within the areas of color and vision is
                    maintained and distributed.
               3. Any person in the Network can have a bulletin,
                    announcement, etc, sent to all other people in the
                    Network.

     Scientists working in color and/or vision who wish to join should
contact Peter Kaiser at:

                cvnet@yorkvm1 or
                cvnet%yorkvm1.bitnet@wiscvm.wisc.edu

     They will receive the list of E-mail addresses plus a request to provide
key words which represent their interests and experience in color and/or
vision research.

     Scientists from Australia, Canada, Germany, Japan, Netherlands,
Sweden, U.K., and the U.S. are in the Network.  They come from universities,
research institutes, national laboratories and private industry.  The list
is growing daily.

                                Peter K. Kaiser
                                York University
                                4700 Keele St.
                                North York, Ontario, M3J 1P3
                                Canada
                                pkaiser@yorkvm1.bitnet
                                pkaiser%yorkvm1.bitnet@wiscvm.wisc.edu

------------------------------

Date: 3 Feb 1987 1647-EST
From: Lydia DeFilippo <DEFILIPPO@C.CS.CMU.EDU>
Subject: Seminar - Dynamic Belief Revision System (CMU)


                 LOGIC COLLOQUIUM  (CMU/PITT)

Speaker:  Norman Foo and Anand Rao (U. Sydney/ IBM)
Date:     Thursday, February 5
Time:     3:30
Place:    Wean 5409
Topic:    Dynamic belief revision system


We have combined the notions of constructive negation (Gabbay & Sergot),
stratified logic programs (Apt, Blair, & Walker), and the logic of small
changes (Gardenfors, Makinson, & Alchouron) to produce a sound and complete
belief revision system.  This was done by separating the object logic from
the meta logic.  The object logic turns out to be paraconsistent (Routley &
Priest).

This talk will discuss this work and plans for future extensions.  One
extension is to adapt the logic to conceptual graphs and use it as a
back-end for the CONGRESS system.  Another extension is to attempt a
graceful merger of finite failure negation with constructive negation.

If anyone would like to have an appointment with them, please contact
me @defilippo or x3063.

------------------------------

Date: 03 Feb 87  1153 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - The Synthesis of Dirty Lisp Programs (SU)

     Commonsense and Nonmonotonic Reasoning Seminar


        THE SYNTHESIS OF DIRTY LISP PROGRAMS

                Richard Waldinger
          Artificial Intelligence Center
                SRI International

            Thursday, February 5, 4pm
              Bldg. 160, Room 161K

Most work in program synthesis has focused on the
derivation of applicative programs, which return an
output but produce no side effects.  In this talk we
turn to the synthesis of imperative programs, which
may alter data structures and produce other side effects
as part of their intended behavior.  We concentrate on
"dirty LISP," an imperative LISP with assignment and
destructive list operations (rplaca and rplacd).

We treat dirty LISP with the same deductive approach
we use for the relatively clean applicative programs.
For this purpose, we introduce a new situational
logic, called "dirty-LISP theory."  The talk will
emphasize how to represent instructions and specifications
in this theory.
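  [Editorial aside: rplaca and rplacd destructively replace the car and
  cdr of a cons cell in place, so the change is visible through every
  reference to the shared structure.  The same hazard can be sketched by
  mutating a shared list in any imperative language; a minimal Python
  analogy (illustrative only -- the talk itself concerns LISP):]

```python
# Destructive update of a shared structure: the essence of "dirty"
# (imperative) list operations like rplaca and rplacd.
x = [1, 2, 3]
y = x            # y is another reference to the SAME list, not a copy

x[0] = 99        # roughly (rplaca x 99): overwrite the head in place
print(y)         # [99, 2, 3] -- the change is visible through y

x[1:] = [42]     # roughly (rplacd x '(42)): replace the tail in place
print(y)         # [99, 42]
```

  [Reasoning about such programs requires tracking which structures are
  shared, which is why the abstract introduces a situational logic.]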

------------------------------

Date: Tue, 3 Feb 1987  15:26 CST
From: AI.KUIPERS@R20.UTEXAS.EDU
Subject: Seminar - Why Software Cannot be Property (UTexas)

                 "Why Software Cannot Be Property"
                         Richard Stallman
                    Free Software Foundation

                  Friday, February 6, TAY 3.128
                          tea at 10:30 am
                          talk at 11:00 am

    Richard Stallman is the creator of the Emacs text editor, and of GNU,
    a freely distributed, complete software system to replace UNIX.  He
    was one of the hackers at the MIT Artificial Intelligence Laboratory,
    and contributed in many ways to its excellent software environment,
    including major portions of the design and implementation of the MIT
    Lisp Machine software.  The GNU project is inspired by his observations
    on the personal, societal, and technical problems that result from
    the commercialization of software.

------------------------------

Date: Mon, 2 Feb 87 16:29:11 PST
From: ashutosh%euler.Berkeley.EDU@berkeley.edu (Ashutosh Rege)
Subject: Seminar - Expert Systems in Manufacturing (UCB)


                        CS 298 Seminar

        Expert Systems for Diagnostic and Control in Manufacturing

                        Prof. Alice M. Agogino

                Dept. of Mechanical Engineering, UC Berkeley

                   608-7 Evans, Tuesday Feb.3, 5 - 6 pm.


Abstract: An architecture for the hierarchical integration of sensors and
diagnostic reasoning in expert systems for automated manufacturing and
process control is described.  The system architecture uses influence diagrams
to provide a symbolic representation of the knowledge obtained from experts
with varying degrees of technical proficiency and from diverse domains of
expertise.  The symbolic representation also maps to a functional level of
knowledge which can be used by the knowledge acquisition system to obtain a
more detailed numerical level of information from experts, maintenance
records, statistical data bases or sensor signals.  The diagnostic
implementation uses probabilistic inference to answer questions concerning
possible failures in an automated manufacturing or process system based on
observable sensor readings.  A search through the influence diagram network
provides the topological solution or calculation sequence to answer any
such diagnostic query.  Once the topological and numerical solution to the
influence diagram has been determined, qualitative and quantitative advice
can be relayed to the controller, operator or diagnostician.  A description
of an implementation of such an architecture will be provided.
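  [Editorial aside: the probabilistic inference step the abstract mentions
  can be illustrated in its simplest form -- a single failure hypothesis
  updated from one sensor reading by Bayes' rule.  The function name and
  numbers below are illustrative, not from the talk; a real influence
  diagram chains many such updates over a network of variables.]

```python
def posterior_failure(prior, p_alarm_given_fail, p_alarm_given_ok):
    """P(failure | alarm) by Bayes' rule for one binary sensor."""
    # Total probability of the alarm firing, over both hypotheses.
    p_alarm = (p_alarm_given_fail * prior
               + p_alarm_given_ok * (1.0 - prior))
    return p_alarm_given_fail * prior / p_alarm

# Failures are rare (2%), but the sensor alarms 90% of the time on a
# failure and only 5% of the time otherwise.
print(round(posterior_failure(0.02, 0.90, 0.05), 3))   # about 0.269
```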

------------------------------

End of AIList Digest
********************
From in%@vtcs1 Mon Feb  9 18:43:52 1987
Date: Mon, 9 Feb 87 18:43:28 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #33
Status: R


AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 33

Today's Topics:
  Queries - Terminfo Entry for Symbolics 3640 &
    W.D. Clinger & Representation Languages &
    E. Hausen-Tropper & MICE Expert System Shell,
    Availability of Wilensky's UC & Public-Domain Expert System &
    ExperLogo & Expert Shell for VAX and PCs &
    Coordinator Systems & Connectionism,
  Bibliographies - Connectionism/Neural Nets

----------------------------------------------------------------------

Date: 3 Feb 87 21:43:30 GMT
From: cadre!pitt!darth!beaver!frankb@pt.cs.cmu.edu  (Frank Berry)
Subject: Terminfo entry for Symbolics 3640

I am searching for a terminfo (SysV) entry that will work for
the Release 6.1 or earlier Symbolics terminal, remote-terminal.

The docs prescribe an 'ann arbor ambassador' of unspecified type,
but my use of several 'aaa' term types yields some ugliness.

Alternatively, does anyone have modified sources for remote-terminal.lisp
that deal with the terminal attributes normally encountered in, say,
a 'vi' session?

Please, no flames about regressing to 'vi'; I have to be able to connect
to several SysV machines via serial links.


        Franklyn Berry
        {allegra, bellcore, cadre, idis, psuvax1}!pitt!darth!beaver!frankb

Stingray:"Some day I'm going to call and ask you for a favor..."

------------------------------

Date: 3 Feb 87 18:53:00 GMT
From: uiucdcsm!mccaugh@a.cs.uiuc.edu
Subject: Actor Semantics


 I apologize in advance if this appears in the wrong place...I need to
 communicate with W.D. Clinger re: his work "Foundations of Actor Semantics"
 (AI-TR-633, MIT AI Lab, May, 1981) and so would appreciate knowing how to
 obtain a copy of it or how to reach the author. Thanks very much,

 scott mccaughrin (mccaugh@uiucmsl)

------------------------------

Date: 5 Feb 87 03:37:30 GMT
From: berleant@sally.utexas.edu  (Dan Berleant)
Subject: representation languages: richness and flexibility

Hmm. I just attended a lecture in which frame based representation
schemes were touted on the basis of the fact that representation
languages should be rich and flexible.

Well, it sounds good, it even sounds simple, but I'm not sure what
it means!  In the context of representation languages, what is
'rich', and what is 'flexible'?

Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo,kpno,ctvax}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu

------------------------------

Date: Fri, 6 Feb 87 11:07:21 EST
From: munnari!trlamct.oz!andrew@seismo.CSS.GOV (Andrew Jennings)
Subject: trying to locate paper/author


If anyone knows who or where the author of:

E. Hausen-Tropper, "An application of learning algorithms to telecommunication
networks", presented at the 6th International Workshop on Expert Systems and
their Applications, Avignon, France, 1986

is located, I'd be grateful.


UUCP: ...!{seismo, mcvax, ucb-vision, ukc}!munnari!trlamct.trl!andrew
ARPA: andrew%trlamct.trl.oz@seismo.css.gov
Andrew Jennings                             Telecom Australia Research Labs
"Its not enough to know a few bright sparks ..... you have to burn."

------------------------------

Date: 5 Feb 87 18:59:36 GMT
From: decvax!mcnc!duke!ravi@ucbvax.Berkeley.EDU  (Ravi Subrahmanyan)
Subject: MICE expert system shell

Has anyone ordered the "MICE" expert system tool advertised in the
Winter "AI Magazine"?  At $20 it seems too good to be true.  (Even
if the software isn't any good, the blank disks would almost be worth it.)

I am considering sending them my $20, but thought I'd see if anyone else
had first.

Thanks

Michael Lee Gleicher                    (-: If it looks like I'm wandering
        Duke University                 (-:    around like I'm lost . . .
Now appearing at : duke!ravi            (-:
Or P.O.B. 5899 D.S., Durham, NC 27706   (-:   It's because I am!

------------------------------

Date: 6 Feb 87 04:28:28 GMT
From: nosc!humu!uhmanoa!uhccux!todd@sdcsvax.ucsd.edu  (The Perplexed
      Wiz)
Subject: availability of Wilensky's UC?


Does anyone know if Wilensky's UC (UNIX Consultant) program is available
anywhere?  I'd like to install it on a VAX 8650 with a large population of
new UNIX users.

Please let me know if you know anything about the availability of UC...todd

References:
        Wilensky, R. (1982).  Talking to UNIX in English:  An overview of UC.
            Proceedings of the Second Annual National Conference on
            Artificial Intelligence.  Pittsburgh.
        Wilensky, R. (1983).  Planning and understanding:  A computational
            approach to human reasoning.  New York:  Addison-Wesley Pub. Co.

--
Todd Ogasawara, U. of Hawaii Computing Center
UUCP:           {ihnp4,seismo,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.ARPA
INTERNET:       todd@UHCC.HAWAII.EDU

------------------------------

Date: 6 Feb 87 18:12:36 GMT
From: cmcl2!localhost!aecom!aecom2!lyakhovs@seismo.CSS.GOV
Subject: expert system

 I am looking into research on expert systems.

 I was wondering if anybody has written a shell, or an expert system itself,
 utilizing a blackboard and many other good things.

 If you wouldn't mind sharing your sources with me or the rest of the net,
 please post them or direct me to where I might find one.



         P.S. Thanks in advance for your input.

------------------------------

Date: 8 Feb 87 23:41:59 GMT
From: princeton!puvax2!6065833%PUCC.BITNET@rutgers.rutgers.edu  (Una
      Smith)
Subject: ExperLogo

I have found ExperLogo for the Macintosh a very unsatisfactory product.  There
are several problems with it:

The current version was written for 128 and 512K macs, and I have had various
adventures getting it to run on a mac+.
1)  To work on a mac+, certain files must be together in a certain folder,
    says the company.
2)  The program works with System 3.1 and Finder 5.0, but not with any more
    recent versions.  These versions are obsolete and there are very few
    around.  They were in any case very buggy.
3)  The company told me when I called on 2 occasions 2 different things:
    a) "Finder 5.3?  When did that come out?"  (answer: almost a year ago)
    b) "There is a bug in Finder 5.3, which Apple will have to fix.  Have
       you tried calling Apple?"
4)  I have not, after many hours of work, managed to print to a laserwriter;
    the program offers very little interfacing support for other applications,
    and the documentation is incredibly deficient.

I would not recommend this application to anyone.  I have found MS Logo to
be quite nice and flexible, however.  Can anyone recommend another LOGO
for the macintosh?  Has anyone found a way to print graphics windows on
a laserwriter?  Any information would be greatly appreciated.

------------------------------

Date: Thu, 5 Feb 87 17:24:30 PST
From: Stergios Marinopoul <stergios@rocky.stanford.edu>
Reply-to: rocky!stergios@rocky.stanford.edu (Stergios Marinopoul)
Subject: Re: expert shell for vax & pc's

In article <8702031307.aa03835@IBD.BRL.ARPA> fshaw@BRL.ARPA ("Fred J.
Shaw", IBD) writes:
>
>       In response to your request for an expert system shell that runs
>on both unix and pc's.  TIMM and TIMM-PC (General Research Corp 703-893-5900,
>Mclean Va.) will run on vax and pc respectively.  I have used TIMM a little
>and am not impressed with it's capablities.  I currently use Insight 2+ on
>a pc.
>               Fred

You can obtain an expert system shell that runs on most computers in use today.
It is called CLIPS (C Language Integrated Production System), is available
from COSMIC, and is developed/supported by the AI Section, MPAD Division,
NASA Johnson Space Center.  The cost through COSMIC is ~$200.00 including
source.

The last time I was around there it was running on VAXen, IBMs (big & little),
HP9000, AS9000, CYBER, Amiga, and the Atari.  It was written with the purpose
of being portable, and extendable by the user.

So, check it out.  If you need some help obtaining it let me know,
and I'll see what I can do.

                Stergios Marinopoulos

% UUCP: { lll-crg, seismo, sun } !rocky!stergios                        %
% ARPA:                          f.flex@othello.stanford.edu            %
% USnail:       Crothers Memorial #690, Stanford, CA. 94305             %
% Pa Bell:      (415) 326-9051                                          %

------------------------------

Date: Sun, 8 Feb 87 08:38:22 est
From: davidwk@tecnet-clemson
Subject: References -- Coordinator systems


     In their book "Understanding Computers and Cognition", Winograd and Flores
discuss coordinator systems, which are programs that employ ideas from Searle's
speech acts and system theory to facilitate "conversations" between computer
users.  Is there anything in the literature about these creatures?  I would
greatly appreciate any pointers.
     Thanks in advance.

                                      David Kelley
                                      davidwk@tecnet-clemson.ARPA

------------------------------

Date: 2 Feb 87 14:01:20 GMT
From: mcvax!enea!pesv@seismo.css.gov  (Peter Svenson)
Subject: Connectionism

I wonder if anyone could give me some -> up to date <- pointers to current
literature in the field of connectionism/neural networks.  The only things
I seem to be able to dig up in our technical libraries (in Sweden) is stuff
about moths and leeches and other equally weird things.

Where's the computer-related stuff???  Please give some hints, complete with
which company sells them, ISBN, etc.

Thank you very, very much.


/Peter (turbo) Svenson  pesv@enea (UUCP)   enea!pesv@seismo.arpa (ARPA)

"Zen can make you help other people, or, failing that, at least get them off
 your back."


  [This really belongs on the neuron%ti-csl.csnet@csnet-relay list,
  but I'll go ahead and include the replies that came in on
  comp.ai.  -- KIL]

------------------------------

Date: 5 Feb 87 20:52:13 GMT
From: chandros@topaz.rutgers.edu  (Jonathan A. Chandross)
Subject: Connectionism/Neural Net references

>Peter (turbo) Svenson  pesv@enea (UUCP)   enea!pesv@seismo.arpa (ARPA)
>I wonder if anyone could give me some -> Up to date <- pointers to current
>literature on the field of connectionism/neural networks. The only things
>I seem to be able to dig up of our Technical libraries (in Sweden) is stuff
>about moths and leeches and other equally wierd things.


Enclosed is a small sampling of what is available.  Hope it helps.  The
format is bib, but refer should work.

(Some of the references became a little scrambled courtesy of uncompact.
Sorry if bib/refer complain).



Jonathan A. Chandross
allegra!rutgers!topaz!chandros



%A Dell, Gary S.
%T A Spreading-Activation Theory of Retrieval in Sentence Production
%J Psychological Review
%V 93
%N 3
%D 1983
%P 283-321

%A Fahlman, Scott E.
%T Representing Implicit Knowledge
%B Parallel Models of Associative Memory
%E Geoffrey E. Hinton
%E James A. Anderson
%D 1981
%I Lawrence Erlbaum Associates
%C Hillsdale, New Jersey

%A Fanty, Mark
%T Context-Free Parsing in Connectionist Networks
%R Tech Report TR174
%I Department of Computer Science, University of Rochester
%D Nov. 1985

%A Feldman, Jerome A.
%T A Connectionist Model of Visual Memory
%B Parallel Models of Associative Memory
%E Geoffrey E. Hinton
%E James A. Anderson
%D 1981
%I Lawrence Erlbaum Associates
%C Hillsdale, New Jersey

%A Feldman, Jerome A.
%A Dana H. Ballard
%T Connectionist Models and Their Properties
%J Cognitive Science
%V 6
%P 205-254
%D 1982

%A Feldman, Jerome A.
%T Dynamic Connections in Neural Networks
%J Biological Cybernetics
%I Springer-Verlag
%V 46
%D 1982
%P 27-39

%A Fodor, Jerry A.
%T Information and Association
%O This paper is a critique of connectionism.  Author is with the Department
of Philosophy, MIT, Cambridge, Massachusetts.


%A Hopfield, John J.
%T Neural Networks and physical systems with emergent collective
computational abilities
%J Proceedings of the National Academy of Sciences
%V 79
%P 2554-2558
%D Apr. 1982

%A Hopfield, John J.
%A David W. Tank
%T Simple "Neural" Optimization Networks: An A/D Converter, Signal Decision
Circuit, and a Linear Programming Circuit
%J IEEE Transactions on Circuits and Systems
%V CAS-33
%N 5
%P 533-541
%D May 1986

%A Hopfield, John J.
%A David W. Tank
%T Collective Computation with Continuous Variables
%J Disordered Systems and Biological Organization
%I Springer-Verlag
%O In press, 1986

%A Hopfield, John J.
%A David W. Tank
%T "Neural" Computation of Decisions in Optimization Problems
%J Biological Cybernetics
%I Springer-Verlag
%V 52
%D 1985
%P 141-152

%A Kosslyn, Stephen M.
%A Gary Hatfield
%T Representation without Symbol Systems
%J Social Research
%V 51
%N 4
%D 1984
%P 1019-1044
%O Winter 1984

%A Matthews, Robert J.
%T Problems with Representationalism
%J Social Research
%V 51
%N 4
%D 1984
%O Winter 1984
%P 1065-1097

%A McClelland, James L.
%A Jerome Feldman
%A Beth Adelson
%A Gordon Bower
%A Drew McDermott
%T Connectionist Models and Cognitive Science: Goals, Directions and
Implications
%D Jan. 1987
%O National Science Foundation Grant Proposal

%A McClelland, James L.
%A David E. Rumelhart
%A The PDP Research Group
%T Parallel Distributed Processing: Explorations in the Microstructures
of Cognition
%I MIT Press
%C Cambridge, Massachusetts
%D 1986
%O Two Volume Set


%A Plaut, David C.
%T Visual Recognition of Simple Objects by a Connection Network
%R Tech Report TR143
%I Computer Science Department, University of Rochester
%D Aug. 1984

%A Pylyshyn, Zenon W.
%T Computation and Cognition: Toward a Foundation for Cognitive Science
%I MIT Press
%D 1984
%C Cambridge, Massachusetts

%A Reiss, Richard F.
%T An Abstract Machine Based on Classical Association Psychology
%B Proceedings 1962 Joint Computer Conference
%I AFIPS
%D 1962
%V 21

%A Shastri, Lokendra
%A Jerome A. Feldman
%T Semantic Networks and Neural Nets
%R Tech Report TR131
%I Computer Science Department, University of Rochester
%D June 1984

%A Schwartz, Robert
%T "The" Problems of Representation
%J Social Research
%V 51
%N 4
%D 1984
%P 1047-1064
%O Winter 1984

%A Touretzky, David S.
%A Geoffrey E. Hinton
%T Symbols Among the Neurons: Details of a Connectionist Inference
Architecture
%J IJCAI
%D Aug. 1985

------------------------------

Date: 6 Feb 87 22:29:37 GMT
From: ihnp4!chinet!nucsrl!coray@ucbvax.Berkeley.EDU  (Elizabeth Coray)
Subject: Re: Connectionism


Re:  Connectionist References


Ackley, D.H., Hinton, G.E., and Sejnowski, T.J., "A learning algorithm for
Boltzmann Machines", COGNITIVE SCIENCE 9, pp. 147-169, 1985.

Ballard, D.H., Hinton, G.E., and Sejnowski, T.J. "Parallel visual computation",
NATURE (London) 306, pp.21-26, 1983

Ballard, D.H., "Cortical connections and parallel processing: structure
and function", BEHAV. BRAIN SCI., 1985.

Barto, A.G. "Learning by statistical cooperation of self-interested neuron-
like computing elements", HUMAN NEUROBIOLOGY 4, pp. 229-256, 1985.

Feldman, J.A. and Ballard, D.H., "Connectionist models and their properties",
COGNITIVE SCIENCE 6, pp. 205-254, 1982.

Hinton, G.E., "Learning in Massively Parallel Nets", an invited talk at the
AAAI 1986 conference in Philadelphia.  (The talk is not published in the
proceedings but may be available from the author--don't quote me.)

Hinton, G.E. and Sejnowski, T.J. "Optimal perceptual inference" in
PROCEEDINGS OF THE IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION
AND PATTERN RECOGNITION, pp. 448-453, 1983.

Hopfield, J.J. and Tank, D.W., " 'Neural' computation of decisions in
optimization problems", BIOLOGICAL CYBERNETICS 52, pp. 141-152, 1985.

Kienker, P.K., Sejnowski, T.J., Hinton, G.E., and Schumacher, L.E., "Separating
figure from ground with a parallel network", PERCEPTION 15, pp. 197-216.

Kirkpatrick, S., Gelatt, S., and Vecchi, M., "Optimization by Simulated
Annealing", SCIENCE 220, pp. 672-680, 1983.

Rumelhart, D.E., McClelland, J.L., and the PDP research group, PARALLEL
DISTRIBUTED PROCESSING: EXPLORATIONS IN THE MICROSTRUCTURE OF COGNITION,
MIT Press, Cambridge Mass., 1986.

Saund, Eric "Abstraction and Representation of Continuous Variables
in Connectionist Networks", AAAI CONFERENCE PROCEEDINGS, pp. 638-644,
1986.

Sejnowski, T.J., Kienker, P.K., and Hinton, G.E., "Learning symmetry groups
with hidden units:  Beyond the perceptron", PHYSICA D 22, 1986.



These references offer a starting point.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb  9 18:44:12 1987
Date: Mon, 9 Feb 87 18:43:53 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #34
Status: R


AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 34

Today's Topics:
  Philosophy - Consciousness & Objective Measurement of Subjective Variables

----------------------------------------------------------------------

Date: 26 Jan 87 09:41:00 GMT
From: mcvax!unido!ztivax!steve@seismo.css.gov
Subject: Harnad on Consciousness - (nf)

/* Written  5:10 pm  Jan 23, 1987 by harnad@mind in ztivax:comp.ai */
        Everyone knows that there's no
        AT&T to stick a pin into, and to correspondingly feel pain. You can do
        that to the CEO, but we already know (modulo the TTT) that he's
        conscious. You can speak figuratively, and even functionally, of a
        corporation as if it were conscious, but that still doesn't make it so.
        To telescope the intuitive sense
        of the rebuttals: Do you believe rooms or corporations feel pain, as
        we do?

        --

        Stevan Harnad                                  (609) - 921 7771
        {allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
        harnad%mind@princeton.csnet
/* End of text from ztivax:comp.ai */

How do you know that AT&T doesn't feel pain?  How do you know that corporations
are not conscious?  People have referred to "national consciousness" (and other
consciousnesses of organisations) for a long time.  The analogy works quite
well, too well for me to be certain that there is no truth to them.  If neurons
are conscious, what kind of picture would they have of the consciousness of
a human?  In my opinion, not much of one.  Similarly, I cannot rule out the
possibility that corporations are also conscious.  Corporations appear to act
in a conscious manner, but they do not share much experience with us neurons
(I mean humans).  Therefore, we cannot do much of a Total Turing Test for
Corporations.

Harnad also suggested in another posting that he has never seen a convincing
argument that conscious interpretation is necessary to understand a given
set of objective behavior.  Has he ever, I wonder, tried doing that to
human behavior?  (I don't think I'm being very clear here.)

My position is this:  if the conscious interpretation of a given set of
behavior is useful, then by all means interpret the behavior as conscious!
As for proving that behavior is conscious, I feel that that is impossible.
(At least for a philosopher.)  For to do so would require a rigorous,
testable definition of consciousness and people (especially philosophers)
have a mystique about consciousness:  if someone provides a rigorous,
testable definition for consciousness, then people will not accept it
because it is not mysterious - "it is just" something.

I'm afraid I'm not being very clear again.  Consider the great numbers of
people who are very impressed about a computer learning program, and then
when they hear how it works, they say "that's not learning, that's just
optimisation [or whatever the learning algorithm is]."  People have an
intuition that says things like "learning" "intelligence" and "consciousness"
are things that cannot be defined, and will reject definitions of them that
can be used for anything.  This mystique has been greatly reduced over the
past few years for "intelligence", and people are wasting less and less time
arguing about whether computer programs really learn.

I suggest that the problem with "consciousness" is the same, that we reject
rigorous definitions because of our desire for a mystique.  In the end, I
personally feel that the issue is not particularly important - that when it
becomes really useful to think of our programs as conscious (if it ever does)
then we will, and arguing about whether they really are conscious, especially
before we talk about them (routinely) as being conscious, is an exercise in
futility.  I guess, though, that someone ought to argue against Minsky just
for the sake that he not go unchallenged.

When biologists get together, they don't waste their time trying to define
"life".  No one has come up with a good definition of "life" to date.  There
was a pretty good one a while back (unfortunately I don't remember all of it),
and part of it was something about converting energy for its benefit.  Some
clever person showed that a rock satisfied this definition of life!  When
sunlight falls on a rock, the rock warms up (I've forgotten too much of this
anecdote; I don't remember why that is to the rock's benefit).  When biologists
do talk about the meaning of (I mean, the definition of) life, they don't
expect to get anywhere, it is more of a game or something.  And I suppose
occasionally some biologist thinks he's come up with the Ultimate Definition
of Life (using the Ten Tests of Timbuktu, or TTT :-) and goes on a one-man
crusade to convince the community that that's The Definition they've all
been looking for.

Have fun trying to send mail to me, it probably is possible but don't ask
me how.

Steve Clark   EUnet: unido!ztivax!steve
Usenet: topaz!princeton!siemens!steve
CSnet:  something like steve@siemens.siemens-rtl.com

------------------------------

Date: 31 Jan 87 23:13:11 GMT
From: clyde!burl!codas!mtune!mtund!adam@rutgers.rutgers.edu  (Adam V.
      Reed)
Subject: Re: Re: Objective measurement of subjective variables

This is a reply to Stevan Harnad, who wrote:
> adam@mtund.UUCP (Adam V. Reed), of AT&T ISL Middletown NJ USA, wrote:
>
> >     Stevan Harnad makes an unstated assumption... that subjective
> >     variables are not amenable to objective measurement. But if by
> >     "objective" Steve means, as I think he does, "observer-invariant", then
> >     this assumption is demonstrably false.
>
> I do make the assumption (let me state it boldly) that subjective
> variables are not objectively measurable (nor are they objectively
> explainable) and that that's the mind/body problem. I don't know what
> "observer-invariant" means, but if it means the same thing as in
> physics -- which is that the very same physical phenomenon can
> occur independently of any particular observation, and can in
> principle be measured by any observer, then individuals' private events
> certainly are not such, since the only eligible observer is the
> subject of the experience himself (and without an observer there is no
> experience -- I'll return to this below). I can't observe yours and you
> can't observe mine.

Yes, and in Efron's analogy, A can't observe B's, and vice versa.
However, I don't buy the assumption that two must *observe the same
instance of a phenomenon* in order to perform an *observer-independent
measurement of the same (generic) phenomenon*. The two physicists can
agree that they are studying the same generic phenomenon because they
know they are doing similar things to similar equipment, and getting
similar results. But there is nothing to prevent two psychologists from
doing similar (mental) things to similar (mental) equipment and getting
similar results, even if neither engages in any overt behavior apart
from reporting the results of his measurements to the other. My point is
that this constitutes objective (observer-independent) measurement of
private (no behavior observable by others) mental processes.

> That's one of the definitive features of the
> subjective/objective distinction itself, and it's intimately related to
> the nature of experience, i.e., of subjectivity, of consciousness.
>
> >     Whether or not a stimulus is experienced as belonging to some target
> >     category is clearly a private event...[This is followed by an
> >     interesting thought-experiment in which the signal detection parameter
> >     d' could be calculated for himself by a subject after an appropriate
> >     series of trials with feedback and no overt response.]... the observer
> >     would be able to mentally compute d' without engaging in any externally
> >     observable behavior whatever.
>
> Unfortunately, this in no way refutes the claim that subjective experience
> cannot be objectively measured or explained. Not only is there (1) no way
> of objectively testing whether the subject's covert calculations on
> that series of trials were correct,

This objection applies with equal force to the observation, recording
and calculations of externally observable behavior. So what?
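For readers unfamiliar with the psychophysics here, the d' that the subject is imagined to compute covertly has a standard formula: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch (not from the original exchange; the function name and trial counts are illustrative) using only the Python standard library:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Example: 80 hits, 20 misses, 30 false alarms, 70 correct rejections
print(round(d_prime(80, 20, 30, 70), 3))  # -> 1.366
```

The point of the thought-experiment is that nothing in this arithmetic requires an overt response per trial: a subject who privately tallies hits and false alarms over a feedback series could, in principle, carry out exactly this calculation mentally.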

> not only is there (2) no way of
> getting any data AT ALL without his overt mega-response at the end

Yes, but *this is not what is being measured*. Or is the subject matter
of physics the communication behavior of physicists?

> (unless, of course, the subject is the experimenter, which makes the
> whole exercise solipsistic), but, worst of all, (3) the very same
> performance data could be generated by presenting inputs to a
> computer's transducer, and no matter how accurately it reported its
> d', we presumably wouldn't want to conclude that it had experienced anything
> at all. So what's OBJECTIVELY different about the human case?

What is objectively different about the human case is that not only is
the other human doing similar (mental) things, he or she is doing those
things to similar (human mind implemented on a human brain) equipment.
If we obtain similar results, Occam's razor suggests that we explain
them similarly: if my results come from measurement of subjectively
experienced events, it is reasonable for me to suppose that another
human's similar results come from the same source. But a computer's
"mental" equipment is (at this point in time) sufficiently dissimilar
from a human's that the above reasoning would break down at the point
of "doing similar things to similar equipment with similar results",
even if the procedures and results somehow did turn out to be identical.

> At best, what's being objectively measured happens to correlate
> reliably with subjective experience (as we can each confirm in our own
> cases only -- privately and subjectively). What we are actually measuring
> objectively is merely behavior

Not true. As I have shown in my original posting, d' can be measured
without there *being* any behavior prior to measurement. There is
nothing in Harnad's reply to refute this.

> (and, if we know what to look for, also
> its neural substrate). By the usual objective techniques of scientific
> inference on these data we can then go on to formulate (again objective)
> hypotheses about underlying functional (causal) mechanisms. These should
> be testable and may even be valid (all likewise objectively). But the
> testability and validity of these hypotheses will always be objectively
> independent of any experiential correlations (i.e., the presence or
> absence of consciousness).

Why? And how can this be true in cases when it is the conscious
experience that is being measured?

> To put it my standard stark way: The psychophysics of a conscious
> organism (or device) will always be objectively identical to that
> of a turing-indistinguishable unconscious organism (or device) that
> merely BEHAVES EXACTLY AS IF it were conscious. (It is irrelevant whether
> there are or could be such organisms or devices; what's at issue here is
> objectivity. Moreover, the "reliability" of the correlations is of
> course objectively untestable.) This leaves subjective experience a
> mere "nomological dangler" (as the old identity theorists used to call
> it) in a lawful psychophysical account. We each (presumably) know it's
> there from our respective subjective observations. But, objectively speaking,
> psychophysics is only the study of, say, the detecting and discriminating
> capacity (i.e., behavior) of our transducer systems, NOT the qualities of our
> conscious experience, no matter how tight the subjective correlation.
> That's no limit on psychophysics. We can do it as if it were the study
> of our conscious experience, and the correlations may all be real,
> even causal. But the mind/body problem and the problem of objective
> measurement and explanation remain completely untouched by our findings,
> both in practice and in principle.

The above re-states Steve's position, but fails to deal with my objections
to it.

> So even in psychophysics, the appropriate research strategy seems to
> be methodological epiphenomenalism. If you disagree, answer this: What
> MORE is added to our empirical mission in doing psychophysics if we
> insist that we are not "merely" trying to account for the underlying
> regularities and causal mechanisms of detection, discrimination,
> categorization (etc.) PERFORMANCE, but of the qualitative experience
> accompanying and "mediating" it? How would someone who wanted to
> undertake the latter rather than merely the former go about things any
> differently, and how would his methods and findings differ (apart from
> being embellished with a subjective interpretation)? Would there be any
> OBJECTIVE difference?

I think so - I would not accept as legitimate any psychological theory
which appeared to contradict my conscious experience, and failed to
account for the apparent contradiction. As far as I can tell, Steve's
position means that he would not disqualify a psychological theory just
because it happened to be contradicted by his own conscious experience.

> I have no lack of respect for psychophysics, and what it can tell us
> about the functional basis of categorization. (I've just edited and
> contributed to a book on it.) But I have no illusions about its being
> in any better a position to make objective inroads on the mind/body
> problem than neuroscience, cognitive psychology, artificial
> intelligence or evolutionary biology -- and they're in no position at all.

> >     In principle, two investigators could perform the [above] experiment
> >     ...and obtain objective (in the sense of observer-independent)
> >     results as to the form of the resulting lawful relationships between,
> >     for example, d' and memory retention time, *without engaging in any
> >     externally observable behavior until it came time to compare results*.
>
> I'd be interested in knowing how, if I were one of the experimenters
> and Adam Reed were the other, he could get "objective
> (observer-independent) results" on my experience and I on his. Of
> course, if we make some (question-begging) assumptions about the fact
> that the experience of our respective alter egos (a) exists, (b) is
> similar to our own, and (c) is veridically reflected by the "form" of the
> overt outcome of our respective covert calculations, then we'd have some
> agreement, but I'd hardly dare to say we had objectivity.

These assumptions are not "question-begging": they are logically
necessary consequences of applying Occam's razor to this situation (see
above). And yes, I would tend to regard the resulting agreement among
different subjective observers as evidence for the objectivity of their
measurements.

> (What, by the way, is the difference in principle between overt behavior
> on every trial and overt behavior after a complex-series-of-trials?
> Whether I'm detecting individual signals or calculating cumulating d's
> or even more complex psychophysical functions, I'm just an
> organism/device that's behaving in a certain way under certain
> conditions. And you're just a theorist making inferences about the
> regularities underlying my performance. Where does "experience" come
> into it, objectively speaking? -- And you're surely not suggesting that
> psychophysics be practiced as a solipsistic science, each experimenter
> serving as his own sole subject: for from solipsistic methods you can
> only arrive at solipsistic conclusions, trivially observer-invariant,
> but hardly objective.)

For measurement to be *measurement of behavior*, the behavior must be,
in the temporal sequence, prior to measurement. But if the only overt
behavior is the communication of the results of measurement, then the
behavior occurs only after measurement has already taken place. So the
measurement in question cannot be a measurement of behavior, and must be
a measurement of something else. And the only plausible candidate for
that "something else" is conscious experience.

> >     The following analogy (proposed, if I remember correctly, by Robert
> >     Efron) may illuminate what is happening here. Two physicists, A and B,
> >    live in countries with closed borders, so that they may never visit each
> >     other's laboratories and personally observe each other's experiments.
> >     Relative to each other's personal perception, their experiments are
> >     as private as the conscious experiences of different observers. But, by
> >     replicating each other's experiments in their respective laboratories,
> >     they are capable of arriving at objective knowledge. This is also true,
> >     I submit, of the psychological study of private, "subjective"
> >     experience.
>
> As far as I can see, Efron's analogy casts no light at all.

See my comments at the beginning of this reply.

> It merely reminds us that even normal objectivity in science (intersubjective
> repeatability) happens to be piggy-backing on the existence of
> subjective experience. We are not, after all, unconscious automata. When we
> perform an "observation," it is not ONLY objective, in the sense that
> anyone in principle can perform the same observation and arrive at the
> same result. There is also something it is "like" to observe
> something -- observations are also conscious experiences.
>
> But apart from some voodoo in certain quantum mechanical meta-theories,
> the subjective aspect of objective observations in physics seems to be
> nothing but an innocent fellow-traveller: The outcome of the
> Michelson-Morley Experiment would presumably be the same if it were
> performed by an unconscious automaton, or even if WE were
> unconscious automata.
> This is decidedly NOT true of the (untouched) subjective aspect of a
> psychophysical experiment. Observer-independent "experience" is a
> contradiction in terms.

Yes, but observer-independent *measurement of* experience is not. See
above.

> (Most scientists, by the way, do not construe repeatability to require
> travelling directly to one another's labs; rather, it's a matter of
> recreating the same objective conditions. Unfortunately, this does not
> generalize to the replication of anyone else's private events, or even
> to the EXISTENCE of any private events other than one's own.)

Yes it does: see the argument from Occam's razor earlier in this
article.

> Note that I am not denying that objective knowledge can be derived
> from psychophysics; I'm only denying that this can amount to objective
> knowledge about anything MORE than psychophysical performance and its
> underlying causal substrate. The accompanying subjective phenomenology is
> simply not part of the objective story science can tell, no matter how, and
> how tightly, it happens to be coupled to it in reality. That's the
> mind/body problem, and a fundamental limit on objective inquiry.

Steve seems to be saying that the mind-body problem constitutes "a
fundamental limit on objective inquiry", i.e. that this problem is *in
principle* incapable of ever being solved. I happen to think that human
consciousness is a fact of reality and, like all facts of reality, will
prove amenable to scientific explanation. And I like to think that
this explanation will constitute, in some scientifically relevant sense,
a solution to the "mind-body problem". So I don't see this problem as a
"fundamental limit".

> Methodological epiphenomenalism recommends we face it and live with
> it, since not that much is lost. The "incompleteness" of an objective
> account is, after all, just a subjective problem. But supposing away
> the incompleteness -- by wishful thinking, hopeful over-interpretation,
> hidden (subjective) premises or blurring of the objective/subjective
> distinction -- is a logical problem.

Yes, but need it remain one forever?

                Adam Reed (mtund!adam, attmail!adamreed)

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Feb 10 01:54:31 1987
Date: Tue, 10 Feb 87 01:54:09 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #35
Status: R


AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 35

Today's Topics:
  Philosophy - Consciousness & Nonconsciousness

----------------------------------------------------------------------

Date: Mon, 02 Feb 87 17:35:13 n
From: DAVIS@EMBL.BITNET
Subject: backtracking.....


        It seems like it's time on the AIList to cut around some of the
interesting but bottomless waffle that has come to fill this useful organ.
I fear that Stevan Harnad's most important point is being lost in his
endless efforts to deal with a shower of light and insubstantial blows.
At the same time, his own language and approach to the problem are
obscuring some of the issues he himself raises.

        I cannot help but notice that the debates on consciousness that
we're seeing resemble the debating of the Data General engineers in
Tracy Kidder's book "The Soul of a New Machine". It's time to wake up,
folks - we're not building a new Eclipse, with some giant semiconductor
supplying the new 60880 'conscious' chip, and the only real task left
being the arranging of the goodies to make use of its wondrous capacities.
No, it's time to wake up to the mystery of the C-1: How can ANYTHING *know*
ANYTHING at all? We are not concerned with how we shuffle the use of
memory, illusion, perceptual inputs etc., so as to maximise efficiency and
speed - we are concerned with the most fundamental problem of all - how
can we know? Too many contributors seem to me to be concerned with the
secondary extension of this question to a specific version of the general
one "how can we know about X?". It may be important for AI programmers
to deal with ways of shuffling the data and the processing order so that
a system gets access to X for further data manipulation, but this has
ABSOLUTELY NOTHING to do with the primary question of how it is possible
to know anything.....

        The glimpses of Dennett & Hofstadter's wise approach that we've seen
are encouraging, but still we see Harnad struggling with why's and not how's.
Being a molecular biologist by trade if not religion, I would like to
temporarily assert that consciousness is a *biological* phenomenon, and,
taking Harnad's bull by its horns once again, to assert further that because
this is so, the question of *why* consciousness is used is quite irrelevant
in this context. Haven't any of you read Armstrong and the other arguers
for the selection of consciousness as a means for social interaction? I
agree with Harnad that to put the origin of consciousness in the same murky
quasi-random froth as, say, that of self-splicing introns is a backdoor exit
from the problem. However, consciousness would certainly seem to be here
- leave it to the evolutionary biologists to sort out why, while we get
on with the how.....

        Not that we are getting anywhere fast though...and I would hark
back to the point I tried to raise a while back, merely to get sidetracked
into the semantics of intention vs. intension. This is the same point that
Harnad has been making somewhat obliquely in his wise call for
performance-oriented development of AI systems. We have to separate the
issues of personal consciousness (subjective experience) and the notorious
"problem of other minds". When dealing with the possibilities of designing
conscious machines, we can only concern ourselves with the latter issue -
such devices will *always* be "other minds" and never available for our
subjective experience. As many contributors have shown, we can only judge
"other-mind" consciousness by performance-oriented measures. So, on with
the nuts, on with the bolts, and forward to sentient silicon......

paul ("the questions were easy - I just didn't know the answers") davis

mail: EMBL, Postfach 10.22.09, 6900 Heidelberg, FRG
email: davis@embl.bitnet
                (and hence available from UUCP, ARPA, JANET, CSNET etc...)

------------------------------

Date: 3 Feb 87 06:01:23 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Why I Am Not A Methodological Mentalist


cugini@icst-ecf.UUCP ("CUGINI, JOHN") wrote on mod.ai:

>       "Why I am not a Methodological Epiphenomenalist"

This is an ironic twist on Russell's sceptical book about religious
beliefs! I'm the one who should be writing "Why I'm Not a Methodological
Mentalist."

>       Insofar as methodological epiphenomenalism (ME) is simply the
>       following kind of counsel: "when trying to get a computer to
>       play chess, don't worry about the subjective feelings which
>       accompany human chess-playing, just get the machine to make
>       the right moves", I have no particular quarrel with it.

It's a bit more than that, as indicated by the "Total" in the Total
Turing Test. The counsel is NOT to rest with toy models and modules:
the only kind of performance that will meet our sole frail intuitive
criterion for contending with the real-world other-minds problem --
indistinguishability from a person like any other -- is the total
performance capacity of a (generic) person. Settling for less mires us
in an even deeper underdetermination than we're stuck in anyway. The
asymptotic TTT is the only way to reduce that underdetermination to
the level we're already accustomed to. Chess-playing is simply not
enough. In mind-modeling, it's all-or-nothing. And this is again a
methodological matter. [I know that this is going to trigger (not from
Cugini) another series of queries about animals, retardates, aliens,
subtotal modules. Please, first read the prior iterations on those matters...]

>       It is the claim that the TTT is the only relevant criterion (or,
>       by far, the major criterion) for the presence of consciousness that
>       strikes me as unnecessarily provocative and, almost as bad, false.
>       It is not clear to me whether this claim is an integral part of ME,
>       or an independent thesis... If the claim instead were
>       that the TTT is the major criterion for the presence of intelligence
>       (defined in a perhaps somewhat austere way, as the ability to
>       perform certain kinds of tasks...) then, again, I would have no
>       serious disagreement.

The TTT is an integral part of ME, and the shorthand reminder of why it
must be is this: A complete, objective, causal theory of the mind will
always be equally true of conscious organisms like ourselves AND of
insentient automata that behave exactly as if they were conscious --
i.e., are turing-indistinguishable from ourselves. (It is irrelevant
whether there could really be such insentient perform-alikes; the point is
that there is no objective way of telling the difference. Hence the
difference, if any, cannot make a difference to the objective
theory. Ergo, methodological epiphenomenalism.)

The TTT may be false, of course; but unfortunately, it's not
falsifiable, so we cannot know whether or not it is in reality false.
[I'd also like to hold off the hordes -- again not Cugini -- who are
now poised to pounce on this "nonfalsifiability." The TTT is a
methodological criterion and not an empirical hypothesis. Its only
justification is that it's the only criterion available and it's the
one we use in real life already. It's also the best that one can hope for
from objective inquiry. And what is science, if not that?]

Nor will it do to try to duck the issue by focusing on "intelligence."
We don't know what intelligence is, except that it's something that
minds have, as demonstrated by what minds do. The issue, as I must
relentlessly keep recalling, is not one of definition. It cannot be
settled by fiat. Intelligence is as intelligence does. We know minds
are intelligent, if anything is. Hence only the capacity to pass the
TTT is so far entitled to be dubbed intelligent. Lesser performances
-- toy models and modules -- are no more than clever tricks, until we
know how (and whether) they figure functionally in a candidate that
can pass the TTT.

>       It does bother me (more than it does you?) that consciousness,
>       of all things, consciousness, which may be subjective, but, we
>       agree, is real, consciousness, without which my day would be so
>       boring, is simply not addressed by any systematic rational inquiry.

It does bother me. It used to bother me more; until I realized that
fretting about it only had two outcomes: To lure me into flawed
arguments about how consciousness can be "captured" objectively after
all, and to divert attention from ambitious performance modeling to
doing hermeneutics on trivial performances and promises of
performances. It also helps to settle my mind about it that if one
adopts an epiphenomenalist stance not only is consciousness
bracketed, but so is its vexatious cohort, "free will." I'm less
bothered in principle by the fact that (nondualistic) science has no
room for free will -- that it's just an illusion -- but that certainly
doesn't make the bothersome illusion go away in practice. (By the way,
without consciousness, your day wouldn't even be boring.)
--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 2 Feb 87 22:52:29 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin
      Taylor)
Subject: Re: Minsky on Mind(s)

>> = Martin Taylor (me)  > = Stevan Harnad
>
>>      Of course [rooms and corporations] do not feel pain as we do,
>>      but they might feel pain, as we do.
>
>The solution is not in the punctuation, I'm afraid. Pain is just an
>example standing in for whether the candidate experiences anything AT
>ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
>it to be conscious.
Understood.  Nevertheless, the punctuation IS important, for although it
is most unlikely they feel as we do, it is less unlikely that they feel.

>
>>      [i] Occam's razor demands that we describe the world using the simplest
>>      possible hypotheses.
>>      [ii] It seems to me simpler to ascribe consciousness to an entity that
>>      resembles me in many ways than not to ascribe consciousness to that
>>      entity.
>>      [iii] I don't think one CAN use the TTT to assess whether another
>>      entity is conscious.
>>      [iv] Silicon-based entities have few overt points of resemblance,
>>      so their behaviour has to be convincingly like mine before I will
>>      grant them a consciousness like mine.
>
>{i} Why do you think animism is simpler than its alternative?
Because of [ii].
>{ii} Everything resembles everything else in an infinite number of
>ways; the problem is sorting out which of the similarities is relevant.
Absolutely.  Watanabe's Theorem of the Ugly Duckling applies.  The
distinctions (and similarities) we deem important are no more or less
real than the infinity of ones that we ignore.  Nevertheless, we DO see
some things as more alike than other things, because we see some similarities
(and some differences) as more important than others.

In the matter of consciousness, I KNOW (no counterargument possible) that
I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
conscious.  I don't know this of Ken or Steve, but their output on a
computer terminal is enough like mine for me to presume by that similarity
that they are human.  By Occam's razor, in the absence of evidence to the
contrary, I am forced to believe that most humans work the way I do.  Therefore
it is simpler to presume that Ken and Steve experience consciousness than
that they work according to one set of natural laws, and I, alone of all
the world, conform to another.

>{iii} The Total Turing Test (a variant of my own devising, not to be
>confused with the classical turing test -- see prior chapters in these
>discussions) is the only relevant criterion that has so far been
>proposed and defended. Similarities of appearance are obvious
>nonstarters, including the "appearance" of the nervous system to
>untutored inspection. Similarities of "function," on the other hand,
>are moot, pending the empirical outcome of the investigation of what
>functions will successfully generate what performances (the TTT).
All the TTT does, unless I have it very wrong, is provide a large set of
similarities which, taken together, force the conclusion that the tested
entity is LIKE ME, in the sense of [i] and [ii].

>{iv} [iv] seems to be in contradiction with [iii].
Not at all.  What I meant was that the biological mechanisms of natural
life follow (by Occam's razor) the same rules in me as in dogs or fish,
and that I therefore need less information about their function than I
would for a silicon entity before I would treat one as conscious.

One of the paradoxes of AI has been that as soon as a mechanism is
described, the behaviour suddenly becomes "not intelligent."   The same
is true, with more force, for consciousness.  In my theory about another
entity that looks and behaves like me, Occam's razor says I should
presume consciousness as a component of their functioning.  If I have
been told the principles by which an entity functions, and those principles
are adequate to describe the behaviour I observe, Occam's razor (in its
original form "Entities should not needlessly be multiplied") says that
I should NOT introduce the additional concept of consciousness.  For the
time being, all silicon entities function by principles that are well
enough understood that the extra concept of consciousness is not required.
Maybe this will change.

>
>>      The problem splits in two ways: (1) Define consciousness so that it does
>>      not involve a reference to me, or (2) Find a way of describing behaviour
>>      that is simpler than ascribing consciousness to me alone.  Only if you
>>      can fulfil one of these conditions can there be a sensible argument
>>      about the consciousness of some entity other than ME.
>
>It never ceases to amaze me how many people think this problem is one
>that is to be solved by "definition." To redefine consciousness as
>something non-subjective is not to solve the problem but to beg the
>question.
>
I don't see how you can determine whether something is conscious without
defining what consciousness is.  Usually it is done by self-reference.
"I experience, therefore I am conscious."  Does he/she/it experience?
But never is it prescribed what experience means.  Hence I do maintain
that the first problem is that of definition.  But I never suggested that
the problem is solved by definition.  Definition merely makes the subject
less slippery, so that someone who claims an answer can't be refuted by
another who says "that wasn't what I meant at all."

The second part of my split attempts to avoid the conclusion from
similarity that beings like me function like me.  If a simpler description
of the world can be found, then I no longer should ascribe consciousness
to others, whether human or not.  Now, I believe that better descriptions
CAN be found for beings as different from me as fish or bacteria or
computers.  I do not therefore deny or affirm that they have experiences.
(In fact, despite Harnad, I rather like Ken Laws's (?) proposition that
there is a graded quality of experience, rather than an all-or-none
choice).  What I do argue is that I have better grounds for not treating
these entities as conscious than I do for more human-like entities.

Harnad says that we are not looking for a mathematical proof, which is
true.  But most of his postings demand that we show the NEED for assuming
consciousness in an entity, which is empirically the same thing as
proving them to be conscious.
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

------------------------------

Date: Sun 8 Feb 87 22:52:30-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Disclaimer of Consciousness

  From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin Taylor)
  In the matter of consciousness, I KNOW (no counterargument possible)
  that I am conscious, Ken Laws knows he is conscious, Steve Harnad
  knows he is conscious.

I'm not so sure that I'm conscious.  Oh, in the linguistic sense I
have the same property of consciousness that [we presume] everyone
has.  But I question the "I experience a toothache" touchstone for
consciousness that Steve has been using.  On the one hand, I'm not
sure I do experience the pain because I'm not sure what "I" is doing
the experiencing; on the other hand, I'm not sure that silicon systems
can't experience pain in essentially the same way.  Instead of claiming
that robots can be conscious, I am just as willing to claim that
consciousness is an illusion and that I am just as unconscious as
any robot.

It is difficult to put this argument into words because the
presumption of consciousness is built into the language itself, so
let's try to examine the linguistic assumptions.

First, the word "I".  The aspect of my existence that we are interested
in here is my mind, which is somehow dependent on my brain as a substrate.
My brain is a system of neural circuits.  The "I" is a property of the
system, and cannot be claimed by any neural subsystem (or homunculus),
although some subsystems may be more "central" to my identity than others.

Consciousness would also seem to be a property of the whole system.
But not so fast -- there is strong evidence that consciousness (in the
sense of experiencing and responding to stimuli) is primarily located
in the brain stem.  Large portions of the cortex can be cut away with
little effect on consciousness, but even slight damage to the upper
brain stem causes loss of consciousness.  [I am not recanting my position
that consciousness is quantitative across species.  Within something
as complex as a human (or a B-52), emergent system properties can be
very fragile and thus seem to be all or nothing.]  We must be careful
not to equate sensory consciousness with personality (or personal
behavioral characteristics, as in the TTT), self, or soul.

Well, I hear someone saying, that kind of consciousness hardly counts;
all birds and mammals (at least) can be comatose instead of awake --
that doesn't prove they >>experience<< pain when they are awake.  Ah,
but that leads to further difficulties.  The experience is real --
after all, behavior changes because of it.  We need to know if the
process of experience is just the setting of bits in memory, or if
there is some awareness that goes along with the changes in the neural
substrate.

All right, then, how about self-awareness?  As the bits are changed,
some other part of the brain (or the brain as a whole) is "watching"
and interprets the neural changes as a painful experience.  But either
that pushes us back to a conscious homunculus (and ultimately to a
nonphysical soul) or we must accept that computers can be self-aware
in that same sense.  No, self-awareness is Steve's C-2 consciousness.
What we have to get a grip on is C-1 consciousness, an awareness of
the pain itself.

One way out is to assume that neurons themselves are aware of pain,
and that our overall awareness is some sum over the individual
discomforts.  But the summation requires that the neurons communicate
their pain, and we are back to the problem of how the rest of the
brain can sense and interpret that signal.  A similar dead end is
to suppose that toothache signals interfere with brain functioning and
that the brain interprets its own performance degradations as pain.
What is the "I" that has the awareness of pain?

How do we know that we experience pain?  (Or, following Descartes,
that we experience our own thought?)  We can formulate sentences
about the experience, but it seems doubtful that our speech centers
are the subsystems that actually experience the pain.  (That theory,
that all awareness is linguistic awareness, has been suggested.  I am
reminded of the saying that there is no idea so outrageous that it
has not been championed by some philosopher.)  Similarly we can
rule out the motor center, the logical centers, and just about any
other centers of the brain.  Either the pain is experienced by some
tiny neural subsystem, in which case "I" am not the conscious agent,
or it is experienced by the system as a whole, in which case analogous
states or processes in analogous systems should also be considered
conscious.

I propose that we bite the bullet and accept that our "experience"
or "awareness" of pain is an illusion, replicable in all relevant
respects by inorganic systems.  Terms such as pain, experience,
awareness, consciousness, and self are crude linguistic analogies,
based on false models, to the true patterns of neural events.
Pain is real, as are the other concepts, but our model of how
they arise and interrelate is hopelessly animistic.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb  9 18:44:51 1987
Date: Mon, 9 Feb 87 18:44:35 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #36
Status: R


AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 36

Today's Topics:
  Philosophy - Consciousness & Objective vs. Subjective Inquiry

----------------------------------------------------------------------

Date: 3 Feb 87 07:15:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: pique experience


> Harnad: {iii} The Total Turing Test (a variant of my own devising, not
> to be confused with the classical turing test -- see prior chapters
> in these discussions) is the only relevant criterion that has so far
> been proposed and defended.  Similarities of appearance are obvious
> nonstarters, including the "appearance" of the nervous system to
> untutored inspection.

Just a quick pout here - last December I posted a somewhat detailed
defense of the "brain-as-criterion" position, since it seemed to be a
major point of contention. (Again, the one with the labeled events
A1, B1, etc.).  No one has responded directly to this posting.  I'm
prepared to argue the brain-vs-TTT case on its merits, but it would be
helpful if those who assert the TTT position would acknowledge the
existence, if not the validity, of counter-arguments.

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 3 Feb 87 19:51:58 GMT
From: norman@husc4.harvard.edu  (John Norman)
Subject: Re: Objective vs. Subjective Inquiry

In article <462@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>Let's leave the subjective discussion of private events
>to lit-crit, where it belongs.

Could you elaborate on this smug comment, in detail?


John Norman

UUCP:           {seismo,ihnp4,allegra,ut-sally}!harvard!h-sc4!norman
Internet:       norman%h-sc4@harvard.HARVARD.EDU
BITNET:         NORMAN@HARVLAW1

------------------------------

Date: Wed, 4 Feb 87 10:44:32 pst
From: Ray Allis <ray@BOEING.COM>
Subject: Conscious Intelligence


I've been reading the discussion of consciousness with interest,
because I DO consider such philosophical inquiry to be relevant
to AI.  Philosophical issues must be addressed if we are serious
about building "intelligent" systems.  Lately, though, several
people, either explicitly or by expressing impatience with the
subject, have implied that they consider consciousness irrelevant
to AI.  Does this reflect a belief that consciousness is irrelevant
to "natural" intelligence as well?  What is the explanation for the
observation that "intelligent behavior" and consciousness seem to
occur together?  Can an entity "behave intelligently" without being
conscious?  Can an entity be conscious without being "intelligent"?
Is consciousness required in order to have "intelligent behavior" or
is it a side-effect?  What are some examples? Counter-examples?
Even prior to some definitive answer to "The Mind-Body Problem", I
believe we should try to understand the nature of the relationship
between consciousness and "intelligent behavior", justify the
conclusion that there is no relationship, or lower our expectations
(and proclamations) considerably.

I'd like to see some forum for these discussions kept available,
whether or not it's the AILIST.

------------------------------

Date: 5 Feb 87 07:10:19 GMT
From: ptsfa!hoptoad!tim@LLL-LCC.ARPA  (Tim Maroney)
Subject: Re: More on Minsky on Mind(s)

How well respected is Minsky among cognitive psychologists?  I was rather
surprised to see him putting the stamp of approval on Drexler's "Engines of
Creation", since the psychology is so amazingly shallow; e.g., reducing
identity to a matter of memory, ignoring effects of the glands and digestion
on personality.  Drexler had apparently read no actual psychology, only AI
literature and neuro-linguistics, and in my opinion his approach is very
anti-humanistic.  (Much like that of hard sf authors.)

Is this true in general in the AI world?  Is it largely incestuous, without
reference to scientific observations of psychic function?  In short, does it
remain almost entirely speculative with respect to higher-order cognition?
--
Tim Maroney, Electronic Village Idiot
{ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)

Second Coming Still Vaporware After 2,000 Years

------------------------------

Date: 3 Feb 87 16:10:05 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu  (Stevan Harnad)
Subject: Re: Minsky on Mind(s)


mmt@dciem.UUCP (Martin Taylor) writes:

>       we DO see some things as more alike than other things, because
>       we see some similarities (and some differences) as more important
>       than others.

The scientific version of the other-minds problem -- the one we deal
with in the lab and at the theoretical bench, as opposed to the informal
version of the other-minds problem we practice with one another every
day -- requires us to investigate what causal devices have minds, and,
in particular, what functional properties of those causal devices are
responsible for their having minds. In other words (unless you know
the answer to the theoretical problems of cognitive science and
neuroscience a priori) it is an EMPIRICAL question what the relevant
underlying functional and structural similarities are. The only
defensible prior criterion of similarity we have cannot be functional
or structural, since we don't know anything about that yet; it can
only be the frail, fallible, underdetermined one we use already in
everyday life, namely, behavioral similarity.

Every other similarity is, in this state of ignorance, arbitrary,
a mere similarity of superficial appearance. (And that INCLUDES the
similarity of the nervous system, because we do not yet have the vaguest
idea what the relevant properties there are either.) Will this state of
affairs ever change? (Will we ever find similarities other than behavioral
ones on the basis of which we can infer consciousness?) I argue that it will
not change. For any other correlate of consciousness must be VALIDATED
against the behavioral criterion. Hence the relevant functional
similarities we eventually discover will always have to be grounded in
the behavioral ones. Their predictive power will always be derivative.
And finally, since the behavioral-indistinguishability criterion is itself
abundantly fallible -- incommensurably more so than ordinary scientific
inferences and their inductive risks -- our whole objective structure
will be hanging on a skyhook, so to speak, always turing
indistinguishable from a state of affairs in which everything behaves
exactly the same way, but the similarities are all deceiving, and
consciousness is not present at all. The devices merely behave exactly
as if it were.

Throughout the response, by the way, Taylor freely interchanges the
formal scientific problem of modeling mind -- inferring its substrates,
and hence trying to judge what functional conditions are validly
inferred to be conscious (what the relevant similarities are) -- with
the informal problem of judging who else in our everyday world is
conscious. Similarities of superficial appearance may be good enough
when you're just trying to get by in the world, and you don't have the
burden of inferring causal substrate, but it won't do any good with
the hard cases you have to judge in the lab. And in the end, even
real-world judgments are grounded in behavioral similarity
(indistinguishability) rather than something else.

>       it is simpler to presume that Ken and Steve experience
>       consciousness than that they work according to one set of
>       natural laws, and I, alone of all the world, conform to another.

Here's an example of conflating the informal and the empirical
problems. Informally, we just want to make sure we're interacting with
thinking/feeling people, not insentient robots. In the lab, we have to
find out what the "natural laws" are that generate the former and not
the latter. (Your criterion for ascribing consciousness to Ken and me,
by the way, was a turing criterion...)

>       All the TTT does, unless I have it very wrong, is provide a large set of
>       similarities which, taken together, force the conclusion that the tested
>       entity is LIKE ME

The Total Turing Test simply requires that the performance capacity of
a candidate that I infer to have a mind be indistinguishable from the
performance capacity of a real person. That's behavioral similarity
only. When a device passes that test, we are entitled to infer that
its functional substrate is also relevantly similar to our own. But
that inference is secondary and derivative, depending for its
validation entirely on the behavioral similarities.

>       If a simpler description of the world can be found, then I no
>       longer should ascribe consciousness to others, whether human or not.

I can't imagine a description sufficiently simple to make solipsism
convincing. Hence even the informal other-minds problem is not settled
by "Occam's Razor." Parsimony is a constraint on empirical inference,
not on our everyday, intuitive and practical judgements, which are
often not only uneconomical, but irrational, and irresistible.

>       What I do argue is that I have better grounds for not treating
>       these [animals and machines] as conscious than I do for more
>       human-like entities.

That may be good enough for everyday practical and perhaps ethical
judgments. (I happen to think that it's extremely wrong to treat
animals inhumanely.) I agree that our intuitions about the minds of
animals are marginally weaker than about the minds of other people,
and that these intuitions get rapidly weaker still as we go down the
phylogenetic scale. I also haven't much more doubt that present-day
artificial devices lack minds than that stones lack minds. But none
of this helps in the lab, or in the principled attempt to say what
functions DO give rise to minds, and how.

>       Harnad says that we are not looking for a mathematical proof, which is
>       true. But most of his postings demand that we show the NEED for assuming
>       consciousness in an entity, which is empirically the same thing as
>       proving them to be conscious.

No. I argue for methodological epiphenomenalism for three reasons
only: (1) Wrestling with an insoluble problem is futile. (2) Gussying
up trivial performance models with conscious interpretations gives the
appearance of having accomplished more than one has; it is
self-deceptive and distracts from the real goal, which is a
performance goal. (3) Focusing on trying to capture subjective phenomenology
rather than objective performance leads to subjectively gratifying
analogy, metaphor and hermeneutics instead of to objectively stronger
performance models. Hence when I challenge a triumphant mentalist
interpretation of a process, function or performance and ask why it
wouldn't function exactly the same way without the consciousness, I am
simply trying to show up theoretical vacuity for what it is. I promise
to stop asking that question when someone designs a device that passes
the TTT, because then there's nothing objective left to do, and an
orgy of interpretation can no longer do any harm.



--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 4 Feb 87 02:46:27 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu  (Martin
      Taylor)
Subject: Re: Consciousness?


(Moved from mod.ai)
> I always thought that a scientific theory had to undergo a number of
> tests to determine how "good" it is.  Needless to say, a perfect score
> on one test may be balanced by a mediocre score on another test.  Some
> useful tests are:
>
> - Does the theory account for the data?
> - Is the theory simple?  Are there unnecessary superfluities?
> - Is the theory useful?  Does it provide the basis for a fruitful
>         program of research?
All true, and probably all necessary.
> ....
> While the study of consciousness is fascinating and lies at the base of
> numerous religions, it doesn't seem to be scientifically useful.  Do I
> rewrite my code because the machine is conscious or because it is
> getting the wrong answer?
If you CAN write your code without demanding your machine be conscious,
then you don't need consciousness to write your code.  But if you want
to construct a system that can, for example, darn socks or write a fine
sonata, you should probably (for now) write your code with the assumption
of consciousness in the executing machine.

In other words, you are confusing the unnecessary introduction of
consciousness into a system wherein you know all the working principles
with the question of whether consciousness is required for certain
functions.
>       Is there a program of experimentation
> suggested by the search for consciousness?
Consciousness need not be sought.  You experience it (I presume).  The
question is whether behaviour can better (by the tests you present above)
be described by including consciousness or by not including it.  If, by
"the search for consciousness" you mean the search for a useful definition
of consciousness, I'll let others answer that question.
> Does consciousness change the way artificial intelligence must be
> programmed?  The evidence so far says NO.  [How is that for a baldfaced
> assertion?
Pretty good.  But for reasons stated above, it's irrelevant if you start
with the idea that AI must be programmed in a silicon (i.e. constructed)
machine.  Any such development precludes the necessity of using consciousness
in the design, although it does not preclude the possibility that the
end product might BE conscious.
>
>
> I don't think scientific theories of consciousness are incorrect, I
> think they are barren.
Now THAT's a bald assertion. Barren for what purpose?  Certainly for
construction purposes, but perhaps not for understanding what evolved
organisms do. (I take no stand on whether consciousness is in fact a
useful construct.  I only want to point out that it has potential for
being useful, even though not in devising artificial constructs).
>
>                                         Seth
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt
mmt@zorac.arpa

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb  9 18:45:22 1987
Date: Mon, 9 Feb 87 18:44:51 est
From: vtcs1::in% <LAWS@sri-stripe.arpa>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V5 #37
Status: R


AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 37

Today's Topics:
  Philosophy - Consciousness

----------------------------------------------------------------------

Date: 9 Feb 87 06:14:48 GMT
From: well!wcalvin@lll-lcc.arpa  (William Calvin)
Subject: Re: More on Minsky on Mind(s)


     In following the replies to Minsky's excerpts from SOCIETY OF MIND, I
am struck by all the attempts to use slippery word-logic.  If that's all
one has to use, then one suffers with word-logic until something better
comes along.  But there are some mechanistic concepts from both
neurobiology and evolutionary biology which I find quite helpful in
thinking about consciousness -- or at least one major aspect of it, namely
what the writer Peter Brooks described in READING FOR THE PLOT (1985) as
follows:

     "Our lives are ceaselessly intertwined with narrative, with the
     stories that we tell and hear told, those we dream or imagine or would
     like to tell, all of which are reworked in that story of our own lives
     that we narrate to ourselves in an episodic, sometimes semiconscious,
     but virtually uninterrupted monologue.  We live immersed in narrative,
     recounting and reassessing the meaning of our past actions,
     anticipating the outcome of our future projects, situating ourselves
     at the intersection of several stories not yet completed."

     Note the emphasis on both past and future, rather than the perceiving-
the-present and recalling-the-recent-past, e.g., Minsky:

>     although people usually assume that consciousness is knowing
>     what is happening in the minds, right at the
>     present time, consciousness never is really concerned with the
>     present, but with how we think about the records of our recent
>     thoughts...  how thinking about our short term memories changes them!

But simulation is more the issue, e.g., E.O. Wilson in ON HUMAN NATURE
(1978):

     "Since the mind recreates reality from abstractions of sense
     impressions, it can equally well simulate reality by recall and
     fantasy.  The brain invents stories and runs imagined and remembered
     events back and forth through time."

Rehearsing movements may be the key to appreciating the brain mechanisms,
if I may quote myself (THE RIVER THAT FLOWS UPHILL: A JOURNEY FROM THE BIG
BANG TO THE BIG BRAIN, 1986):

     "We have an ability to run through a motion with our muscles detached
     from the circuit, then run through it again for real, the muscles
     actually carrying out the commands.  We can let our simulation run
     through the past and future, trying different scenarios and judging
     which is most advantageous -- it allows us to respond in advance to
     probable future environments, to imagine an accidental rockfall
     loosened by a climber above us and to therefore stay out of his fall
     line."

     Though how we acquired this foresight is a bit of a mystery.  Never
mind for a moment all those "surely it's useful" arguments which, using
compound interest reasoning, can justify anything (given enough
evolutionary time for compounding).  As Jacob Bronowski noted in THE
ORIGINS OF KNOWLEDGE AND IMAGINATION (1967), foresight hasn't been
widespread:

     "[Man's] unique ability to imagine, to make plans...  are generally
     included in the catchall phrase "free will." What we really mean by
     free will, of course, is the visualizing of alternatives and making a
     choice between them.  In my view, which not everyone shares, the
     central problem of human consciousness depends on this ability to
     imagine.....  Foresight is so obviously of great evolutionary
     advantage that one would say, `Why haven't all animals used it and
     come up with it?' But the fact is that obviously it is a very strange
     accident.  And I guess as human beings we must all pray that it will
     not strike any other species."

So if other animals have not evolved very much of our fussing-about-the-
future consciousness via its usefulness, what other avenues are there for
evolution?  A major one, noted by Darwin himself but forgotten by almost
everyone else, is conversion ("functional change in anatomical
continuity"), new functions from old structures.  Thus one looks at brain
circuitry for some aspects of the problem -- such as planning movements --
and sees if a secondary use can be made of it to yield other aspects of
consciousness -- such as spinning scenarios about past and future.

     And how do we generate a detailed PLAN A and PLAN B, and then compare
them?  First we recognize that detailed plans are rarely needed:  many
elaborate movements can get along fine on just a general goal and feedback
corrections, as when I pick up my cup of coffee and move it to my lips.
But feedback has a loop time (nerve conduction time, plus decision-making,
often adds up to several hundred milliseconds of reaction time).  This
means the feedback arrives too late to do any good in the case of certain
rapid movements (saccadic eye flicks, hammering, throwing, swinging a golf
club).  Animals who utilize such "ballistic movements" (as we call them in
motor systems neurophysiology) simply have to evolve a serial command
buffer:  plan at leisure (as when we "get set" to throw) but then pump out
that whole detailed sequence of muscle commands without feedback.  And get
it right the first time.  Since it goes out on a series of channels (all
those muscles of arm and hand), it is something like planning a whole
fireworks display finale (carefully coordinated ignitions from a series of
launch platforms with different inherent delays, etc.).
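The buffer idea above is simple enough to sketch in a few lines of Python.
This is only an illustrative toy, not anything from motor physiology; the
class name, command strings, and millisecond offsets are all invented:

```python
class SerialCommandBuffer:
    """Toy sketch of a serial command buffer: muscle commands are
    planned at leisure ("getting set"), then the whole sequence is
    pumped out ballistically, with no feedback corrections in flight."""

    def __init__(self):
        self.queue = []  # (offset_ms, command) pairs

    def plan(self, command, offset_ms):
        # Planning phase: revision is still possible while queueing.
        self.queue.append((offset_ms, command))

    def launch(self):
        # Ballistic phase: emit every command in scheduled time order;
        # nothing can be revised once the sequence is under way.
        return [cmd for _, cmd in sorted(self.queue)]

buf = SerialCommandBuffer()
buf.plan("open hand", 120)
buf.plan("cock wrist", 0)
buf.plan("snap forearm", 80)
sequence = buf.launch()  # commands in scheduled order, feedback-free
```

The point of the toy is the two-phase split: all the deciding happens
before launch(), which is why the sequence has to be right the first time.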

     But once a species has such a serial command buffer, it may be useful
for all sorts of things besides the actions which were originally under
natural selection during evolution (throwing for hunting is my favorite
shaper-upper --see J.Theor.Biol. 104:121-135,1983 -- but word-order-coded
language is conceivably another way of selecting for a serial command
buffer).  Besides rehearsing slow movements better with the new-fangled
ballistic movement sequencer, perhaps one could also string together other
concepts-images-schemata with the same neural machinery: spin a scenario?

     The other contribution from evolutionary biology is the notion that
one can randomly generate a whole family of such strings and then select
amongst them (imagine a railroad marshalling yard, a whole series of
possible trains being randomly assembled).  Each train is graded against
memory for reasonableness -- Does it have an engine at one end and a
caboose at the other? -- before one is let loose on the main line.  "Best"
is surely a value judgment determined by memories of the fate of similar
sequences in the past, and one presumes a series of selection steps that
shape up candidates into increasingly more realistic sequences, just as
many generations of evolution have shaped up increasingly more
sophisticated species.  To quote an abstract of mine called "Designing
Darwin Machines":

          This selection of stochastic sequences is more
          analogous to the ways of Darwinian evolutionary biology
          than to von Neumann machines.  One might call it a
          Darwin machine instead, but operating on a time scale
          of milliseconds rather than millennia, using innocuous
          virtual environments rather than noxious real-time
          ones.
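The marshalling-yard image can likewise be turned into a toy
generate-and-select loop.  This sketch assumes "memory" is just a scoring
function over candidate sequences; the grading rule and car names below
are invented for illustration:

```python
import random

def darwin_machine(alphabet, grade, rounds=200, length=5, seed=0):
    """Toy 'Darwin machine': randomly assemble candidate sequences
    (trains in the marshalling yard), grade each against memory,
    and let only the best one loose on the main line."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        train = [rng.choice(alphabet) for _ in range(length)]
        score = grade(train)
        if score > best_score:
            best, best_score = train, score
    return best

def grade(train):
    # Reasonableness check from memory: an engine at one end, a
    # caboose at the other, and some variety in between.
    score = 0
    if train[0] == "engine":
        score += 10
    if train[-1] == "caboose":
        score += 10
    score += len(set(train[1:-1]))
    return score

cars = ["engine", "boxcar", "tanker", "flatcar", "caboose"]
best = darwin_machine(cars, grade)
```

A fuller version would select and mutate survivors over several rounds,
shaping candidates up gradually the way Calvin describes, rather than
keeping only the single best random draw.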

     Is this what Darwin's "bulldog," Thomas Henry Huxley, would have
agreed was the "mechanical equivalent of consciousness" which Huxley
thought possible, almost a century ago?  It would certainly be fitting.

     We do not yet know how much of our mental life such stochastic
sequencers might explain.  But I tend to think that this approach using
mechanical analogies from motor systems neurophysiology and evolutionary
biology might have something to recommend it, in contrast to word-logic
attempts to describe consciousness.  At least it provides a different place
to start, hopefully less slippery than variants on the little person inside
the head, with all its infinite regress.

                                   William H. Calvin
                                        Biology Program NJ-15
                                        University of Washington
                                        Seattle WA 98195 USA
                                        206/328-1192
                                        USENET:  wcalvin@well.uucp

------------------------------

Date: 9 Feb 87 08:41:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: another stab at "what are we arguing about"


> > Me:       "Why I am not a Methodological Epiphenomenalist"
>
> Harnad: This is an ironic twist on Russell's sceptical book about religious
> beliefs!  I'm the one who should be writing "Why I'm Not a Methodological
> Mentalist."

Yeah, but I said it first...

OK, seriously folks, I think I see this discussion starting to converge on
a central point of disagreement (don't look so skeptical).  Harnad,
Reed, Taylor, and I have all mentioned this "on the side" but I think it
may be the major sticking point between Harnad and the latter three.

> Reed: ...However, I don't buy the assumption that two must *observe the same
> instance of a phenomenon* in order to perform an *observer-independent
> measurement of the same (generic) phenomenon*. The two physicists can
> agree that they are studying the same generic phenomenon because they
> know they are doing similar things to similar equipment, and getting
> similar results. But there is nothing to prevent two psychologists from
> doing similar (mental) things to similar (mental) equipment and getting
> similar results, even if neither engages in any overt behavior apart
> from reporting the results of his measurements to the other....
>
> What is objectively different about the human case is that not only is
> the other human doing similar (mental) things, he or she is doing those
> things to similar (human mind implemented on a human brain) equipment.
> If we obtain similar results, Occam's razor suggests that we explain
> them similarly: if my results come from measurement of subjectively
> experienced events, it is reasonable for me to suppose that another
> human's similar results come from the same source. But a computer's
> "mental" equipment is (at this point in time) sufficiently dissimilar
> from a human's that the above reasoning would break down at the point
> of "doing similar things to similar equipment with similar results",
> even if the procedures and results somehow did turn out to be identical.



> > Harnad: Everything resembles everything else in an infinite number of
> > ways; the problem is sorting out which of the similarities is relevant.
>
> Taylor: Absolutely.  Watanabe's Theorem of the Ugly Duckling applies.  The
> distinctions (and similarities) we deem important are no more or less
> real than the infinity of ones that we ignore.  Nevertheless, we DO see
> some things as more alike than other things, because we see some similarities
> (and some differences) as more important than others.
>
> In the matter of consciousness, I KNOW (no counterargument possible) that
> I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
> conscious.  I don't know this of Ken or Steve, but their output on a
> computer terminal is enough like mine for me to presume by that similarity
> that they are human.  By Occam's razor, in the absence of evidence to the
> contrary, I am forced to believe that most humans work the way I do.
> Therefore
> it is simpler to presume that Ken and Steve experience consciousness than
> that they work according to one set of natural laws, and I, alone of all
> the world, conform to another.


The Big Question: Is your brain more similar to mine than either is to any
plausible silicon-based device?

I (and Reed and Taylor?) have been pushing the "brain-as-criterion"
position based on a very simple line of reasoning:

1. my brain causes my consciousness.
2. your brain is a lot like mine.
3. therefore, by "same cause, same effect" your brain probably
   causes consciousness in you.

(BTW, The above does NOT deny the relevance of similar performance in
confirming 3.)

Now, when I say simple things like this, Harnad says complicated things like:
re 1: how do you KNOW your brain causes your consciousness?  How can you have
causal knowledge without a good theory of mind-brain interaction?
Re 2: How do you KNOW your brain is similar to others'?  Similar wrt
what features?  How do you know these are the relevant features?

For now (and with some luck, for ever) I am going to avoid a
straightforward philosophical reply.  I think there may be some
reasonably satisfactory (but very long and philosophical) answers to
these questions, but I maintain the questions are really not relevant.

We are dealing with the mind-body problem.  That's enough of a philosophical
problem to keep us busy.  I have noticed (although I can't explain why)
that when you start discussing the mind-body problem, people (even me, once
in a while) start to use it as a hook on which to hang every other
known philosophical problem:

1. well how do we know anything at all, much less our neighbors' mental states?
   (skepticism and epistemology).

2. what does it mean to say that A causes B, and what is the nature of
   causal knowledge?  (metaphysics and epistemology).

3. is it more moral to kill living thing X than a robot?  (ethics).

All of these are perfectly legitimate philosophical questions, but
they are general problems, NOT peculiar to the mind-body problem.
When addressing the mind-body problem, we should deal with its
peculiar features (of which there are enough), and not get mired in
more general problems * unless they are truly in doubt and thus their
solution truly necessary for M-B purposes. *

I do not believe that this is so of the issues Harnad raises.  I
believe people can a) have causal knowledge, both of instances and
types of events, without any articulated "deep" theory of the
mechanics going on behind the scenes (indeed the deep knowledge
comes later as an attempt to explain the already observed causal
interaction), and b) can spot relevant similarities without being
able to articulate them.

A member of an Amazon tribe could find out, truly know, that light
switches cause lights to come on, with a few minutes of
experimentation.  It is no objection to his knowledge to say that he
has no causal theory within which to embed this knowledge, or to
question his knowledge of the relevance of the similarities among
various light switches, even if he is hard-pressed to say anything
beyond "they look alike."  It is a commonplace example that many
people can distinguish between canines and felines without being
able to say why.  I do not assert, I am quick to add, that
these rough-and-ready processes are infallible - yes, yes, are whales
more like cows than fish, how should I know?

But to raise the specter of certainty is again a side-issue.
Do we all not agree that the tribesman's knowledge of lights and light
switches is truly knowledge, however unsophisticated?

Now, S. Harnad, upon your solemn oath, do you have any serious practical
doubt that, in fact,

1. you have a brain?
2. that it is the primary cause of your consciousness?
3. that other people have brains?
4. that these brains are similar to your own (and if not, why do you
   and everyone else use the same word to refer to them?), at least
   more so than any other object with which you are familiar?

Now if you do know these utterly ordinary assertions to be true,
* even if you can't produce a high-quality philosophical defense for
them (which inability, I argue, does not cast serious doubt on them,
or on the status of your belief in them as knowledge) *  then what
is wrong with the simple inference that others' possession of a brain
is a good reason (not necessarily the only reason) to believe that
they are conscious?

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 9 Feb 87 08:59:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: and another thing..


Yeah, while I'm at it: how do you, Harnad, know that the performances
of the two entities in question (a human and a robot) are relevantly
similar?  What is it precisely about the performances that you intend to
measure?  How do you know that these are the important aspects?

Refresh my memory if I'm wrong, but as I recall, the TTT was a kind
of gestalt you'll-know-intelligent-behavior-when-you-see-it test.
How is this different from looking at two brains and saying, yeah
they look like the same kind of thing to me?

John Cugini <Cugini@icst-ecf>

------------------------------

End of AIList Digest
********************