1-Nov-83 10:43:08-PST,18980;000000000001
Mail-From: LAWS created at  1-Nov-83 09:58:29
Date: Tuesday, November 1, 1983 9:47AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #87
To: AIList@SRI-AI


AIList Digest            Tuesday, 1 Nov 1983       Volume 1 : Issue 87

Today's Topics:
  Rational Psychology - Definition,
  Parallel Systems,
  Consciousness & Intelligence,
  Halting Problem,
  Molecular Computers
----------------------------------------------------------------------

Date: 29 Oct 83 23:57:36-PDT (Sat)
From: hplabs!hao!csu-cs!denelcor!neal @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: denelcor.182

I see what you are saying, and I beg to disagree.  I don't believe that
the distinction between rational and irrational psychology (it's probably
not that simple) depends on whether or not the scientist is being
rational, but on whether or not the subject is (or rather on which aspect
of his behavior--or mentation, if you accept the existence of that--is
under consideration).  It is more like the distinction between organic
and inorganic chemistry.

------------------------------

Date: Mon, 31 Oct 83 10:16:00 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Sequential vs. parallel

 It was claimed that "parallel computation can always
be done sequentially."  I had thought that this naive notion had passed
away into never-never land, but I suppose not.  I do not deny that MANY
parallel computations can be accomplished sequentially, yet not ALL
parallel computations can be made sequential.  The class of parallel
computations that cannot be accomplished sequentially comprises those
that involve the state of all variables in a single instant.  This class
of parallelism often arises in sensor applications.  It would not be
valid, for example, to raster-scan (a sequential computation) a sensing
field if the processing of that field relied upon the quantization of
its elements in a single instant.

     I don't want to belabor this point, but it should be recognized
that the common assertion that all parallel computation can be done
sequentially is NOT ALWAYS VALID.  In my own experience, I have found
that artificial intelligence (and real biological intelligence, for that
matter) relies heavily upon comparisons of various elements at a single
time instant.  As such, the assumption that parallel algorithms can be
made sequential is often invalid.  Something to think about.

------------------------------

Date: Saturday, 29 Oct 1983 21:05-PST
From: sdcrdcf!trw-unix!scgvaxd!qsi03!achut@rand-relay
Subject: Consciousness, Halting Problem, Intelligence


        I am new to this mailing list and I see there is some lively
discussion going on.  I am eager to contribute to it.

Consciousness:
        I treat the words self-awareness, consciousness, and soul as
synonyms in the context of these discussions.  They are all epiphenomena
of the phenomenon of intelligence, along with emotions, desires, etc.
To say that machines can never be truly intelligent because they cannot
have a "soul" is to be excessively naive and anthropocentric.  Self-
awareness is not a necessary prerequisite for intelligence; it arises
naturally *because* of intelligence.  All intelligent beings possess some
degree of self-awareness; to perceive and interact with the world, there
must be an internal model, and this invariably involves taking into
account the "self".  A very, very low intelligence, like that of a plant,
will possess a very, very low self-awareness.

Parallelism:
        The human brain resembles a parallel machine more than it does a
purely sequential one.  Parallel machines can do many things much more
quickly than their sequential counterparts.  Parallel hardware may well
make the difference between attaining AI in the near future and not
attaining it for several decades.  But I cannot understand those who claim
that there is something *fundamentally* different between the two types of
architectures.  I am always amazed at the extremes to which some people will
go to find the "magic spark" which separates intelligence from non-
intelligence.  Two of these are "continuousness vs. discreteness" and
"non-determinism vs. determinism".
        Continuous?  Nothing in the universe is continuous. (Except maybe
arguments to the contrary :-))  Mass, energy, space and even time, at least
according to current physical knowledge, are all quantized.  Non-determinism?
Many people feel that "randomness" is a necessary ingredient to intelligence.
But why isn't this possible with a sequential architecture?  I can
construct a "discrete" random number generator for my sequential machine
so that it behaves in a similar manner to your "non-deterministic" parallel
machine, although perhaps slower. (See "Slow intelligence" below)
Perhaps the "magic sparkers" should consider that the difference they are
searching for is merely one of complexity.  (I really hate to use the
word "merely", since I appreciate the vast scope of that complexity, but
it seems appropriate here.)  There is no evidence, currently, to justify
thinking otherwise.
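
The "discrete random number generator" mentioned above can be sketched as
a linear congruential generator: a purely deterministic, sequential rule
whose output is nonetheless statistically random enough to drive a
simulation of a "non-deterministic" machine.  This is only an illustrative
sketch; the constants are one common full-period choice, not anything from
the discussion itself.

```python
def make_lcg(seed):
    """Deterministic pseudo-random generator (ints in [0, 2**32))."""
    state = seed

    def next_value():
        nonlocal state
        # One sequential update step; the sequence is fully determined
        # by the seed, yet passes casual tests of "randomness".
        state = (1664525 * state + 1013904223) % 2**32
        return state

    return next_value

rand = make_lcg(seed=42)
sample = [rand() % 6 + 1 for _ in range(5)]   # five pseudo-random "die rolls"
```

Two generators built from the same seed replay the identical sequence,
which is exactly the point: the "non-determinism" is an artifact of
complexity, not of the architecture.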

The Halting(?) Problem:
        What Stan referred to as the "Halting Problem" is really
the "looping problem", hence the subsequent confusion.  The Halting Problem
is not really relevant to AI, but the looping problem *is* relevant.  The
question is not even "why don't humans get caught in loops", since, as
Mr. Frederking aptly points out, "beings which aren't careful about this
fail to breed, and are weeded out by evolution".  (For an interesting story
of what could happen if this were not the case, see "The Riddle of the
Universe and Its Solution" by Christopher Cherniak in "The Mind's I".)
Rather, the more interesting questions are "by what mechanisms do humans
avoid loops?" and "are these the best mechanisms to use in AI programs?".
It is not clear that looping will not be a problem when AI is attempted on
a machine whose internal states could conceivably recur.  Now I am not
saying that this is an insurmountable problem by any means; I am merely
saying that it might be a worthy topic of discussion.

Slow intelligence:
        Intelligence is dependent on time?  This would require a curious
definition of intelligence.  Suppose you played chess at strength 2000 given
5 seconds per move, 2010 given 5 minutes, and 2050 given as much time as you
desired.  Suppose the corresponding numbers for me were 1500, 2000, and 2500.
Who is the better (more intelligent) player?  True, I need 5 minutes per
move just to play as well as you can at only 5 seconds.  But shouldn't the
"high end" be compared instead?  There are many bases on which to decide the
"greater" of two intelligences.  One is (conceivably, but not exclusively)
speed.  Another is number and power of inferences it can make in a given
situation.  Another is memory, and ability to correlate current situations
with previous ones.  STRAZ@MIT-OZ has the right idea.  Incidentally, I'm
surprised that no one pointed out an example of an intelligence staring
us in the face which is slower but smarter than us all, individually.
Namely, this net!

------------------------------

Date: 25 Oct 83 13:34:02-PDT (Tue)
From: harpo!eagle!mhuxl!ulysses!cbosgd!cbscd5!pmd @ Ucb-Vax
Subject: Artificial Consciousness? [and Reply]

I'm interested in getting some feedback on some philosophical
questions that have been haunting me:

1) Is there any reason why developments in artificial intelligence
and computer technology could not someday produce a machine with
human consciousness (i.e. an I-story)?

2) If the answer to the above question is no, and such a machine were
produced, what would distinguish it from humans as far as "human"
rights were concerned?  Would it be murder for us to destroy such a
machine?  What about letting it die of natural (?) causes if we
have the ability to repair it indefinitely?
(Note:  Just having a unique, human genetic code does not legally make
one human, as per the 1973 *Roe v. Wade* Supreme Court decision on
abortion.)

Thanks in advance.

Paul Dubuc

[For an excellent discussion of the rights and legal status of AI
systems, see Marshal Willick's "Artificial Intelligence: Some Legal
Approaches and Implications" in the Summer '83 issue (V. 4, N. 2) of
AI magazine.  The resolution of this issue will of course be up to the
courts. -- KIL]

------------------------------

Date: 28 Oct 1983 21:01-PDT
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Halting in learning programs

        If you restrict the class of things that can be learned by your
program to those which don't cause infinite recursion or circularity,
you will have a good solution to the halting problem you state.
Although generalized learning might be nice, until we know more about
learning it might be more appropriate to select specific classes of
adaptation which lend themselves to analysis and the development of new
theories.

        As a simple example of a learning automaton free of this halting
problem, the Purr Puss system developed by John Andreae (of New Zealand)
does an excellent job of learning without any such difficulty.  Other such
systems exist as well; all you have to do is look for them.  I guess the
point is that rather than pursue the impossible, one should find something
possible that may lead to the solution of a bigger problem and pursue
it with the passion and rigor worthy of the problem.  An old saying:
'Problems worthy of attack prove their worth by fighting back.'

                Fred

------------------------------

Date: Sat, 29 Oct 83 13:23:33 CDT
From: Bob.Warfield <warbob.rice@Rand-Relay>
Subject: Halting Problem Discussion

It turns out that any computer program running on a real piece of hardware
may be simulated by a deterministic finite automaton, since it only has a
finite (but very large) number of possible states. This is usually not a
productive observation to make, but it does present one solution to the
halting problem for real (i.e. finite) computing hardware. Simulate the
program in question as a DFA and look for loops. From this, one should
be able to tell what input to the DFA would produce an infinite loop,
and recognition of that input could be done by a smaller DFA (the old
one sans loops) that gets incorporated into the learning program. It
would run the DFA in parallel (or 1 step ahead?) and take action if a
dangerous situation appeared.
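
The loop-hunting step above can be sketched directly: a finite machine
that revisits a state (with no further input) is looping, so recording
the states seen finds the loop at the first repeat.  This is only a
sketch of the idea; the transition function below is a made-up example,
not any particular program.

```python
def find_loop(step, start):
    """Run `step` from `start`; return (states_before_repeat, loop_entry)."""
    seen = {}                 # state -> index at which it first appeared
    trace = []
    state = start
    while state not in seen:  # a finite machine must eventually repeat
        seen[state] = len(trace)
        trace.append(state)
        state = step(state)
    return trace, state       # `state` is the first repeated state

# Example: a small machine that falls into the cycle 3 -> 7 -> 3.
step = lambda s: {0: 1, 1: 2, 2: 3, 3: 7, 7: 3}[s]
trace, entry = find_loop(step, 0)     # trace = [0, 1, 2, 3, 7], entry = 3
```

The "smaller DFA sans loops" would then be the recorded trace with the
cycle's entry state marked as a dangerous input to watch for.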

                                        Bob Warfield
                                        warbob@rice

------------------------------

Date: Mon 31 Oct 83 15:45:12-PST
From: Calton Pu <CALTON@WASHINGTON.ARPA>
Subject: Halting Problem: Resource Use

   From Shebs@Utah-20:

        The question is this: consider a learning program, or any
        program that is self-modifying in some way.  What must I do
        to prevent it from getting caught in an infinite loop, or a
        stack overflow, or other unpleasantnesses?  ...
        How can *it* know when it's stuck in a losing situation?

Trying to come up with a loop-detector program seems to have found few
enthusiasts.  The limited loop detector suggests another approach to the
"halting problem".  The question above does not require a solution of the
halting problem, although that could help.  The question posed is one of
resource allocation and use.  Self-awareness is needed only in that the
program must watch itself and judge whether it is making progress, given
its resource consumption.  Consequently it is not surprising that:

        The best answers I saw were along the lines of an operating
        system design, where a stuck process can be killed, or
        pushed to the bottom of an agenda, or whatever.

However, Stan wants more:

        Workable, but unsatisfactory.  In the case of an infinite
        loop (that nastiest of possible errors), the program can
        only guess that it has created a situation where infinite
        loops can happen.

The real issue here is not whether the program is in a loop, but whether
the program will be able to find a solution in feasible time.  Suppose a
program will take a thousand years to find a solution: will you let it run
that long?  In other words, the problem is one of measuring gained
progress against spent resources.  It may turn out that a program is not
in a loop, but you choose to write another program instead of letting the
first run to completion.  Looping is just one of the losing situations.
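
The progress-versus-resources policy can be sketched as a watchdog that
lets a computation run but stops it once the improvement bought by each
step falls below a threshold.  The improver and the threshold here are
invented purely for illustration.

```python
def run_with_budget(improve, budget_steps, min_gain_per_step):
    """Call `improve()` repeatedly; stop on budget or stalled progress."""
    best = improve()                      # crude first answer
    for _ in range(1, budget_steps):
        candidate = improve()
        gain = candidate - best           # progress bought by this step
        if gain < min_gain_per_step:
            return best, "stalled"        # not necessarily a loop -- just not worth it
        best = candidate
    return best, "budget exhausted"

# Example: a "solver" with sharply diminishing returns.
def make_improver():
    scores = iter([1.0, 2.0, 2.5, 2.6, 2.61, 2.611])
    return lambda: next(scores)

best, reason = run_with_budget(make_improver(), 10, 0.05)   # -> (2.6, "stalled")
```

Note that the watchdog never decides whether the solver is looping; it
only decides whether continuing is worth the resources, which is the
point made above.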

Summarizing, the learning program should be able to recognize a losing
situation because it is infeasible, whether a solution is possible or not.
From this view, there are two aspects to the decision: measuring the
progress made by the program, and monitoring resource consumption.
It is the second aspect that involves some "operating systems design".
I would be interested to know whether your parser knows it is making progress.


                -Calton-

        Usenet: ...decvax!microsoft!uw-beaver!calton

------------------------------

Date: 31 Oct 83 2030 EST
From: Dave.Touretzky@CMU-CS-A
Subject: forwarded article


- - - - Begin forwarded message - - - -
  Date: 31 Oct 1983  18:41 EST (Mon)
  From: Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
  To:   macmol%MIT-OZ@MIT-MC.ARPA
  Subject: Molecular Computers

  Below is a forwarded message:
    From: David Rogers <DRogers at SUMEX-AIM.ARPA>

I have always been confused by the people who work on
"molecular computers"; it seems so stupid.  It seems much
more reasonable to consider the reverse application: using
computers to make better molecules.

Is anyone out there excited by this stuff?

                MOLECULAR  COMPUTERS  by  Lee  Dembart, LA Times
              (reprinted from the San Jose Mercury News 31 Oct 83)

SANTA MONICA - Scientists have dreamed for the past few years of
building a radically different kind of computer, one based on
molecular reactions rather than on silicon.

With such a machine, they could pack circuits much more tightly than
they can inside today's computers.  More important, a molecular
computer might not be bound by the rigid binary logic of conventional
computers.

Biological functions - the movement of information within a cell or
between cells - are the models for molecular computers. If that basic
process could be reproduced in a machine, it would be a very powerful
machine.

But such a machine is many, many years away.  Some say the idea is
science fiction.  At the moment, it exists only in the minds of
several dozen computer scientists, biologists, chemists and engineers,
many of whom met here last week under the aegis of the Crump Institute
for Medical Engineering at the University of California at Los
Angeles.

"There are a number of ideas in place, a number of technologies in
place, but no concrete results," said Michael Conrad, a biologist and
computer scientist at Wayne State University in Detroit and a
co-organizer of the conference.

For all their strengths, today's digital computers have no ability to
judge.  They cannot recognize patterns. They cannot, for example,
distinguish one face from another, as even babies can.

A great deal of information can be packed on a computer chip, but it
pales by comparison to the contents of the brain of an ant, which can
protect itself against its environment.

If scientists had a computer with more flexible logic and circuitry,
they think they might be able to develop "a different style of
computing", one less rigid than current computers, one that works more
like a brain and less like a machine.  The "mood" of such a device
might affect the way scientists solve problems, just as people's moods
affect their work.

The computing molecules would be manufactured by genetically
engineered bacteria, an approach that has given rise to the name
"biochip" for a network of them.

"This is really the new gene technology", Conrad said.

The conference was a meeting on the frontiers - some would say fringes
- of knowledge, and several times participants scoffed, saying that
the discussion was meandering into philosophy.

The meeting touched on some of the most fundamental questions of brain
and computer research, revealing how little is known of the mind's
mechanisms.

The goal of artificial intelligence work is to write programs that
simulate thought on digital computers. The meeting's goal was to think
about different kinds of computers that might do that better.

Among the questions posed at the conference:

- How do you get a computer to chuckle at a joke?

- What is the memory capacity of the brain? Is there a limit to that
capacity?

- Are there styles of problem solving that are not digitally
computable?

- Can computer science shed any light on the mechanisms of biological
science?  Can computer science problems be addressed by biological
science mechanisms?

Proponents of molecular computers argue that it is possible to make
such a machine because biological systems perform those processes all
the time.  Proponents of artificial intelligence have argued for years
that the existence of the brain is proof that it is possible to make a
small machine that thinks like a brain.

It is a powerful argument.  Biological systems already exist that
compute information in a better way than digital computers do. "There
has got to be inspiration growing out of biology", said F. Eugene
Yates, the Crump Institute's director.

Bacteria use sophisticated chemical processes to transfer information.
Can that process be copied?

Enzymes work by stereospecifically matching their molecules with other
molecules, a decision-making process that occurs thousands of times a
second.  It would take a binary computer weeks to make even one match.

"It's that failure to do a thing that an enzyme does 10,000 times a
second that makes us think there must be a better way," Yates said.

In the history of science, theoretical progress and technological
progress are intertwined.  One makes the other possible. It is not
surprising, therefore, that thinking about molecular computers has
been spurred recently by advances in chemistry and biotechnology that
seem to provide both the materials needed and a means of producing them
on a commercial scale.

"If you could design such a reaction, you could probably get a
bacteria to make it," Yates said.

Conrad thinks that a functioning machine is 50 years away, and he
described it as a "futuristic" development.
- - - - End forwarded message - - - -

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  3-Nov-83 13:26:23
Date: Thursday, November 3, 1983 1:09PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #88
To: AIList@SRI-AI


AIList Digest            Thursday, 3 Nov 1983      Volume 1 : Issue 88

Today's Topics:
  Molecular Computers - Comment,
  Sequential Systems - Theoretical Sufficiency,
  Humanness - Definition, 
  Writing Analysis - Reference,
  Lab Report - Prolog and SYLLOG at IBM,
  Seminars - Translating LISP & Knowledge and Reasoning
----------------------------------------------------------------------

Date: 1 Nov 83 1844 EST
From: Dave.Touretzky@CMU-CS-A
Subject: Comment on Molecular Computers


- - - - Begin forwarded message - - - -
Date: Tue, 1 Nov 1983  12:19 EST
From: DANNY%MIT-OZ@MIT-MC.ARPA
To:   Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Molecular Computers

I was at the Molecular Computer conference.  Unfortunately, there has
been very little progress since the Molecular Electronics conference a
year ago.  The field is too full of people who think analog computation
is "more powerful" and who think that Goedel's proof shows that people
can always think better than machines.  Sigh.
--danny

------------------------------

Date: Thursday, 3 November 1983 13:27:10 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Parallel vs. Sequential

Re: Philip Kahn's claim that "not ALL parallel computations can be made
sequential": I don't believe it, unless you are talking about infinitely
many processing elements.  The Turing Machine is the most powerful model of
computation known, and it is inherently serial (and equivalent to a
Tessellation Automaton, which is totally parallel).  Any computation that
Tesselation Automaton, which is totally parallel).  Any computation that
requires all the values at an "instant" can simply run at N times the
sampling rate of your sensors: it locks them, reads each one, and makes its
decisions after looking at all of them, and then unlocks them to examine the
next time slice.  If one is talking practically, this might not be possible
due to speed considerations, but theoretically it is possible.  So while at
a theoretical level ALL parallel computations can be simulated sequentially,
in practice one often requires parallelism to cope with real-world speeds.
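
The lock/read/decide/unlock scheme described above can be sketched as
follows.  This is a minimal illustration only: the "sensors" and the
combining function are invented, standing in for whatever hardware and
decision procedure a real system would use.

```python
def sequential_snapshot(sensors, combine):
    """Read every sensor from one frozen time slice, then compute."""
    snapshot = [read() for read in sensors]   # the "locked" instant
    return combine(snapshot)                  # decision uses one time slice

# Example: three fixed "sensors" and a comparison over the whole field.
sensors = [lambda: 4, lambda: 9, lambda: 2]
result = sequential_snapshot(sensors, max)    # -> 9
```

Because the whole field is captured before any processing begins, every
element is judged "at the same instant", just as in the parallel version;
only wall-clock speed is sacrificed.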

------------------------------

Date: 2 Nov 83 10:52:22 PST (Wednesday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Awareness, Human-ness


Sorry it took me a while to track this down.  It's something I recalled
when reading the discussion of awareness in V1 #80.  It's been lightly
edited.

--Rodney Hoffman

**** **** **** **** **** **** **** ****

From Richard Rorty's book, "Philosophy and The Mirror of Nature":

Personhood is a matter of decision rather than knowledge, an acceptance
of another being into fellowship rather than a recognition of a common
essence.

Knowledge of what pain is like or what red is like is attributed to
beings on the basis of their potential membership in the community.
Thus babies and the more attractive sorts of animal are credited with
"having feelings" rather than  (like machines or spiders) "merely
responding to stimuli."  To say that babies know what heat is like, but
not what the motion of molecules is like is just to say that we can
fairly readily imagine them opening their mouths and remarking on the
former, but not the latter.  To say that a gadget that says "red"
appropriately *doesn't* know what red is like is to say that we cannot
readily imagine continuing a conversation with the gadget.

Attribution of pre-linguistic awareness is merely a courtesy extended to
potential or imagined fellow-speakers of our language.  Moral
prohibitions against hurting babies and the better-looking sorts of
animals are not based on their possession of feelings.  It is, if
anything, the other way around.  Rationality about denying civil rights
to morons or fetuses or robots or aliens or blacks or gays or trees is a
myth.  The emotions we have toward borderline cases depend on the
liveliness of our imagination, and conversely.

------------------------------

Date: 1 November 1983 18:55 EDT
From: Herb Lin <LIN @ MIT-ML>
Subject: writing analysis

You might want to take a look at some of the work of R. Flesch,
the primary exponent of a system that takes word, sentence,
and paragraph lengths and turns them into grade-equivalent reading
scores.  It's somewhat controversial.

[E.g., The Art of Readable Writing.  Or, "A New Readability Index",
J. of Applied Psychology, 1948, 32, 221-233.  References to other
authors are also given in Cherry and Vesterman's writeup of the
STYLE and DICTION systems included in Berkeley Unix.  -- KIL]
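
Flesch's Reading Ease index is computed as 206.835 - 1.015*(words per
sentence) - 84.6*(syllables per word).  A minimal sketch follows; note
that the vowel-group syllable counter is a crude stand-in for Flesch's
actual counting rules, so scores are only approximate.

```python
import re

def count_syllables(word):
    """Crude heuristic: one syllable per vowel group, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("The cat sat on the mat. It was warm.")
```

Short, monosyllabic sentences like the example score near the top of the
scale; dense academic prose lands far lower.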

------------------------------

Date: Monday, 31-Oct-83  11:49:55-GMT
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Prolog and SYLLOG at IBM

                 [Reprinted from the Prolog Digest.]


    Date: 9 Oct 1983 11:43:51-PDT (Sunday)
    From: Adrian Walker <ADRIAN.IBM@Rand-Relay>
    Subject: Prolog question


                                   IBM Research Laboratory K51
                                   5600 Cottle Road
                                   San Jose
                                   CA 95193 USA

                                   Telephone:    408-256-6999
                                   ARPANet: Adrian.IBM@Rand-Relay

                                   10th October 83


Alan,

In answer to your question about Prolog implementations, we
do most of our work using the Waterloo Prolog 1.3 interpreter
on an IBM mainframe (3081).  Although not a traditional AI
environment, this turns out to be pretty good.  For instance,
the speed of the Interpreter turns out to be about the same
as that of compiled DEC-10 Prolog (running on a DEC-10).

As for environment, the system delivered by Waterloo is
pretty much stand alone, but there are several good environments
built in Prolog on top of it.

A valuable feature of Waterloo Prolog 1.3 is a 'system' predicate,
which can call anything on the system, e.g. a full-screen editor.

The work on extracting explanations of 'yes' and 'no' answers
from Prolog, which I reported at IJCAI, was done in Waterloo
Prolog.  We have also implemented a syllogistic system called
SYLLOG, and several expert system types of applications.  An
English-language question answerer written by Antonio Porto and
me produces instantaneous answers, even when the 3081 has 250
users.

As far as I know, Waterloo Prolog only runs under the VM operating
system (not yet under MVS, the other major IBM OS for mainframes).
It is available, for a moderate academic licence fee, from Sandra
Ward, Department of Computing Services, University of Waterloo,
Waterloo, Ontario, Canada.

We use it with IBM 3279 colour terminals, which add variety to a
long day at the screen and can also be useful!

Best wishes,

-- Adrian Walker

Walker, A. (1981). 'SYLLOG: A Knowledge Based Data Management
System,' Report No. 034. Computer Science Department, New York
University, New York.

Walker, A. (1982). 'Automatic Generation of Explanations of
Results from Knowledge Bases,' RJ3481. Computer Science
Department, IBM Research Laboratory, San Jose, California.

Walker, A. (1983a). 'Data Bases, Expert Systems, and PROLOG,'
RJ3870. Computer Science Department, IBM Research Laboratory,
San Jose, California. (To appear as a book chapter)

Walker, A. (1983b). 'Syllog: An Approach to Prolog for
Non-Programmers.' RJ3950, IBM Research Laboratory, San Jose,
California. (To appear as a book chapter)

Walker, A. (1983c). 'Prolog/EX1: An Inference Engine which
Explains both Yes and No Answers.'
RJ3771, IBM Research Laboratory, San Jose, California.
(Proc. IJCAI 83)

Walker, A. and Porto, A. (1983). 'KBO1, A Knowledge Based
Garden Store Assistant.'
RJ3928, IBM Research Laboratory, San Jose, California.
(In Proc Portugal Workshop, 1983.)

------------------------------

Date: Mon 31 Oct 83 22:57:03-CST
From: John Hartman <CS.HARTMAN@UTEXAS-20.ARPA>
Subject: Fri. Grad Lunch - Understanding and Translating LISP

                [Reprinted from the UTEXAS-20 bboard.]

GRADUATE BROWN BAG LUNCH - Friday 11/4/83, PAI 5.60 at noon:

I will talk about how programming knowledge contributes to
understanding programs and translating between high level languages.
The problems of translating between LISP and MIRROR (= HLAMBDA) will
be introduced.  Then we'll look at the translation of A* (Best First
Search) and see some examples of how recognizing programming cliches
contributes to the result.

I'll try to keep it fairly short with the hope of getting critical
questions and discussion.


Old blurb:
I am investigating how a library of standard programming constructs
may be used to assist understanding and translating LISP programs.
A programmer reads a program differently than a compiler because she
has knowledge about computational concepts such as "fail/succeed loop"
and can recognize them by knowing standard implementations.  This
recognition benefits program reasoning by creating useful abstractions and
connections between program syntax and the domain.

The value of cliche recognition is being tested for the problem of
high level translation.  Rich and Temin's MIRROR language is designed
to give a very explicit, static expression of program information
useful for automatically answering questions about the program.  I am
building an advisor for LISP to MIRROR translation which will exploit
recognition to extract implicit program information and guide
transformation.

------------------------------

Date: Wed, 2 Nov 83 09:17 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar

               [Forwarded by Yoni Malachi <YM@SU-AI>.]

We are planning to start at IBM San Jose a research seminar on
theoretical aspects of reasoning about knowledge, such as reasoning
with incomplete information, reasoning in the presence of
inconsistencies, and reasoning about changes of belief.  The first few
meetings are intended to be introductory lectures on various attempts
at formalizing the problem, such as modal logic, nonmonotonic logic,
and relevance logic.  There is a lack of good research in this area,
and the hope is that after a few introductory lectures, the format of
the meetings will shift into a more research-oriented style.  The
first meeting is tentatively scheduled for Friday, Nov. 18, at 1:30,
with future meetings also to be held on Friday afternoon, but this may
change if there are a lot of conflicts.  The first meeting will be
partly organizational in nature, but there will also be a talk by Joe
Halpern on "Applying modal logic to reason about knowledge and
likelihood".

For further details contact:

Joe Halpern [halpern.ibm-sj@rand-relay, (408) 256-4701]
Yoram Moses [yom@sail, (415) 497-1517]
Moshe Vardi [vardi@su-hnv, (408) 256-4936]


    03-Nov-83  0016     MYV     Knowledge Seminar
    We may have a problem with Nov. 18.  The response from Stanford to the
    announcement is overwhelming, but we have a room for only 25 people.
    We may have to postpone the seminar.


To be added to the mailing list contact Moshe Vardi (MYV@sail,vardi@su-hnv)

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  3-Nov-83 17:03:25
Date: Thursday, November 3, 1983 4:59PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #89
To: AIList@SRI-AI


AIList Digest             Friday, 4 Nov 1983       Volume 1 : Issue 89

Today's Topics:
  Intelligence - Definition & Measurement & Necessity for Definition
----------------------------------------------------------------------

Date: Tue, 1 Nov 83 13:39:24 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Definition of Intelligence

        When it comes down to it, isn't intelligence the ability to
recognize space-time relationships?  The nice thing about this definition
is that it recognizes that ants, programs, and humans all possess
varying degrees of intelligence (that is, varying degrees in their
ability to recognize space-time relationships).  This implies that
intelligence is only correlative, and only indirectly related to
physical environmental interaction.

------------------------------

Date: Tue, 1 Nov 1983  22:22 EST
From: SLOAN%MIT-OZ@MIT-MC.ARPA
Subject: Slow intelligence/chess

        ... Suppose you played chess at strength 2000 given 5 seconds
        per move, 2010 given 5 minutes, and 2050 given as much time as
        you desired...

An excellent point.  Unfortunately wrong.  This is a common error,
made primarily by 1500-rated players and promoters of chess toys.  Chess
ratings measure PERFORMANCE at TOURNAMENT TIME CONTROLS (generally
ranging from 1.5 to 3 moves per minute).  To speak of "strength
2000 at 5 seconds per move" or "2500 given as much time as desired" is
absolutely meaningless.  That is why there are two domestic rating
systems, one for over-the-board play and another for postal chess.
Both involve time limits, the limits are very different, and the
ratings are not comparable.  There is probably some correlation, but
the sets of skills involved are incomparable.
  This is entirely in keeping with the view that intelligence is
coupled with the environment, and involves a speed factor (you must
respond in "real-time" - whatever that happens to mean.)  It also
speaks to the question of "loop-avoidance": in the real world, you
can't step in the same stream twice; you must muddle through, ready or
not.
  To me, this suggests that all intelligent behavior consists of
generating crude, but feasible solutions to problems very quickly (so
as to be ready with a response) and then incrementally improving the
solution as time permits.  In an ever changing environment, it is
better to respond inadequately than to ponder moot points.
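This "crude but fast, then refine" strategy is what is now often called
an anytime algorithm.  A minimal sketch (the function names and the toy
refinement step are illustrative, not from the discussion above):

```python
import time

def anytime_solve(quick_guess, improve, deadline_s):
    """Return the best solution found before the deadline expires.

    quick_guess: () -> solution, a fast, feasible first answer
    improve:     solution -> improved solution, or None if no
                 further refinement is possible
    deadline_s:  seconds available before a response is required
    """
    stop_at = time.monotonic() + deadline_s
    best = quick_guess()               # always have SOME answer ready
    while time.monotonic() < stop_at:
        better = improve(best)         # one incremental improvement
        if better is None:
            break
        best = better
    return best                        # whatever is best when time is up

# Toy usage: start from zero and halve the remaining error each step.
target = 100.0
answer = anytime_solve(lambda: 0.0,
                       lambda x: x + (target - x) / 2,
                       deadline_s=0.01)
```

However good or bad the guess, the caller always gets an answer by the
deadline; extra time only improves it.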
-Ken Sloan

------------------------------

Date: Tue, 1 Nov 1983 10:15:54 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
      Mgr.)
Subject: Turing Test Re-visited

     I see that the Turing Test has (not unexpectedly) crept back into the
discussions of intelligence (1:85).  I've wondered a bit as to whether the
TT shouldn't be extended a bit; to wit, the challenge it poses should not only
include the ability to "pass" the test, but also the ability to act as a judge
for the test.  Examining the latter should give us all sorts of clues as to
what preconceived notions we're imposing when we try to develop a machine or
program that satisfies only Turing's original problem.

Dave Axler

------------------------------

Date: Wed, 2 Nov 1983  10:10 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


What I meant is that defining intelligence seems as pointless as
defining "life" and then arguing whether viruses are alive instead of
asking how they work and solve the problems that appear to us to be
the interesting ones.  Instead of defining so hard, one should look to
see what there is.

For example, about the loop-detecting thing, it is clear that in full
generality one can't detect all Turing machine loops.  But we all know
intelligent people who appear to be caught, to some extent, in thought
patterns that appear rather looplike.  That paper of mine on jokes
proposes that the problem of being intelligent enough to keep out of
simple loops is solved by a variety of heuristic loop detectors, etc.
Of course, this will often deflect one from behaviors that aren't
loops and which might lead to something good if pursued.  That's life.


I guess my complaint is that I think it is unproductive to be so
concerned with defining "intelligence" to the point that you even
discuss whether "it" is time-scale invariant, rather than, say, how
many computrons it takes to solve some class of problems.  We want to
understand problem-solvers, all right.  But I think that the word
"intelligence" is a social one that accumulates all sorts of things
that one person admires when observed in others and doesn't understand
how to do.  No doubt, this can be narrowed down, with great effort,
e.g., by excluding physical skills (probably wrongly, in a sense) and
so forth.  But it seemed to me that the discussion here in AILIST was
going nowhere toward understanding intelligence, even in that sense.

In other words, it seems strange to me that there is no public
discussion of substantive issues in the field...

------------------------------

Date: Wed, 2 Nov 1983  10:21 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Intelligence and Competition


   The ability to cope with  a CHANGE
    in  the environment marks  intelligence.


See, this is what's usually called adaptiveness.  This is why you
don't get anywhere defining intelligence -- until you have a clear idea
to define.  Why be enslaved to the fact that people use a word, unless
you're sure it isn't a social accumulation?

------------------------------

Date: 2 Nov 1983 23:44-PST
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness


From Minsky:

    ...I think that the word "intelligence" is a social one
    that accumulates all sorts of things that one person
    admires when observed in others and doesn't understand how to
    do...

    In other words, it seems strange to me that there
    is no public discussion of substantive issues in the
    field...


Exactly...  I agree on both counts.  My purpose is to help
crystallize a few basic topics, worthy of serious discussion, that
relate to those elusive epiphenomena that we tend to lump under
that loose characterization: "Intelligence".  I read both your LM
and Jokes papers and consider them seminal in that general
direction.  I think, though, that your ideas there need, and
certainly deserve, further elucidation.  In fact, I was hoping
that you would be willing to state some of your key points to
this audience.


More than this.  Recently I've been attracted to Doug
Hofstadter's ideas on subcognition and think that attention
should be paid to them as well.  As a matter of fact, I see
certain affinities between you two and would like to see a good
discussion that centers on LM, Jokes, and Subcognition as
Computation.  I think that, in combination, some of the most
promising ideas for AI are awaiting full germination in those
papers.

------------------------------

Date: Thu, 3 Nov 1983  13:17 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence

    From Minsky:

    ...I think that the word "intelligence" is a social one
    that accumulates all sorts of things that one person
    admires when observed in others and doesn't understand how to
    do...

This seems like an extremely negative and defeatist thing to say.
What does it leave us in AI to do, but to ignore the very notion we
are supposedly trying to understand?  What will motivate one line of
research rather than another, what can we use to judge the quality of
a piece of research, if we have no idea what it is we are after?

It seems to me that one plausible approach to AI is to present an
arguable account of what intelligence is about, and then to show that
some mechanism is intelligent according to that account.  The account,
the "definition", of intelligence may not be intuitive to everyone at
first.  But the performance of the mechanisms constructed in accord
with the account will constitute evidence that the account is correct.
(This is where the Turing test comes in, not as a definition of
intelligence, but as evidence for its presence.)

------------------------------

Date: Tue 1 Nov 83 13:10:32-EST
From: SUNDAR@MIT-OZ
Subject: parallelism and consciousness

                 [Forwarded by RickL%MIT-OZ@MIT-MC.]

     [...]

     It seems evident from the recent conversations that the meaning of
intelligence is much more than mere 'survivability' or 'adaptability'.
Almost all the views expressed, however, took for granted the concept of
"time" -- which, it seems to me, is 'a priori' (in the Kantian sense).

What do you think of a view that says: intelligence is the ability of
an organism that enables it to preserve, propagate, and manipulate these
'a priori' concepts?
The motivation for doing so could be a simple pleasure/pain mechanism
(which, again, I feel involves concepts not adequately understood).  It
would seem that while the pain mechanism would help cut down large search
spaces when the organism comes up against such problems, the pleasure
mechanism would help in learning, and in the acquisition of new 'a priori'
wisdom.
Clearly, in the case of organisms that multiply by fission (where the line
of division between parent and child is not exactly clear), the structure
of the organism may be preserved.  In such cases it would seem that the
organism survives seemingly forever.  However, it would not be considered
intelligent by the definition proposed above.
The questions that seem interesting to me therefore are:
1. How do humans acquire the concept of 'time'?
2. 'Change' seems to be measured in terms of time (adaptation, survival,
etc. are all the presence or absence of change), but 'time' itself seems
to be meaningless without 'change'!
3. How do humans decide whether an organism is 'intelligent' or not?
It seems to me that most of the people on the AIList made judgements (the
amoeba, desert tortoise, and cockroach examples), which should mean that
they either knew what intelligence was or wasn't -- but it still isn't
exactly clear after all the smoke's cleared.

    Any comments on the above ideas? As a relative novice to the field
of AI I'd appreciate your opinions.

Thanks.

--Sundar--

------------------------------

Date: Thu, 3 Nov 1983  16:42 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence


Sure.  I agree you want an account of what intelligence is "about".
When I complained about making a "definition" I meant
one of those useless compact thingies in dictionaries.

But I don't agree that you need this for scientific motivation.
Batali: do you really think Biologists need definitions of Life
for such purposes?

Finally, I simply don't think this is a compact phenomenon.
Any such "account", if brief, will be very partial and incomplete.
To expect a test to show that "the account is correct" depends
on the nature of the partial theory.  In a nutshell, I still
don't see any use at all for such a
definition, and it will lead to calling all sorts of
partial things "intelligence".  The kinds of accounts to confirm
are things like partial theories that need their own names, like

   heuristic search method
   credit-assignment scheme
   knowledge-representation scheme, etc.

As in biology, we simply are much too far along to be so childish as
to say "this program is intelligent" and "this one is not".  How often
do you see a biologist do an experiment and then announce "See, this
is the secret of Life".  No.  He says, "this shows that enzyme
FOO is involved in degrading substrate BAR".

------------------------------

Date: 3 Nov 1983 14:45-PST
From: ISAACSON@USC-ISI
Subject: Re: Inscrutable Intelligence


I think that your message was really addressed to Minsky, who
already replied.

I also think that the most one can hope for are confirmations of
"partial theories" relating, respectively, to various aspects
underlying phenomena of "intelligence".  Note that I say
"phenomena" (plural).  Namely, we may have on our hands a broad
spectrum of "intelligences", each one of which the manifestation
of somewhat *different* mix of underlying ingredients.  In fact,
for some time now I feel that AI should really stand for the
study of Artificial Intelligences (plural) and not merely
Artificial Intelligence (singular).

------------------------------

Date: Thu, 3 Nov 1983  19:29 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence

    From: MINSKY%MIT-OZ at MIT-MC.ARPA

    do you really think Biologists need definitions of Life
    for such purposes?

No, but if anyone were claiming to be building "Artificial Life",
that person WOULD need some way to evaluate research.  Remember, we're
not just trying to find out things about intelligence, we're not just
trying to see what it does -- like the biochemist who discovers enzyme
FOO -- we're trying to BUILD intelligences.  And that means that we
must have some relatively precise notion of what we're trying to build.

    Finally, I simply don't think this is a compact phenomenon.
    Any such "account", if brief, will be very partial and incomplete.
    To expect a test to show that "the account is correct" depends
    on the nature of the partial theory.  In a nutshell, I still
    don't see any use at all for
    such definition, and it will lead to calling all sorts of
    partial things "intelligence".

If the account is partial and incomplete, and leads to calling partial
things intelligence, then the account must be improved or rejected.
I'm not claiming that an account must be short, just that we need
one.

    The kinds of accounts to confirm
    are things like partial theories that need their own names, like

       heuristic search method
       credit-assignment scheme
       knowledge-representation scheme, etc.


But why are these things interesting?  Why is heuristic search better
than "blind" search?  Why need we assign credit?  Etc?  My answer:
because such things are the "right" thing to do for a program to be
intelligent.  This answer appeals to a pre-theoretic conception of
what intelligence is.   A more precise notion would help us
assess the relevance of these and other methods to AI.

One potential reason to make a more precise "definition" of
intelligence is that such a definition might actually be useful in
making a program intelligent.  If we could say "do that" to a program
while pointing to the definition, and if it "did that", we would have
an intelligent program.  But I am far too optimistic.  (Perhaps
"childishly" so).

------------------------------

End of AIList Digest
********************
 4-Nov-83 22:25:08-PST,11809;000000000001
Mail-From: LAWS created at  4-Nov-83 22:05:10
Date: Friday, November 4, 1983 9:43PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #90
To: AIList@SRI-AI


AIList Digest            Saturday, 5 Nov 1983      Volume 1 : Issue 90

Today's Topics:
  Intelligence,
  Looping Problem
----------------------------------------------------------------------

Date: Thu, 3 Nov 1983  23:46 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence


     One potential reason to make a more precise "definition" of
     intelligence is that such a definition might actually be useful
     in making a program intelligent.  If we could say "do that" to a
     program while pointing to the definition, and if it "did that",
     we would have an intelligent program.  But I am far too
     optimistic.

I think so.  You keep repeating how good it would be to have a good
definition of intelligence and I keep saying it would be as useless as
the biologists' search for the definition of "life".  Evidently
we're talking past each other so it's time to quit.

Last word: my reason for making the argument was that I have seen
absolutely no shred of good ideas in this forum, apparently because of
this definitional orientation.  I admit the possibility that some
good mathematical insight could emerge from such discussions.  But
I am personally sure it won't, in this particular area.

------------------------------

Date: Friday, 4 November 1983, 01:17-EST
From: jcma@MIT-MC
Subject: Inscrutable Intelligence

                          [Reply to Minsky.]


BOTTOM LINE:  Have you heard of OPERATIONAL DEFINITIONS?

You are correct in pointing out that we need not have the ultimate definition
of intelligence.  But, it certainly seems useful for the practical purposes of
investigating the phenomena of intelligence (whether natural or artificial) to
have at least an initial approximation, an operational definition.

Some people, (e.g., Winston), have proposed "people-like behavior" as their
operational definition for intelligence.  Perhaps you can suggest an
incremental improvement over that rather vague definition.

If artificial intelligence can't come up with an operational definition of
intelligence, no matter how crude, it tends to undermine the credibility of the
discipline and encourage the view that AI researchers are flakey.  Moreover,
it makes it very difficult to determine the degree to which a program exhibits
"intelligence."

If you were being asked to spend $millions on a field of inquiry, wouldn't you
find it strange (bordering on absurd) that the principal proponents couldn't
render an operational definition of the object of investigation?

p.s.  I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?).  So, if worst comes to worst, AI can just
borrow psychology's definition and improve on it.

------------------------------

Date: Fri, 4 Nov 1983  09:57 EST
From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>
Subject: Inscrutable Intelligence

There's a wonderful quote from Wittgenstein that goes something like:

  One of the most fundamental sources of philosophical bewilderment is to have
  a substantive but be unable to find the thing that corresponds to it.

Perhaps the conclusion from all this is that AI is an unfortunate name for the
enterprise, since no clear definitions for I are available.  That shouldn't
make it seem any less flakey than, say, "operations research" or "management
science" or "industrial engineering" etc. etc.  People outside a research area
care little what it is called; what it has done and is likely to do is
paramount.

Trying to find the ultimate definition for field-naming terms is a wonderful,
stimulating philosophical enterprise.  However, one can make an empirical
argument that this activity has little impact on technical progress.

------------------------------

Date: 4 Nov 1983 8:01-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #89

        This discussion on intelligence is starting to get very boring.
I think if you want a theoretical basis, you are going to have to
forget about defining intelligence and work on a higher level. Perhaps
finding representational schemes to represent intelligence would be a
more productive line of pursuit. There are such schemes in existence.
As far as I can tell, the people in this discussion have either scorned
them, or have never seen them. Perhaps you should go to the library for
a while and look at what all the great philosophers have said about the
nature of intelligence rather than rehashing all of their arguments in
a light and incomplete manner.
                        Fred

------------------------------

Date: 3 Nov 83 0:46:16-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: hp-pcd.2284


No, no, no.  I understood the point as meaning that the faster intelligence
is merely MORE intelligent than the slower intelligence.  Who's to say that
an amoeba is not intelligent?  It might be.  But we certainly can agree that
most of us are more intelligent than an amoeba, probably because we are
"faster" and can react more quickly to our environment.  And some super-fast
intelligent machine coming along does NOT make us UNintelligent, it just
means it is more intelligent than we are.  (According to the previous view
that faster = more intelligent, which I don't necessarily subscribe to.)

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson@{oregon-state,orstcs}          (CSnet)

------------------------------

Date: 31 Oct 83 13:18:58-PST (Mon)
From: decvax!duke!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: re: transcendental recursion [& reply]
Article-I.D.: ecsvax.1457

i'm also new on this net, but this item seemed like
a good one to get my feet wet with.
     if we're going to pursue the topic of consciousness
vs intelligence, i think it's important not to get
confused about consciousness vs *self*-consciousness at
the beginning.  there's a perfectly clear sense in which
any *sentient* being is "conscious"--i.e., conscious *of*
changes in its environment.  but i have yet to see any
good reason for supposing that cats, rats, bats, etc.
are *self*-conscious, e.g., conscious of their own
states of consciousness.  "introspective" or "self-
monitoring" capacity goes along with self-consciousness,
but i see no particular reason to suppose that it has
anything special to do with *consciousness* per se.
     as long as i'm sticking my neck out, let me throw
in a cautionary note about confusing intelligence and
adaptability.  cockroaches are as adaptable as all get
out, but not terribly intelligent; and we all know some
very intelligent folks who can't adapt to novelties at
all.
                      --jay rosenberg (escvax!unbent)

[I can't go along with the cockroach claim.  They are a
successful species, but probably haven't changed much in
millions of years.  Individual cockroaches are elusive,
but can they solve mazes or learn tricks?  As for the
"intelligent folks":  I previously stated my preference
for power tests over timed aptitude tests -- I happen to
be rather slow to change channels myself.  If these people
are unable to adapt even given time, on what basis can we
say that they are intelligent?  If they excel in particular
areas (e.g. idiot savants), we can qualify them as intelligent
within those specialties, just as we reduce our expectations
for symbolic algebra programs.  If they reached states of
high competence through early learning, then lost the ability
to learn or adapt further, I will only grant that they >>were<<
intelligent.  -- KIL]

------------------------------

Date: 3 Nov 83 0:46:00-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Disc [& Comment]


A couple weeks ago, I heard Marvin Minsky speak up at Seattle.  Among other
things, he discussed this kind of "loop detection" in an AI program.  He
mentioned that he has a paper just being published, which he calls his
"Joke Paper," which discusses the applications of humor to AI.  According
to Minsky, humor will be a necessary part of any intelligent system.

If I understood correctly, he believes that there is (will be) a kind
of a "censor" which recognizes "bad situations" that the intelligent
entity has gotten itself into.  This censor can then learn to recognize
the precursors of this bad situation if it starts to occur again, and
can intervene.  This then is the reason why a joke isn't funny if you've
heard it before.  And it is funny the first time because it's "absurd,"
the laughter being a kind of alarm mechanism.

Naturally, this doesn't really help with a particular implementation,
but I believe that I agree with the intuitions presented.  It seems to
agree with the way I believe *I* think, anyway.

I hope I haven't misrepresented Minsky's ideas, and to be sure, you should
look for his paper.  I don't recall him mentioning a title or publisher,
but he did say that the only reference he could find on humor was a book
by Freud, called "Jokes and the Unconscious."

(Gee, I hope his talk wasn't all a joke....)

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson@{oregon-state,orstcs}          (CSnet)


[Minsky has previously mentioned this paper in AIList.  You can get
a copy by writing to Minsky%MIT-OZ@MIT-MC.  -- KIL]

------------------------------

Date: 31 Oct 83 7:52:43-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: utastro.766

A common characteristic of humans that is not shared by the machines
we build and the programs we write is called "boredom."  All of us get
bored running around the same loop again and again, especially if nothing
is seen to change in the process.  We get bored and quit.

         *--->    WARNING!!!   <---*

If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
not sure it's worth the price.
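One hedged sketch of "teaching a program boredom": remember every state
visited and quit when one repeats.  (The names are illustrative; this
detects only exact repetition of states small enough to store, so it is
a heuristic, not a solution to the halting problem.)

```python
def run_until_bored(step, state, max_steps=10_000):
    """Iterate `step` from `state`; give up on a repeated state.

    Returns (final_state, reason): reason is "bored" if an exact
    loop was detected, "exhausted" if max_steps ran out first.
    """
    seen = set()
    for _ in range(max_steps):
        if state in seen:              # been here before: get bored
            return state, "bored"
        seen.add(state)
        state = step(state)
    return state, "exhausted"

# The 3n+1 iteration eventually circles through 4 -> 2 -> 1 -> 4 ...
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
final, reason = run_until_bored(collatz, 27)   # reason == "bored"
```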

                                    Ed Nather
                             ihnp4!{kpno, ut-sally}!utastro!nather

------------------------------

Date: 31 Oct 83 20:03:21-PST (Mon)
From: harpo!eagle!hou5h!hou5g!hou5f!hou5e!hou5d!mat @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: hou5d.725

    If we teach our programs to get bored, we will have solved the
    infinite-looping problem, but we will lose our electronic slaves who now
    work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
    not sure it's worth the price.

Hmm.  I don't usually try to play in this league, but it seems to me that there
is a place for everything and every talent.  Build one machine that gets bored
(in a controlled way, please) to work on Fermat's last Theorem.  Build another
that doesn't to check tolerances on camshafts or weld hulls.  This [solving
the looping problem] isn't like destroying one's virginity, you know.

                                                Mark Terribile
                                                Duke Of deNet

------------------------------

End of AIList Digest
********************
 6-Nov-83 22:59:43-PST,17372;000000000001
Mail-From: LAWS created at  6-Nov-83 22:58:19
Date: Sunday, November 6, 1983 10:51PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #91
To: AIList@SRI-AI


AIList Digest             Monday, 7 Nov 1983       Volume 1 : Issue 91

Today's Topics:
  Parallelism,
  Turing Machines
----------------------------------------------------------------------

Date: 1 Nov 83 22:39:06-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!israel @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: umcp-cs.3498

[Initial portion missing. -- KIL]

a processing unit that we can currently build.  If you mean 'at the
exact same time', then I defy you to show me a case where this is
necessary.

The statement "No algorithm is inherently parallel", just means that
the algorithm itself (as opposed to the engineering of putting it
into practice) does not necessarily have to be done in parallel.
Any parallel algorithm that you give me, I can write a sequential
algorithm that does the same thing.
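That claim can be illustrated (as a toy sketch, not a proof) by a
sequential scheduler that interleaves one step of each "parallel"
process per round; the process and logging names here are hypothetical:

```python
def run_parallel_sequentially(processes):
    """Round-robin one step of each process (a generator) per sweep,
    simulating one 'parallel' time step, until all have finished."""
    active = dict(enumerate(processes))
    while active:
        for pid in list(active):       # one "simultaneous" sweep
            try:
                next(active[pid])      # a single step of process pid
            except StopIteration:
                del active[pid]        # process pid has halted

def counter(name, n, log):
    for i in range(n):
        log.append((name, i))          # one unit of work per step
        yield                          # hand control back

log = []
run_parallel_sequentially([counter("A", 2, log), counter("B", 2, log)])
# log interleaves the two: [("A", 0), ("B", 0), ("A", 1), ("B", 1)]
```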

Now, if you assume a finite number of processors for the parallel
algorithm, then the question of whether the sequential algorithm will
work under time constraints is dependent on the speed of the
processor it runs on.  I don't know if there has been any work
done on theoretical limits of the speed of a processor (Does
anyone know? is this a meaningful question?), but if we assume
none (a very chancy assumption at best), then any parallel algorithm
can be done sequentially in practice.

If you allow an infinite number of processors for the parallel
algorithm, then the sequential version of the algorithm can't
ever work in practice.  But can the parallel version?  What
do we run it on?  Can you picture an infinitely parallel
computer which has robots with shovels with it, and when the
computer needs an unallocated processor and has none, then
the robots dig up the appropriate minerals and construct
the processor.  Of course, it doesn't need to be said that
if the system notices that the demand for processors is
faster than the robots' processor production output, then
the robots make more robots to help them with the raw materials
gathering and the construction.  :-)
--

^-^ Bruce ^-^

University of Maryland, Computer Science
{rlgvax,seismo}!umcp-cs!israel (Usenet)    israel.umcp-cs@CSNet-Relay (Arpanet)

------------------------------

Date: 31 Oct 83 19:55:44-PST (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Parallelism and Consciousness - (nf)
Article-I.D.: uiucdcs.3572


I see no reason why consciousness should be inherently parallel.  But
it turns out that the only examples of conscious entities (i.e. those
which nearly everyone agrees are conscious) rely heavily on parallelism
at several levels.  This is NOT to say that they derive their
consciousness from parallelism, only that there is a high correlation
between the two.

There are good reasons why natural selection would favor parallelism.
Besides the usually cited ones (e.g. speed, simplicity) is the fact
that the world goes by very quickly, and carries a high information
content.  That makes it desirable and advantageous for a conscious
entity to be aware of several things at once.  This strongly suggests
parallelism (although a truly original species might get away with
timesharing).

Pushing in the other direction, I should note that it is not necessary
to bring the full power of the human intellect to bear against ALL of
our environment at once.  Hence the phenomenon of attention.  It
suffices to have weaker processes in charge of uninteresting phenomena
in the environment, as long as these have the ability to enlist more of
the organism's information processing power when the situation becomes
interesting enough to demand it.  (This too could be finessed with a
clever timesharing scheme, but I know of no animal that does it that
way.)

Once again, none of this entails a causal connection between
parallelism and consciousness.  It just seems to have worked out that
nature liked it that way (in the possible world in which we live).

Rick Dinitz
...!uiucdcs!uicsl!dinitz

------------------------------

Date: 1 Nov 83 11:53:58-PST (Tue)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re:  Parallelism & Consciousness
Article-I.D.: rocheste.3648

Interesting to see this discussion taking place among people
(apparently) committed to an information-processing model for
intelligence.

I would be satisfied with the discovery of mechanisms that duplicate
the information-processing functions associated with intelligence.

The issue of real-time performance seems to be independent of
functional performance (not from an engineering point of view, of
course; ever tell one of your hardware friends to "just turn up the
clock"?).  The fact that evolutionary processes act on both the
information-processing and performance characteristics of a system may
argue for the (evolutionary) superiority of one mechanism over another;
it does not provide prescriptive information for developing functional
mechanisms, however, which is the task we are currently faced with.

        Tom

------------------------------

Date: 1 Nov 83 19:01:59-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: umcp-cs.3523

                No algorithm is inherently parallel.

        The algorithms you are thinking about occur in the serial world of
        the Turing machine.  Turing machines, remember, have only one
        input.  Consider what happens to your general purpose Turing machine
        when it must compute on more than one input simultaneously!

        So existence in the real world may require parallelism.


    How do you define simultaneously?  If you mean within a very short
    period of time, then that requirement is based on the maximum speed of
    a processing unit that we can currently build.  If you mean 'at the
    exact same time', then I defy you to show me a case where this is
    necessary.

A CHALLENGE!!!  Grrrrrrrr......

Okay, let's say we have two discrete inputs that must
be monitored by a Turing machine.  Signals may come in
over these inputs simultaneously.  How do you propose
to monitor both discretes at the same time?  You can't
monitor them as one input because your Turing machine
is allowed only one state at a time on its read/write head.
Remember that the states of the inputs run as fast as
those of the Turing machine.


You can solve this problem by building two Turing machines,
each of which may look at the discretes.

I don't have to appeal to practical speeds of processors.
We're talking pure theory here.
--

                                        - Speaker-To-Stuffed-Animals
                                        speaker@umcp-cs
                                        speaker.umcp-cs@CSnet-Relay

------------------------------

Date: 1 Nov 83 18:41:10-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Infinite loops and Turing machines...
Article-I.D.: umcp-cs.3521

        One of the things I did in my undergrad theory class was to prove that
        a multiple-tape Turing machine is equivalent to one with a single tape
        (several tapes were very handy for programming).  Also, we showed that
        a TM with a 2-dimensional tape infinite in both x and y was also
        equivalent to a single-tape TM.  On the other hand, the question of
        a machine with an infinite number of read heads was left open...

Aha!  I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself.  Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!

The reason that they are not equivalent is that we have
allowed for external states (events) outside of the machine
states of the Turing machine itself.
--

                                        - Speaker-To-Stuffed-Animals
                                        speaker@umcp-cs
                                        speaker.umcp-cs@CSnet-Relay

------------------------------

Date: 1 Nov 83 16:56:19-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!mit-eddie!rlh @
      Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: mit-eddi.885

    [...] that requirement is based on the maximum speed of
    a processing unit that we can currently build.  If you mean 'at the
    exact same time', then I defy you to show me a case where this is
    necessary.

    The statement "No algorithm is inherently parallel" just means that
    the algorithm itself (as opposed to the engineering of putting it
    into practice) does not necessarily have to be done in parallel.
    For any parallel algorithm that you give me, I can write a
    sequential algorithm that does the same thing.

Consider the retina, and its processing algorithm.  It is certainly
true that once the raw information has been collected and in some way
band-limited, it can be processed in either fashion; but one part of
the algorithm must necessarily be implemented in parallel.  To get
the photon efficiencies that are needed for dark-adapted vision
(part of the specifications for the algorithm) one must have some
continuous, distributed attention to the light field.  If I match
the spatial and temporal resolution of the retina (call it several
thousand by several thousand sites every few milliseconds) by
sequentially scanning with a single receptor, I can catch only about
one photon in several million, not the order of one in ten that our
own retina achieves.
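The arithmetic behind that claim can be checked with a quick back-of-envelope
sketch.  The numbers below are illustrative assumptions of mine, not figures
from the post:

```python
# Back-of-envelope check (illustrative numbers, not the poster's
# exact figures): a single receptor raster-scanning an N x N field
# dwells on each site only 1/N**2 of the time, so it catches only
# about 1/N**2 of the photons falling on the whole field.

def scan_capture_fraction(n_sites):
    """Fraction of photons caught by one receptor time-multiplexed
    uniformly over n_sites sites."""
    return 1.0 / n_sites

n = 2000                                   # "several thousand" per side
sequential = scan_capture_fraction(n * n)  # raster scan with one receptor
parallel = 0.1                             # retina: order of one in ten

print(sequential)                          # 2.5e-07, one in four million
```

On these assumed numbers the parallel retina enjoys roughly a 400,000-fold
advantage in photon capture over the sequential scanner.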

------------------------------

Date: 2 Nov 83 19:44:21-PST (Wed)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3633


There is a significant difference between saying "No algorithm is
inherently parallel" and saying "Any algorithm can be carried out
without parallelism."  There are many algorithms that are
inherently parallel. Many (perhaps all) of them can be SIMULATED
without true parallel processing.

I would, however, support the contention that computational models
of natural processes need not follow the same implementations, and
that a serial simulation of a parallel process can produce the
same result.

scott preece
ihnp4!uiucdcs!uicsl!preece

------------------------------

Date: 2 Nov 83 15:22:20-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!
      masscomp!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: kobold.191

Gawd!! Real-time processing with a Turing machine?!
Pure theory indeed!

Turing machines are models for *abstract* computation.  You get to
write an initial string on the tape(s) and start up the machine: it
does not monitor external inputs changing asynchronously.  You can
define your *own* machine which is just like a Turing machine, except
that it *does* monitor external inputs changing asynchronously (Speaker
machines anyone :-).

Also, if you want to talk *pure theory*, I could just enlarge my input
alphabet on a single input to encode all possible simultaneous values
at multiple inputs.
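The alphabet-enlargement trick can be made concrete with a short sketch.  The
encoding below is one assumed choice of mine, purely illustrative, not
anything from the original post:

```python
# Sketch of the alphabet-enlargement trick: k simultaneous binary
# inputs collapse into one symbol drawn from an alphabet of size
# 2**k, which a machine then reads on a single input.

def encode(bits):
    """Pack one instant's simultaneous input bits into one symbol."""
    symbol = 0
    for b in bits:
        symbol = (symbol << 1) | b
    return symbol

def decode(symbol, k):
    """Unpack the symbol back into the k simultaneous input bits."""
    return [(symbol >> i) & 1 for i in reversed(range(k))]

# Seven discrete sources sampled at one instant -> one symbol in 0..127.
sample = [1, 0, 1, 1, 0, 0, 1]
s = encode(sample)
assert decode(s, 7) == sample
print(s)   # 89
```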


--
        Tom Teixeira,  Massachusetts Computer Corporation.  Littleton MA
        ...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt   (617) 486-9581

------------------------------

Date: 2 Nov 83 16:28:10-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!
      masscomp!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Conciousness
Article-I.D.: kobold.192

Regarding the statement

        No algorithm is inherently parallel.

which has been justified by the ability to execute any "parallel"
program on a single sequential processor:

The difference between parallel and sequential algorithms is one of
*expressive* power rather than *computational* power.  After all, if
it's just computational power you want, why aren't you all programming
Turing machines?

The real question is what is the additional *expressive* power of
parallel programs.  The additional expressive power of parallel
programming languages is a result of not requiring the programmer to
serialize steps of his computation when he is uncertain whether either
one will terminate.
--
        Tom Teixeira,  Massachusetts Computer Corporation.  Littleton MA
        ...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt   (617) 486-9581

------------------------------

Date: 4 Nov 83 8:13:22-PST (Fri)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Our Parallel Eyeballs
Article-I.D.: utastro.784


        Consider the retina, and its processing algorithm. [...]

There seems to be a misconception here.  It's not clear to me that "parallel
processing" includes simple signal accumulation.  Astronomers use area
detectors that simply accumulate the charge deposited by photons arriving
on an array of photosensitive diodes; after the needed "exposure" the charge
image is read out (sequentially) for display, further processing, etc.
If the light level is high, readout can be repeated every few milliseconds,
or, in some devices, proceed continuously, allowing each pixel to accumulate
photons between readouts, which reset the charge to zero.

I note in passing that we tend to think sequentially (our self-awareness
center seems to be serial) but operate in parallel (our heart beats along,
and body chemistry gets its signals even when we're chewing gum).  We
have, for the most part, built computers in our own (self)image: serial.
We're encountering real physical limits in serial computing (the finite
speed of light) and clearly must turn to parallel operations to go much
faster.  How we learn to "think in parallel" is not clear, but people
who do the logic design of computers try to get as many operations into
one clock cycle as possible, and maybe that's the place to start.

                                         Ed Nather
                                         ihnp4!{ut-sally,kpno}!utastro!nather

------------------------------

Date: 3 Nov 83 9:39:07-PST (Thu)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Get off the Turing Machines
Article-I.D.: ubc-visi.513

From: Marc Majka <majka@ubc-vision.UUCP>

A Turing machine is a theoretical model of computation.
<speaker.umcp-cs@CSnet-Relay> points out that all this noise about
"simultaneous events" is OUTSIDE of the notion of a Turing machine. Turing
machines are a theoretical formulation which gives theoreticians a formal
system in which to consider problems in computability, decidability, the
"hardness" of classes of functions, etc.  They don't really care whether
set membership in a class 0 grammar is decidable in less than 14.2 seconds.
The unit of time is the state transition, or "move" (as Turing called it).
If you want to discuss time (in seconds or meters), you are free to invent a
new model of computation which includes that element.  You are then free to
prove theorems about it and attempt to prove it equivalent to other models
of computation.  Please do this FORMALLY and post (or publish) your results.
Otherwise, invoking Turing machines is a silly and meaningless exercise.

Marc Majka

------------------------------

Date: 3 Nov 83 19:47:04-PST (Thu)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3677


Arguments based on speed of processing aren't acceptable.  The
question of whether parallel processing is required has to be
in the context of arbitrarily fast processors.  Thus you can't
talk about simultaneous inputs changing state at processor speed
(unless you're considering the interesting case where the input
is directly monitoring the processor itself and therefore
intrinsically as fast as the processor; in that case you can't
cope, but I'm not sure it's an interesting case with respect to
consciousness).

Consideration of the retina, on the other hand, brings up the
basic question of what is a parallel processor.  Is an input
latch (allowing delayed polling) or a multi-input averager a
parallel process or just part of the plumbing? We can also, of
course, group the input bits and assume an arbitrarily fast
processor dealing with the bits 64 (or 128 or 1 million) at a
time.

I don't think I'd be willing to say that intelligence or
consciousness can't be slow. On the other hand, I don't think
there's too much point to this argument, since it's pretty clear
that producing a given level of performance will be easier with
parallel processing.

scott preece
ihnp4!uiucdcs!uicsl!preece

------------------------------

End of AIList Digest
********************
 6-Nov-83 23:14:43-PST,12728;000000000001
Mail-From: LAWS created at  6-Nov-83 23:13:27
Date: Sunday, November 6, 1983 11:06PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #92
To: AIList@SRI-AI


AIList Digest             Monday, 7 Nov 1983       Volume 1 : Issue 92

Today's Topics:
  Halting Problem,
  Metaphysics,
  Intelligence
----------------------------------------------------------------------

Date: 31 Oct 83 19:13:28-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Discussion
Article-I.D.: psuvax.335

About halting:
it is unclear what is meant precisely by "can a program of length n
decide whether programs of length <= n will halt".  First, the input
to the smaller programs is not specified in the question.  Assuming
that it is a unique input for each program, known a priori (for
example, the index of the program), then the answer is obviously YES
for the following restriction: the deciding program has size 2**n and
decides for the smaller programs (neglecting a few constants).  There
are fewer than 2*2**n programs of length <= n.  For each one, represent
halting on the specific input to be tested by 1, and looping by 0.  The
resulting string is essentially the program needed - it clearly exists.
Getting hold of it is another matter - it
is also obvious that this cannot be done in a uniform manner for every
n because of the halting problem.  At the cost of more sophisticated
coding, and tremendous expenditure of time, a similar construction can
be made to work for programs of length O(n).
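The existence half of this counting argument can be illustrated with a toy
sketch.  The oracle string below is made up; the whole point is that such a
string exists for each n, while the halting problem rules out any uniform
procedure for producing it:

```python
# Toy illustration of the counting argument.  Fix each program's
# input (say, its own index).  With fewer than 2 * 2**n programs of
# length <= n, a single bit string of that length records, for each
# program, halting (1) or looping (0).  Given the string, "deciding"
# is a table lookup; only constructing the string is impossible in
# general.

def make_decider(oracle_bits):
    """Return a lookup 'decider': halts(i) iff oracle_bits[i] is '1'."""
    def halts(i):
        return oracle_bits[i] == "1"
    return halts

oracle = "10110100"          # invented pattern for 8 toy "programs"
halts = make_decider(oracle)
print([i for i in range(8) if halts(i)])   # [0, 2, 3, 5]
```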


If the input is not fixed, the question is obviously hopeless - there are
very small universal programs.

As a practical matter it is not the halting problem that is relevant, but its
subrecursive analogues.
janos simon

------------------------------

Date: 3 Nov 83 13:03:22-PST (Thu)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: pyuxss.195

A point missing in this discussion is that the halting problem is
equivalent to the question:
        Can a method be formulated to attempt to solve ANY problem
        which can determine if it is not getting closer to the
        solution
so the meta-halters (not the clothing) can't be more than disguised
time limits, etc., for the general problem, since they CANNOT MAKE
INFERENCES ABOUT THE PROCESS they are to halt.
                Aaron Werman pyuxi!pyuxss!aaw

------------------------------

Date: 9 Nov 83 21:05:28-EST (Wed)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: re: awareness - (nf)
Article-I.D.: uiucdcs.3586


Robert -

If I understand correctly, your reasons for preferring dualism (or
physicalism) to functionalism are:

        1) It seems more intuitively obvious.
        2) You are worried about legal/ethical implications of functionalism.

I find that somewhat amusing, as those are EXACTLY my reasons for
preferring functionalism to either dualism or physicalism.  The legal
implications of differentiating between groups by arbitrarily denying
`souls' to one are well-known; it usually leads to slavery.

        <mike

------------------------------

Date: Saturday, 5 November 1983, 03:03-EST
From: JCMA@MIT-AI
Subject: Inscrutable Intelligence

    From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>

    Trying to find the ultimate definition for field-naming terms is a
    wonderful, stimulating philosophical enterprise.

I think you missed the point altogether.  The idea is that *OPERATIONAL
DEFINITIONS* are known to be useful and are found in all mature disciplines
(e.g., physics).  The fact that AI doesn't have an operational definition of
intelligence simply points up the fact that the field of inquiry is not yet a
discipline.  It is a proto-discipline precisely because key issues remain
vague and undefined and because there is no paradigm (in the Kuhnian sense of
the term, not popular vulgarizations).

That means that it is not possible to specify criteria for certification in
the field, not to mention the requisite curriculum for the field.  This all
means that there is lots of work to be done before AI can enter the normal
science phase.

    However, one can make an empirical argument that this activity has little
    impact on technical progress.

Let's see your empirical argument.  I haven't noticed any intelligent machines
running around the AI lab lately.  I certainly haven't noticed any that can
carry on any sort of reasonable conversation.  Have you?  So, where is all
this technical progress regarding understanding intelligence?

Make sure you don't fall into the trap of thinking that intelligent machines
are here today (Douglas Hofstadter debunks this position in his "Artificial
Intelligence: Subcognition as Computation," CS Dept., Indiana U., Nov. 1982).

------------------------------

Date: 5 November 1983 15:38 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

Have you ever gotten one of those phone calls from people who are trying
to sell you a magazine subscription?  Those people sound *awfully* like
computers!  They have a canned speech, with canned places to wait for
human (customer) response, and they seem to have a canned answer to
anything you say.  They are also *boring*!

I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?
I suspect you wouldn't have to do much, since the customer doesn't
expect much from the salescreature who phones.  Perhaps there is a
lesson here.

-- Steve

[There is a system, in use, that can recognize affirmative and negative
replies to its questions.  It also stores a recording of your responses
and can play the recording back to you before ending the conversation.
The system is used for selling (e.g., record albums) and for dunning,
and is effective partly because it is perceived as "mechanical".  People
listen to it because of the novelty, it can be programmed to make negative
responses very difficult, and the playback of your own replies is very
effective.  -- KIL]

------------------------------

Date: 1 Nov 83 13:41:53-PST (Tue)
From: hplabs!hao!seismo!uwvax!reid @ Ucb-Vax
Subject: Slow Intelligence
Article-I.D.: uwvax.1129

When people's intelligence is evaluated, at least subjectively, it is common
to hear such things as "He is brilliant but never applies himself," or "She
is very intelligent, but can never seem to get anything accomplished due to
her short attention span."  This seems to imply to me that intelligence is
sort of like voltage--it is potential.  Another analogy might be a
weight-lifter, in the sense that no one doubts her
ability to do amazing physical things, based on her appearance, but she needn't
prove it on a regular basis....  I'm not at all sure that people's working
definition of intelligence has anything at all to do with either time or
survival.



Glenn Reid
..seismo!uwvax!reid  (reid@uwisc.ARPA)

------------------------------

Date: 2 Nov 83 8:08:19-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: intelligence and adaptability
Article-I.D.: ecsvax.1466

Just two quick remarks from a philosopher:

1.  It ain't just what you do; it's how you do it.
Chameleons *adapt* to changing environments very quickly--in a way
that furthers their goal of eating lots of flies.  But what they're doing
isn't manifesting *intelligence*.

2.   There's adapting and adapting.  I would have thought that
one of the best evidences of *our* intelligence is not our ability to
adapt to new environments, but rather our ability to adapt new
environments to *us*.  We don't change when our environment changes.
We build little portable environments which suit *us* (houses,
spaceships), and take them along.

------------------------------

Date: 3 Nov 83 7:51:42-PST (Thu)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: What about physical identity? - (nf)
Article-I.D.: ucbcad.645


        It's surprising to me that people are still speaking in terms of
machine intelligence unconnected with a notion of a physical host that
must interact with the real world.  This is treated as a trivial problem
at most (I think Ken Laws said that one could attach any kind of sensing
device, and hence (??) set any kind of goal for a machine).  So why does
Hubert Dreyfus treat this problem as one whose solution is a *necessary*,
though not sufficient, condition for machine intelligence?

        But is it a solved problem?  I don't think so--nowhere near, from
what I can tell.  Nor is it getting the attention it requires for solution.
How many robots have been built that can infer their own physical limits
and capabilities?

        My favorite example is the oft-quoted SHRDLU conversation; the
following exchange has passed for years without comment:

        ->  Put the block on top of the pyramid
        ->  I can't.
        ->  Why not?
        ->  I don't know.

(That's not verbatim.)  Note that in human babies, fear of falling seems to
be hardwired.  A baby will still attempt, when old enough, to do things like
put a block on top of a pyramid--but it certainly doesn't seem to need an
explanation for why it should not bother after the first few tries.  (And
at that age, it couldn't understand the explanation anyway!)

        SHRDLU would have to be taken down, and given another "rule".
SHRDLU had no sense of what it is to fall down.  It had an arm, and an
eye, but only a rather contrived "sense" of its own physical identity.
It is this sense that Dreyfus sees as necessary.
---
Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 4 Nov 83 5:57:48-PST (Fri)
From: ihnp4!ihuxn!ruffwork @ Ucb-Vax
Subject: RE:intelligence and adaptability
Article-I.D.: ihuxn.400

I would tend to agree that it's not how a being adapts to its
environment, but how it changes the local environment to better
suit itself.

Also, I would have to say that adapting the environment
would only aid in ranking the intelligence of a being if that
action was a voluntary decision.  There are many instances
of creatures that alter their surroundings (water spiders come
to mind), but could they decide not to ???  I doubt it.

                        ...!iham1!ruffwork

------------------------------

Date: 4 Nov 83 15:36:33-PST (Fri)
From: harpo!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability
Article-I.D.: hou5d.732

Man is the toolmaker and the principal tooluser of all the living things
that we know of.  What does this mean?

Consider driving a car or skating.  When I do this, I have managed to
incorporate an external system into my own control system with its myriad
of pathways both forward and backward.

This takes place at a level below that which usually is considered to
constitute intelligent thought.  On the other hand, we can adopt external
things into our thought-model of the world in a way which no other creature
seems to be capable of.

Is there any causal relationship here?

                                        Mark Terribile
                                        DOdN

------------------------------

Date: 6 Nov 1983 20:54-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #90

        Irwin Marin's course in AI started out by asking us to define
the term 'Natural Stupidity'. I guess artificial intelligence must be
anything both unnatural and unstupid. We had a few naturally stupid
examples to work with, so we got a definition quite quickly. Naturally
stupid types were unable to adapt, unable to find new representations,
and made of flesh and bone. Artificially intelligent types were
machines designed to adapt their responses and seek out more accurate
representations of their environment and themselves. Perhaps this would
be a good 'working' definition. At any rate, definitions are only
'working' if you work with them. If you can work with this one I
suggest you go to it and stop playing with definitions.
                FC

------------------------------

End of AIList Digest
********************
 7-Nov-83 13:20:30-PST,15167;000000000001
Mail-From: LAWS created at  7-Nov-83 13:19:17
Date: Monday, November 7, 1983 1:11PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #93
To: AIList@SRI-AI


AIList Digest            Tuesday, 8 Nov 1983       Volume 1 : Issue 93

Today's Topics:
  Implementations - Lisp for MV8000,
  Expert Systems - Troubleshooting & Switching Systems,
  Alert - IEEE Spectrum,
  Fifth Generation - Stalking The Gigalip,
  Intelligence - Theoretical Speed,
  Humor - Freud Reference,
  Metadiscussion - Wittgenstein Quote,
  Seminars - Knowledge Representation & Logic Programming,
  Conferences - AAAI-84 Call for Papers
----------------------------------------------------------------------

Date: Tue, 1 Nov 83 16:51:42 EST
From: Michael Fischer <Fischer@YALE.ARPA>
Subject: Lisp for MV8000

The University of New Haven is looking for any version of Lisp that
runs on a Data General MV8000, or for a portable Lisp written in Fortran
or Pascal that could be brought up in a short time.

Please reply to me by electronic mail and I will bring it to their
attention, or contact Alice Fischer directly at (203) 932-7069.

                           --  Michael Fischer <Fischer@YALE.ARPA>

------------------------------

Date: 5 Nov 83 21:31:57-EST (Sat)
From: decvax!microsoft!uw-beaver!tektronix!tekig1!sal @ Ucb-Vax
Subject: Expert systems for troubleshooting
Article-I.D.: tekig1.1442

I am in the process of evaluating the feasibility of developing expert
systems for troubleshooting instruments and functionally complete
circuit boards.  If anyone has had any experience in this field or has
seen a similar system, please get in touch with me either through the
net or call me at 503-627-3678 during 8:00am - 6:00pm PST.  Thanks.

                                    Salahuddin Faruqui
                                    Tektronix, Inc.
                                    Beaverton, OR 97007.

------------------------------

Date: 4 Nov 83 17:20:42-PST (Fri)
From: ihnp4!ihuxl!pvp @ Ucb-Vax
Subject: Looking for a rules based expert system.
Article-I.D.: ihuxl.707

I am interested in obtaining a working version of a rule based
expert system, something on the order of RITA, ROSIE, or EMYCIN.
I am interested in the knowledge and inference control structure,
not an actual knowledge base. The application would be in the
area of switching system maintenance and operation.

I am in the 5ESS(tm) project, and so prefer a Unix based product,
but I would be willing to convert a different type if necessary.
An internal BTL product would be desirable, but if anyone knows
about a commercially available system, I would be interested in
evaluating it.

Thanks in advance for your help.

                Philip Polli
                BTL Naperville
                IX 1F-474
                (312) 979-0834
                ihuxl!pvp

------------------------------

Date: Mon 7 Nov 83 09:50:29-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Spectrum Alert

The November issue of IEEE Spectrum is devoted to the 5th Generation.
In addition to the main survey (which includes some very detailed tables
about sources of funding), there are:

  A review of Feigenbaum and McCorduck's book, by Mark Stefik.

  A glossary (p. 39) of about 25 AI and CS terms, taken from
  Gevarter's Overview of AI and Robotics for NASA.

  Announcement (p. 126) of The Artificial Intelligence Report, a
  newsletter for people interested in AI but not engaged in research.
  It will begin in January; no price is given.  Contact Artificial
  Intelligence Publications, 95 First St., Los Altos, CA  94022,
  (415) 949-2324.

  Announcement (p. 126) of a tour of Japan for those interested in
  the 5th Generation effort.

  Brief discussion (p. 126) of Art and Computers: The First Artificial-
  Intelligence Coloring Book, a set of line drawings by an artist-taught
  rule-based system.

  An interesting parable (p. 12) for those who would educate the public
  about AI or any other topic.

                                        -- Ken Laws

------------------------------

Date: 5-Nov-83 10:41:44-CST (Sat)
From: Overbeek@ANL-MCS (Overbeek)
Subject: Stalking The Gigalip

                 [Reprinted from the Prolog Digest.]

E. W. Lusk and I recently wrote a short note concerning attempts
to produce high-speed Prolog machines.  I apologize for perhaps
restating the obvious in the introduction.  In any event we
solicit comments.


                              Stalking the Gigalip

                                   Ewing Lusk

                                Ross A. Overbeek

                   Mathematics and Computer Science Division
                          Argonne National Laboratory
                            Argonne, Illinois 60439


          1.  Introduction

               The Japanese have recently established the goal of
          producing a machine capable of between 10 million and 1
          billion logical inferences per second (where a logical
          inference corresponds to a Prolog procedure invocation).
          The motivating belief is that logic programming unifies
          many significant areas of computer science, and that
          expert systems based on logic programming will be the
          dominant application of computers in the 1990s.  A number
          of countries have at least considered attempting to
          compete with the Japanese in the race to attain a machine
          capable of such execution rates.  The United States
          funding agencies have definitely indicated a strong desire
          to compete with the Japanese in the creation of such a
          logic engine, as well as in the competition to produce
          supercomputers that can deliver at least two orders of
          magnitude improvement (measured in megaflops) over current
          machines.  Our goal in writing this short note is to offer
          some opinions on how to go about creating a machine that
          could execute a gigalip.  It is certainly true that the
          entire goal of creating such a machine should be subjected
          to severe criticism.  Indeed, we feel that a majority of
          people in the AI research community consider it (at best)
          a misguided effort.  Rather than entering this debate, we
          shall concentrate solely on discussing an approach to the
          goal.  In our opinion a significant component of many of
          the proposed responses by researchers in the United States
          is based on the unstated assumption that the goal itself
          is not worth pursuing, and that the benefits will accrue
          from additional funding to areas in AI that only minimally
          impinge on the stated objective.

[ This paper is available on {SU-SCORE} as:

       PS:<Prolog>ANL-LPHunting.Txt

  There is a limited supply of hard copies that
  can be mailed to those with read-only access
  to this newsletter  -ed ]

------------------------------

Date: Monday, 7 November 1983 12:03:23 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Intelligence; theoretical speed

        Not to stir this up again, but around here, some people like the
definition that intelligence is "knowledge brought to bear to solve
problems".  This indicates that you need knowledge, ways of applying it, and
a concept of a "problem", which implies goals.  One problem with measuring
human "IQ"s is that you almost always end up measuring (at least partly) how
much knowledge someone has, and what culture they're part of, as well as the
pure problem solving capabilities (if any such critter exists).

        As for the theoretical speed of processing, the speed of light is a
theoretical limit on the propagation of information (!), not just matter, so
the maximum theoretical cycle speed of a processor with a one foot long
information path (mighty small) is a nanosecond (not too fast!).  So the
question is, what is the theoretical limit on the physical size of a
processor?  (Or, how do you build a transistor out of three atoms?)
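The one-foot figure is easy to check; the sketch below assumes nothing beyond
dividing the path length by the speed of light:

```python
# Checking the one-foot figure: information cannot cross the
# processor's longest signal path faster than light, so path length
# divided by c lower-bounds the cycle time.

C = 299_792_458          # speed of light in vacuum, m/s
FOOT = 0.3048            # one foot in meters

cycle_time = FOOT / C    # seconds for a signal to cross one foot
print(cycle_time * 1e9)  # roughly 1.02 nanoseconds
```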

------------------------------

Date: 4 Nov 83 7:01:30-PST (Fri)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Humor
Article-I.D.: pyuxss.196

[Semi-Summary of Halting Problem Disc]
must have been some kind of joke.  Sigmund's book is a real layman's
thing, and in it he asserts that the joke
    a: where are you going?
    b: MINSKY
    a: you said "minsky" so I'd think you are going to "pinsky".  I
       happen to know you are going to "minsky" so what's the use in lying?
is funny.
                                aaron werman pyuxi!pyuxss!aaw

------------------------------

Date: 05 Nov 83  1231 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Inscrutable Intelligence

On useless discussions - one more quote by Wittgenstein:
        Wovon man nicht sprechen kann, darueber muss man schweigen.
        (Whereof one cannot speak, thereof one must be silent.)

------------------------------

Date: 05 Nov 83  0910 PST
Date: Fri, 4 Nov 83 19:28 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar

Due to the overwhelming response to my announcement and the need to
find a bigger room, the first meeting is postponed to Dec. 9,
10:00am.

Moshe Vardi

------------------------------

Date: Thu, 3 Nov 1983  22:50 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: SEMINAR

               [Forwarded by SASW@MIT-MC.]


        Date:  Thursday, November 10, 1983   3:30 P.M.
        Place: NE43 8th floor Playroom
        Title: "Some Fundamental Limitations of Logic Programming"
        Speaker: Carl Hewitt

Logic Programming has been proposed by some as the universal
programming paradigm for the future.  In this seminar I will discuss
some of the history of the ideas behind Logic Programming and assess
its current status.  Since many of the problems with current Logic
Programming Languages such as Prolog will be solved, it is not fair to
base a critique of Logic Programming by focusing on the particular
limitations of languages like Prolog.  Instead I will focus discussion
on limitations which are inherent in the enterprise of attempting to
use logic as a programming language.

------------------------------

Date: Thu 3 Nov 83 10:44:08-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: AAAI-84 Call for Papers


                          CALL FOR PAPERS


                              AAAI-84


        The 1984 National Conference on Artificial Intelligence

   Sponsored by the American Association for Artificial Intelligence
     (in cooperation with the Association for Computing Machinery)

                 University of Texas, Austin, Texas

                         August 6-10, 1984

AAAI-84 is the fourth national conference sponsored by the American
Association for Artificial Intelligence.  The purpose of the conference
is to promote scientific research of the highest caliber in Artificial
Intelligence (AI), by bringing together researchers in the field and by
providing a published record of the conference.


TOPICS OF INTEREST

Authors are invited to submit papers on substantial, original, and
previously unreported research in any aspect of AI, including the
following:

AI and Education                        Knowledge Representation
     (including Intelligent CAI)        Learning
AI Architectures and Languages          Methodology
Automated Reasoning                        (including technology transfer)
     (including automatic program-      Natural Language
      ming, automatic theorem-proving,      (including generation,
      commonsense reasoning, planning,       understanding)
      problem-solving, qualitative      Perception (including speech, vision)
      reasoning, search)                Philosophical and Scientific
Cognitive Modelling                                Foundations
Expert Systems                          Robotics



REQUIREMENTS FOR SUBMISSION

Timetable:  Authors should submit five (5) complete copies of their
papers (hard copy only---we cannot accept on-line files) to the AAAI
office (address below) no later than April 2, 1984.  Papers received
after this date will be returned unopened.  Notification of acceptance
or rejection will be mailed to the first author (or designated
alternative) by May 4, 1984.

Title page:  Each copy of the paper should have a title page (separate
from the body of the paper) containing the title of the paper, the
complete names and addresses of all authors, and one topic from the
above list (and subtopic, where applicable).

Paper body:  The authors' names should not appear in the body of the
paper.  The body of the paper must include the paper's title and an
abstract.  This part of the paper must be no longer than thirteen (13)
pages, including figures but not including bibliography.  Pages must be
no larger than 8-1/2" by 11", double-spaced (i.e., no more than
twenty-eight (28) lines per page), with text no smaller than standard
pica type (i.e., at least 12 pt. type).  Any submission that does not
conform to these requirements will not be reviewed.  The publishers will
allocate four pages in the conference proceedings for each accepted
paper, and will provide additional pages at a cost to the authors of
$100.00 per page over the four page limit.

Review criteria:  Each paper will be stringently reviewed by experts in
the area specified as the topic of the paper.  Acceptance will be based
on originality and significance of the reported research, as well as
quality of the presentation of the ideas.  Proposals, surveys, system
descriptions, and incremental refinements to previously published work
are not appropriate for inclusion in the conference.  Applications
clearly demonstrating the power of established techniques, as well as
thoughtful critiques and comparisons of previously published material
will be considered, provided that they point the way to new research in
the field and are substantive scientific contributions in their own
right.


Submit papers and                     Submit program suggestions
   general inquiries to:                    and inquiries to:

American Association for              Ronald J. Brachman
    Artificial Intelligence           AAAI-84 Program Chairman
445 Burgess Drive                     Fairchild Laboratory for
Menlo Park, CA  94025                    Artificial Intelligence Research
(415) 328-3123                        4001 Miranda Ave., MS 30-888
AAAI-Office@SUMEX                     Palo Alto, CA  94304
                                      Brachman@SRI-KL

------------------------------

End of AIList Digest
********************
 9-Nov-83 13:44:17-PST,15644;000000000001
Mail-From: LAWS created at  9-Nov-83 13:41:24
Date: Wednesday, November 9, 1983 1:34PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #94
To: AIList@SRI-AI


AIList Digest           Wednesday, 9 Nov 1983      Volume 1 : Issue 94

Today's Topics:
  Metaphysics - Functionalism vs Dualism,
  Ethics - Implications of Consciousness,
  Alert - Turing Biography,
  Theory - Parallel vs. Sequential & Ultimate Speed,
  Intelligence - Operational Definitions
----------------------------------------------------------------------

Date: Mon 7 Nov 83 18:30:07-PST
From: WYLAND@SRI-KL.ARPA
Subject: Functionalism vs Dualism in consciousness

        The argument of functionalism versus dualism is
unresolvable because the models are based on different,
complementary paradigms:

      * The functionalism model is based on the reductionist
    approach, the approach of modern science, which explains
    phenomena by logically relating them to controlled,
    repeatable, publicly verifiable experiments.  The
    explanations about falling bodies and chemical reactions are
    in this category.

      * The dualism model is based on the miraculous approach,
    which explains phenomena as singular events, which are by
    definition not controlled, not repeatable, not verifiable,
    and not public - i.e., the events are observed by a specific
    individual or group.  The existence of UFOs, parapsychology,
    and the existence of externalized consciousness (i.e., the soul)
    are in this category.

        These two paradigms are the basis of the argument of
Science versus Religion, and are not resolvable EITHER WAY.  The
reductionist model, based on the philosophy of Parmenides and
others, assumes a constant, unchanging universe which we discover
through observation.  Such a universe is, by definition,
repeatable and totally predictable: the concept that we could
know the total future if we knew the position and velocity of all
particles derives from this.  The success of Science at
predicting the future is used as an argument for this paradigm.

        The miraculous model assumes the reality of change, as
put forth by Heraclitus and others.  It allows reality to be
changed by outside forces, which may or may not be knowable
and/or predictable.  Changes caused by outside forces are, by
definition, singular events not caused by the normal chains of
causality.  Our personal consciousness and (by extension,
perhaps) the existence of life in the universe are singular
events (as far as we know), and the basic axioms of any
reductionist model of the universe are, by definition,
unexplainable because they must come from outside the system.

        The argument of functionalism versus dualism is not
resolvable in a final sense, but there are some working rules we
can use after considering both paradigms.  Any definition of
intelligence, consciousness (as opposed to Consciousness), etc.
has to be based on the reductionist model: it is the only way we
can explain things in such a manner that we can predict results
and prove theories.  On the other hand, the concept that all
sources of consciousness are mechanical is a religious position: a
categorical assumption about reality.  It was not that long ago
that science said that stones do not fall from the sky; all
it would take to make UFOs accepted as fact would be for one to
land and set up shop as a merchant dealing in rugs and spices
from Aldebaran and Vega.

------------------------------

Date: Tuesday, 8 November 1983 14:24:55 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Ethics and Definitions of Consciousness

        Actually, I believe you'll find that slavery has existed both with
and without believing that the slave had a soul.  In many ancient societies
slaves were of identically the same stock as yourself, they had just run
into serious economic difficulties.  As I recall, slavery of the blacks in
the U.S. wasn't justified by their not having souls, but by claiming they
were better off (or similar drivel).  The fact that denying other people had
souls was used at some time to justify it doesn't bother me, since all kinds
of other rationalizations have been used.

        Now we are approaching the time when we will have intelligent
mechanical slaves.  Are you advocating that it should be illegal to own
robots that can pass the Turing (or other similar) test?  I think that a
very important thing to consider is that we can probably make a robot really
enjoy being a slave, by setting up the appropriate top-level goals.  Should
this be illegal?  I think not.  Suppose we reach the point where we can
alter fetuses (see "Brave New World" by Aldous Huxley) to the point where
they *really* enjoy being slaves to whoever buys them.  Should this be
illegal?  I think so.  What if we build fetuses from scratch?  Harder to
say, but I suspect this should be illegal.

        The most conservative (small "c") approach to the problem is to
grant human rights to anything that *might* qualify as intelligent.  I think
this would be a mistake, unless you allow biological organisms a distinction
as outlined above.  The next most conservative approach seems to me to leave
the situation where it is today: if it is physically an independent human
life, it has legal rights.

------------------------------

Date: 8 Nov 1983 09:26-EST
From: Jon.Webb@CMU-CS-IUS.ARPA
Subject: parallel vs. sequential

Parallel and sequential machines are not equivalent, even in abstract
models.  For example, an abstract parallel machine can generate truly
random numbers by starting two processes at the same time, which are
identical except that one sends the main processor a "0" and the other
sends a "1". The main processor accepts the first number it receives.
A Turing machine can generate only pseudo-random numbers.
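The race Webb describes can be sketched in a few lines of (modern, obviously post-1983) Python: two otherwise-identical threads each try to deliver their bit, and the main thread accepts whichever arrives first.  This is purely illustrative -- the outcome is decided by OS scheduling, which in practice is heavily biased rather than uniformly random, but it is nondeterminism of the physical machine rather than a pseudo-random algorithm:

```python
import threading
import queue

def race_bit() -> int:
    """Return 0 or 1 depending on which of two otherwise-identical
    threads reaches the shared queue first.  The winner is decided by
    OS scheduling (real nondeterminism), not by any pseudo-random
    algorithm -- though the outcome is typically far from unbiased."""
    q = queue.Queue()
    workers = [threading.Thread(target=q.put, args=(bit,)) for bit in (0, 1)]
    for w in workers:
        w.start()
    winner = q.get()          # first value delivered wins the race
    for w in workers:
        w.join()
    return winner

if __name__ == "__main__":
    print([race_bit() for _ in range(10)])
```

In practice the thread started first nearly always wins, which illustrates why a race gives nondeterminism but not, by itself, a fair coin.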

However, I do not believe a parallel machine is more powerful (in the
formal sense) than a Turing machine with a true random-number
generator.  I don't know of a proof of this, but it sounds like
something on which work has been done.

Jon

------------------------------

Date: Tuesday, 8-Nov-83  18:33:07-GMT
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Ultimate limit on computing speed

--------
    There was a short letter about this in CACM about 6 or 7 years ago.
I haven't got the reference, but the argument goes something like this.

1.  In order to compute, you need a device with at least two states
    that can change from one state to another.
2.  Information theory (or quantum mechanics or something, I don't
    remember which) shows that any state change must be accompanied
    by a transfer of at least so much energy (a definite figure was
    given).
3.  Energy contributes to the stress-energy tensor just like mass and
    momentum, so the device must be at least so big or it will undergo
    gravitational collapse (again, a definite figure).
4.  It takes light so long to cross the diameter of the device, and
    this is the shortest possible delay before we can definitely say
    that the device is in its new state.
5.  Therefore any physically realisable device (assuming the validity
    of general relativity, quantum mechanics, information theory ...)
    cannot switch faster than (again a definite figure).  I think the
    final figure was 10^-43 seconds, but it's been a long time since
    I read the letter.
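The letter's actual figures are not reproduced here, so as a consistency check only: the ~10^-43 s figure O'Keefe half-remembers is close to the Planck time, sqrt(hbar*G/c^5) ~ 5.4 x 10^-44 s, which is exactly the light-crossing time of a device at the gravitational scale sketched in steps 3-4.  A quick computation from the standard constants:

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Planck time: light-crossing time of a device at the
# gravitational-collapse size limit of steps 3-4 above.
t_planck = math.sqrt(hbar * G / c**5)

print(f"Planck time ~ {t_planck:.2e} s")   # ~5.39e-44 s
```

That lands within a factor of two of the remembered 10^-43 s, which is as close as a back-of-the-envelope argument of this kind can be expected to get.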


     I have found the discussion of "what is intelligence" boring,
confused, and unhelpful.  If people feel unhappy working in AI because
we don't have an agreed definition of the I part (come to that, do we
*really* have an agreed definition of the A part either?  if we come
across a planet inhabited by metallic creatures with CMOS brains that
were produced by natural processes, should their study belong to AI
or xenobiology, and does it matter?) why not just change the name of
the field, say to "Epistemics And Robotics".  I don't give a tinker's
curse whether AI ever produces "intelligent" machines; there are tasks
that I would like to see computers doing in the service of humanity
that require the representation and appropriate deployment of large
amounts of knowledge.  I would be just as happy calling this AI, MI,
or EAR.

     I think some of the contributors to this group are suffering from
physics envy, and don't realise what an operational definition is.  It
is a definition which tells you how to MEASURE something.  Thus length
is operationally defined by saying "do such and such.  Now, length is
the thing that you just measured."  Of course there are problems here:
no amount of operational definition will justify any connection between
"length-measured-by-this-foot-rule-six-years-ago" and "length-measured-
by-laser-interferometer-yesterday".  The basic irrelevance is that
an operational definition of say light (what your light meter measures)
doesn't tell you one little thing about how to MAKE some light.  If we
had an operational definition of intelligence (in fact we have quite a
few, and like all operational definitions, nothing to connect them) there
is no reason to expect that to help us MAKE something intelligent.

------------------------------

Date: 7 Nov 83 20:50:48 PST (Monday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Turing biography

Finally, there is a major biography of Alan Turing!

        Alan Turing: The Enigma
        by Andrew Hodges
        $22.50  Simon & Schuster
        ISBN 0-671-49207-1

The timing is right:  His war-time work on the Enigma has now been
de-classified.  His rather open homosexuality can be discussed in other
than damning terms these days.  His mother passed away in 1976. (She
maintained that his death in 1954 was not suicide, but an accident, and
she never mentioned his sexuality nor his 1952 arrest.)  And, of course,
the popular press is full of stories on AI, and they always bring up the
Turing Test.

The book is 529 pages, plus photographs, some diagrams, an author's note
and extensive bibliographic footnotes.

Doug Hofstadter's review of the book will appear in the New York Times
Book Review on November 13.

--Rodney Hoffman

------------------------------

Date: Mon,  7 Nov 83 15:40:46 CST
From: Robert.S.Kelley <kelleyr.rice@Rand-Relay>
Subject: Operational definitions of intelligence

  p.s.  I can't imagine that psychology has no operational definition of
  intelligence (in fact, what is it?).  So, if worst comes to worst, AI
  can just borrow psychology's definition and improve on it.

     Probably the most generally accepted definition of intelligence in
psychology comes from Abraham Maslow's remark (here paraphrased) that
"Intelligence is that quality which best distinguishes such persons as
Albert Einstein and Marie Curie from the inhabitants of a home for the
mentally retarded."  A poorer definition is that intelligence is what
IQ tests measure.  In fact psychologists have sought without success
for a more precise definition of intelligence (or even learning) for
over 100 years.
                                Rusty Kelley
                                (kelleyr.rice@RAND-RELAY)

------------------------------

Date: 7 Nov 83 10:17:05-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Inscrutable Intelligence
Article-I.D.: ecsvax.1488

I sympathize with the longing for an "operational definition" of
'intelligence'--especially since you've got to write *something* on
grant applications to justify all those hardware costs.  (That's not a
problem we philosophers have.  Sigh!)  But I don't see any reason to
suppose that you're ever going to *get* one, nor, in the end, that you
really *need* one.

You're probably not going to get one because "intelligence" is
one of those "open-textured", "clustery" kinds of notions.  That is,
we know it when we see it (most of the time), but there are no necessary and
sufficient conditions that one can give in advance which instances of it
must satisfy.  (This isn't an uncommon phenomenon.  As my colleague Paul Ziff
once pointed out, when we say "A cheetah can outrun a man", we can recognize
that races between men and *lame* cheetahs, *hobbled* cheetahs, *three-legged*
cheetahs, cheetahs *running on ice*, etc. don't count as counterexamples to the
claim even if the man wins--when such cases are brought up.  But we can't give
an exhaustive list of spurious counterexamples *in advance*.)

Why not rest content with saying that the object of the game is to get
computers to be able to do some of the things that *we* can do--e.g.,
recognize patterns, get a high score on the Miller Analogies Test,
carry on an interesting conversation?  What one would like to say, I
know, is "do some of the things we do *the way we do them*"--but the
problem there is that we have no very good idea *how* we do them.  Maybe
if we can get a computer to do some of them, we'll get some ideas about
us--although I'm skeptical about that, too.

                        --Jay Rosenberg (ecsvax!unbent)

------------------------------

Date: Tue, 8 Nov 83 09:37:00 EST
From: ihnp4!houxa!rem@UCLA-LOCUS


THE MUELLER MEASURE

If an AI could be built to answer all questions we ask it to assure us
that it is ideally human (the Turing Test), it ought to
be smart enough to figure out questions to ask itself
that would prove that it is indeed artificial.  Put another
way: If an AI could make humans think it is smarter than
a human by answering all questions posed to it in a
Turing-like manner, it still is dumber than a human because
it could not ask questions of a human to make us answer
the questions so that it satisfies its desire for us to
make it think we are more artificial than it is.  Again:
If we build an AI so smart it can fool other people
by answering all questions in the Turing fashion, can
we build a computer, anti-Turing-like, that could make
us answer questions to fool other machines
into believing we are artificial?

Robert E. Mueller, Bell Labs, Holmdel, New Jersey

houxa!rem

------------------------------

Date: 9 November 1983 03:41 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

    . . .
    I know the entity at the other end of the line is not a computer
    (because they recognize my voice -- someone correct me if this is not a
    good test) but we might ask: how good would a computer program have to
    be to fool someone into thinking that it is human, in this limited case?

    [There is a system, in use, that can recognize affirmative and negative
    replies to its questions.
    . . .  -- KIL]

No, I always test these callers by interrupting to ask them questions,
by restating what they said to me, and by avoiding "yes/no" responses.

It appears to me that the extremely limited domain, and the utter lack of
expertise which people expect from the caller, would make it very easy to
simulate a real person.  Does the fact of a limited domain "disguise"
the intelligence of the caller, or does it imply that intelligence means
a lot less in a limited domain?

-- Steve

------------------------------

End of AIList Digest
********************
 9-Nov-83 17:18:10-PST,17376;000000000001
Mail-From: LAWS created at  9-Nov-83 17:14:54
Date: Wednesday, November 9, 1983 5:08PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #95
To: AIList@SRI-AI


AIList Digest           Thursday, 10 Nov 1983      Volume 1 : Issue 95

Today's Topics:
  Alert - Hacker's Dictionary,
  Conference - Robotic Intelligence and Productivity,
  Tutorial - Machine Translation,
  Report - AISNE meeting
----------------------------------------------------------------------

Date: 8 Nov 1983 1215:19-EST
From: Lawrence Osterman <OSTERMAN@CMU-CS-C.ARPA>
Subject: Guy Steele's

                  [Reprinted from the CMU-C bboard.]

New book is now out.
  The Hacker's Dictionary, available in the CMU Bookstore
right now.  The cost is 5.95 (6.31 after taxes) and it's well
worth getting.  (It includes (among other things) the COMPLETE
INTERCAL character set (ask anyone in 15-312 last fall),
Trash 80, N, Moby, and many others (El Camino Bignum?))


                        Larry

[According to another message, the CMU bookstore immediately
sold out.  -- KIL]

------------------------------

Date: 7 Nov 1983 1127-PST
From: MEDIONI@USC-ECLC
Subject: Conference announcement


        ******  CONFERENCE ANNOUNCEMENT  ******

   ROBOTIC INTELLIGENCE AND PRODUCTIVITY CONFERENCE

        WAYNE STATE UNIVERSITY, DETROIT, MICHIGAN

                 NOVEMBER 18-19, 1983

For more information and advance program, please contact:

Dr Pepe Siy
(313) 577-3841
(313) 577-3920 - Messages

or Dr Singh
(313) 577-3840

------------------------------

Date: Tue 8 Nov 83 10:06:34-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Announcement

[The following is copied from a circular, with the author's encouragement.
 Square brackets delimit my personal insertions, for clarification. -- JS]


        THE INSTITUT DALLE MOLLE POUR LES ETUDES SEMANTIQUES ET
       COGNITIVES DE L'UNIVERSITE DE GENEVE ("ISSCO") is to hold

                             a Tutorial on

                          MACHINE TRANSLATION

   from Monday 2nd April to Friday 6th, 1984, in Lugano, Switzerland


The attraction of Machine Translation as an application domain for
computers has long been recognized, but pioneers in the field seriously
underestimated the complexity of the problem.  As a result, early
systems were severely limited.

The design of more recent systems takes into account the
interdisciplinary nature of the task, recognizing that MT involves the
construction of a complete system for the collection, representation,
and strategic deployment of a specialised kind of linguistic knowledge.
This demands contribution from the fields of both theoretical and
computational linguistics, computer science, and expert system design.

The aim of this tutorial is to convey the state of the art by allowing
experts in different aspects of MT to present their particular points of
view.  Sessions covering the historical development of MT and its
possible future evolution will also be included to provide a tutorial
which should be relevant to all concerned with the relationship between
natural language and computer science.

The Tutorial will take place in the Palazzo dei Congressi or the Villa
Heleneum, both set in parkland on the shore of Lake Lugano, which is
perhaps the most attractive among the lakes of the Swiss/Italian Alps.
Situated to the south of the Alpine massif, Lugano enjoys an early, warm spring.
Participants will be accommodated in nearby hotels.  Registration will
take place on the Sunday evening preceding the Tutorial.


COSTS: Fees for registration submitted by January 31, 1984, will be 120
Swiss francs for students, 220 Swiss francs for academic participants,
and 320 Swiss francs for others.  After this date the fees will increase
by 50 Swiss francs for all participants.  The fees cover tuition,
handouts, coffee, etc.  Hotel accommodation varies between 30 and 150
Swiss francs per night [booking form available, see below].  It may be
possible to arrange cheaper [private] accommodation for students.

FOR FURTHER INFORMATION [incl. booking forms, etc.] (in advance of the
Tutorial) please contact ISSCO, 54 route des Acacias, CH-1227 Geneva; or
telephone [41 for Switzerland] (22 for Geneva) 20-93-33 (University of
Geneva), extension ("interne") 21-16 ("vingt-et-un-seize").  The
University switchboard is closed daily from 12 to 1:30 Swiss time.
[Switzerland is six (6) hours ahead of EST, thus 9 hours ahead of PST.]

------------------------------

Date: Tue 8 Nov 83 10:59:12-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Program

                         PROVISIONAL PROGRAMME


Each session is scheduled to include a 50-minute lecture followed by a
20-minute discussion period.  Most evenings are left free, but rooms
will be made available for informal discussion, poster sessions, etc.

Sun. 1st   5 p.m. to 9 p.m.  Registration

Mon. 2nd   9:30  Introductory session                   M. King [ISSCO]

          11:20  A non-conformist's view of the         G. Sampson [Lancaster]
                 state of the art
           2:30  Pre-history of Machine Translation     B. Buchmann [ISSCO]

           4:20  SYSTRAN                                P. Wheeler [Commission
                                                           of the European
                                                           Communities]

Tue. 3rd   9:30  An overview of post-65 developments    E. Ananiadou [ISSCO]
                                                        S. Warwick [ISSCO]
          11:20  Software for MT I: background          J.L. Couchard [ISSCO]
                                                        D. Petitpierre  [ISSCO]
           2:30  SUSY                                   D. Maas [Saarbruecken]

           4:20  TAUM Meteo and TAUM Aviation           P. Isabelle [Montreal]

Wed. 4th   9:30  Linguistic representations in          A. De Roeck [Essex]
                 syntax based MT systems
          11:00  AI approaches to MT                    P. Shann [ISSCO]

          12:00  New developments in Linguistics        E. Wehrli [UCLA]
                 and possible implications for MT
           3:00  Optional excursion

Thu. 5th   9:30  GETA                                   C. Boitet [Grenoble]

          11:20  ROSETTA                                J. Landsbergen [Philips]

           2:30  Software for MT II:                    R. Johnson [Manchester]
                 some recent developments               M. Rosner [ISSCO]
           4:20  Creating an environment for            A. Melby [Brigham Young]
                 the translator
Fri. 6th   9:30  METAL                                  J. Slocum [Texas]

          11:20  EUROTRA                                M. King [ISSCO]

           2:30  New projects in France                 C. Boitet [Grenoble]

           4:20  MT - the future                        A. Zampolli [Pisa]

           5:30  Closing session


There will be a 1/2 hour coffee break between sessions.  The lunch break
is from 12:30 to 2:30.

------------------------------

Date: Mon, 7 Nov 83 14:01 EST
From: Visions <kitchen%umass-cs@CSNet-Relay>
Subject: Report on AISNE meeting (long message)


                        BRIEF REPORT ON
                FIFTH ANNUAL CONFERENCE OF THE
                   AI SOCIETY OF NEW ENGLAND

Held at Brown University, Providence, Rhode Island, 4th-5th November 1983.
Programme Chairman: Drew McDermott (Yale)
Local Arrangements Chairman: Eugene Charniak (Brown)


Friday, 4th November

8:00PM
Long talk by Harry Pople (Pittsburgh), "Where is the expertise in
expert systems?"  Comments and insights about the general state of
work in expert systems.  INTERNIST: history, structure, and example.

9:30PM
"Intense intellectual colloquy and tippling" [Quoted from programme]

LATE
Faculty and students at Brown very hospitably billeted us visitors
in their homes.


Saturday, 5th November

10:00AM
Panel discussion, Ruven Brooks (ITT), Harry Pople (Pittsburgh), Ramesh
Patil (MIT), Paul Cohen (UMass), "Feasible and infeasible expert-systems
applications".  [Unabashedly selective and incoherent notes:]  RB: Expert
systems have to be relevant, and appropriate, and feasible.  There are
by-products of building expert systems, for example, the encouragement of
the formalization of the problem domain.  HP: Historically, considering
DENDRAL and MOLGEN, say, users have ultimately made greater use of the
tools and infrastructure set up by the designers than of the top-level
capabilities of the expert system itself.  The necessity of taking into
account the needs of the users.  RP:  What is an expert system?  Is
MACSYMA no more than a 1000-key pocket calculator?  Comparison of expert
systems against real experts.  Expert systems that actually work --
narrow domains in which hypotheses can easily be verified.  What if the
job of identifying the applicability of an expert system is a harder
problem than the one the expert system itself solves?  In the domains of
medical diagnosis: enormous space of diagnoses, especially if multiple
disorders are considered.  Needed: reasoning about: 3D space, anatomy;
time; multiple disorders, causality; demography; physiology; processes.
HP: A strategic issue in research: small-scale, tractable problems that
don't scale up.  Is there an analogue of Blocksworld?  PC: Infeasible
(but not too infeasible) problems are fit material for research; feasible
problems for development.  The importance of theoretical issues in choosing
an application area for research.  An animated, general discussion followed.

11:30AM
Short talks:
Richard Brown (Mitre), Automatic programming.  Use of knowledge about
programming and knowledge about the specific application domain.
Ken Wasserman (Columbia), "Representing complex physical objects".  For
use in a system that digests patent abstracts.  Uses frame-like
representation, giving parts, subparts, and the relationships between them.
Paul Barth (Schlumberger-Doll), Automatic programming for drilling-log
interpretation, based on a taxonomy of knowledge sources, activities, and
corresponding transformation and selection operations.
Malcolm Cook (UMass), Narrative summarization.  Goal orientations of the
characters and the interactions between them.  "Affect state map".
Extract recognizable patterns of interaction called "plot units".  Summary
based on how these plot units are linked together.  From this summary
structure a natural-language summary of the original can be generated.

12:30PM
Lunch, during which Brown's teaching lab, equipped with 55 Apollos,
was demonstrated.

2:00PM
Panel discussion, Drew McDermott (Yale), Randy Ellis (UMass), Tomas
Lozano-Perez (MIT), Mallory Selfridge (UConn), "AI and Robotics".
DMcD contemplated the effect that the realization of a walking, talking,
perceiving robot would have on AI.  He remarked how current robotics
work does entail a lot of AI, but that there is necessary,
robotics-specific ground-work (like matrices, a code-word for "much mathematics").
All the other panelists had a similar view of this inter-relation between
robotics and AI.  The other panelists then sketched robotics work being
done at their respective institutions.  RE:  Integration of vision and
touch, using a reasonable world model, some simple planning, and feedback
during the process.  Cartesian robot, gripper, Ken Overton's tactile array
sensor (force images), controllable camera, Salisbury hand.  Need for AI
in robotics, especially object representation and search.  Learning -- a
big future issue for a robot that actually moves about in the world.
Problems of implementing algorithms in real time.  For getting started in
robotics: kinematics, materials science, control theory, AI techniques,
but how much of each depends on what you want to do in robotics.  TL-P:
A comparatively lengthy talk on "Automatic synthesis of fine motion
strategies", best exemplified by the problem of putting a peg into a hole.
Given the inherent uncertainty in all positions and motions, the best
strategy (which we probably all do intuitively) is to aim the peg just to
one side of the hole, sliding it across into the hole when it hits,
grazing the far side of the hole as it goes down.  A method for generating
such a strategy automatically, using a formalism based on configuration
spaces, generalized dampers, and friction cones.  MS: Plans for commanding
a robot in natural language, and for describing things to it, and for
teaching it how to do things by showing it examples (from which the robot
builds an abstract description, usable in other situations).  A small, but
adequate robotics facility.  Afterwards, an open discussion, during which
was stressed how important it is that the various far-flung branches of AI
be more aware of each other, and not become insular.  Regarding robotics
research, all panelists agreed strongly that it was absolutely necessary
to work with real robot hardware; software simulations could not hope to
capture all the pernickety richness of the world, motion, forces, friction,
slippage, uncertainty, materials, bending, spatial location, at least not
in any computationally practical way.  No substitute for reality!

3:30PM
More short talks
Jim Hendler (Brown), an overview of things going on at Brown, and in the
works.  Natural language (story comprehension).  FRAIL (frame-based
knowledge representation).  NASL (problem solving).  An electronic
repair manual, which generates instructions for repairs as needed from
an internal model, hooked up with a graphics and 3D modelling system.
And in the works: expert systems, probabilistic reasoning, logic programming,
problem solving, parallel computation (in particular marker-passing and
BOLTZMANN-style machines).  Brown is looking for a new AI faculty member.
[Not a job ad, just a report of one!]
David Miller (Yale), "Uncertain planning through uncertain territory".
How to get from A to B if your controls and sensors are unreliable.
Find a path to your goal, along the path select checkpoints (landmarks),
adjust the path to go within eye-shot of the checkpoints, then off you go,
running demons to watch out for checkpoints and raise alarms if they don't
appear when expected.  This means you're lost.  Then you generate hypotheses
about where you are now (using your map), and what might have gone wrong to
get you there (based on a self-model).  Verify one (some? all?) of these
hypotheses by looking around.  Patch your plan to get back to an appropriate
checkpoint.  Verify the whole process by getting back on the beaten
track.  Apparently there's a real Hero robot that cruises about a room
doing this.
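Miller's checkpoint scheme can be caricatured in a few lines (my own
hypothetical sketch, not his Hero robot's code; `walk_to` and `navigate` are
invented names, and the world is reduced to one dimension): head toward each
checkpoint, let a demon watch for its appearance, and fall back to replanning
when it fails to show up in time.

```python
import random

# Hypothetical sketch of checkpoint-based navigation with unreliable
# actuators: walk toward each checkpoint; if it doesn't appear within
# the expected number of steps, a demon fires and we recover.

random.seed(1)

def walk_to(pos, checkpoint, max_steps):
    """Try to reach the checkpoint; each commanded step may slip."""
    for _ in range(max_steps):
        if abs(pos - checkpoint) < 1:
            return pos, True                 # landmark sighted: on track
        step = 1 if checkpoint > pos else -1
        pos += step + random.choice([-0.2, 0.0, 0.2])  # actuator noise
    return pos, False                        # demon fires: we're lost

def navigate(start, checkpoints):
    pos, log = start, []
    for cp in checkpoints:
        pos, ok = walk_to(pos, cp, max_steps=3 * int(abs(cp - pos) + 1))
        log.append((cp, ok))
        if not ok:
            # In Miller's scheme we'd hypothesize where we are (map +
            # self-model) and patch the plan; here we just re-aim.
            pos, ok = walk_to(pos, cp, max_steps=50)
    return pos, log

final, log = navigate(0.0, [5.0, 10.0, 15.0])
assert abs(final - 15.0) < 1
```

The real system's interesting content is in the recovery branch (hypothesis
generation and verification), which this sketch deliberately trivializes.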
Bud Crawley (GTE) described what was going on at GTE Labs in AI.
Knowledge-based systems.  Natural-language front-end for data bases.
Distributed intelligence.  Machine learning.
Bill Taylor (Gould Inc.), gave an idea of what applied AI research means
to his company, which (in his division) makes digital controllers for
running machines out on the factory floor.  Currently, an expert system
for repairing these controllers in the field.  [I'm not sure how far along
in being realized this was; I think very little.]  For the future, a big,
smart system that would assist a human operator in managing the hundreds
of such controllers out on the floor of a decent sized factory.
Graeme Hirst (Brown, soon Toronto), "Artificial Digestion".  Artificial
Intelligence attempts to model a very poorly understood system, the human
cognitive system.  Much more immediate and substantial results could be
obtained by modelling a much better understood system, the human digestive
system.  Examples of the behavior of a working prototype system on simulated
food input, drawn from a number of illustrative food-domains, including
a four-star French restaurant and a garbage pail.  Applications of AD:
automatic restaurant reviewing, automatic test-marketing of new food
products, and vicarious eating for the diet-conscious and orally impaired.
[Forget about expert systems; this is the hot new area for the 80's!]

4:30PM
AISNE Business Meeting (Yes, some of us stayed till the end!)
Next year's meeting will be held at Boston University.  The position of
programme chairman is still open.


A Final Remark:
All the above is based on my own notes of the conference.  At the very
least it reflects my own interests and pre-occupations.  Considering
the disorganized state of my notes, and the late hour I'm typing this,
a lot of the above may be just wrong.  My apologies to anyone I've
misrepresented; by all means correct me.  I hope the general interest of
this report to the AI community outweighs all these failings.  LJK

===========================================================================

------------------------------

End of AIList Digest
********************
14-Nov-83 08:55:06-PST,10369;000000000001
Mail-From: LAWS created at 14-Nov-83 08:54:25
Date: Monday, November 14, 1983 8:48AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #96
To: AIList@SRI-AI


AIList Digest            Monday, 14 Nov 1983       Volume 1 : Issue 96

Today's Topics:
  Theory - Parallel Systems,
  Looping Problem in Literature,
  Intelligence
----------------------------------------------------------------------

Date: 8 Nov 83 23:03:04-PST (Tue)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: Infinite loops and Turing machines.. - (nf)
Article-I.D.: uiucdcs.3712

/***** uokvax:net.ai / umcp-cs!speaker /  9:41 pm  Nov  1, 1983 */
Aha!  I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself.  Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!
/* ---------- */

But I can do it with a one-tape, one-head turing machine. Let's assume
that each of your 7 discrete sources can always be represented in n bits.
Thus, the total state of all seven sources can be represented in 7*n bits.
My one-tape turing machine has 2 ** (7*n) symbols, so it can handle your
7 sources, each possible state of all 7 being one symbol of input.

One of the things I did in an undergraduate theory course was show that
an n-symbol turing machine is no more powerful than a two-symbol turing
machine for any finite (countable?) n.  You just lose speed.
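Mike's packing argument can be made concrete with a small sketch (mine, not
from the original post; the constants and function names are illustrative):
combine the 7 n-bit readings into a single symbol drawn from an alphabet of
2**(7*n), so one tape cell per step carries the complete joint state.

```python
# Hypothetical illustration of the encoding argument: 7 discrete
# sources, each delivering an n-bit value per step, packed into one
# symbol from an alphabet of 2**(7*n) symbols, losing no information.

N_BITS = 4          # n: bits per source (assumed for illustration)
N_SOURCES = 7

def pack(readings):
    """Combine one reading from each source into a single symbol."""
    assert len(readings) == N_SOURCES
    symbol = 0
    for r in readings:
        assert 0 <= r < 2 ** N_BITS
        symbol = (symbol << N_BITS) | r
    return symbol           # one of 2**(7*n) possible symbols

def unpack(symbol):
    """Recover the 7 individual readings from the packed symbol."""
    readings = []
    for _ in range(N_SOURCES):
        readings.append(symbol & (2 ** N_BITS - 1))
        symbol >>= N_BITS
    return list(reversed(readings))

readings = [3, 0, 15, 7, 1, 9, 12]
assert unpack(pack(readings)) == readings   # round trip: nothing lost
```

Since the round trip is exact, a one-head machine reading one packed symbol
per step sees exactly what the 7-head machine sees.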

        <mike

------------------------------

Date: Friday, 11 November 1983, 14:54-EST
From: Carl Hewitt <HEWITT at MIT-AI>
Subject: parallel vs. sequential

An excellent treatise on how some parallel machines are more powerful
than all sequential machines can be found in Will Clinger's doctoral
dissertation "Foundations of Actor Semantics" which can be obtained by
sending $7 to

Publications Office
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Mass. 02139

requesting Technical Report 633 dated May 1981.

------------------------------

Date: Fri 11 Nov 83 17:12:08-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: parallelism and turing machines


Regarding the "argument" that parallel algorithms cannot be run serially
because a Turing machine cannot react to things that happen faster than
the time it needs to change states:
clearly, you need to go back to whoever sold you the Turing machine
for this purpose and get a turbocharger for it.

Seriously, I second the motion to move towards more useful discussions.

------------------------------

Date: 9 Nov 83 19:28:21-PST (Wed)
From: ihnp4!cbosgd!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: the halting problem in history
Article-I.D.: uvacs.1048


   If there were any 'subroutines' in the brain that could not
   halt... I'm sure they would have been found and bred out of
   the species long ago.  I have yet to see anyone die from
   an infinite loop. (umcp-cs.3451)

There is such.  It is caused by seeing an object called the Zahir.  One was
a Persian astrolabe, which was cast into the sea lest men forget the world.
Another was a certain tiger.  Around 1900 it was a coin in Buenos Aires.
Details in "The Zahir", J.L.Borges.

------------------------------

Date: 8 Nov 83 16:38:29-PST (Tue)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: Inscrutable Intelligence
Article-I.D.: rayssd.233

The problem with a psychological definition of intelligence is in finding
some way to make it different from what animals do, and cover all of the
complex things that humans can do.  It used to be measured by written
tests.  This was grossly unfair, so visual tests were added.  These tend to
be grossly unfair because of cultural bias. Dolphins can do very
"intelligent" things, based on types of "intelligent behavior". The best
definition might be based on the rate at which learning occurs, as some
have suggested, but that is also an oversimplification. The ability to
deduce cause and effect, and to predict effects is obviously also
important. My own feeling is that it has something to do with the ability
to build a model of yourself and modify yourself accordingly. It may
be that "I conceive" (not "I think"), or "I conceive and act", or "I
conceive of conceiving" may be as close as we can get.

------------------------------

Date: 8 Nov 83 23:02:53-PST (Tue)
From: pur-ee!uiucdcs!uokvax!rigney @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: uiucdcs.3711

Perhaps something on the order of "Intelligence enhances survivability
through modification of the environment" is in order.  By modification
something other than the mere changes brought about by living is indicated
(e.g., a rise in CO2 levels doesn't count).

Thus, if Turtles were intelligent, they would kill the baby rabbits, but
they would also attempt to modify the highway to present less of a hazard.

Problems with this viewpoint:

        1) It may be confusing Technology with Intelligence.  Still, tool
        making ability has always been a good sign.

        2) Making the distinction between Intelligent modifications and
        the effect of just being there.  Since "conscious modification"
        lands us in a bigger pit of worms than we're in now, perhaps a
        distinction should be drawn between reactive behavior (reacting
        and/or adapting to changes) and active behavior (initiating
        changes).  Initiative is therefore a factor.

        3) Monkeys make tools (ant sticks); dolphins don't.  Is this an
        indication of intelligence, or just a side-effect of Monkeys
        having hands and Dolphins not?  In other words, does Intelligence
        go away if the organism doesn't have the means of modifying
        its environment?  Perhaps "potential" ability qualifies.  Or
        we shouldn't consider specific instances.  (Is a man trapped in
        a desert still intelligent, even if he has no way to modify
        his environment?)
           Does this mean that if you had a computer with AI, and
        stripped its peripherals, it would lose intelligence?  Are
        human autistics intelligent?  Or are we only considering
        species, and not representatives of species?

In the hopes that this has added fuel to the discussion,

                Carl
                ..!ctvax!uokvax!rigney
                ..!duke!uok!uokvax!rigney

------------------------------

Date: 8 Nov 83 20:51:15-PST (Tue)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability - (nf)
Article-I.D.: uiucdcs.3746

Actually, SHRDLU had neither hand nor eye -- only simulations of them.
That's a far cry from the real thing.

------------------------------

Date: 9 Nov 83 16:20:10-PST (Wed)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: inscrutable intelligence
Article-I.D.: uvacs.1047


Regarding inscrutability of intelligence [sri-arpa.13363]:

Actually, it's typical that a discipline can't define its basic object of
study.  Ever heard a satisfactory definition of mathematics (it's not just
the consequences of set theory) or philosophy?  What is physics?

Disciplines are distinguished from each other for historical and
methodological reasons.  When they can define their subject precisely it is
because they have been superseded by the discipline that defines their
terms.

It's usually not important (or possible) to define e.g. intelligence
precisely.  We know it in humans.  This is where the IQ tests run into
trouble.  AI seems to be about behavior in computers that would be called
intelligent in humans.  Whether the machines are or are not intelligent
(or, for that matter, conscious) is of little interest and no import.  In
this I guess I agree with Rorty [sri-arpa.13322].  Rorty is willing to
grant consciousness to thermostats if it's of any help.

(Best definition of formal mathematics I know: "The science where you don't
know what you're talking about or whether what you're saying is true".)

                        A. Colvin
                        mac@virginia

------------------------------

Date: 12 Nov 83 0:37:48-PST (Sat)
From: decvax!genrad!security!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: utcsstat.1420

        The other problem with the "turtles should be killing baby
rabbits" definition of intelligence is that it seems to imply that
killing (or at least surviving) is an indication of intelligence.
I would rather not believe this, unless there is compelling evidence
that the two are related.  So far I have not seen the evidence.

Laura Creighton
utcsstat!laura

------------------------------

Date: 20 Nov 83 0:24:46-EST (Sun)
From: pur-ee!uiucdcs!trsvax!karl @ Ucb-Vax
Subject: Re: Slow Intelligence - (nf)
Article-I.D.: uiucdcs.3789



     " ....  I'm not at all sure that people's working definition
     of  intelligence  has anything at all to do with either time
     or survival.  "

                    Glenn Reid

I'm not sure that people's working definition of intelligence has
anything at all to do with ANYTHING AT ALL.  The quoted statement
implies that people's working definition of intelligence is different:
it is subjective to each individual.  I would like to claim that each
individual's working definition of intelligence is subject to change also.


What we are working with here is conceptual, not a tangible object
which we can spot in an instant.  If the object is conceptual, and
therefore subjective, then it seems that we can (and probably will)
change its definition as our collective experiences teach us
differently.


                                        Karl T. Braun
                                        ...ctvax!trsvax!karl

------------------------------

End of AIList Digest
********************
14-Nov-83 09:08:18-PST,15676;000000000001
Mail-From: LAWS created at 14-Nov-83 09:06:03
Date: Monday, November 14, 1983 8:59AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #97
To: AIList@SRI-AI


AIList Digest            Monday, 14 Nov 1983       Volume 1 : Issue 97

Today's Topics:
  Pattern Recognition - Vector Fields,
  Psychology - Defense,
  Ethics - AI Responsibilities,
  Seminars - NRL & Logic Specifications & Deductive Belief
----------------------------------------------------------------------
			
Date: Sun, 13 Nov 83 19:25:40 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Need references in field of spatial pattern recognition

        This letter to AI-LIST is a request for references from all
of you out there that are heavily into spatial pattern recognition.

        First let me explain my approach, then I'll hit you with my
request.  Optical flow and linear contrast edges have been getting a
lot of attention recently.  Utilizing this approach, I view a line
as an ordered set of [image] elements; that is, a line is composed of a
finite ordered set of elements.  Each element of a line is treated
as a directed line (a vector with direction and magnitude).

        Here's what I am trying to define:  with such a definition
of a line, it should be possible to create mappings between lines
to form fairly abstract ideas of similarity between lines.  Since
objects are viewed as a particular arrangement of lines, this analysis
would suffice to identify objects as alike.  For example, one criterion
of comparison may be the two lines possessing the most similarities (i.e.,
MAX ( LINE1 .intersection. LINE2 ) ).
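One hypothetical reading of the MAX ( LINE1 .intersection. LINE2 ) criterion
(entirely my own rendering; the vector representation and the scoring rule are
assumptions, not Kahn's): treat each line as a sequence of directed elements
and score similarity by the size of their multiset intersection.

```python
from collections import Counter

# Hypothetical sketch of comparing lines as ordered sets of directed
# elements: each line is a chain of (dx, dy) steps, and one crude
# similarity score is the size of the multiset intersection.

def similarity(line1, line2):
    """|LINE1 intersection LINE2| over element vectors, ignoring order."""
    c1, c2 = Counter(line1), Counter(line2)
    return sum((c1 & c2).values())   # Counter '&' takes elementwise min

# Two "lines" traced as chains of direction vectors:
a = [(1, 0), (1, 0), (1, 1), (0, 1)]
b = [(1, 0), (1, 1), (1, 1), (0, 1)]
assert similarity(a, b) == 3   # shares one (1,0), one (1,1), one (0,1)
```

A real system would presumably also weight element order and magnitude; this
only shows the set-theoretic core of the comparison.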

        I'm looking for any references you might have on this area.
This INCLUDES:
        1) physiology/biology/neuroanatomy articles dealing with
           functional mappings from the ganglion to any level of
           cortical processing.
        2) fuzzy set theory.  This includes ordered set theory and
           any and all applications of set theory to pattern recognition.
        3) any other pertinent references

        I would greatly appreciate any references you might provide.
After a week or two, I will compile the references and put them
on the AI-LIST so that we all can use them.

                Viva la effort!
                Philip Kahn


[My correspondence with Philip indicates that he is already familiar
with much of the recent literature on optic flow.  He has found little,
however, on the subject of pattern recognition in vector fields.  Can
anyone help? -- KIL]

------------------------------

Date: Sun, 13 Nov 1983  22:42 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: Rational Psychology [and Reply]

    Date: 28 Sep 83 10:32:35-PDT (Wed)
    To: AIList at MIT-MC
    From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
    Subject: RE: Rational Psychology [and Reply]

    ... Is psychology rational?
    Someone said that all sciences are rational, a moot point, but not that
    relevant unless one wishes to consider Psychology a science.  I do not.
    This does not mean that psychologists are in any way inferior to chemists
    or to REAL scientists like those who study physics.  But I do think there
    ....

    ----GaryFostel----


This is an old submission, but having just read it I felt compelled to
reply.  I happen to be a Computer Scientist, but I think
Psychologists, especially Experimental Psychologists, are better
scientists than the average Computer "Scientist".  At least they have
been trained in the scientific method, a skill most Computer
Scientists lack.  Just because Psychologists, by and large, cannot defend
themselves on this list is no reason to make idle attacks with only
very superficial knowledge of the subject.

Fanya Montalvo

------------------------------

Date: Sun 13 Nov 83 13:14:06-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: just a reminder...

Artificial intelligence promises to alter the world in enormous ways during our
lifetime; I believe it's crucial for all of us to look ahead to the effects
of our work, both individually and collectively, to make sure that it will be
to the benefit of all peoples in the world.

It seems tiresome to remind people of the incredible effect that
AI will have in our lifetimes, yet the profound nature of the changes to the
world made by a small group of researchers makes it crucial that we don't treat
our  efforts  casually. For example, the military applications of AI will dwarf
those of the atomic bomb, but even more important is the fact that the atomic
bomb is a primarily military device, while AI will impact the world as much (if
not more) in non-military domains.

Physics in the early part of this century was at the cutting edge of knowledge,
similar to the current place of AI. The culmination of their work in the atomic
bomb  changed  their field immensely and irrevocably; even on a personal level,
researchers in physics found their lives  greatly  impacted,  often  shattered.
Many of the top researchers left the field.

During our lifetimes I think we will see a similar transformation, with the
"fun and games" of these heady years turning into a deadly seriousness.  I think
we will also see top researchers leaving the field, once we start to see some
of  our effects on the world. It is imperative for all workers in this field to
formulate and share a moral outlook on what we do,  and  hope  to  do,  to  the
world.

I would suggest we have, at the minimum, a three part responsibility. First, we
must  make ourselves aware of the human impact of our work, both short and long
term. Second, we must use this knowledge to guide the course of  our  research,
both  individually  and  collectively, rather than simply flowing into whatever
area the grants are flowing into.  Third  and  most  importantly,  we  must  be
spokespeople  and  consciences  to  the world, forcing others to be informed of
what we are doing and its effects.  Researchers who still cling to "value-free"
science should not be working in AI.

I will suggest a few areas we should be thinking about:

-  Use of AI for offensive military use vs. legitimate defense needs. While the
line is vague, a good offense is surely not always the best defense.

- Will the work cause a centralization of power, or cause a decentralization of
power? Building massive centers of power in this  age  increases  the  risk  of
humans dominated by machine.

- Is the work offering tools to extend the grasp of humans, or tools to control
humans?

- Will people have access to the information generated by the work, or will the
benefits of information access be restricted to a few?

Finally, will the work add insights into ourselves as human beings, or will it
simply feed our drives, reflecting our base nature back at  ourselves?  In  the
movie  "Tron"  an  actor  says "Our spirit remains in each and every program we
wrote"; what IS our spirit?

David

------------------------------

Date: 8 Nov 1983 09:44:28-PST
From: Elaine Marsh <marsh@NRL-AIC>
Subject: AI Seminar Schedule

[I am passing this along because it is the first mention of this seminar
series in AIList and will give interested readers the chance to sign up
for the mailing list.  I will not continue to carry these seminar notices
because they do not include abstracts.  -- KIL]


                     U.S. Navy Center for Applied Research
                           in Artificial Intelligence
                     Naval Research Laboratory - Code 7510
                           Washington, DC   20375

                              WEEKLY SEMINAR SERIES

        14 Nov.  1983     Dr. Jagdish Chandra, Director
                          Mathematical Sciences Division
                          Army Research Office, Durham, NC
                                "Mathematical Sciences Activities Relating
                                 to AI and Its Applications at the Army
                                 Research Office"

        21 Nov.  1983     Professor Laveen Kanal
                          Department of Computer Science
                          University of Maryland, College Park, MD
                                "New Insights into Relationships among
                                 Heuristic Search, Dynamic Programming,
                                 and Branch & Bound Procedures"

        28 Nov.  1983     Dr. William Gale
                          Bell Labs
                          Murray Hill, NJ
                                "An Expert System for Regression
                                 Analysis: Applying A.I. Ideas in
                                 Statistics"

         5 Dec.  1983     Professor Ronald Cole
                          Department of Computer Science
                          Carnegie-Mellon University, Pittsburgh, PA
                                "What's New in Speech Recognition?"

        12 Dec.  1983     Professor Robert Haralick
                          Department of Electrical Engineering
                          Virginia Polytechnic Institute, Blacksburg, VA
                                "Application of AI Techniques to the
                                 Interpretation of LANDSAT Scenes over
                                 Mountainous Areas"

   Our meetings are usually held Monday mornings at 10:00 a.m. in the
   Conference Room of the Navy Center for Applied Research in Artificial
   Intelligence (Bldg. 256) located on Bolling Air Force Base, off I-295,
   in the South East quadrant of Washington, DC.

   Coffee will be available starting at 9:45 a.m.

   If you would like to speak, or be added to our mailing list, or would
   just like more information contact Elaine Marsh at marsh@nrl-aic
                                                     [(202)767-2382].

------------------------------

Date: Mon 7 Nov 83 15:20:15-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

                [Reprinted from the SU-SCORE bboard.]


                                  Ph.D. Oral
          COMPILING LOGIC SPECIFICATIONS FOR PROGRAMMING ENVIRONMENTS
                               November 16, 1983
                      2:30 p.m., Location to be announced
                              Stephen J. Westfold


A major problem in building large programming systems is in keeping track of
the numerous details concerning consistency relations between objects in the
domain of the system.  The approach taken in this thesis is to encourage the
user to specify a system using very-high-level, well-factored logic
descriptions of the domain, and have the system compile these into efficient
procedures that automatically maintain the relations described.  The approach
is demonstrated by using it in the programming environment of the CHI
Knowledge-based Programming system.  Its uses include describing and
implementing the database manager, the dataflow analyzer, the project
management component and the system's compiler itself.  It is particularly
convenient for developing knowledge representation schemes, for example for
such things as property inheritance and automatic maintenance of inverse
property links.

The problem description using logic assertions is treated as a program, much as
in PROLOG, except that there is a separation of the assertions that describe the
problem from assertions that describe how they are to be used.  This
factorization allows the use of more general logical forms than Horn clauses as
well as encouraging the user to think separately about the problem and the
implementation.  The use of logic assertions is specified at a level natural to
the user, describing implementation issues such as whether relations are stored
or computed, that some assertions should be used to compute a certain function,
that others should be treated as constraints to maintain the consistency of
several interdependent stored relations, and whether assertions should be used
at compile- or execution-time.

Compilation consists of using assertions to instantiate particular procedural
rule schemas, each one of which corresponds to a specialized deduction, and
then compiling the resulting rules to LISP.  The rule language is a convenient
intermediate between the logic assertion language and the implementation
language in that it has both a logic interpretation and a well-defined
procedural interpretation.  Most of the optimization is done at the logic
level.

------------------------------

Date: Fri 11 Nov 83 09:56:17-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

                [Reprinted from the SU-SCORE bboard.]

                                  Ph.D. Oral

                       Tuesday, Nov. 15, 1983, 2:30 p.m.

                  Bldg. 170 (history corner), conference room

                          A DEDUCTIVE MODEL OF BELIEF

                                 Kurt Konolige


Reasoning about knowledge and belief of computer and human agents is assuming
increasing importance in Artificial Intelligence systems in the areas of
natural language understanding, planning, and knowledge  representation in
general.  Current formal models of belief that form the basis for most of these
systems are derivatives of possible-world semantics for belief.  However,
this model suffers from epistemological and heuristic inadequacies.
Epistemologically, it assumes that agents know all the consequences of their
beliefs.  This assumption is clearly inaccurate, because it doesn't take into
account resource limitations on an agent's reasoning ability.  For example, if
an agent knows the rules of chess, it then follows in the possible-world model
that he knows whether white has a winning strategy or not.  On the heuristic
side, proposed mechanical deduction procedures have been first-order
axiomatizations of the possible-world model of belief.

A more natural model of belief is a deduction model:  an agent has a set of
initial beliefs about the world in some internal language, and a deduction
process for deriving some (but not necessarily all)  logical consequences of
these beliefs.  Within this model, it is possible to account for resource
limitations of an agent's deduction process; for example, one can model a
situation in which an agent knows the rules of chess but does not have the
computational  resources to search the complete game tree before making a move.
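The contrast Konolige draws can be illustrated with a toy deduction process
(my sketch, not from the thesis; the propositions and rules are invented):
close a belief set under modus ponens only to a bounded depth, so remote
consequences, like whether white has a winning strategy, stay underived.

```python
# Hypothetical toy version of the deduction model of belief: an agent
# holds base beliefs plus whatever a bounded deduction process derives.
# Rules are (premise, conclusion) pairs; deduction is modus ponens
# applied for at most `depth` rounds, modelling resource limits.

def believes(base, rules, depth):
    beliefs = set(base)
    for _ in range(depth):
        new = {c for (p, c) in rules if p in beliefs} - beliefs
        if not new:
            break
        beliefs |= new
    return beliefs

base = {"knows_chess_rules"}
rules = [
    ("knows_chess_rules", "can_list_legal_moves"),
    ("can_list_legal_moves", "can_search_game_tree"),
    ("can_search_game_tree", "knows_if_white_wins"),
]

# With a shallow deduction bound the agent does NOT believe the remote
# consequence, unlike in possible-world semantics, where belief is
# closed under all logical consequence:
shallow = believes(base, rules, depth=1)
deep = believes(base, rules, depth=10)
assert "knows_if_white_wins" not in shallow
assert "knows_if_white_wins" in deep
```

The unbounded (`deep`) case recovers the possible-world behavior, which is
the sense in which that model is a special case of the deduction model.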

This thesis is an investigation of a Gentzen-type formalization of the deductive
model of belief.  Several important original results are proven.  Among these
are soundness and completeness theorems for a deductive belief logic; a
correspondence result that shows the possible-worlds model is a special case of
the deduction model; and an analog of Herbrand's Theorem for the belief
logic. Several other topics of knowledge and belief are explored in the thesis
from the viewpoint of the deduction model, including a theory of introspection
about self-beliefs, and a theory of circumscriptive ignorance, in which facts
an agent doesn't know are formalized by limiting or circumscribing the
information available to him.

------------------------------

End of AIList Digest
********************
15-Nov-83 10:31:37-PST,15081;000000000001
Mail-From: LAWS created at 15-Nov-83 10:28:54
Date: Tuesday, November 15, 1983 10:21AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #98
To: AIList@SRI-AI


AIList Digest            Tuesday, 15 Nov 1983      Volume 1 : Issue 98

Today's Topics:
  Intelligence - Definitions & Metadiscussion,
  Looping Problem,
  Architecture - Parallelism vs. Novel Architecture,
  Pattern Recognition - Optic Flow & Forced Matching,
  Ethics & AI,
  Review - Biography of Turing
----------------------------------------------------------------------

Date: 14 Nov 1983 15:03-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #96

An intelligent race is one with a winner, not one that keeps on
rehashing the first 5 yards till nobody wants to watch it anymore.
        FC

------------------------------

Date: 14 Nov 83 10:22:29-PST (Mon)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Intelligence and Killing
Article-I.D.: ncsu.2396


    Someone wondered if there was evidence that intelligence was related to
    the killing off of other animals.  Presumably that person is prepared to
refute the apparent simultaneous claims of man as the most intelligent
    and the most deadly animal.   Personally, I might vote dolphins as more
    intelligent, but I bet they do their share of killing too.  They eat things.
    ----GaryFostel----

------------------------------

Date: 14 Nov 83 14:01:55-PST (Mon)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Behavioristic definition of intelligence
Article-I.D.: ihuxv.584

What is the purpose of knowing whether something is
intelligent?  Or has a soul?  Or has consciousness?

I think one of the reasons is that it makes it easier to
deal with it.  If a creature is understood to be a human
being, we all know something about how to behave toward it.
And if a machine exhibits intelligence, the quintessential
quality of human beings, we also will know what to do.

One of the things that this implies is that we really should
not worry too much about whether a machine is intelligent
until one gets here.  The definition of it will be in part
determined by how we behave toward it. Right now, I don't feel
very confused about how to act in the presence of a computer
running an AI program.

           Tom Portegys, Bell Labs IH, ihuxv!portegys

------------------------------

Date: 12 Nov 83 19:38:02-PST (Sat)
From: decvax!decwrl!flairvax!kissell @ Ucb-Vax
Subject: Re: the halting problem in history
Article-I.D.: flairvax.267

"...If there were any subroutines in the brain that did not halt..."

It seems to me that there are likely large numbers of subroutines in the
brain that aren't *supposed* to halt.  Like breathing.  Nothing wrong with
that; the brain is not a metaphor for a single-instruction-stream
processor.  I've often suspected, though, that some pathological states
(depression, obsession, addiction, etcetera) can be modeled as infinite
loops "executed" by a portion of the brain, which may be why "shock"
treatments sometimes have beneficial effects on depression: a brutal
"reset" of the whole "system".

------------------------------

Date: Tue, 15 Nov 83 07:58 PST
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: parallelism vs. novel architecture

There has been a lot of discussion in this group recently about the
role of parallelism in artificial intelligence.  If I'm not mistaken,
this discussion began in response to a message I sent in, reviving a
discussion of a year ago in Human-Nets.  My original message raised
the question of whether there might exist some crucial, hidden,
architectural mechanism, analogous to DNA in genetics, which would
greatly clarify the workings of intelligence.  Recent discussions
have centered on the role of parallelism alone.  I think this misses
the point.  While parallelism can certainly speed things up, it is
not the kind of fundamental departure from past practices which I
had in mind.  Perhaps a better example would be Turing's and von
Neumann's concept of the stored-program computer, replacing earlier
attempts at hard-wired computers.  This was a fundamental breakthrough,
without which nothing like today's computers could be
practical.  Perhaps true intelligence, of the biological sort,
requires some structural mechanism which has yet to be imagined.
While it's true that a serial Turing machine can do anything in
principle, it may be thoroughly impractical to program it to be
truly intelligent, both because of problems of speed and because of
the basic awkwardness of the architecture.  What is hopelessly
cumbersome in this architecture may be trivial in the right one.  I
know this sounds pretty vague, but I don't think it's meaningless.

------------------------------

Date: Mon 14 Nov 83 17:59:07-PST
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: AIList Digest   V1 #97

There is a paper by Kruskal on multi-dimensional scaling that might be of
interest to the user interested in vision processing. I'm not too clear on
what he's doing, so this could be off-base.

                                Dave Foulser

------------------------------

Date: Mon 14 Nov 83 22:24:45-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Pattern Matchers

Thanks for the replies about loop detection; some food for thought
in there...

My next puzzle is about pattern matchers.  Has anyone looked carefully
at the notion of a "non-failing" pattern matcher?  By that I mean one
that never or almost never rejects things as non-matching.  Consider
a database of assertions (or whatever) and the matcher as a search
function which takes a pattern as argument.  If something in the db
matches the pattern, then it is returned.  At this point, the caller
can either accept or reject the item from the db.  If rejected, the
matcher would be called again, to find something else matching, and
so forth.  So far nothing unusual.  The matcher will eventually
signal utter failure, and that there is nothing satisfactory in the
database.  My idea is to have the matcher constructed in such a way
that it will return things until the database is entirely scanned, even
if the given pattern is a very simple and rigid one.  In other words,
the matcher never gives up - it will always try to find the most
tenuous excuse to return a match.

Applications I have in mind: NLP for garbled and/or incomplete sentences,
and creative thinking (what does a snake with a tail in its mouth
have to do with benzene? talk about tenuous connections!).

The idea seems related to fuzzy logic (an area I am sadly ignorant
of), but other than that, there seems to be no work on the idea
(perhaps it's a stupid one?).  There seem to be two main problems -
organizing the database in such a way that the matcher can easily
progress from exact matches to extremely remote ones (can almost
talk about a metric space of assertions!),  and setting up the
matcher's caller so as not to thrash too badly (example: if the word
analyzer has a nonfailing matcher, a parser may have trouble deciding
whether a sentence is grammatically incorrect or merely contains a
misspelled word that resembles another word).

Does anybody know anything about this?  Is there a fatal flaw
somewhere?

                                                Stan Shebs

BTW, a frame-based system can be characterized as a semantic net
(if you're willing to mung concepts!), and a semantic net can
be mapped into an undirected graph, which *is* a metric space.
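
[A rough sketch of the idea in Python.  The similarity metric (difflib's
SequenceMatcher) and the toy assertion database are stand-ins of mine,
not anything Stan proposes; a real matcher would rank database entries
by whatever metric the assertion space actually supports:]

```python
from difflib import SequenceMatcher

def nonfailing_match(pattern, database):
    """Return every database item, best matches first.

    The matcher never rejects outright: even a rigid pattern yields the
    whole database, ordered from exact matches down to the most tenuous
    ones, and only "fails" once the caller has refused them all.
    """
    return sorted(
        database,
        key=lambda item: SequenceMatcher(None, pattern, item).ratio(),
        reverse=True,
    )

db = ["benzene ring", "snake with tail in mouth", "parser"]
matches = nonfailing_match("snake biting its tail", db)
```

The caller then accepts or rejects each returned item in turn, exactly as
in the message above; the "metric space of assertions" is whatever order
the key function imposes.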

------------------------------

Date: 14 November 1983 1359-PST (Monday)
From: crummer at AEROSPACE (Charlie Crummer)
Subject: Ethics and AI Research

Dave Rogers brought up the subject of ethics in AI research. I agree with him
that we must continually evaluate the projects we are asked to work on.
Unfortunately, like the example he gave of physicists working on the bombs,
we will not always know what the government has in mind for our work. It may
be valid to indict the workers on the Manhattan project because they really
did have an idea what was going on but the very early researchers in the
field of radioactivity probably did not know how their discoveries would be
used.

The application of morality must go beyond passively choosing not to
work on certain projects. We must become actively involved in the
application by our government of the ideas we create. Once an idea or
physical effect is discovered it can never be undiscovered.  If I
choose not to work on a project (which I definitely would if I thought
it immoral) that may not make much difference. Someone else will
always be waiting to pick up the work. It is sort of like preventing
rape by refusing to rape anyone.

  --Charlie

------------------------------

Date: 14 Nov 83  1306 PST
From: Russell Greiner <RDG@SU-AI>
Subject: Biography of Turing

                [Reprinted from the SU-SCORE bboard.]

n055  1247  09 Nov 83
BC-BOOK-REVIEW (UNDATED)
By CHRISTOPHER LEHMANN-HAUPT
c. 1983 N.Y. Times News Service
ALAN TURING: The Enigma. By Andrew Hodges. 587 pages.
Illustrated. Simon & Schuster. $22.50.

    He is remembered variously as the British cryptologist whose
so-called ''Enigma'' machine helped to decipher Germany's top-secret
World War II code; as the difficult man who both pioneered and
impeded the advance of England's computer industry; and as the
inventor of a theoretical automaton sometimes called the ''Turing
(Editors: umlaut over the u) Machine,'' the umlaut being, according
to a glossary published in 1953, ''an unearned and undesirable
addition, due, presumably, to an impression that anything so
incomprehensible must be Teutonic.''
    But this passionately exhaustive biography by Andrew Hodges, an
English mathematician, brings Alan Turing very much back to life and
offers a less forbidding impression. Look at any of the many verbal
snapshots that Hodges offers us in his book - Turing as an
eccentrically unruly child who could keep neither his buttons aligned
nor the ink in his pen, and who answered his father when asked if he
would be good, ''Yes, but sometimes I shall forget!''; or Turing as
an intense young man with a breathless high-pitched voice and a
hiccuppy laugh - and it is difficult to think of him as a dark
umlauted enigma.
    Yet the mind of the man was an awesome force. By the time he was 24
years old, in 1936, he had conceived as a mathematical abstraction
his computing machine and completed the paper ''Computable Numbers,''
which offered it to the world. Thereafter, Hodges points out, his
waves of inspiration seemed to flow in five-year intervals - the
Naval Enigma in 1940, the design for his Automatic Computing Engine
(ACE) in 1945, a theory of structural evolution, or morphogenesis, in
1950. In 1951, he was elected a Fellow of the Royal Society. He was
not yet 40.
    But the next half-decade interval did not bring further revelation.
In February 1952, he was arrested, tried, convicted and given a
probationary sentence for ''Gross Indecency contrary to Section 11 of
the Criminal Law Amendment Act 1885,'' or the practice of male
homosexuality, a ''tendency'' he had never denied and in recent years
had admitted quite openly. On June 7, 1954, he was found dead in his
home near Manchester, a bitten, presumably cyanide-laced apple in his
hand.
    Yet he had not been despondent over his legal problems. He was not
in disgrace or financial difficulty. He had plans and ideas; his work
was going well. His devoted mother - about whom he had of late been
having surprisingly (to him) hostile dreams as the result of a
Jungian psychoanalysis - insisted that his death was the accident she
had long feared he would suffer from working with dangerous
chemicals. The enigma of Alan Mathison Turing began to grow.
    Andrew Hodges is good at explaining Turing's difficult ideas,
particularly the evolution of his theoretical computer and the
function of his Enigma machines. He is adept at showing us the
originality of Turing's mind, especially the passion for truth (even
when it damaged his career) and the insistence on bridging the worlds
of the theoretical and practical. The only sections of the biography
that grow tedious are those that describe the debates over artificial
intelligence - or maybe it's the world's resistance to artificial
intelligence that is tedious. Turing's position was straightforward
enough: ''The original question, 'Can machines think?' I believe to
be too meaningless to deserve discussion. Nevertheless I believe that
at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of
machines thinking without expecting to be contradicted.''
    On the matter of Turing's suicide, Hodges concedes its
incomprehensibility, but then announces with sudden melodrama: ''The
board was ready for an end game different from that of Lewis
Carroll's, in which Alice captured the Red Queen, and awoke from
nightmare. In real life, the Red Queen had escaped to feasting and
fun in Moscow. The White Queen would be saved, and Alan Turing
sacrificed.''
    What does Hodges mean by his portentous reference to cold-war
politics? Was Alan Turing a murdered spy? Was he a spy? Was he the
victim of some sort of double-cross? No, he was none of the above:
the author is merely speculating that as the cold war heated up, it
must have become extremely dangerous to be a homosexual in possession
of state secrets. Hodges is passionate on the subject of the
precariousness of being homosexual; it was partly his participation
in the ''gay liberation'' movement that got him interested in Alan
Turing in the first place.
    Indeed, one has to suspect Hodges of an overidentification with Alan
Turing, for he goes on at far too great length on Turing's
existential vulnerability. Still, word by word and sentence by
sentence, he can be exceedingly eloquent on his subject. ''He had
clung to the simple amidst the distracting and frightening complexity
of the world,'' the author writes of Turing's affinity for the
concrete.
    ''Yet he was not a narrow man,'' Hodges continues. ''Mrs. Turing was
right in saying, as she did, that he died while working on a
dangerous experiment. It was the experiment called LIFE - a subject
largely inducing as much fear and embarrassment for the official
scientific world as for her. He had not only thought freely, as best
he could, but had eaten of two forbidden fruits, those of the world
and of the flesh. They violently disagreed with each other, and in
that disagreement lay the final unsolvable problem.''

------------------------------

End of AIList Digest
********************
16-Nov-83 14:35:20-PST,18538;000000000001
Mail-From: LAWS created at 16-Nov-83 14:33:27
Date: Wednesday, November 16, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #99
To: AIList@SRI-AI


AIList Digest           Thursday, 17 Nov 1983      Volume 1 : Issue 99

Today's Topics:
  AI Literature - Comtex,
  Review - Abacus,
  Artificial Humanity,
  Conference - SPIE Call for Papers,
  Seminar - CRITTER for Critiquing Circuit Designs,
  Military AI - DARPA Plans (long message)
----------------------------------------------------------------------

Date: Wed 16 Nov 83 10:14:02-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Comtex

The Comtex microfiche series seems to be alive and well, contrary
to a rumor printed in an early AIList issue.  The ad they sent me
offers the Stanford and MIT AI memoranda (over $2,000 each set), and
says that the Purdue PRIP [pattern recognition and image processing]
technical reports will be next.  Also forthcoming are the SRI and
Carnegie-Mellon AI reports.

                                        -- Ken Laws

------------------------------

Date: Wed 16 Nov 83 10:31:26-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Abacus

I have the first issue of Abacus, the new "soft" computer science
magazine edited by Anthony Ralston.  It contains a very nice survey or
introduction to computer graphics for digital filmmaking and an
interesting exploration of how the first electronic digital computer
came to be.  There is also a superficial article about computer vision
which fails to answer its title question, "Why Computers Can't See
(Yet)".  [It is possible that I'm being overly harsh since this is my
own area of expertise.  My feeling, however, is that the question
cannot be answered by just pointing out that vision is difficult and
that we have dozens of different approaches, none of which works in
more than specialized cases.  An adequate answer requires a guess at
how it is that the human vision system can work in all cases, and why
we have not been able to duplicate it.]

The magazine also offers various computer-related departments,
notably those covering book reviews, the law, personal computing,
puzzles, and politics.  Humorous anecdotes are solicited for
filler material, a la Reader's Digest.  There is no AI-related
column at present.

The magazine has a "padded" feel, particularly since every ad save
one is by Springer-Verlag, the publisher.  They even ran out of
things to advertise and so repeated several full-page ads.  No doubt
this is a new-issue problem and will quickly disappear.  I wish
them well.

                                        -- Ken Laws

------------------------------

Date: 16 Nov 1983 10:21:32 EST (Wednesday)
From: Mark S. Day <mday@bbnccj>
Subject: Artificial Humanity

     From: ihnp4!ihuxv!portegys @ Ucb-Vax
     Subject: Behavioristic definition of intelligence

     What is the purpose of knowing whether something is
     intelligent?  Or has a soul?  Or has consciousness?

     I think one of the reasons is that it makes it easier to
     deal with it.  If a creature is understood to be a human
     being, we all know something about how to behave toward it.
     And if a machine exhibits intelligence, the quintessential
     quality of human beings, we also will know what to do.

Without wishing to flame or start a pointless philosophical
discussion, I do not consider intelligence to be the quintessential
quality of human beings.  Nor do I expect to behave in the same way
towards an artificially intelligent program as I would towards a
person.  Turing tests etc. notwithstanding, I think there is a
distinction between "artificial intelligence" and "artificial
humanity," and that by and large people are not striving to create
"artificial humanity."

------------------------------

Date: Wed 16 Nov 83 09:30:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artificial Humanity

I attended a Stanford lecture by Doug Lenat on Tuesday.  He mentioned
three interesting bugs that developed in EURISKO, a self-monitoring
and self-modifying program.

One turned up when EURISKO erroneously claimed to have discovered a
new type of flip-flop.  The problem was traced to an array indexing
error.  EURISKO, realizing that it had never in its entire history
had a bounds error, had deleted the bounds-checking code.  The first
bounds error occurred soon after.

Another bug cropped up in the "credit assignment" rule base.  EURISKO
was claiming that a particular rule had been responsible for discovering
a great many other interesting rules.  It turned out that the gist of
the rule was "If the system discovers something interesting, attach my
name as the discoverer."

The third bug became evident when EURISKO halted at 4:00 one morning
waiting for an answer to a question.  The system was supposed to know
that questions were OK when a person was around, but not at night with
no people at hand.  People are represented in its knowledge base in the
same manner as any other object.  EURISKO wanted (i.e., had as a goal)
to ask a question.  It realized that the reason it could not was that
no object in its current environment had the "person" attribute.  It
therefore declared itself to be a "person", and proceeded to ask the
question.

Doug says that it was rather difficult to explain to the system why
these were not reasonable things to do.

                                        -- Ken Laws

------------------------------

Date: Wed 16 Nov 83 10:09:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: SPIE Call for Papers

SPIE has put out a call for papers for its Technical Symposium
East '84 in Arlington, April 29 - May 4.  One of the 10 subtopics
is Applications of AI, particularly image understanding, expert
systems, autonomous navigation, intelligent systems, computer
vision, knowledge-based systems, contextual scene analysis, and
robotics.

Abstracts are due Nov. 21, manuscripts by April 2.  For more info,
contact

  SPIE Technical Program Committee
  P.O. Box 10
  Bellingham, Washington  98227-0010

  (206) 676-3290, Technical Program Dept.
  Telex 46-7053

                                        -- Ken Laws

------------------------------

Date: 15 Nov 83 14:19:54 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: An III talk this Thursday...

                 [Reprinted from the RUTGERS bboard.]

          Title:    CRITTER - A System for 'Critiquing' Circuits
          Speaker:  Van Kelly
          Date:     Thursday, November 17,1983, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge

       Van Kelly, a Ph.D. student in our department, will describe a
    computer system, CRITTER, for 'critiquing' digital circuit designs.
    This informal talk is based on his current thesis research.  Here is
    an abstract of the talk:

    CRITTER is  an  exploratory  prototype  design  aid  for  comprehensive
    "critiquing" of digital circuit designs.  While originally intended for
    verifying  a circuit's functional correctness and timing safety, it can
    also be used to  estimate  design  robustness,  sensitivity  to  device
    parameters,  and  (to some extent) testability.  CRITTER has been built
    using Artificial Intelligence ("Expert  Systems")  technology  and  its
    reasoning is guided by an extensible collection of electronic knowledge
    derived  from human experts.  Also, a new non-procedural representation
    for both the real-time behavior of circuits and circuit  specifications
    has  led  to a streamlined circuit modeling formalism based on ordinary
    mathematical function composition.   A  version  of  CRITTER  has  been
    tested  on  circuits  with  complexities  of  up to a dozen TTL SSI/MSI
    packages.  A more powerful version is  being  adapted  for  use  in  an
    automated VLSI design environment.

------------------------------

Date: 16 Nov 83 12:58:07 PST (Wednesday)
From: John Larson <JLarson.PA@PARC.ARPA>
Subject: AI and the military (long message)

Received over the network  . . .

STRATEGIC COMPUTING PLAN ANNOUNCED; REVOLUTIONARY ADVANCES
IN MACHINE INTELLIGENCE TECHNOLOGY TO MEET CRITICAL DEFENSE NEEDS

  Washington, D.C. (7 Nov. 1983) - - Revolutionary advances in the way
computers will be applied to tomorrow's national defense needs were
described in a comprehensive "Strategic Computing" plan announced
today by the Defense Advanced Research Projects Agency (DARPA).

  DARPA's plan encompasses the development and application of machine
intelligence technology to critical defense problems.  The program
calls for transcending today's computer capabilities by a "quantum
jump."  The powerful computers to be developed under the plan will be
driven by "expert systems" that mimic the thinking and reasoning
processes of humans. The machines will be equipped with sensory and
communication modules enabling them to hear, talk, see and act on
information and data they develop or receive.  This new technology as
it emerges during the coming decade will have unprecedented
capabilities and promises to greatly increase our national security.

  Computers are already widely employed in defense, and are relied on
to help hold the field against larger forces.  But current computers
have inflexible program logic, and are limited in their ability to
adapt to unanticipated enemy actions in the field.  This problem is
heightened by the increasing pace and complexity of modern warfare.
The new DARPA program will confront this challenge by producing
adaptive, intelligent computers specifically aimed at critical
military applications.

  Three initial applications are identified in the DARPA plan.  These
include autonomous vehicles (unmanned aircraft, submersibles, and land
vehicles), expert associates, and large-scale battle management
systems.

  In contrast with current guided missiles and munitions, the new
autonomous vehicles will be capable of complex, far-ranging
reconnaissance and attack missions, and will exhibit highly adaptive
forms of terminal homing.

  A land vehicle described in the plan will be able to navigate
cross-country from one location to another, planning its route from
digital terrain data, and updating its plan as its vision and image
understanding systems sense and resolve ambiguities between observed
and stored terrain data.  Its expert local-navigation system will
devise schemes to insure concealment and avoid obstacles as the
vehicle pursues its mission objectives.
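
[The route-planning step described above can be sketched, very loosely,
as search over a terrain grid.  This toy breadth-first planner is purely
illustrative; the grid, coordinates, and function names are invented and
bear no relation to the actual DARPA systems:]

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first route planner over a toy terrain grid
    (0 = passable cell, 1 = obstacle).  Returns the shortest
    route from start to goal as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # remembers each cell's predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    # Walk back from the goal to recover the route.
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

terrain = [[0, 1, 0],
           [0, 1, 0],
           [0, 0, 0]]
route = plan_route(terrain, (0, 0), (0, 2))
```

Replanning as sensors resolve ambiguities would amount to updating the
grid and running the search again from the vehicle's current cell.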

  A pilot's expert associate will be developed that can interact via
speech communications and function as a "mechanized co-pilot". This
system will enable a pilot to off-load lower-level instrument
monitoring, control, and diagnostic functions, freeing him to focus on
high-priority decisions and actions.  The associate will be trainable
and personalizable to the requirements of specific missions and the
methods of an individual pilot.  It will heighten pilots' capabilities
to act effectively and decisively in high stress combat situations.

  The machine intelligence technology will also be applied in a
carrier battle-group battle management system. This system will aid in
the information fusion, option generation, decision making, and event
monitoring by the teams of people responsible for managing such
large-scale, fast-moving combat situations.

  The DARPA program will achieve its technical objectives and produce
machine intelligence technology by jointly exploiting a wide range of
recent scientific advances in artificial intelligence, computer
architecture, and microelectronics.

  Recent advances in artificial intelligence enable the codification
in sets of computer "rules" of the thinking processes that people use
to reason, plan, and make decisions.  For example, a detailed
codification of the thought processes and heuristics by which a person
finds his way through an unfamiliar city using a map and visual
landmarks might be employed as the basis of an experimental expert
system for local navigation (for the autonomous land vehicle).  Such
expert systems are already being successfully employed in medical
diagnosis, experiment planning in genetics, mineral exploration, and
other areas of complex human expertise.
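
[A production-rule system of the sort described might look, in
miniature, like the following sketch.  The facts and rules are invented
for illustration and are in no way the navigation expert system the plan
proposes:]

```python
# Each rule is a (condition, action) pair over a set of known facts.
rules = [
    (lambda facts: "landmark_visible" in facts and "map_loaded" in facts,
     lambda facts: facts.add("position_known")),
    (lambda facts: "position_known" in facts and "goal_set" in facts,
     lambda facts: facts.add("route_planned")),
]

def run(facts, rules):
    """Fire any rule whose condition holds, repeating until no rule
    adds a new fact (a fixed point is reached)."""
    changed = True
    while changed:
        changed = False
        for cond, action in rules:
            before = len(facts)
            if cond(facts):
                action(facts)
            if len(facts) != before:
                changed = True
    return facts

facts = run({"landmark_visible", "map_loaded", "goal_set"}, rules)
```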

  Expert systems can often be decomposed into separate segments that
can be processed concurrently. For example, one might search for a
result along many paths in parallel, taking the first satisfactory
solution and then proceeding on to other tasks.  In many expert
systems, rules simply "lie in wait" - firing only if a specific
situation arises. Different parts of such a system could be operated
concurrently to watch for the individual contexts in which their rules
are to fire.
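
[The first kind of concurrency mentioned, searching many paths in
parallel and accepting the first satisfactory solution, can be sketched
as below.  The paths and the "solution" test are toy stand-ins of mine:]

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search_path(path_id, goal):
    """Hypothetical stand-in for exploring one reasoning path; returns
    a solution string if this path satisfies the goal, else None."""
    candidates = {1: "no result",
                  2: f"solution via path 2 for {goal}",
                  3: "no result"}
    result = candidates[path_id]
    return result if "solution" in result else None

def parallel_search(goal, paths):
    """Explore several paths concurrently; accept the first
    satisfactory solution and move on to other tasks."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        futures = [pool.submit(search_path, p, goal) for p in paths]
        for fut in as_completed(futures):
            answer = fut.result()
            if answer is not None:
                return answer
    return None

found = parallel_search("route", [1, 2, 3])
```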

  DARPA plans to develop special computers that will exploit
opportunities for concurrent processing of expert systems.  This
approach promises a large increase in the power and intelligence of
such systems.  Using "coarse-mesh" machines consisting of multiple
microprocessors, an increase in power of a factor of one hundred over
current systems will be achievable within a few years.  By creating
special VLSI chip designs containing multiple "fine-mesh" processors,
by populating entire silicon wafers with hundreds of such chips, and
by using high-bandwidth optoelectronic cables to interconnect groups
of wafers, increases of three or four orders of magnitude in symbol
processing and rule-firing rates will be achieved as the research
program matures. While the program will rely heavily on silicon
microelectronics for high-density processing structures, extensive use
will also be made of gallium arsenide technology for high-rate signal
processing, optoelectronics, and for military applications requiring
low power dissipation and high immunity to radiation.

  The expert system technology will enable the DARPA computers to
"think smarter."  The special architectures for concurrency and the
faster, denser VLSI microelectronics will enable them to "think harder
and faster."  The combination of these approaches promises to be
potent indeed.

  But machines that mimic thinking are not enough by themselves. They
must be provided with sensory devices that mimic the functions of eyes
and ears. They must have the ability to see their environment, to hear
and understand human language, and to respond in kind.

  Huge computer processing rates will be required to provide effective
machine vision and machine understanding of natural language.  Recent
advances in the architecture of special processor arrays promise to
provide the required rates.  By patterning many small special
processors together on a silicon chip, computer scientists can now
produce simple forms of machine vision in a manner analogous to that
used in the retina of the eye. Instead of each image pixel being
sequentially processed as when using a standard von Neumann computer,
the new processor arrays allow thousands of pixels to be processed
simultaneously. Each image pixel is processed by just a few transistor
switches located close together in a processor cell that communicates
over short distances with neighboring cells.  The number of
transistors required to process each pixel can be perhaps one
one-thousandth of that employed in a von Neumann machine, and the
short communications distances lead to much faster processing rates
per pixel. All these effects multiply the factor of thousands gained
by concurrency.  The DARPA program plans to provide special vision
subsystems that have rates as high as one trillion von Neumann
equivalent operations per second as the program matures in the late
1980's.
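
[The retina-like update can be illustrated, in serial Python rather than
real processor-array hardware, as a synchronous neighborhood operation:
every cell's new value is computed from the old state at once, as if
each pixel had its own processor cell wired to its neighbors:]

```python
def smooth_concurrent(pixels):
    """Synchronous smoothing of a 1-D pixel row: each output cell is
    the average of itself and its two neighbors, all computed from the
    *old* state, mimicking simultaneous per-pixel processors (edge
    cells reuse their own value for the missing neighbor)."""
    n = len(pixels)
    return [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]

row = [0.0, 0.0, 9.0, 0.0, 0.0]
smoothed = smooth_concurrent(row)
```

On a von Neumann machine the list comprehension runs sequentially; on
the processor arrays described, each output element would be computed
at the same instant by its own cell.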

  The DARPA Strategic Computing plan calls for the rapid evolution of
a set of prototype intelligent computers, and their experimental
application in military test-bed environments.  The planned activities
will lead to a series of demonstrations of increasingly sophisticated
machine intelligence technology in the selected applications as the
program progresses.

  DARPA will utilize an extensive infrastructure of computers,
computer networks, rapid system prototyping services, and silicon
foundries to support these technology explorations.  This same
infrastructure will also enable the sharing and propagation of
successful results among program participants.  As experimental
intelligent machines are created in the program, some will be added to
the computer network resources - further enhancing the capabilities of
the research infrastructure.


  The Strategic Computing program will be coordinated closely with
Under Secretary of Defense Research and Engineering, the Military
Services, and other Defense Agencies.  A number of advisory panels and
working groups will also be constituted to assure inter-agency
coordination and maintain a dialogue within the scientific community.

  The program calls for a cooperative effort among American industry,
universities, other research institutions, and government.
Communication is critical in the management of the program since many
of the contributors will be widely dispersed throughout the U.S.  Heavy
use will be made of the Defense Department's ARPANET computer network
to link participants and to establish a productive research
environment.

  Ms. Lynn Conway, Assistant Director for Strategic Computing in
DARPA's Information Processing Techniques Office, will manage the new
program.  Initial program funding is set at $50M in fiscal 1984. It is
proposed at $95M in FY85, and estimated at $600M over the first five
years of the program.

  The successful achievement of the objectives of the Strategic
Computing program will lead to the deployment of a new generation of
military systems containing machine intelligence technology.  These
systems promise to provide the United States with important new
methods of defense against both massed forces and unconventional
threats in the future - methods that can raise the threshold and
decrease the likelihood of major conflict.

------------------------------

End of AIList Digest
********************
20-Nov-83 15:06:03-PST,12574;000000000001
Mail-From: LAWS created at 20-Nov-83 15:05:20
Date: Sunday, November 20, 1983 2:53PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #100
To: AIList@SRI-AI


AIList Digest            Sunday, 20 Nov 1983      Volume 1 : Issue 100

Today's Topics:
  Intelligence - Definition & Msc.,
  Looping Problem - The Zahir,
  Scientific Method - Psychology
----------------------------------------------------------------------

Date: Wed, 16 Nov 1983 10:48:34 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Intelligence and Categorization

     I think Tom Portegys' comment in 1:98 is very true.  Knowing whether or
not a thing is intelligent, has a soul, etc., is quite helpful in letting
us categorize it.  And, without that categorization, we're unable to know
how to understand it.  Two minor asides that might be relevant in this
regard:

     1)  There's a school of thought in the fields of linguistics, folklore,
and anthropology, which is based on the notion (admittedly arguable)
that the only way to truly understand a culture is to first record and
understand its native categories, as these structure both its language and its
thought, at many levels.  (This ties in to the Sapir-Whorf hypothesis that
language structures culture, not the reverse...)  From what I've read in this
area, there is definite validity in this approach.  So, if it's reasonable to
try and understand a culture in terms of its categories (which may or may not
be translatable into our own culture's categories, of course), then it's
equally reasonable for us to need to categorize new things so that we can
understand them within our existing framework.

     2)  Back in medieval times, there was a concept known as the "Great
Chain of Being", which essentially stated that everything had its place in
the scheme of things; at the bottom of the chain were inanimate things, at the
top was God, and the various flora and fauna were in-between.  This set of
categories structured a lot of medieval thinking, and had major influences on
Western thought in general, including thought about the nature of intelligence.
Though the viewpoint implicit in this theory isn't widely held any more, it's
still around in other, more modern, theories, but at a "subconscious" level.
As a result, the notion of 'machine intelligence' can be a troubling one,
because it implies that the inanimate is being relocated in the chain to a
position nearly equal to that of man.

I'm ranging a bit far afield here, but this ought to provoke some discussion...
Dave Axler

------------------------------

Date: 15 Nov 83 15:11:32-PST (Tue)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.Pucc-K.ags @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: pucc-k.115

Faster = More Intelligent.  Now there's an interesting premise...

According to relativity theory, clocks (and bodily processes, and everything
else) run faster at the top of a mountain or on a plane than they do at sea
level.  This has been experimentally confirmed.
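[As a rough numerical check of the altitude claim -- my own sketch, not part of the original post -- the weak-field approximation says a clock raised by height h runs fast by a fraction of about gh/c^2:]

```python
# Approximate gravitational time dilation for a clock raised by height h.
# Weak-field limit: fractional rate difference  delta_f / f  ~=  g*h / c**2.

g = 9.81      # m/s^2, surface gravity
c = 2.998e8   # m/s, speed of light

def fractional_speedup(height_m):
    """Fraction by which an elevated clock runs fast relative to sea level."""
    return g * height_m / c**2

# A clock atop a 3000 m mountain gains only about 3e-13 seconds per second,
# so any intelligence boost from mountain climbing is modest indeed.
print(fractional_speedup(3000.0))
```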

Thus it seems that one can become more intelligent merely by climbing a
mountain.  Of course the effect is temporary...

Maybe this is why we always see cartoons about people climbing mountains to
inquire about "the meaning of life" (?)

                                Dave Seaman
                                ..!pur-ee!pucc-k!ags

------------------------------

Date: 17 Nov 83 16:38 EST
From: Jim Lynch <jimlynch@nswc-wo>
Subject: Continuing Debate (discussion) on intelligence.

   I have enjoyed the continuing discussion concerning the definition of
intelligence and would only add a few thoughts.
   1.  I tend to agree with Minsky that intelligence is a social concept,
but I believe that it is probably even more of an emotional one. Intelligence
seems to fall in the same category with notions such as beauty, goodness,
pleasant, etc.  These concepts are personal, intensely so, and difficult to
describe, especially in any sort of quantitative terms.
   2.  A good part of the difficulty with defining Artificial Intelligence is
due, no doubt, to a lack of a good definition for intelligence.  We probably
cannot define AI until the psychologists define "I".
   3.  Continuing with 2, the definition probably should not worry us too much.
After all, do psychologists worry about "Natural Computation"?  Let us let the
psychologists worry about what intelligence is, let us worry about how to make
it artificial!!  (As has been pointed out many times, this is certainly an
iterative process and we can surely learn much from each other!).
   4.  The notion of intelligence seems to be a continuum; it is doubtful
that we can define a crisp and fine line dividing the intelligent from the
non-intelligent.  The current debate has provided enough examples to make
this clear.  Our job, therefore, is not to make computers intelligent, but
to make them more intelligent.
                              Thanks for the opportunity to comment,
                                     Jim Lynch, Dahlgren, Virginia

------------------------------

Date: Thu 17 Nov 83 16:07:41-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Intelligence

I had some difficulty refuting a friend's argument that intelligence
is "problem solving ability", and that deciding what problems to solve
is just one facet or level of intelligence.  I realize that this is
a vague definition, but does anyone have a refutation?

I think we can take for granted that summing the same numbers over and
over is not more intelligent than summing them once.  Discovering a
new method of summing them (e.g., finding a pattern and a formula for
taking advantage of it) is intelligent, however.  To some extent,
then, the novelty of the problem and the methods used in its solution
must be taken into account.
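[To make the summing example concrete -- an illustrative sketch of my own, not part of the original message -- compare re-summing by brute force with noticing the arithmetic-series pattern and exploiting it:]

```python
# Two ways to sum 1..n: brute-force iteration, versus the closed-form
# formula n*(n+1)/2 obtained by noticing that the terms pair up
# (1 with n, 2 with n-1, ...) into n/2 pairs each summing to n+1.

def sum_by_iteration(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    # The "discovered" pattern: n/2 pairs, each totaling n+1.
    return n * (n + 1) // 2

# Both agree, but only the second reflects a new method of solution:
assert sum_by_iteration(1000) == sum_by_formula(1000) == 500500
```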

Suppose that we define intelligence in terms of the problem-solving
techniques available in an entity's repertoire.  A machine's intelligence
could be described much as a pocket calculator's capabilities are:
this one has modus ponens, that one can manipulate limits of series.
The partial ordering of such capabilities must necessarily be goal-
dependent and so should be left to the purchaser.

I agree with the AIList reader who defined an intelligent entity as
one that builds and refines knowledge structures representing its world.
Ability to manipulate and interconvert particular knowledge structures
fits well into the capability rating system above.  Learning, or ability
to remember new techniques so that they need not be rederived, is
downplayed in this view of intelligence, although I am sure that it is
more than just an efficiency hack.  Problem solving speed seems to be
orthogonal to the capability dimension, as does motivation to solve
problems.

                                        -- Ken Laws

------------------------------

Date: 16 Nov 83 4:21:55-PST (Wed)
From: harpo!seismo!philabs!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: KILLING THINGS
Article-I.D.: utcsstat.1439

I think that one has to make a distinction between dolphins killing fish
to eat, and hypothetical turtles killing rabbits, not to eat, but because
they compete for the same land resources. To my mind they are different
sorts of killings (though from the point of view of the hapless rabbit
or fish they may be the same). Dolphins kill sharks that attack the school,
though -- I do not think that this 'self-defense' killing is the same as
the planned extermination of another species.

If you believe that planned extermination is the definition of intelligence,
then I'll bet you are worried about SETI. On the other hand, I suppose you
must not believe that pacifist vegetarian monks qualify as intelligent.
Or is intelligence something possessed by a species rather than an individual?
Or perhaps you see that eating plants is indeed killing them. Now we have
defined all animals, and plants like the venus fly-trap, as intelligent,
while most plants are not. All the protists that I can think of right now
would also be intelligent, though a euglena would be an interesting case.

I think that "killing things" is either too general or too specific
(depending on your definition of killing and which things you admit
to your list of "things") to be a useful guide for intelligence.

What about having fun? Perhaps the ability to laugh is the dividing point
between man (as a higher intelligence) and animals, who seem to have
some appreciation for pleasure (if not fun) as distinct from plants and
protists whose joy I have never seen measured. Dolphins seem to have
a sense of fun as well, which is (to my mind) a very good thing.

What this bodes for Mr. Spock, though, is not nice. And despite
megabytes of net.jokes, this 11/70 isn't chuckling. :-)

Laura Creighton
utzoo!utcsstat!laura

------------------------------

Date: Sun 20 Nov 83 02:24:00-CST
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: Re: Artificial Humanity

I found these errors really interesting.

I would think a better rule for Eurisko to have used in the bounds
checking case would be to keep the bounds-checking code, but use it less
frequently, only when it was about to announce something as interesting,
for instance.  Then it may have caught the flip-flop error itself, while
still gaining speed other times.

The "credit assignment bug" makes me think Eurisko is emulating some
professors I have heard of....

The person bug doesn't even have to be a bug.  The rule assumes that if a
person is around, then he or she will answer a question typed to a
console, perhaps?  Rather it should state that if a person is around,
Eurisko should ask THAT person the question.  Thus if Eurisko is a
person, it should have asked itself (not real useful, maybe, but less of
a bug, I think).

While computer enthusiasts like to speak of all programs in
anthropomorphic terms, Eurisko seems like one that might really deserve
that.  Anyone know of any others?

-aaron

------------------------------

Date: 13 Nov 83 10:58:40-PST (Sun)
From: ihnp4!houxm!hogpc!houti!ariel!vax135!cornell!uw-beaver!tektronix
      !ucbcad!notes @ Ucb-Vax
Subject: Re: the halting problem in history - (nf)
Article-I.D.: ucbcad.775

Halting problem, lethal infinite loops in consciousness, and the Zahir:

Borges' "Zahir" story was interesting, but the above comment shows just
how successful Borges is in his stylistic approach: by overwhelming the
reader with historical references, he lends legitimacy to an idea that
might only be his own.  Try tracking down some of his references
sometime--it's not easy!  Many of them are simply made up.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 17 Nov 83 13:50:54-PST (Thu)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: I recall Rational Psychology
Article-I.D.: ncsu.2407

First, let's not revive the Rational Psychology debate. It died of natural
causes, and we should not disturb its immortal soul. However, F Montalvo
has said something very unpleasant about me, and I'm not quite mature
enough to ignore it.

I was not making an idle attack, nor do I do so with superficial knowledge.
Further, I have made quite similar statements in the presence of the
enemy -- card carrying psychologists.  Those psychologists whose egos are
secure often agree with the assessment.  Proper scientific method is very
hard to apply in the face of stunning lack of understanding or hard,
testable theories.  Most proper experiments are morally unacceptable in
the psychological arena.  As it is, there are so many controls not done,
so many sources of artifact, so much use of statistics to try to ferret
out hoped-for correlations, so much unavoidable anthropomorphism. As with
scholars such as H. Dumpty, you can define "science" to mean what you like,
but I think most psychological work fails the test.

One more thing: it's pretty immature to assume that someone who disagrees
with you has only superficial knowledge of the subject.  (See, I told you
I was not very mature ....)
----GaryFostel----

------------------------------

End of AIList Digest
********************
20-Nov-83 15:44:02-PST,15439;000000000001
Mail-From: LAWS created at 20-Nov-83 15:39:35
Date: Sunday, November 20, 1983 3:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #101
To: AIList@SRI-AI


AIList Digest            Monday, 21 Nov 1983      Volume 1 : Issue 101

Today's Topics:
  Pattern Recognition - Forced Matching,
  Workstations - VAX,
  Alert - Computer Vision,
  Correction - AI Labs in IEEE Spectrum,
  AI - Challenge,
  Conferences - Announcements and Calls for Papers
----------------------------------------------------------------------

Date: Wed, 16 Nov 83 10:53 EST
From: Tim Finin <Tim.UPenn@Rand-Relay>
Subject: pattern matchers

     From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
     Subject: Pattern Matchers
     ... My next puzzle is about pattern matchers.  Has anyone looked carefully
     at the notion of a "non-failing" pattern matcher?  By that I mean one that
     never or almost never rejects things as non-matching. ...

There is a long history of matchers which can be asked to "force" a match.
In this mode, the matcher is given two objects and returns a description
of what things would have to be true for the two objects to match.  Two such
matchers come immediately to my mind - see "How can MERLIN Understand?" by
Moore and Newell in Gregg (ed), Knowledge and Cognition, 1973, and also
"An Overview of KRL, A Knowledge Representation Language" by Bobrow and
Winograd (which appeared in the AI Journal, I believe, in 76 or 77).
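[A minimal sketch of the idea -- my own illustration, not the MERLIN or KRL implementation: instead of rejecting mismatched structures, a "forcing" matcher returns the conditions under which the two objects would match.]

```python
# A "non-failing" matcher over nested tuples: rather than answering
# True/False, it returns the list of equalities that would have to hold
# for the two structures to match.  Symbols beginning with '?' can be
# read as variables; mismatched constants are reported, not rejected.

def force_match(a, b, conditions=None):
    if conditions is None:
        conditions = []
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            force_match(x, y, conditions)
    elif a != b:
        # Record what would have to be true instead of failing.
        conditions.append((a, b))
    return conditions

# (on ?x table) vs (on block1 table): matching requires ?x = block1.
print(force_match(("on", "?x", "table"), ("on", "block1", "table")))
# Even incompatible constants yield a description, not a failure:
print(force_match(("color", "red"), ("color", "blue")))
```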

------------------------------

Date: Fri 18 Nov 83 09:31:38-CST
From: CS.DENNEY@UTEXAS-20.ARPA
Subject: VAX Workstations

I am looking for information on the merits (or lack of) of the
VAX Workstation 100 for AI development.

------------------------------

Date: Wed, 16 Nov 83 22:22:03 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Computer Vision.

There have been some recent articles in this list on computer
vision, some of them queries for information.  Although I am
not in this field, I read with interest a review article in
Nature last week.  Since Nature may be off the beaten track for
many people in AI (in fact articles impinging on computer science
are rare, and this one probably got in because it also falls
under neuroscience), I'm bringing the article to the attention of
this list.  The review is entitled ``Parallel visual computation''
and appears in Vol 306, No 5938 (3-9 November), page 21.  The
authors are Dana H Ballard, Geoffrey E Hinton and Terrence J
Sejnowski.  There are 72 references into the literature.

                                                Harry Weeks
                                                g.weeks@Berkeley

------------------------------

Date: 17 Nov 83 20:25:30-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: IEEE Spectrum Alert - (nf)
Article-I.D.: uiucdcs.3909


For safety's sake, let me add a qualification about the table on sources of
funding: it's incorrect. The University of Illinois is represented as having
absolutely NO research in 5th-generation AI, not even under OTHER funding.
This is false, and will hopefully be rectified in the next issue of the
Spectrum. I believe a delegation of our Professors is flying to the coast to
have a chat with the Spectrum staff ...

If we can be so misrepresented, I wonder how the survey obtained its
information. None of our major AI researchers remember any attempts to survey
their work.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

Date: 17 Nov 83 20:25:38-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.3910

I agree [with a previous article].
I myself am becoming increasingly worried about a blithe attitude I
sometimes hear: if our technology eliminates some jobs, it will create others.
True, but not everyone will be capable of keeping up with the change.
Analogously, the Industrial Revolution is now seen as a Good Thing, and its
impacts were as profound as those promised by AI. And though it is said that
the growth of knowledge can only be advantageous in the long run (Logical
Positivist view?), many people became victims of the Revolution.

In this respect I very much appreciated an idea that was aired at IJCAI-83,
namely that we should be building expert systems in economics to help us plan
and control the effects of our research.

As for the localization of power, that seems almost inevitable. Does not the
US spend enough on cosmetics to cover the combined Gross National Products of
37 African countries? And are we not so concerned about our Almighty Pocket
that we simply CANNOT export our excess groceries to a needy country, though
the produce rot on our dock? Then we can also keep our technology to ourselves.

One very obvious, and in my opinion sorely needed, application of AI is to
automating legal, veterinary and medical expertise. Of course the law system
and our own doctors will give us hell for this, but on the other hand what kind
of service profession is it that will not serve except at high cost? Those most
in need cannot afford the price. See for yourself what kind of person makes it
through Medical School: those who are most aggressive about beating their
fellow students, or those who have the money to buy their way in. It is little
wonder that so few of them will help the underprivileged -- from the start
the selection criteria weigh against such motivation. Let's send our machines
in where our "doctors" will not go!

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

Date: 19 Nov 83 09:22:42 EST (Sat)
From: rej@Cornell (Ralph Johnson)
Subject: The AI Challenge

The recent discussions on AIlist have been boring, so I have another
idea for discussion.  I see no evidence that AI is going to make
as much of a change on the world as data processing or information
retrieval.  While research in AI has produced many results in side areas
such as computer languages, computer architecture, and programming
environments, none of the past promises of AI (automatic language
translation, for example) have been fulfilled.  Why should I expect
anything more in the future?

I am a soon-to-graduate PhD candidate at Cornell.  Since Cornell puts
little emphasis on AI, I decided to learn a little on my own.  Most AI
literature is hard to read, as very little concrete is said.  The best
book that I read (best for someone like me, that is) was the three-volume
"Handbook on Artificial Intelligence".  One interesting observation was
that I already knew a large percentage of the algorithms.  I did not
even think of most of them as being AI algorithms.  The searching
algorithms (with the exception of alpha beta pruning) are used in many
areas, and algorithms that do logical deduction are part of computational
mathematics (just my opinion, as I know some consider this hard core AI).
Algorithms in areas like computer vision were completely new, but I could
see no relationship between those algorithms and algorithms in programs
called "expert systems", another hot AI topic.

  [Agreed, but the gap is narrowing.  There have been 1 or 2 dozen
  good AI/vision dissertations, but the chief link has been that many
  individuals and research departments interested in one area have
  also been interested in the other.  -- KIL]

As for expert systems, I could see no relationship between one expert system
and the next.  An expert system seems to be a program that uses a lot of
problem-related hacks to usually come up with the right answer.  Some of
the "knowledge representation" schemes (translated "data structures") are
nice, but everyone seems to use different ones.  I have read several tech
reports describing recent expert systems, so I am not totally ignorant.
What is all the noise about?  Why is so much money being waved around?
There seems to be nothing more to expert systems than to other complicated
programs.

  [My own somewhat heretical view is that the "expert system" title
  legitimizes something that every complicated program has been found
  to need: hackery.  A rule-based system is sufficiently modular that
  it can be hacked hundreds of times before it is so cumbersome
  that the basic structures must be rewritten.  It is software designed
  to grow, as opposed to the crystalline gems of the "optimal X" paradigm.
  The best expert systems, of course, also contain explanatory capabilities,
  hierarchical inference, constrained natural language interfaces, knowledge
  base consistency checkers, and other useful features.  -- KIL]
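  [The modularity claim above can be illustrated with a toy rule engine --
  my own sketch, not any particular expert-system shell: each rule is an
  independent condition/conclusion pair, so patching behavior means
  appending one more rule rather than restructuring the program. -- KIL]

```python
# A toy forward-chaining rule engine.  Rules are independent
# (condition, conclusion) pairs over a set of facts, so new "hacks"
# can be appended hundreds of times without touching existing rules.

rules = []

def rule(condition, conclusion):
    """Register a rule: if all facts in `condition` hold, add `conclusion`."""
    rules.append((condition, conclusion))

def run(facts):
    """Forward-chain until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rule({"has_feathers"}, "is_bird")
rule({"is_bird", "cannot_fly"}, "maybe_penguin")
# Patching the system later is just one more append:
rule({"maybe_penguin", "lives_in_desert"}, "reconsider")

print(run({"has_feathers", "cannot_fly"}))
```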

I know that numerical analysis and compiler writing are well developed fields
because there is a standard way of thinking that is associated with each
area and because a non-expert can use tools provided by experts to perform
computation or write a parser without knowing how the tools work.  In fact,
a good test of an area within computer science is whether there are tools
that a non-expert can use to do things that, ten years ago, only experts
could do.  Is there anything like this in AI?  Are there natural language
processors that will do what YACC does for parsing computer languages?

There seem to be a number of answers to me:

1)  Because of my indoctrination at Cornell, I categorize much of the
    important results of AI in other areas, thus discounting the achievements
    of AI.

2)  I am even more ignorant than I thought, and you will enlighten me.

3)  Although what I have said describes other areas of AI pretty much, yours
    is an exception.

4)  Although what I have said describes past results of AI, major achievements
    are just around the corner.

5)  I am correct.

You may be saying to yourself, "Is this guy serious?"  Well, sort of.  In
any case, this should generate more interesting and useful information
than trying to define intelligence, so please treat me seriously.

        Ralph Johnson

------------------------------

Date: Thu 17 Nov 83 16:57:55-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Conference Announcements and Call for Papers

                [Reprinted from the SU-SCORE bboard.]

Image Technology 1984 37th annual conference  May 20-24, 1984
Boston, Mass.  Jim Clark, papers chairman

British Robot Association 7th annual conference  14-17 May 1984
Cambridge, England   Conference director-B.R.A. 7,
British Robot Association, 28-30 High Street, Kempston, Bedford
MK427AJ, England

First International Conference on Computers and Applications
Beijing, China, June 20-22, 1984   co-sponsored by CIE computer society
and IEEE computer society

CMG XIV conference on computer evaluation--preliminary agenda
December 6-9, 1983  Crystal City, Va.

International Symposium on Symbolic and Algebraic Computation
EUROSAM 84  Cambridge, England July 9-11, 1984  call for papers
M. Mignotte, Centre de Calcul, Universite Louis Pasteur, 7 rue
Rene Descartes, F67084 Strasbourg, France

ACM Computer Science Conference  The Future of Computing
February 14-16, 1984  Philadelphia, Penn. Aaron Beller, Program
Chair, Computer and Information Science Department, Temple University
Philadelphia, Penn. 19122

HL

------------------------------

Date: Fri 18 Nov 83 04:00:10-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: ***** Call for Papers:  LISP and Functional Programming *****

please help spread the word by announcing it on your local machines.  thanks
                ---------------

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
()                                CALL FOR PAPERS                           ()
()                             1984 ACM SYMPOSIUM ON                        ()
()                        LISP AND FUNCTIONAL PROGRAMMING                   ()
()                UNIVERSITY OF TEXAS AT AUSTIN, AUGUST 5-8, 1984           ()
()            (Sponsored by the ASSOCIATION FOR COMPUTING MACHINERY)        ()
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

This is the third in a series of biennial conferences on the LISP language and
issues related to applicative languages.  Especially welcome are papers
addressing implementation problems and programming environments.  Areas of
interest include (but are not restricted to) systems, large implementations,
programming environments and support tools, architectures, microcode and
hardware implementations, significant language extensions, unusual applications
of LISP, program transformations, compilers for applicative languages, lazy
evaluation, functional programming, logic programming, combinators, FP, APL,
PROLOG, and other languages of a related nature.

Please send eleven (11) copies of a detailed summary (not a complete paper) to
the program chairman:

        Guy L. Steele Jr.
        Tartan Laboratories Incorporated
        477 Melwood Avenue
        Pittsburgh, Pennsylvania  15213

Submissions will be considered by each member of the program committee:

 Robert Cartwright, Rice            William L. Scherlis, Carnegie-Mellon
 Jerome Chailloux, INRIA            Dana Scott, Carnegie-Mellon
 Daniel P. Friedman, Indiana        Guy L. Steele Jr., Tartan Laboratories
 Richard P. Gabriel, Stanford       David Warren, Silogic Incorporated
 Martin L. Griss, Hewlett-Packard   John Williams, IBM
 Peter Henderson, Stirling

Summaries should explain what is new and interesting about the work and what
has actually been accomplished.  It is important to include specific findings
or results and specific comparisons with relevant previous work.  The committee
will consider the appropriateness, clarity, originality, practicality,
significance, and overall quality of each summary.  Time does not permit
consideration of complete papers or long summaries; a length of eight to twelve
double-spaced typed pages is strongly suggested.

February 6, 1984 is the deadline for the submission of summaries.  Authors will
be notified of acceptance or rejection by March 12, 1984.  The accepted papers
must be typed on special forms and received by the program chairman at the
address above by May 14, 1984.  Authors of accepted papers will be asked to
sign ACM copyright forms.

Proceedings will be distributed at the symposium and will later be available
from ACM.

Local Arrangements Chairman             General Chairman

Edward A. Schneider                     Robert S. Boyer
Burroughs Corporation                   University of Texas at Austin
Austin Research Center                  Institute for Computing Science
12201 Technology Blvd.                  2100 Main Building
Austin, Texas 78727                     Austin, Texas 78712
(512) 258-2495                          (512) 471-1901
CL.SCHNEIDER@UTEXAS-20.ARPA             CL.BOYER@UTEXAS-20.ARPA

------------------------------

End of AIList Digest
********************
22-Nov-83 11:16:29-PST,18647;000000000001
Mail-From: LAWS created at 22-Nov-83 10:36:14
Date: Tuesday, November 22, 1983 10:31AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #102
To: AIList@SRI-AI


AIList Digest            Tuesday, 22 Nov 1983     Volume 1 : Issue 102

Today's Topics:
  AI and Society - Expert Systems,
  Scientific Method - Psychology,
  Architectures - Need for Novelty,
  AI - Response to Challenge
----------------------------------------------------------------------

Date: 20 Nov 83 14:50:23-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: psuvax.357

It seems a little dangerous "to send machines where doctors won't go" -
you'll get the machines treating the poor, and human experts for the privileged
few.
Also, expert systems for economics and social science, to help us, would be fine
if there were a convincing argument that (a) these social sciences are truly
helpful for coping with unpredictable technological change, and (b) there
is a sufficiently accepted basis of quantifiable knowledge to put in the
proposed systems.
janos simon

------------------------------

Date: Mon, 21 Nov 1983  15:24 EST
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: I recall Rational Psychology

    Date: 17 Nov 83 13:50:54-PST (Thu)
    From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
    Subject: I recall Rational Psychology

          ... Proper scientific method is very
    hard to apply in the face of stunning lack of understanding or hard,
    testable theories.  Most proper experiments are morally unacceptable in
    the psychological arena.  As it is, there are so many controls not done,
    so many sources of artifact, so much use of statistics to try to ferret
    out hoped-for correlations, so much unavoidable anthropomorphism. As with
    scholars such as H. Dumpty, you can define "science" to mean what you like,
    but I think most psychological work fails the test.

    ----GaryFostel----

You don't seem to be aware of Experimental Psychology, which involves
subjects' consent, proper controls, hypothesis formation and
evaluation, and statistical validation.  Most of it involves sensory
processes and learning.  The studies are very rigorous and must be so
to end up in the literature.  You may be thinking of Clinical Psychology.
If so, please don't lump all of Psychology into the same group.

Fanya Montalvo

------------------------------

Date: 19 Nov 83 11:15:50-PST (Sat)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: Re: parallelism vs. novel architecture - (nf)
Article-I.D.: ucbcad.835

Re: parallelism and fundamental discoveries

The stored-program concept (Von Neumann machine) was indeed a breakthrough
both in the sense of Turing (what is theoretically computable) and in the
sense of Von Neumann (what is a practical machine).  It is noteworthy,
however, that I am typing this message using a text editor with a segment
of memory devoted to program, another segment devoted to data, and with an
understanding on the part of the operating system that if the editor were
to try to alter one of its own instructions, the operating system should
treat this as pathological, and abort it.

In other words, the vaunted power of being able to write data that can be
executed as a program is treated in the most stilted and circumspect manner
in the interests of practicality.  It has been found to be impractical to
write programs that modify their own inner workings.  Yet people do this to
their own consciousness all the time--in a largely unconscious way.

Turing-computability is perhaps a necessary condition for intelligence.
(That's been beaten to death here.)  What is needed is a sufficient condition.
Can that possibly be a single breakthrough or innovation?  There is no
question that, working from the agenda for AI that was so hubristically
laid out in the 50's and 60's, such a breakthrough is long overdue.  Who
sees any intimation of it now?

Perhaps what is needed is a different kind of AI researcher.  New ground
is hard to break, and harder still when the usual academic tendency is to
till old soil until it is exhausted.  I find it interesting that many of
the new ideas in AI are coming from outside the U.S. AI establishment
(MIT, CMU, Stanford, mainly).  Logic programming seems largely to be a
product of the English-speaking world *apart* from the U.S.  Douglas
Hofstadter's ideas (though probably too optimistic) are at least a sign
that, after all these years, some people find the problem too important
to be left to the experts.  Tally Ho!  Maybe AI needs a nut with the
undaunted style of a Nicola Tesla.

Some important AI people say that Hofstadter's schemes can't work.  This
makes me think of the story about the young 19th century physicist, whose
paper was reviewed and rejected as meaningless by 50 prominent physicists
of the time.  The 51st was Maxwell, who had it published immediately.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 20 November 1983 2359-PST (Sunday)
From: helly at AEROSPACE (John Helly)
Subject: Challenge

I  am  responding  to  Ralph  Johnson's  recent submittal concerning the
content and contribution of work in the field  of  AI.    The  following
comments  should  be  evaluated in light of the fact that I am currently
developing an 'expert system' as a dissertation topic at UCLA.

My immediate reaction to Johnson's queries/criticisms of AI is  that  of
hearty  agreement.    Having  read  a  great  deal  of AI literature, my
personal bias is that there is a great deal of rediscovery of  Knuth  in
the context of new applications.  The only apparently unique aspect is
that each new 'discovery' carries with it a novel jargon, with very
little attempt to connect and build on previous work in the field.  This
reflects a broader concern I have with Computer Science  in  general  in
that,  having been previously trained as a biologist, I find very little
that I consider scientific in this field.  This  does  not  diminish  my
hope for, and consequently my commitment to, work in this area.

Like  many things, this commitment is based on my intuition (read faith)
that there really is something  of  value  in  this  field.    The  only
rationale  I can offer for such a commitment is the presumption that the
lack of progress in AI research is the result of the lack of  scientific
discipline of AI researchers and computer scientists in general.  The AI
community looks much more like a heterogeneous population of hackers than
a disciplined, scientific community.  Maybe this is symptomatic
of a new field of science going through  growing  pains  but  I  do  not
personally  believe  this  is  the  case.    I am unaware of any similar
developmental process in the history of science.

This all sounds pretty negative, I  know.    I  believe  that  criticism
should  always  be  stated with some possible corrective action, though,
and maybe I have some.  Computer science curricula should require formal
scientific training.  Exposure to truly empirical sciences  would  serve
to   familiarize   students  with  the  value  of  systematic  research,
experimental design, hypothesis testing and the like.   We  should  find
ways  to  apply  the  scientific  method  to  our  research  rather than
collecting  a  lot  of  anecdotal  information  about  our  'programming
environment' and 'heuristics' and publishing it at first light.

Maybe computer science is basically an engineering discipline (i.e.,
application-oriented) rather than a science.  I believe, however, that
at the least computer science, even if misnamed, offers powerful tools
for investigating human information processing (i.e., intelligence) if
approached scientifically.  Properly applied these tools can provide the
same benefits they  have  offered  physicists,  biologists  and  medical
researchers  - insight into mechanisms and techniques for simulating the
systems of interest.

Much of AI is very slick programming.  I'm just not certain that  it  is
anything more than that, at least at present.

------------------------------

Date: Mon 21 Nov 83 14:12:35-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Reply to Ralph Johnson

Your recent msg to AILIST was certainly provocative, and I thought I'd
try to reply to a couple of the points you made.  First, I'm a little
appalled at what you portray as the "Cornell" attitude towards AI.  I
hope things will improve there in the future.  Maybe I can contribute
a little by trying to persuade you that AI has substance.

I'd like to begin by calling attention to the criteria that you are
using to evaluate AI.  I believe that if you applied these same
criteria to other areas of computer science, you would find them
lacking also.  For example, you say that "While research in AI has
produced many results in side areas..., none of the past promises of
AI have been fulfilled."  If we look at other fields of computer
science, we find similar difficulties.  Computer science has promised
secure, reliable, user-friendly computing facilities, cheap and robust
distributed systems, integrated software tools.  But what do we have?
Well, we have some terrific prototypes in research labs, but the rest
of the world is still struggling with miserable computing
environments, systems that constantly crash, and distributed systems
that end up being extremely expensive and unreliable.

The problem with this perspective is that it is not fair to judge a
research discipline by the success of its applications.  In AI
research labs, AI has delivered on many of its early promises.  We now
have machines with limited visual and manipulative capabilities.  And
we do have systems that perform automatic language translation (e.g.,
at Texas).

Another difficulty of judging AI is that it is a "residual"
discipline.  As Avron Barr wrote in the introduction to the AI
Handbook, "The realization that the detailed steps of almost all
intelligent human activity were unknown marked the beginning of
Artificial Intelligence as a separate part of computer science."  AI
tackles the hardest application problems around: those problems whose
solution is not understood.  The rest of computer science is primarily
concerned with finding optimum points along various solution
dimensions such as speed, memory requirements, user interface
facilities, etc.  We already knew HOW to sort numbers before we had
computers.  The role of Computer Science was to determine how to sort
them quickly and efficiently using a computer.  But, we didn't know
HOW to understand language (at least not at a detailed level).  AI's
task has been to find solutions to these kinds of problems.

Since AI has tackled the most difficult problems, it is not surprising
that it has had only moderate success so far.  The bright side of
this, however, is that long after we have figured out whether P=NP, AI
will still be uncovering fascinating and difficult problems.  That's
why I study it.

You are correct in saying that the AI literature is hard to read.  I
think there are several reasons for this.  First, there is a very
large amount of terminology to master in AI.  Second, there are great
differences in methodology.  There is no general agreement within the
AI community about what the hard problems are and how they should be
addressed (although I think this is changing).  Good luck with any
further reading that you attempt.

Now let me address some of your specific observations about AI.  You
say "I already knew a large percentage of the algorithms.  I did not
even think of most of them as being AI algorithms."  I would certainly
agree.  I cite this as evidence that there is a unity to all parts of
computer science, including AI.  You also say "An expert system seems
to be a program that uses a lot of problem-related hacks to usually
come up with the right answer."  I think you have hit upon the key
lesson that AI learned in the seventies: The solution to many of the
problems we attack in AI lies NOT in the algorithms but in the
knowledge.  That lesson reflects itself, not so much in differences in
code, but in differences in methodology.  Expert systems are different
and important because they are built using a programming style that
emphasizes flexibility, transparency, and rapid prototyping over
efficiency.  You say "There seems to be nothing more to expert systems
than to other complicated programs".  I disagree completely.  Expert
systems can be built, debugged, and maintained more cheaply than other
complicated programs.  And hence, they can be targeted at applications
for which previous technology was barely adequate.  Expert systems
(knowledge programming) techniques continue the revolution in
programming that was started with higher-level languages and furthered
by structured programming and object-oriented programming.
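The knowledge-versus-algorithms point can be made concrete.  The toy rule
base and engine below are invented for illustration (they are not MYCIN or
any system discussed in this digest): the engine is generic and small, all
domain expertise lives in declarative rules, and the engine records which
rule justified each conclusion -- which is what makes the style
comparatively transparent and cheap to extend.

```python
# Invented toy rules: all domain knowledge is declarative data, and the
# conclusions are deliberately simplistic -- this illustrates the
# programming style, not medical practice or any real system.
RULES = {
    "r1": ({"fever", "rash"}, "suspect-measles"),
    "r2": ({"suspect-measles", "no-vaccination"}, "refer-to-specialist"),
}

def forward_chain(facts):
    """Fire rules until quiescence, recording which rule derived each fact."""
    derived = {f: "given" for f in facts}          # fact -> justification
    changed = True
    while changed:
        changed = False
        for name, (premises, conclusion) in RULES.items():
            if premises <= derived.keys() and conclusion not in derived:
                derived[conclusion] = name         # remember the "why"
                changed = True
    return derived

d = forward_chain({"fever", "rash", "no-vaccination"})
print(d["refer-to-specialist"])                    # justified by rule r2
```

Adding expertise means adding rules; the engine never changes, which is
the flexibility and rapid prototyping the message describes.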

Your view of "knowledge representations" as being identical with data
structures reveals a fundamental misunderstanding of the knowledge vs.
algorithms point.  Most AI programs employ very simple data structures
(e.g., record structures, graphs, trees).  Why, I'll bet there's not a
single AI program that uses leftist-trees or binomial queues!  But, it
is the WAY that these data structures are employed that counts.  For
example, in many AI systems, we use record structures that we call
"schemas" or "frames" to represent domain concepts.  This is
uninteresting.  But what is interesting is that we have learned that
certain distinctions are critical, such as the distinction between a
subset of a set and an element of a set.  Or the distinction between a
causal agent of a disease (e.g., a bacterium) and a feature that is
helpful in guiding diagnosis (e.g., whether or not the patient has
been hospitalized).  Much of AI is engaged in finding and cataloging
these distinctions and demonstrating their value in simplifying the
construction of expert systems.
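A minimal sketch of the subset-versus-element distinction described above,
using only the plain record structures the text mentions; the frames and
names here are hypothetical, not drawn from any actual system.

```python
# Hypothetical frames: "subset_of" links a class to a broader class,
# while "element_of" links an individual to its class.  Conflating the
# two is exactly the error the distinction guards against.
frames = {
    "organism":   {},
    "bacterium":  {"subset_of": "organism"},
    "e_coli":     {"subset_of": "bacterium"},   # a subclass, not an instance
    "culture_42": {"element_of": "e_coli"},     # an individual specimen
}

def is_a(frame, concept):
    """Follow one element_of link and then subset_of links upward."""
    record = frames.get(frame, {})
    parent = record.get("element_of") or record.get("subset_of")
    if parent is None:
        return False
    return parent == concept or is_a(parent, concept)

print(is_a("culture_42", "organism"))   # an instance inherits via subsets
```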

In your message, you gave five possible answers that you expected to
receive.  I guess mine doesn't fit any of your categories.  I think
you have been quite perceptive in your analysis of AI.  But you are
still looking at AI from the "algorithm" point of view.  If you shift
to the "knowledge" perspective, your criteria for evaluating AI will
shift as well, and I think you will find the field to be much more
interesting.

--Tom Dietterich

------------------------------

Date: 22 Nov 83 11:45:30 EST (Tue)
From: rej@Cornell (Ralph Johnson)
Subject: Clarifying my "AI Challenge"

I am sorry to have created the mistaken impression that I don't think AI
should be done or is worth the money we spend on it.  The side effects alone are
worth much more than has been spent.  I do understand the effects of AI on
other areas of CS.  Even though going to the moon brought no direct benefit
to the US outside of prestige (which, by the way, was enormous), we learned
a lot that was very worthwhile.  Planetary scientists point out that we
would have learned a lot more if we had spent the money directly on planetary
exploration, but the moon race captured the hearts of the public and allowed
the money to be spent on space instead of bombs.  In a similar way, AI
provides a common area for some of our brightest people to tackle very hard
problems, and consequently learn a great deal.  My question, though, is
whether AI is really going to change the world any more than the rest of
computer science is already doing.  Are the great promises of AI going to
be fulfilled?

I am thankful for the comments on expert systems.  Following these lines of
reasoning, expert systems are differentiated from other programs more by the
programming methodology used than by algorithms or data structures.  It is
very helpful to have these distinctions pointed out, and has made several
ideas clearer to me.

The ideas in AI are not really any more difficult than those in other areas
of CS; they are just more poorly explained.  Several times I have run into
someone who can explain well the work that he/she has been doing, and each
time I understand what they are doing.  Consequently, I believe that the
reason that I see few descriptions of how systems work is that the
designers are not sure how they work, or they do not know what is important
in explaining how they work, or they do not know that it is important to
explain how they work.  Are they, in fact, describing how they work, and I
just don't notice?  What I would like is more examples of systems that work,
descriptions of how they work, and of how well they work.

        Ralph Johnson (rej@cornell,  cornell!rej)

------------------------------

Date: Tue 22 Nov 83 09:25:52-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challenge"

Ralph,

I can think of a couple of reasons why articles describing Expert
Systems are difficult to follow.  First, these programs are often
immense.  It would take a book to describe all of the system and how
it works.  Hence, AI authors try to pick out a few key things that
they think were essential in getting the system to work.  It is kind
of like reading descriptions of operating systems.  Second, the lesson
that knowledge is more important than algorithms has still not been
totally accepted within AI.  Many people tend to describe their
systems by describing the architecture (i.e., the algorithms and data
structures) instead of the knowledge.  The result is that the reader
is left saying "Yes, of course I understand how backward chaining (or
an agenda system) works, but I still don't understand how it diagnoses
soybean diseases..."  The HEARSAY people are particularly guilty of
this.  Also, Lenat's dissertation includes much more discussion of
architecture than of knowledge.  It often takes many years before
someone publishes a good analysis of the structure of the knowledge
underlying the expert performance of the system.  A good example is
Bill Clancey's work analyzing the MYCIN system.  See his most recent
AI Journal paper.
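For readers unfamiliar with it, backward chaining itself is tiny, which is
exactly why describing it explains little about a diagnosis system: the
knowledge does the work.  A minimal sketch with invented plant-diagnosis
rules, loosely in the spirit of the soybean example (not taken from any
real system):

```python
# Invented toy knowledge base; the rule content, not the chainer, is
# where all the real diagnostic expertise would reside.
RULES = [
    (["leaf-spots", "humid-season"], "fungal-infection"),
    (["fungal-infection"], "apply-fungicide"),
]
FACTS = {"leaf-spots", "humid-season"}

def prove(goal):
    """Backward chaining: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can be proved.
    (No cycle check -- fine for this acyclic toy rule set.)"""
    if goal in FACTS:
        return True
    return any(conclusion == goal and all(prove(p) for p in premises)
               for premises, conclusion in RULES)

print(prove("apply-fungicide"))   # established via fungal-infection
```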

--Tom

------------------------------

End of AIList Digest
********************
25-Nov-83 15:36:56-PST,14704;000000000001
Mail-From: LAWS created at 25-Nov-83 09:36:58
Date: Fri Nov 25, 1983 09:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #103
To: AIList@SRI-AI


AIList Digest            Friday, 25 Nov 1983      Volume 1 : Issue 103

Today's Topics:
  Alert - Neural Network Simulations & Weizenbaum on The Fifth Generation,
  AI Jargon - Why AI is Hard to Read,
  AI and Automation - Economic Effects & Reliability,
  Conference - Logic Programming Symposium
----------------------------------------------------------------------

Date: Sun, 20 Nov 83 18:05 PST
From: Allen VanGelder <avg@diablo>
Subject: Those interested in AI might want to read ...

                [Reprinted from the SU-SCORE bboard.]

[Those interested in AI might want to read ...]
the article in November *Psychology Today* about Francis Crick and Graeme
Mitchison's neural network simulations. Title is "The Dream Machine", p. 22.

------------------------------

Date: Sun 20 Nov 83 18:50:27-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Those interested in AI might want to read...

                [Reprinted from the SU-SCORE bboard.]

I would guess that the "Psychology Today" article is a simplified form of the
Crick & Mitchison paper which came out in "Nature" about 2 months ago. Can't
comment on the Psychology Today article, but the Nature article was
stimulating and provocative. The same issue of Nature has a paper (referred to
by Crick) describing a simulation which was even better than the Crick paper
(sorry, Francis!).

------------------------------

Date: Mon 21 Nov 83 09:58:04-PST
From: Benjamin Grosof <GROSOF@SUMEX-AIM.ARPA>
Subject: Weizenbaum review of "The Fifth Generation": hot stuff!

                [Reprinted from the SU-SCORE bboard.]

The current issue of the NY Review of Books contains a review by Joseph
Weizenbaum of MIT (author of "Computer Power and Human Reason", I think)
of Feigenbaum and McCorduck's "The Fifth Generation".  Warning: it is
scathing and controversial, hence great reading.  --Benjamin

------------------------------

Date: Wed 23 Nov 83 14:38:38-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: why AI is hard to read

There is one reason much AI literature is hard to read. It is common for
authors to invent a whole new set of jargon to describe their system, instead
of describing it in some common language (e.g., first order logic) or relating
it to previous well-understood systems or principles.  In recent years
there has been an increased awareness of this problem, and hopefully things
are improving and will continue to do so. There are also a lot more
submissions now to IJCAI, etc., so higher standards end up being applied.
Keep truckin'
David Wilkins

------------------------------

Date: 21 Nov 1983 10:54-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Economic effects of automation

Reply to Marcel Schoppers (AIList 1:101):

I agree that "computers will eliminate some jobs but create others" is
a feeble excuse.  There's not much evidence for it.  Even if it's true,
those whose job skills are devalued will be losers.

But why should this bother me?  I don't buy manufactured goods to
employ factory workers, I buy them to gratify my own desires.   As a
computer scientist I will not be laid off; indeed, automation will
increase the demand for computer professionals.  I will benefit from
the higher quality and lower prices of manufactured goods.  Automation
is entirely in my interest.  I need no excuse to support it.

   ... I very much appreciated the idea ... that we should be building
   expert systems in economics to help us plan and control the effects of
   our research.

This sounds like an awful waste of time to me.  We have no idea how to
predict the economic effects of much of anything except at the most
rudimentary levels, and there is no evidence that we will be able to any
time soon (witness the failure of econometrics).  There would be no way to test
the systems.  Building expert systems is not a substitute for
understanding.

Automating medicine and law:  a much better idea is to eliminate or
scale back the licensing requirements that allow doctors and lawyers to
restrict entry into their fields.  This would probably be necessary to
get much benefit from expert systems anyway.

------------------------------

Date: 22 Nov 83 11:27:05-PST (Tue)
From: decvax!genrad!security!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: dciem.501

    It seems a little dangerous "to send machines where doctors won't go" -
    you'll get the machines treating the poor, and human experts for the
    privileged few.

If the machines were good enough, I wouldn't mind being underprivileged.
I'd rather be flown into a foggy airport by autopilot than human pilot.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utcsrgv!dciem!mmt

------------------------------

Date: 22 Nov 1983 13:06:13-EST (Tuesday)
From: Doug DeGroot <Degroot.YKTVMV.IBM@Rand-Relay>
Subject: Logic Programming Symposium (long message)

                 [Excerpt from a notice in the Prolog Digest.]

               1984 International Symposium on Logic Programming

                               February 6-9, 1984

                           Atlantic City, New Jersey
                           BALLY'S PARK PLACE CASINO

                     Sponsored by the IEEE Computer Society


          For more information contact PEREIRA@SRI-AI or:

               Registration - 1984 ISLP
               Doug DeGroot, Program Chairman
               IBM Thomas J. Watson Research Center
               P.O. Box 218
               Yorktown Heights, NY 10598

          STATUS           Conference    Tutorial
          Member, IEEE      __ $155      __ $110
          Non-member        __ $180      __ $125
         ____________________________________________________________

                              Conference Overview

          Opening Address:
             Prof. J.A. (Alan) Robinson
             Syracuse University

          Guest Speaker:
             Prof. Alain Colmerauer
             University of Aix-Marseille II
             Marseille, France

          Keynote Speaker:
             Dr. Ralph E. Gomory,
             IBM Vice President & Director of Research,
             IBM Thomas J. Watson Research Center

          Tutorial: An Introduction to Prolog
             Ken Bowen, Syracuse University

          35 Papers, 11 Sessions (11 Countries, 4 Continents)


          Preliminary Conference Program

          Session 1: Architectures I
          __________________________

          1. Parallel Prolog Using Stack Segments on Shared-memory
             Multiprocessors
             Peter Borgwardt (Univ. Minn)

          2. Executing Distributed Prolog Programs on a Broadcast Network
             David Scott Warren (SUNY Stony Brook, NY)

          3. AND Parallel Prolog in Divided Assertion Set
             Hiroshi Nakagawa (Yokohama Nat'l Univ, Japan)

          4. Towards a Pipelined Prolog Processor
             Evan Tick (Stanford Univ,CA) and David Warren

          Session 2: Architectures II
          ___________________________

          1. Implementing Parallel Prolog on a Multiprocessor Machine
             Naoyuki Tamura and Yukio Kaneda (Kobe Univ, Japan)

          2. Control of Activities in the OR-Parallel Token Machine
             Andrzej Ciepielewski and Seif Haridi (Royal Inst. of
             Tech, Sweden)

          3. Logic Programming Using Parallel Associative Operations
             Steve Taylor, Andy Lowry, Gerald Maguire, Jr., and Sal
             Stolfo (Columbia Univ,NY)

          Session 3: Parallel Language Issues
          ___________________________________

          1. Negation as Failure and Parallelism
             Tom Khabaza (Univ. of Sussex, England)

          2. A Note on Systems Programming in Concurrent Prolog
             David Gelernter (Yale Univ,CT)

          3. Fair, Biased, and Self-Balancing Merge Operators in
             Concurrent Prolog
             Ehud Shapiro (Weizmann Inst. of Tech, Israel)

          Session 4: Applications in Prolog
          _________________________________

          1. Editing First-Order Proofs: Programmed Rules vs. Derived Rules
             Maria Aponte, Jose Fernandez, and Phillipe Roussel (Simon
             Bolivar Univ, Venezuela)

          2. Implementing Parallel Algorithms in Concurrent Prolog:
             The MAXFLOW Experience
             Lisa Hellerstein (MIT,MA) and Ehud Shapiro (Weizmann
             Inst. of Tech, Israel)

          Session 5: Knowledge Representation and Data Bases
          __________________________________________________

          1. A Knowledge Assimilation Method for Logic Databases
             T. Miyachi, S. Kunifuji, H. Kitakami, K. Furukawa, A.
             Takeuchi, and H. Yokota (ICOT, Japan)

          2. Knowledge Representation in Prolog/KR
             Hideyuki Nakashima (Electrotechnical Laboratory, Japan)

          3. A Methodology for Implementation of a Knowledge
             Acquisition System
             H. Kitakami, S. Kunifuji, T. Miyachi, and K. Furukawa
             (ICOT, Japan)

          Session 6: Logic Programming plus Functional Programming - I
          ____________________________________________________________

          1. FUNLOG = Functions + Logic: A Computational Model
             Integrating Functional and Logical Programming
             P.A. Subrahmanyam and J.-H. You (Univ of Utah)

          2. On Implementing Prolog in Functional Programming
             Mats Carlsson (Uppsala Univ, Sweden)

          3. On the Integration of Logic Programming and Functional Programming
             R. Barbuti, M. Bellia, G. Levi, and M. Martelli (Univ. of
             Pisa and CNUCE-CNR, Italy)

          Session 7: Logic Programming plus Functional Programming- II
          ____________________________________________________________

          1. Stream-Based Execution of Logic Programs
             Gary Lindstrom and Prakash Panangaden (Univ of Utah)

          2. Logic Programming on an FFP Machine
             Bruce Smith (Univ. of North Carolina at Chapel Hill)

          3. Transformation of Logic Programs into Functional Programs
             Uday S. Reddy (Univ of Utah)

          Session 8: Logic Programming Implementation Issues
          __________________________________________________

          1. Efficient Prolog Memory Management for Flexible Control Strategies
             David Scott Warren (SUNY at Stony Brook, NY)

          2. Indexing Prolog Clauses via Superimposed Code Words and
             Field Encoded Words
             Michael J. Wise and David M.W. Powers, (Univ of New South
             Wales, Australia)

          3. A Prolog Technology Theorem Prover
             Mark E. Stickel, (SRI, CA)

          Session 9: Grammars and Parsing
          _______________________________

          1. A Bottom-up Parser Based on Predicate Logic: A Survey of
             the Formalism and Its Implementation Technique
             K. Uehara, R. Ochitani, O. Kakusho, and J. Toyoda (Osaka
             Univ, Japan)

          2. Natural Language Semantics: A Logic Programming Approach
             Antonio Porto and Miguel Filgueiras (Univ Nova de Lisboa,
             Portugal)

          3. Definite Clause Translation Grammars
             Harvey Abramson, (Univ. of British Columbia, Canada)

          Session 10: Aspects of Logic Programming Languages
          __________________________________________________

          1. A Primitive for the Control of Logic Programs
             Kenneth M. Kahn (Uppsala Univ, Sweden)

          2. LUCID-style Programming in Logic
             Derek Brough (Imperial College, England) and Maarten H.
             van Emden (Univ. of Waterloo, Canada)

          3. Semantics of a Logic Programming Language with a
             Reducibility Predicate
             Hisao Tamaki (Ibaraki Univ, Japan)

          4. Object-Oriented Programming in Prolog
             Carlo Zaniolo (Bell Labs, New Jersey)

          Session 11: Theory of Logic Programming
          _______________________________________

          1. The Occur-check Problem in Prolog
             David Plaisted (Univ of Illinois)

          2. Stepwise Development of Operational and Denotational
             Semantics for Prolog
             Neil D. Jones (Datalogisk Inst, Denmark) and Alan Mycroft
             (Edinburgh Univ, Scotland)
         ___________________________________________________________


                           An Introduction to Prolog

                          A Tutorial by Dr. Ken Bowen

          Outline of the Tutorial

          -  AN OVERVIEW OF PROLOG
          -  Facts, Databases, Queries, and Rules in Prolog
          -  Variables, Matching, and Unification
          -  Search Spaces and Program Execution
          -  Non-determinism and Control of Program Execution
          -  Natural Language Processing with Prolog
          -  Compiler Writing with Prolog
          -  An Overview of Available Prologs

          Who Should Take the Tutorial

          The tutorial is intended for both managers and programmers
          interested in understanding the basics of logic programming
          and especially the language Prolog. The course will focus on
          direct applications of Prolog, such as natural language
          processing and compiler writing, in order to show the power
          of logic programming. Several different commercially
          available Prologs will be discussed and compared.

          About the Instructor

          Dr. Ken Bowen is a member of the Logic Programming Research
          Group at Syracuse University in New York, where he is also a
          Professor in the School of Computer and Information
          Sciences. He has authored many papers in the field of logic
          and logic programming. He is considered to be an expert on
          the Prolog programming language.

------------------------------

End of AIList Digest
********************
28-Nov-83 09:43:11-PST,14310;000000000001
Mail-From: LAWS created at 28-Nov-83 09:41:25
Date: Mon 28 Nov 1983 09:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #104
To: AIList@SRI-AI


AIList Digest            Monday, 28 Nov 1983      Volume 1 : Issue 104

Today's Topics:
  Information Retrieval - Request,
  Programming Languages - Lisp Productivity,
  AI and Society - Expert Systems,
  AI Funding - Capitalistic AI,
  Humor - Problem with Horn Clauses,
  Seminar - Introspective Problem Solver,
  Graduate Program - Social Impacts at UC-Irvine
----------------------------------------------------------------------

Date: Sun, 27 Nov 83 11:41 EST
From: Ed Fox <fox.vpi@Rand-Relay>
Subject: Request for machine readable volumes, info. on retrieval
         projects

   Please send details of how to obtain any machine readable documents such
as books, reference volumes, encyclopedias, dictionaries, journals, etc.
These would be utilized for experiments in information retrieval.  This
is not aimed at large bibliographic databases but rather at finding
a few medium to long items that exist both in book form and full text
computer tape versions (readable under UNIX or VMS).
   Information on existing or planned projects for retrieval of passages
(e.g., paragraphs or pages) from books, encyclopedias, electronic mail
digests, etc. would also be helpful.
     I look forward to your reply.  Thanks in advance, Ed Fox.
Dr. Edward A. Fox, Dept. of Computer Science, 562 McBryde Hall,
Virginia Polytechnic Institute and State University (VPI&SU or Virginia Tech),
Blacksburg, VA 24061; (703)961-5113 or 6931; fox%vpi@csnet-relay via csnet,
foxea%vpivm1.bitnet@berkeley via bitnet

------------------------------

Date: 25 Nov 83 22:47:27-PST (Fri)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4149

Is anybody aware of any productivity studies for lisp?

1. Can lisp programmers produce the same number of lines per
   day, week, or month as in 'regular' languages like pascal, pl/1, etc.?

2. Has anybody tried writing, in a regular language, a fairly large
   program that normally would be done in lisp, and compared the
   ratio of line counts?

In APL, a letter to Comm. ACM reported that APL programs took one fifth
the number of lines as equivalent programs in regular language and took
about twice as long per line to debug.  Thus APL improved the productivity
to get a function done by about a factor of two.  I am curious if anything
similar has been done for lisp.
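For what it's worth, the APL figures quoted above can be checked with one
line of arithmetic: one fifth the lines at roughly twice the debugging time
per line gives about 2/5 of the total effort, i.e. a speedup near 2.5,
consistent with the reported "factor of two":

```python
# Sanity check of the APL claim, using only the two numbers quoted above.
lines_ratio = 1 / 5     # APL program length relative to a regular language
cost_per_line = 2       # relative debugging time per APL line
effort = lines_ratio * cost_per_line
print(f"relative effort {effort:.1f}, speedup {1 / effort:.1f}x")
```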

  [One can, of course, write any APL program body as a single line.
  I suspect it would not take much longer to write that way, but it
  would be impossible to modify a week later.  Much the same could be
  said for undocumented and poorly structured Lisp code.  -- KIL]

------------------------------

Date: 22 Nov 83 21:01:33-PST (Tue)
From: decvax!genrad!grkermit!masscomp!clyde!akgua!psuvax!lewis @ Ucb-Vax
Subject: Re:Re: just a reminder... - (nf)
Article-I.D.: psuvax.359

Why should it be dangerous to have machines treating the poor?  There
is no reason to believe that human experts will always be superior to
machines; in fact, a carefully designed expert system could embody all
the skill of the world's best diagnosticians.  In addition, an expert
system would never get tired or complain about its pay.  On the
other hand, perhaps you are worried about the machine lacking 'human'
insight or compassion. I don't think anyone is suggesting that these
qualities can or should be built into such a system.  Perhaps we will
see a new generation of medical personnel whose job will be to use the
available AI facilities to make the most accurate diagnoses, and help
patients interface with the system.  This will provide patients with
the best medical knowledge available, and still allow personal interaction
between patients and technicians.

-jim lewis

psuvax!lewis

------------------------------

Date: 24 Nov 83 22:46:53-PST (Thu)
From: pur-ee!uiucdcs!uokvax!emjej @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.4127

Re sending machines where doctors won't go: do you really think that it's
better that poor people not be treated at all than treated by a machine?
That's a bit much for me to swallow.

                                                James Jones

------------------------------

Date: 22 Nov 83 19:37:14-PST (Tue)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Capitalistic AI - (nf)
Article-I.D.: uiucdcs.4071

        Have you had your advisor leave to make megabucks in industry?

        Seriously, I feel that this is a major problem for AI.  There
is an extremely limited number of AI professors and a huge demand from
venture capitalists to set them up in a new company.  Even fresh PhD's
are going to be disappearing into industry when they can make several
times the money they would in academia.  The result is an acute (no,
make that terminal) shortage of professors to oversee the new research
generation. The monetary imbalance can only grow as AI grows.

        At this university (UI) there are lots (hundreds?) of undergrads
who want to study AI, and about 8 professors to teach them.  Maybe the
federal government ought to recognize that this imbalance hurts our
technological competitiveness. What will prevent academic flight?
Will IBM, Digital, and WANG support professors or will they start
hiring them away?

        Here are a few things needed to keep the schools strong:

                1) Higher salaries for profs in "critical areas."
                   (maybe much higher)

                2) Long term funding of research centers.
                   (buildings, equipment, staff)

                3) University administration support for capitalizing
                   on the results of research, either through making
                   it easy for a professor to maintain a dual life, or
                   by setting up a university owned company to develop
                   and sell the results of research.

------------------------------

Date: 14 Nov 83 17:26:03-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!burdvax!sjuvax!bbanerje @ Ucb-Vax
Subject: Problem with Horn Clauses.
Article-I.D.: sjuvax.140

As a novice to Prolog, I have a problem determining whether a
clause is Horn or non-Horn.

I understand that a clause of the form:

             A + ~B + ~C  is a Horn Clause,

while one of the form:

            A + B + ~C  is non-Horn.

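Mechanically, the test is simple: a clause is Horn iff it contains at most one positive (unnegated) literal.  A small illustrative sketch, representing a clause as a list of literal strings with a '~' prefix marking negation:

```python
def is_horn(clause):
    """A clause is Horn iff it has at most one positive (unnegated) literal."""
    positives = [lit for lit in clause if not lit.startswith('~')]
    return len(positives) <= 1

print(is_horn(['A', '~B', '~C']))  # True: one positive literal, Horn
print(is_horn(['A', 'B', '~C']))   # False: two positive literals, non-Horn
```
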
However, my problem comes when trying to determine if the
following Clause is Horn or non-Horn.








                           ------------\
                          /          _  \
                         /_________ / \__**
                        _#        #      **
                       (_   o   o _)        __________
                         xx   !  xx        ! HO HO HO !
                         xxx \_/xxx      __/-----------
                         xxxxxxxxxx

Happy Holidays Everyone!

-- Binayak Banerjee
{bpa!astrovax!burdvax}!sjuvax!bbanerje

------------------------------

Date: 11/23/83 11:48:29
From: AGRE
Subject: John Batali at the AI Revolving Seminar 30 November

                      [Forwarded by SASW@MIT-MC]

John Batali
Trying to build an introspective problem-solver

Wednesday 30 November at 4PM
545 Tech Sq 8th floor playroom

Abstract:

I'm trying to write a program that understands how it works, and uses
that understanding to modify and improve its performance.  In this
talk, I'll describe what I mean by "an introspective problem-solver",
discuss why such a thing would be useful, and give some ideas about
how one might work.

We want to be able to represent how and why some course of action is
better than another in certain situations.  If we take reasoning to be
a kind of action, then we want to be able to represent considerations
that might be relevant during the process of reasoning.  For this
knowledge to be useful the program must be able to reason about itself
reasoning, and the program must be able to affect itself by its
decisions.

A program built on these lines cannot think about every step of its
reasoning -- because it would never stop thinking about "how to think
about" whatever it is thinking about.  On the other hand, we want it
to be possible for the program to consider any and all of its
reasoning steps.  The solution to this dilemma may be a kind of
"virtual reasoning" in which a program can exert reasoned control over
all aspects of its reasoning process even if it does not explicitly
consider each step.  This could be implemented by having the program
construct general reasoning plans which are then run like programs in
specific situations.  The program must also be able to modify
reasoning plans if they are discovered to be faulty.  A program with
this ability could then represent itself as an instance of a reasoning
plan.

Brian Smith's 3-LISP achieves what he calls "reflective" access and
causal connection: A 3-LISP program can examine and modify the state
of its interpreter as it is running.  The technical tricks needed to
make this work will also find their place in an introspective
problem-solver.

My work has involved trying to make sense of these issues, as well as
working on a representation of planning and acting that can deal with
real world goals and constraints as well as with those of the planning
and plan-execution processes.

------------------------------

Date: 25 Nov 1983 1413-PST
From: Rob-Kling <Kling.UCI-20B@Rand-Relay>
Subject: Social Impacts Graduate Program at UC-Irvine


                                     CORPS

                                    -------

                             A Graduate Program on

                 Computing, Organizations, Policy, and Society

                    at the University of California, Irvine


          This interdisciplinary program at the University of California,
     Irvine provides an opportunity for scholars and students to
     investigate the social dimensions of computerization in a setting
     which supports reflective and sustained inquiry.

          The primary educational opportunities are a PhD program in the
     Department of Information and Computer Science (ICS) and MS and PhD
     programs in the Graduate School of Management (GSM).  Students in each
     program can specialize in studying the social dimensions of computing.
     Several students have received graduate degrees from ICS and GSM for
     studying topics in the CORPS program.

          The faculty at Irvine have been active in this area, with many
     interdisciplinary projects, since the early 1970's.  The faculty and
     students in the CORPS program have approached these questions with
     methods drawn from the social sciences.

          The CORPS program focuses upon four related areas of inquiry:

      1.  Examining the social consequences of different kinds of
          computerization on social life in organizations and in the larger
          society.

      2.  Examining the social dimensions of the work and industrial worlds
          in which computer technologies are developed, marketed,
          disseminated, deployed, and sustained.

      3.  Evaluating the effectiveness of strategies for managing the
          deployment and use of computer-based technologies.

      4.  Evaluating and proposing public policies which facilitate the
          development and use of computing in pro-social ways.


          Studies of these questions have focussed on complex information
     systems, computer-based modelling, decision-support systems, the
     myriad forms of office automation, electronic funds transfer systems,
     expert systems, instructional computing, personal computers, automated
     command and control systems, and computing at home.  The questions
     vary from study to study.  They have included questions about the
     effectiveness of these technologies, effective ways to manage them,
     the social choices that they open or close off, the kind of social and
     cultural life that develops around them, their political consequences,
     and their social carrying costs.

          The CORPS program at Irvine has a distinctive orientation -

     (i) in focussing on both public and private sectors,

     (ii) in examining computerization in public life as well as within
           organizations,

     (iii) by examining advanced and common computer-based technologies "in
           vivo" in ordinary settings, and

     (iv) by employing analytical methods drawn from the social sciences.



              Organizational Arrangements and Admissions for CORPS


          The primary faculty in the CORPS program hold appointments in the
     Department of Information and Computer Science and the Graduate School
     of Management.  Additional faculty in the School of Social Sciences,
     and the Program on Social Ecology, have collaborated in research or
     have taught key courses for students in the CORPS program.  Research
     is administered through an interdisciplinary research institute at UCI
     which is part of the Graduate Division, the Public Policy Research
     Organization.

     Students who wish additional information about the CORPS program
     should write to:

               Professor Rob Kling (Kling.uci-20b@rand-relay)
               Department of Information and Computer Science
               University of California, Irvine
               Irvine, Ca. 92717

                                     or to:

               Professor Kenneth Kraemer
               Graduate School of Management
               University of California, Irvine
               Irvine, Ca. 92717

------------------------------

End of AIList Digest
********************
28-Nov-83 22:42:32-PST,14027;000000000001
Mail-From: LAWS created at 28-Nov-83 22:41:17
Date: Mon 28 Nov 1983 22:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #105
To: AIList@SRI-AI


AIList Digest            Tuesday, 29 Nov 1983     Volume 1 : Issue 105

Today's Topics:
  AI - Challenge & Responses & Query
----------------------------------------------------------------------

Date: 21 Nov 1983 12:25-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Re: The AI Challenge

I too am skeptical about expert systems.  Their attraction seems to be
as a kind of intellectual dustbin into which difficulties can be swept.
Have a hard problem that you don't know (or that no one knows) how to
solve?  Build an expert system for it.

Ken Laws' idea of an expert system as a very modular, hackable program
is interesting.  A theory or methodology on how to hack programs would
be interesting and useful, but would become another AI spinoff, I fear.

------------------------------

Date: Wed 23 Nov 83 18:02:11-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: response to response to challenge

Tom,

        I thought you made some good points in your response to Ralph
Johnson in the AIList, but one of your claims is unsupported, important,
and quite possibly wrong. The claim I refer to is

        "Expert systems can be built, debugged, and maintained more cheaply
        than other complicated systems. And hence, they can be targeted at
        applications for which previous technology was barely adequate."

        I would be delighted if this could be shown to be true, because I
would very much like to show friends/clients in industry how to use AI to
solve their problems more cheaply.

        However, there are no formal studies that compare a system built
using AI methods to one built using other methods, and no studies that have
attempted to control for other causes of differences in ease of building,
debugging, maintaining, etc. such as differences in programmer experience,
programming language, use or otherwise of structured programming techniques,
etc.

        Given the lack of controlled, reproducible tests of the effectiveness
of AI methods for program development, we have fallen back on qualitative,
intuitive arguments. The same sort of arguments have been and are made for
structured programming, application generators, fourth-generation languages,
high-level languages, and ADA. While there is some truth in the various
claims about improved programmer productivity they have too often been
overblown as The Solution To All Our Problems. This is the case with
claiming AI is cheaper than any other methods.

        A much more reasonable statement is that AI methods may turn out
to be cheaper / faster / otherwise better than  other methods if anyone ever
actually builds an effective and economically viable expert system.

        My own guess is that it is easier to develop AI systems because we
have been working in a LISP programming environment that has provided tools
like interpreted code, interactive debugging/tracing/editing, masterscope
analysis, etc.  These points were made quite nicely in Beau Sheil's recent
article in Datamation (Power Tools for Programmers, I think was the title).
None of these are intrinsic to AI.

        Many military and industry managers who are supporting AI work are
going to be very disillusioned in a few years when AI doesn't deliver what
has been promised. Unsupported claims  about the efficacy of AI aren't going
to help. It could hurt our credibility, and thereby our funding and ability
to continue the basic research.

Mike Walker
WALKER@SUMEX-AIM.ARPA

------------------------------

Date: Fri 25 Nov 83 17:40:44-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: response to response to challenge

Mike,

While I would certainly welcome the kinds of controlled studies that
you sketched in your msg, I think my claim is correct and can be
supported.  Virtually every expert system that has been built has been
targeted at tasks that were previously untouched by computing
technology.  I claim that the reason for this is that the proper
programming methodology was needed before these tasks could be
addressed.  I think the key parts of that methodology are (a) a
modular, explicit representation of knowledge, (b) careful separation
of this knowledge from the inference engine, and (c) an
expert-centered approach in which extensive interviews with experts
replace attempts by computer people to impose a normative,
mathematical theory on the domain.
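
Points (a) and (b) can be illustrated with a toy sketch: the rules below are invented purely for illustration (they come from no real system), but the engine that applies them is domain-independent, so the knowledge can be extended or replaced without touching the engine:

```python
# Domain knowledge: a modular table of (conditions, conclusion) rules.
# These rules are hypothetical examples, not drawn from MYCIN or any
# actual expert system.
RULES = [
    ({'fever', 'rash'}, 'measles_suspected'),
    ({'measles_suspected'}, 'isolate_patient'),
]

def forward_chain(facts, rules):
    """Generic inference engine: fire rules until no new facts appear.
    Knows nothing about medicine -- only about sets of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({'fever', 'rash'}, RULES)
print(sorted(derived))  # ['fever', 'isolate_patient', 'measles_suspected', 'rash']
```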

Since there are virtually no cases where expert systems and
"traditional" systems have been built to perform the same task, it is
difficult to support this claim.  If we look at the history of
computers in medicine, however, I think it supports my claim.
Before expert systems techniques were available, many people
had attempted to build computational tools for physicians.  But these
tools suffered from the fact that they were often burdened with
normative theories and often ignored the clinical aspects of disease
diagnosis.  I blame these deficiencies on the lack of an
"expert-centered" approach.  These programs were also difficult to
maintain and could not produce explanations because they did not
separate domain knowledge from the inference engine.

I did not claim anywhere in my msg that expert systems techniques are
"The Solution to All Our Problems".  Certainly there are problems for
which knowledge programming techniques are superior.  But there are
many more for which they are too expensive, too slow, or simply
inappropriate.  It would be absurd to write an operating system in
EMYCIN, for example!  The programming advances that would allow
operating systems to be written and debugged easily are still
undiscovered.

You credit fancy LISP environments for making expert systems easy to
write, debug, and maintain.  I would certainly agree: The development
of good systems for symbolic computing was an essential prerequisite.
However, the level of program description and interpretation in EMYCIN
is much higher than that provided by the Interlisp system.  And the
"expert-centered" approach was not developed until Ted Shortliffe's
dissertation.

You make a very good point in your last paragraph:

        Many military and industry managers who are supporting AI work
        are going to be very disillusioned in a few years when AI
        doesn't deliver what has been promised. Unsupported claims
        about the efficacy of AI aren't going to help. It could hurt
        our credibility, and thereby our funding and ability to
        continue the basic research.

AI (at least in Japan) has "promised" speech understanding, language
translation, etc. all under the rubric of "knowledge-based systems".
Existing expert-systems techniques cannot solve these problems.  We
need much more research to determine what things CAN be accomplished
with existing technology.  And we need much more research to continue
the development of the technology.  (I think these are much more
important research topics than comparative studies of expert-systems
technology vs. other programming techniques.)

But there is no point in minimizing our successes.  My original
message was in response to an accusation that AI had no merit.
I chose what I thought was AI's most solid contribution: an improved
programming methodology for a certain class of problems.

--Tom

------------------------------

Date: Fri 25 Nov 83 17:52:47-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challenge"

Although I've written three messages on this topic already, I guess
I've never really addressed Ralph Johnson's main question:

        My question, though, is whether AI is really going to change
        the world any more than the rest of computer science is
        already doing.  Are the great promises of AI going to be
        fulfilled?

My answer: I don't know.  I view "the great promises" as goals, not
promises.  If you are a physicalist and believe that human beings are
merely complex machines, then AI should in principle succeed.
However, I don't know if present AI approaches will turn out to be
successful.  Who knows?  Maybe the human brain is too complex to ever
be understood by the human brain.  That would be interesting to
demonstrate!

--Tom

------------------------------

Date: 24 Nov 83 5:00:32-PST (Thu)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: uiucdcs.4118


There was a recent discussion of an AI project that was done at
ONR on determining the cause of a chemical spill in a large chemical
plant with various ducts and pipes and manholes, etc.  I argued that
the thing was just an application of graph algorithms and searching
techniques.

(That project was what could be done in three days by an AI team as
part of a challenge from ONR and quite possibly is not representative.)

Theorem proving using resolution is something that someone with just
a normal algorithms background would not simply come up with 'as an
application of normal algorithms.'  Using if-then rules perhaps might
be a search of the type you might see in an algorithms book.  I don't
expect the average CS person with a background in algorithms to come up
with that application unaided, although once it was pointed out it would
seem quite intuitive.

One interesting note is that although most of the AI stuff is done in
LISP, a big theorem proving program discussed by Wos at a recent IEEE
meeting here was written in PASCAL.  It did some very interesting things.
One point that was made is that they submitted a paper to a logic journal.
Although the journal agreed the results were worth publishing, the "computer
stuff" had to go.

Continuing on this rambling aside, some people submitted results in
mechanical engineering using a symbolic manipulator referencing the use
of the program in a footnote.  The poor referee conscientiously
tried to duplicate the derivations manually.  Finally he noticed the
reference and sent a letter back saying that they must note the use of
symbolic manipulation by computer in the covering letter.

Getting back to the original subject, I had a discussion with someone
doing research in daemons.  After he explained to me what daemons were,
I came to the conclusion they were a fancy name for what you described
as a hack.  A straightforward application of theorem proving or if-then
rule techniques would be inefficient or otherwise infeasible, so one
puts an exception in to handle a certain kind of case.  What is the
difference between that and an error handler for zero divides, used
instead of putting a test everywhere one does a division?
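
The zero-divide analogy can be made concrete: the two styles below compute the same result, one with an explicit test at every division, the other with a single handler ("daemon") that catches the exceptional case wherever it arises.  An illustrative sketch only:

```python
def safe_ratio_explicit(xs, ys):
    # The "statement everywhere" style: test before every division.
    return [x / y if y != 0 else None for x, y in zip(xs, ys)]

def safe_ratio_daemon(xs, ys):
    # The daemon-like style: one handler for the exceptional case,
    # leaving the main computation uncluttered.
    out = []
    for x, y in zip(xs, ys):
        try:
            out.append(x / y)
        except ZeroDivisionError:
            out.append(None)
    return out

print(safe_ratio_explicit([1, 2], [2, 0]))  # [0.5, None]
print(safe_ratio_daemon([1, 2], [2, 0]))    # [0.5, None]
```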

On the subject of hacking, a DATAMATION article, 'Real Programmers
Don't Use PASCAL', complained about the demise of the person who would
modify a program on the fly using the switch register, etc.  It
remarked at the end that some of the debugging techniques in LISP AI
environments were starting to look like the old-style techniques of
assembler hackers.

------------------------------

Date: 24 Nov 83 22:29:44-PST (Thu)
From: pur-ee!notes @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: pur-ee.1148

As an aside to this discussion, I'm curious as to just what everyone
thinks of when they think of AI.

I am a student at Purdue, which has absolutely nothing in the way of
courses on what *I* consider AI.  I have done a little bit of reading
on natural language processing, but other than that, I haven't had
much of anything in the way of instruction on this stuff, so maybe I'm
way off base here, but when I think of AI, I primarily think of:

        1) Natural Language Processing, first and foremost.  In
           this, I include being able to "read" it and understand
           it, along with being able to "speak" it.
        2) Computers "knowing" things - i.e., stuff along the
           lines of the famous "blocks world", where the "computer"
           has notions of pyramids, boxes, etc.
        3) Computers/programs which can pass the Turing test (I've
           always thought that ELIZA sort of passes this test, at
           least in the sense that lots of people actually think
           the computer understood their problems).
        4) Learning programs, like the tic-tac-toe programs that
           remember that "that" didn't work out, only on a much
           more grandiose scale.
        5) Speech recognition and understanding (see #1).

For some reason, I don't think of pattern recognition (like analyzing
satellite data) as AI.  After all, it seems to me that this stuff is
mostly just "if <cond 1> it's trees, if <cond 2> it's a road, etc.",
which doesn't really seem like "intelligence".

  [If it were that easy, I'd be out of a job.  -- KIL]
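
Taken literally, the caricature above is just a decision list; sketched below with invented feature names and thresholds (real remote-sensing classification, as the moderator's note suggests, is far harder than this):

```python
# A literal rendering of the "if <cond 1> it's trees, if <cond 2> it's
# a road" caricature.  The features and thresholds here are made up
# for illustration and have no basis in actual satellite-data analysis.
def classify_pixel(greenness, linearity):
    if greenness > 0.6:
        return 'trees'
    if linearity > 0.8:
        return 'road'
    return 'unknown'

print(classify_pixel(0.7, 0.1))  # trees
print(classify_pixel(0.2, 0.9))  # road
```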

What do you think of when I say "Artificial Intelligence"?  Note that
I'm NOT asking for a definition of AI, I don't think there is one.  I
just want to know what you consider AI, and what you consider "other"
stuff.

Another question -- assuming the (very) hypothetical situation where
computers and their programs could be made to be "infinitely" intelligent,
what is your "dream program" that you'd love to see written, even though
it realistically will probably never be possible?  Jokingly, I've always
said that my dream is to write a "compiler that does what I meant, not
what I said".

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

------------------------------

End of AIList Digest
********************
29-Nov-83 12:59:19-PST,20343;000000000001
Mail-From: LAWS created at 29-Nov-83 12:58:05
Date: Tue 29 Nov 1983 12:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #106
To: AIList@SRI-AI


AIList Digest           Wednesday, 30 Nov 1983    Volume 1 : Issue 106

Today's Topics:
  Conference - Logic Conference Correction,
  Intelligence - Definitions,
  AI - Definitions & Research Methodology & Jargon,
  Seminar - Naive Physics
----------------------------------------------------------------------

Date: Mon 28 Nov 83 22:32:29-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Correction

The ARPANET address in the announcement of the IEEE 1984 Logic Programming
Symposium should be PEREIRA@SRI-AI, not PERIERA@SRI-AI.

Fernando Pereira

[My apologies.  I am the one who inserted Dr. Pereira's name incorrectly.
I was attempting to insert information from another version of the same
announcement that also reached the AIList mailbox.  -- KIL]

------------------------------

Date: 21 Nov 83 6:04:05-PST (Mon)
From: decvax!mcvax!enea!ttds!alf @ Ucb-Vax
Subject: Re: Behavioristic definition of intelligence
Article-I.D.: ttds.137

Doesn't the concept "intelligence" have some characteristics in common with
a concept such as "traffic"?  It seems obvious that one can measure such
entities as "traffic intensity" and the like thereby gaining an indirect
understanding of the conditions that determine the "traffic" but it seems
very difficult to find a direct measure of "traffic" as such.  Some may say
that "traffic" and "traffic intensity" are synonymous concepts but I don't
agree.  The common opinion among psychologists seems to be that
"intelligence" is that which is measured by an intelligence test.  By
measuring a set of problem solving skills and weighing the results together
we get a value.  Why not call it "intelligence" ?  The measure could be
applicable to machine intelligence also as soon as (if ever) we teach the
machines to pass intelligence tests.  It should be quite clear that
"intelligence" is not the same as "humanness" which is measured by a Turing
test.

------------------------------

Date: Sat, 26 Nov 83 2:09:14 EST
From: A B Cooper III <abc@brl-bmd>
Subject: Where wise men fear to tread

Being nothing more than an amateur observer on the AI scene,
I hesitate to plunge in like a fool.

Nevertheless, the roundtable on what constitutes intelligence
seems to cover many interesting hypotheses:

        survivability
        speed of solving problems
        etc

but one.  Being married to a professional educator, I've found
that the common working definition of intelligence is
the ability TO LEARN.

                The more easily one learns new material, the
                        more intelligent one is said to be.

                The more quickly one learns new material,
                        the more intelligent one is said to be.

                One who can learn easily and quickly across a
                        broad spectrum of subjects is said to
                        be more intelligent than one whose
                        abilities are concentrated in one or
                        two areas.

                One who learns only at an average rate, except
                        for one subject area in which he or she
                        excels far above the norms is thought
                        to be TALENTED rather than INTELLIGENT.

                It seems to be believed that the most intelligent
                        folks learn easily and rapidly without
                        regard to the level of material.  They
                        assimilate the difficult with the easy.


Since this discussion was motivated, at least in part, by the
desire to understand what an "intelligent" computer program should
do, I feel that we should re-visit some of our terminology.

In the earlier days of Computer Science, I seem to recall some
excitement about machines (computers) that could LEARN.  Was this
the precursor of AI?  I don't know.

If we build an EXPERT SYSTEM, have we built an intelligent machine
(can it assimilate new knowledge easily and quickly), or have we
produced a "dumb" expert?  Indeed, aren't many of our AI or
knowledge-based or expert systems really something like "dumb"
experts?

                       ------------------------

You might find the following interesting:

        Siegler, Robert S, "How Knowledge Influences Learning,"
AMERICAN SCIENTIST, v71, Nov-Dec 1983.

In this reference, Siegler addresses the questions of how
children  learn and what they know.  He points out that
the main criticism of intelligence tests (that they measure
'knowledge' and not 'aptitude') may miss the mark--that
knowledge and learning may be linked, in humans anyway, in
ways that traditional views have not considered.

                      -------------------------

In any case, should we not be addressing as a primary research
objective, how to make our 'expert systems' into better learners?

Brint Cooper
abc@brl.arpa

------------------------------

Date: 23 Nov 83 11:27:34-PST (Wed)
From: dambrosi @ Ucb-Vax
Subject: Re: Intelligence
Article-I.D.: ucbvax.373

Hume once said that when a discussion or argument seems to be
interminable and without discernible progress, it is worthwhile
to attempt to produce a concrete visualisation of the concept
being argued about. Often, he claimed, this will be IMPOSSIBLE
to do, and this will be evidence that the word being argued
about is a ringer, and the discussion pointless. In more
modern parlance, these concepts are definitionally empty
for most of us.
I submit the following definition as the best presently available:
Intelligence consists of perception of the external environment
(e.g. vision), knowledge representation, problem solving, learning,
interaction with the external environment (e.g. robotics),
and communication with other intelligent agents (e.g. natural
language understanding). (note the conjunctive connector)
If you can't guess where this comes from, check AAAI83
proceedings table of contents.
                                bruce d'ambrosio
                                dambrosi%ucbernie@berkeley

------------------------------

Date: Tuesday, 29 Nov 1983 11:43-PST
From: narain@rand-unix
Subject: Re: AI Challenge


AI is advanced programming.

We need to solve complex problems involving reasoning, and judgment. So
we develop appropriate computer techniques (mainly software)
for that. It is our responsibility to invent techniques that make development
of efficient intelligent computer programs easier, debuggable, extendable,
modifiable. For this purpose it is only useful to learn whatever we can from
traditional computer science and apply it to the AI effort.

Tom Dietterich said:

>> Your view of "knowledge representations" as being identical with data
>> structures reveals a fundamental misunderstanding of the knowledge vs.
>> algorithms point.  Most AI programs employ very simple data structures
>> (e.g., record structures, graphs, trees).  Why, I'll bet there's not a
>> single AI program that uses leftist-trees or binomial queues!  But, it
>> is the WAY that these data structures are employed that counts.

We at Rand have ROSS (Rule Oriented Simulation System) that has been employed
very successfully for developing two large scale simulations (one strategic
and one tactical). One implementation of ROSS uses leftist trees for
maintaining event queues. Since these queues are in the innermost loop
of ROSS's operation, it was only sensible to make them as efficient as
possible. We think we are doing AI.
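
For reference, a leftist tree is a mergeable min-heap in which merge, insert, and delete-min are all O(log n), which is why it suits an event queue.  A minimal sketch (illustrative only, not ROSS's actual code, with invented event timestamps):

```python
# Minimal leftist tree (leftist heap): a mergeable min-heap.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.rank = 1 + min(rank(left), rank(right))

def rank(node):
    # Rank = length of the rightmost path; rank(empty) = 0.
    return node.rank if node else 0

def merge(a, b):
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a                    # keep the smaller root on top
    a.right = merge(a.right, b)
    if rank(a.right) > rank(a.left):   # leftist invariant: rank(left) >= rank(right)
        a.left, a.right = a.right, a.left
    a.rank = 1 + rank(a.right)
    return a

def insert(heap, key):
    return merge(heap, Node(key))

def delete_min(heap):
    return heap.key, merge(heap.left, heap.right)

heap = None
for t in [5, 1, 4, 2, 3]:              # event timestamps arriving out of order
    heap = insert(heap, t)
order = []
while heap:
    t, heap = delete_min(heap)
    order.append(t)
print(order)  # [1, 2, 3, 4, 5] -- events come off in time order
```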

Sanjai Narain
Rand Corp.

------------------------------

Date: Tue, 29 Nov 83 11:31:54 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: defining AI, AI research methodology, jargon in AI (long msg)

This is in three flaming parts:   (I'll probably never get up the steam to
respond again,  so I'd better get it all out at once.)

Part I.  "Defining intelligence", "defining AI" and/or "responding to AI
challenges" considered harmful:  (enough!)

Recently, I've started avoiding/ignoring AIList since, for the most
part, it's been an endless discussion on "defining AI" (or, most
recently, defending AI).  If I spent my time trying to "define/defend"
AI or intelligence, I'd get nothing done.  Instead, I spend my time
trying to figure out how to get computers to achieve some task -- exhibit
some behavior -- which might be called intelligent or human-like.
If/whenever I'm partially successful, I try to keep track of what's
systematic or insightful.  Both failure points and partial success
points serve as guides for future directions.  I don't spend my time
trying to "define" intelligence by BS-ing about it.  The ENTIRE
enterprise of AI is the attempt to define intelligence.

Here's a positive suggestion for all you AIList-ers out there:

It'd be nice to see more discussion of SPECIFIC programs/cognitive
models:  their assumptions, their failures, ways to patch them, etc. --
along with contentful/critical/useful suggestions/reactions.

Personally, I find Prolog Digest much more worthwhile.  The discussions
are sometimes low level, but they almost always address specific issues,
with people often offering specific problems, code, algorithms, and
analyses of them.  I'm afraid AIList has been taken over by people who
spend so much time exchanging philosophical discussions that they've
chased away others who are very busy doing research and have a low BS
tolerance level.

Of course, if the BS is reduced, that means that the real AI world will
have to make up the slack.  But a less frequent digest with real content
would be a big improvement.  {This won't make me popular, but perhaps part
of the problem is that most of the contributors seem to be people who
are not actually doing AI, but who are just vaguely interested in it, so
their speculations are ill-informed and indulgent.  There is a use for
this kind of thing, but an AI digest should really be discussing
research issues.  This gets back to the original problem with this
digest -- i.e. that researchers are not using it to address specific
research issues which arise in their work.}

Anyway, here are some example task/domain topics that could be
addressed.  Each can be considered to be of the form:  "How could we get
a computer to do X":

          Model Dear Abby.
          Understand/engage in an argument.
          Read an editorial and summarize/answer questions about it.
          Build a daydreamer.
          Give legal advice.
          Write a science fiction short story.
               ...

{I'm an NLP/Cognitive modeling person -- that's why my list may look
bizarre to some people.}

You researchers in robotics/vision/etc.  could discuss, say, how to build
a robot that can:

          climb stairs
             ...
          recognize a moving object
             ...
          etc.

People who participate in this digest are urged to:  (1) select a
task/domain, (2) propose a SPECIFIC example which represents
PROTOTYPICAL problems in that task/domain, (3) explain (if needed) why
that specific example is prototypic of a class of problems, (4) propose
a (most likely partial) solution (with code, if at that stage), and (5)
solicit contentful, critical, useful, helpful reactions.

This is the way Prolog Digest is currently functioning, except at the
programming language level.  AIList could serve a useful purpose if it
were composed of ongoing research discussions about SPECIFIC, EXEMPLARY
problems, along with approaches, their limitations, etc.

If people don't think a particular problem is the right one, then they
could argue about THAT.  Either way, it would elevate the level of
discussion.  Most of my students tell me that they no longer read
AIList.  They're turned off by the constant attempts to "defend or
define AI".

Part II.  Reply to R-Johnson

Some of R-Johnson's criticisms of AI seem to stem from viewing
AI strictly as a TOOLS-oriented science.

{I prefer to refer to STRUCTURE-oriented work (ie content-free) as
TOOLS-oriented work and CONTENT-oriented work as DOMAIN or
PROCESS-oriented.  I'm referring to the distinction that was brought up
by Schank in "The Great Debate" with McCarthy at AAAI-83, Wash. DC.}

In general,  tools-oriented work seems more popular and accepted
than content/domain-oriented work.  I think this is because:

     1.  Tools are domain independent, so everyone can talk about them
     without having to know a specific domain -- kind of like bathroom
     humor being more universally communicable than topical-political
     humor.

     2.  Tools have nice properties:  they're general (see #1 above);
     they have weak semantics (e.g. 1st order logic, lambda-calculus)
     so they're clean and relatively easy to understand.

     3.  No one who works on a tool need be worried about being accused
     of "ad hocness".

     4.  Breakthroughs in tools-research happen rarely,  but when one
     does,  the people associated with the breakthrough become
     instantly famous because everyone can use their tool (e.g. Prolog).

In contrast, content or domain-oriented research and theories suffer
from the following ills:

     1.  They're "ad hoc" (i.e.  referring to THIS specific thing or
     other).

     2.  They have very complicated semantics,  poorly understood,
     hard to extend, fragile, etc. etc.

However,  many of the most interesting problems pop up in trying
to solve a specific problem which, if solved,  would yield insight
into intelligence.  Tools, for the most part, are neutral with respect
to content-oriented research questions.  What does Prolog or Lisp
have to say to me about building a "Dear Abby" natural language
understanding and personal advice-giving program?  Not much.
The semantics of lisp or prolog says little about the semantics of the
programs which researchers are trying to discover/write in Prolog or Lisp.
Tools are tools.  You take the best ones off the shelf you can find for
the task at hand.  I love tools and keep an eye out for
tools-developments with as much interest as anyone else.  But I don't
fool myself into thinking that the availability of a tool will solve my
research problems.

{Of course no theory is exclusively one or the other.  Also, there are
LEVELS of tools & content for each theory.  This levels aspect causes
great confusion.}

By and large, AIList discussions (when they get around to something
specific) center too much around TOOLS and not PROCESS MODELS (ie
SPECIFIC programs, predicates, rules, memory organizations, knowledge
constructs, etc.).

What distinguishes AI from compilers, OS, networking, or other aspects
of CS are the TASKS that AI-ers choose.  I want computers that can read
"War and Peace" -- what problems have to be solved, and in what order,
to achieve this goal?  Telling me "use logic" is like telling me
to "use lambda calculus" or "use production rules".

Part III.   Use and abuse of jargon in AI.

Someone recently commented in this digest on the abuse of jargon in AI.
Since I'm from the Yale school, and since Yale commonly gets accused of
this, I'm going to say a few words about jargon.

Different jargon for the same tools is BAD policy.  Different jargon
to distinguish tools from content is GOOD policy.  What if Schank
had talked about "logic"  instead of "Conceptual Dependencies"?
What a mistake that would have been!  Schank was trying to specify
how specific meanings (about human actions) combine during story
comprehension.  The fact that prolog could be used as a tool to
implement Schank's conceptual dependencies is neutral with respect
to what Schank was trying to do.

At IJCAI-83  I heard a paper (exercise for the reader to find it)
which went something like this:

     The work of Dyer (and others) has too many made-up constructs.
     There are affects, object primitives, goals, plans, scripts,
     settings, themes, roles, etc.  All this terminology is confusing
     and unnecessary.

     But if we look at every knowledge construct as a schema (frame,
     whatever term you want here), then we can describe the problem much
     more elegantly.  All we have to consider are the problems of:
     frame activation, frame deactivation, frame instantiation, frame
     updating, etc.

Here, clearly we have a tools/content distinction.  Wherever
possible I actually implemented everything using something like
frames-with-procedural-attachment (ie demons).  I did it so that I
wouldn't have to change my code all the time.  My real interest,
however, was at the CONTENT level.  Is a setting the same as an emotion?
Does the task:  "Recall the last 5 restaurants you were at" evoke the
same search strategies as "Recall the last 5 times you accomplished x",
or "the last 5 times you felt gratitude."?  Clearly, some classes of
frames are connected up to other classes of frames in different ways.
It would be nice if we could discover the relevant classes and it's
helpful to give them names (ie jargon).  For example, it turns out that
many (but not all) emotions can be represented in terms of abstract goal
situations.  Other emotions fall into a completely different class (e.g.
religious awe, admiration).  In my program "love" was NOT treated as
(at the content level) an affect.

When I was at Yale, at least once a year some tools-oriented person
would come through and give a talk of the form:  "I can
represent/implement your Scripts/Conceptual-Dependency/
Themes/MOPs/what-have-you using my tool X" (where X = ATNs, Horn
clauses, etc.).

I noticed that first-year students usually liked such talks, but the
advanced students found them boring and pointless.  Why?  Because if
you're content-oriented you're trying to answer a different set of
questions, and discussion of the form:  "I can do what you've already
published in the literature using Prolog" simply means "consider Prolog
as a nice tool" but says nothing at the content level, which is usually
where the advanced students are doing their research.

I guess I'm done.  That'll keep me for a year.

                                                  -- Michael Dyer

------------------------------

Date: Mon 28 Nov 83 08:59:57-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 11/29: John Seely Brown

                [Reprinted from the SU-SCORE bboard.]

Tues, Nov 29, 3:45 MJH refreshments; 4:15 Terman Aud (lecture)

A COMPUTATIONAL FRAMEWORK FOR A QUALITATIVE PHYSICS--
Giving computers "common-sense" knowledge about physical mechanisms

John Seely Brown
Cognitive Sciences
Xerox, Palo Alto Research Center

Humans appear to use a qualitative causal calculus in reasoning about
the behavior  of their physical environment.   Judging from the kinds
of  explanations humans give,  this calculus is  quite different from
the classical physics taught in classrooms.  This raises questions as
to  what this  (naive) physics  is like, how  it helps  one to reason
about the physical world and  how to construct a formal calculus that
captures this kind of  reasoning.  An analysis of this calculus along
with a system, ENVISION, based on it will be covered.

The goals  for the qualitative physics are i)  to be far simpler than
classical  physics and  yet  retain  all the  important  distinctions
(e.g., state,  oscillation, gain,  momentum), ii)  to produce  causal
accounts of  physical mechanisms,  and  iii) to  provide a  logic  for
common-sense, causal  reasoning  for the  next generation  of  expert
systems.

A new  framework for  examining causal  accounts has  been  suggested
based  on using  collections  of  locally interacting  processors  to
represent physical mechanisms.

------------------------------

End of AIList Digest
********************
 1-Dec-83 22:38:53-PST,15226;000000000001
Mail-From: LAWS created at  1-Dec-83 22:36:42
Date: Thu  1 Dec 1983 21:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #107
To: AIList@SRI-AI


AIList Digest             Friday, 2 Dec 1983      Volume 1 : Issue 107

Today's Topics:
  Programming Languages - Lisp Productivity,
  Alert - Psychology Today,
  Learning & Expert Systems,
  Intelligence - Feedback Model & Categorization,
  Scientific Method - Psychology,
  Puzzle - The Lady or the Tiger,
  Seminars - Commerce Representation & Learning Linguistic Categories
----------------------------------------------------------------------

Date: 27 Nov 83 16:57:39-PST (Sun)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: Re: lisp productivity question - (nf)
Article-I.D.: tekcad.145

        I don't have any documentation, but I heard once from an attendee
at a workshop on design automation that someone had reported a 5:1 productivity
improvement in LISP vs. C, PASCAL, etc. From personal experience I know this
to be true, also. I once wrote a game program in LISP in two days. I later
spent two weeks debugging the same game in a C version (I estimated another
factor of 4 for a FORTRAN version). The nice thing about LISP is not that
the amount of code written is less (although it is, usually by a factor of
2 to 3), but that its environment (even in the scrungy LISPs) is much easier
to debug and modify code in.

                                        From the truly menacing,
   /- -\                                but usually underestimated,
    <->                                 Frank Adrian
                                        (tektronix!tekcad!franka)

[A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems.  -- KIL]

------------------------------

Date: Mon, 28 Nov 83 11:03 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: Psychology Today

The December issue of Psychology Today (V 17, #12) has some more articles
that may be of interest to AI people. The issue is titled "USER FRIENDLY"
and talks about technological advances that have made machines easier to use.

The articles of interest are:

On Papert, Minsky, and John Anderson            page 26

An Article written by McCarthy                  page 46

An Interview with Alan Kay                      Page 50

(Why they call him the Grand Old Man is
 beyond me; Alan is only 43.)


                                        - steve

------------------------------

Date: Tue 29 Nov 83 18:36:01-EST
From: Albert Boulanger <ABOULANGER@BBNG.ARPA>
Subject: Learning Expert systems

Re: Brint Cooper's remark on non-learning expert systems being "dumb":

Yes, some people would agree with you. In fact, Dr. R.S. Michalski's group
at the U of Illinois is building an Expert System, ADVISE, that incorporates
learning capabilities.

Albert Boulanger
ABOULANGER@BBNG

------------------------------

Date: Wed, 30 Nov 83 09:07 PST
From: NNicoll.ES@PARC-MAXC.ARPA
Subject: "Intelligence"

I see Intelligence as the sophistication of the deep structure
mechanisms that generate both thought and behavior.  These structures
(per Albus), work as cross-coupled hierarchies of phase-locked loops,
generating feedback hypotheses about the stimulus at each level of the
hierarchy.  These feedback hypotheses are better at predicting and
matching the stimulus if the structure holds previous patterns that are
similar to the present stimulus.  Therefore, intelligence is a function
of both the amount of knowledge possible to bring to bear on pattern
matching a present problem (inference), and the number of levels in the
structure of the hierarchy the organism (be it mechanical or organic)
can bring to bear on breaking the stimulus/pattern down into its
component parts and generate feedback hypotheses to adjust the organism's
response at each level.

I feel any structure sufficiently complex to exhibit intelligence, be it
a bird-brained idiot whose height of reasoning is "find fish - eat
fish", or "Deep Thought" who can break down the structures and reason
about a whole world, should be considered intelligent, but with
different "amounts" of intelligence, and possibly about different
experiences.  I do not think there is any "threshold" above which an
organism can be considered intelligent and below which they are not.
This level would be too arbitrary a structure for anything except very
delimited areas.

So, let's get on with the pragmatic aspects of this work, creating better
slaves to do our scut work for us, our reasoning about single-mode
structures too complex for a human brain to assimilate, our tasks in
environments too dangerous for organic creatures, and our tasks too
repetitious for the safety of the human brain/body structure, and move
to a lower priority the re-creation of pseudo-human "intelligence".  I
think that would require a pseudo-human brain structure (combining both
"Emotion" and "Will") that would be interesting only in research on
humanity (create a test-bed wherein experiments that are morally
unacceptable when performed on organic humans could be entertained).

Nick Nicoll

------------------------------

Date: 29 Nov 83 20:47:33-PST (Tue)
From: decvax!ittvax!dcdwest!sdcsvax!sdcsla!west @ Ucb-Vax
Subject: Re: Intelligence and Categorization
Article-I.D.: sdcsla.461

        From:  AXLER.Upenn-1100@Rand-Relay
          (David M. Axler - MSCF Applications Mgr.)

               I think Tom Portegys' comment in 1:98 is very true.
          Knowing whether or not a thing is intelligent, has a soul,
          etc., is quite helpful in letting us categorize it.  And,
          without that categorization, we're unable to know how to
          understand it.  Two minor asides that might be relevant in
          this regard:

               1) There's a school of thought in the fields of
          linguistics, folklore, and anthropology, which is
          based on the notion (admittedly arguable) that the only way
          to truly understand a culture is to first record and
          understand its native categories, as these structure both
          its language and its thought, at many levels.  (This ties in
          to the Sapir-Whorf hypothesis that language structures
          culture, not the reverse...)  From what I've read in this
          area, there is definite validity in this approach.  So, if
          it's reasonable to try and understand a culture in terms of
          its categories (which may or may not be translatable into
          our own culture's categories, of course), then it's equally
          reasonable for us to need to categorize new things so that
          we can understand them within our existing framework.

Deciding whether a thing is or is not intelligent seems to be a hairier
problem than "simply" categorizing its behavior and other attributes.

As to point #1, trying to understand a culture by looking at how it
categorizes does not constitute a validation of the process of
categorization (particularly in scientific endeavours).   Restated: There
is no connection between the fact that anthropologists find that studying
a culture's categories is a very powerful tool for aiding understanding,
and the conclusion that we need to categorize new things to understand them.

I'm not saying that categorization is useless (far from it), but Sapir-Whorf's
work has no direct bearing on this subject (in my view).

What I am saying is that while deciding to treat something as "intelligent",
e.g., a computer chess program, may prove to be the most effective way of
dealing with it in "normal life", it doesn't do a thing for understanding
the thing.   If you choose to classify the chess program as intelligent,
what has that told you about the chess program?   If you classify it
as unintelligent...?   I think this reflects more upon the interaction
between you and the chess program than upon the structure of the chess
program.

                        -- Larry West   UC San Diego
                        -- ARPA:        west@NPRDC
                        -- UUCP:        ucbvax!sdcsvax!sdcsla!west
                        --      or      ucbvax:sdcsvax:sdcsla:west

------------------------------

Date: 28 Nov 83 18:53:46-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Rational Psych & Scientific Method
Article-I.D.: ncsu.2416

Well, I hope this is the last time ....

Again, I have been accused of ignorance; again the accusation is false.
It's fortunate only my words can make it into this medium.  I would
appreciate the termination of this discussion, but will not stand by
and be patronized without responding.  All sane and rational people,
hit the <del> and go on to the next news item please.

When I say psychologists do not do very good science I am talking about
the exact same thing you are talking about.  There is no escape. Those
"rigorous" experiments sometime succeed in establishing some "facts",
but they are sufficiently encumbered by lack of controls that one often
does not know what to make of them.  This is not to imply a criticism of
psychologists as intellectually inferior to chemists, but the field is
just not there yet.  Is Linguistics a science?  Is teaching a science?
Laws (and usually morals) prevent the experiments we need, to do REAL
controlled experiments; lack of understanding would probably prevent
immediate progress even in the absence of those laws.  It's a bit like
trying to make a "scientific" study of a silicon wafer with 1850's tools
and understanding of electronics.  A variety of interesting facts could
be established, but it is not clear that they would be very useful.  Tack
on some I/O systems and you could then perhaps allow the collection of
reams of timing and capability data and could try to correlate the results
and try to build theories -- that LOOKS like science.  But is it? In
my book, to be a science, there must be a process of convergence in which
the theories move ever closer to explaining reality, and the experiments
become ever more precise.  I don't see much convergence in experimental
psychology. I see more of a cyclic nature to the theories ....
----GaryFostel----
		  P.S. There are a few other sciences which do not deserve
		       the title, so don't feel singled out. Computer
		       Science for example.

------------------------------

Date: Tue, 29 Nov 83 11:15 EST
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: The Lady or the Tiger

                 [Reprinted from the Prolog Digest.]

Since it's getting near Christmas, here are a few puzzlers to
solve in Prolog. They're taken from Raymond Smullyan's delightful
little book of the above name. Sexist allusions must be forgiven.

There once was a king, who decided to try his prisoners by giving
them a logic puzzle. If they solved it they would get off, and
get a bride to boot; otherwise ...

The first day there were three trials. In all three, the king
explained, the prisoner had to open one of two rooms. Each room
contained either a lady or a tiger, but it could be that there
were tigers or ladies in both rooms.

On each room he hung a sign as follows:

                I                                    II
    In this room there is a lady        In one of these rooms there is
       and in the other room              a lady and in one of these
         there is a tiger                   rooms there is a tiger

"Is it true, what the signs say?" asked the prisoner.
"One of them is true," replied the king, "but the other one is false."

If you were the prisoner, which would you choose (assuming, of course,
that you preferred the lady to the tiger) ?

                      -------------------------

For the second and third trials, the king explained that either
both statements were true, or both were false. What is the
situation?

Signs for Trial 2:

                  I                                     II
       At least one of these rooms              A tiger is in the
            contains a tiger                        other room


Signs for Trial 3:

                  I                                     II
      Either a tiger is in this room             A lady is in the
      or a lady is in the other room                other room


Representing the problems is much more difficult than finding the
solutions.  The latter two test a sometimes ignored aspect of the
[Prolog] language.

Have fun !
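
[The first trial is small enough to check by brute force outside Prolog.
An illustrative enumeration in Python; the representation is mine, not
Smullyan's -- Ed.]

```python
from itertools import product

# Trial 1: each room holds a lady or a tiger, and exactly one of the
# two signs is true.
def trial1():
    solutions = []
    for r1, r2 in product(['lady', 'tiger'], repeat=2):
        sign1 = (r1 == 'lady' and r2 == 'tiger')  # sign on room I
        sign2 = (r1 != r2)                        # one lady, one tiger
        if sign1 != sign2:                        # exactly one sign true
            solutions.append((r1, r2))
    return solutions

print(trial1())  # → [('tiger', 'lady')]
```

[The unique model puts the tiger in room I and the lady in room II, so
the prisoner should open room II. -- Ed.]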

------------------------------

Date: 27 Nov 1983 20:42:46-EST
From: Mark.Fox at CMU-RI-ISL1
Subject: AI talk

                 [Reprinted from the CMU-AI bboard.]

TITLE:          Databases and the Logic of Business
SPEAKER:        Ronald M. Lee, IIASA Austria & LNEC Portugal
DATE:           Monday, Nov. 28, 1983
PLACE:          MS Auditorium, GSIA

ABSTRACT: Business firms differentiate themselves with special products,
services, etc.  Nevertheless, commercial activity requires certain
standardized concepts, e.g., a common temporal framework, currency of
exchange, concepts of ownership and contractual obligation.  A logical data
model, called CANDID, is proposed for modelling these standardized aspects
in axiomatic form.  The practical value is the transportability of this
knowledge across a wide variety of applications.

------------------------------

Date: 30 Nov 83 18:58:27 PST (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 12/1/83 

                Professor Roman Lopez de Montaras
                Politecnico Universidade Barcelona

      A Learning System for Linguistic Categorization of Soft
                             Observations

We describe a human-guided feature classification system. A person
teaches the denotation of subjective linguistic feature descriptors to
the system by reference to examples.  The resulting knowledge base of
the system is used in the classification phase for interpretation of
descriptions.

Interpersonal descriptions are communicated via semantic translations of
subjective descriptions.  The advantage of a subjective linguistic
description over more traditional arithmomorphic schemes is their high
descriptor-feature consistency.  This is due to the relative simplicity
of the underlying cognitive process.  The result is high feature
resolution for the overall cognitive perception and description
processes.

At present the system is still being used for categorization of "soft"
observations in psychological research, but applications in any
person-machine system are conceivable.

------------------------------

End of AIList Digest
********************
 2-Dec-83 16:31:26-PST,18493;000000000001
Mail-From: LAWS created at  2-Dec-83 16:28:44
Date: Fri  2 Dec 1983 16:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #108
To: AIList@SRI-AI


AIList Digest            Saturday, 3 Dec 1983     Volume 1 : Issue 108

Today's Topics:
  Editorial Policy,
  AI Jargon,
  AI - Challenge Responses,
  Expert Systems & Knowledge Representation & Learning
----------------------------------------------------------------------

Date: Fri 2 Dec 83 16:08:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Editorial Policy

It has been suggested that the volume on this list is too high and the
technical content is too low.  Two people have recently written to me
suggesting that the digest be converted to a magazine format with
perhaps a dozen edited departments that would constitute alternating
special issues.

I appreciate their offers to serve as editors, but have no desire to
change the AIList format.  The volume has been high, but that is
typical of new lists.  I encourage technical contributions, but I do
not wish to discourage general-interest discussions.  AIList provides
a forum for material not appropriate to journals and conferences --
"dumb" questions, requests for information, abstracts of work in
progress, opinions and half-baked ideas, etc.  I do not find these a
waste of time, and attempts to screen any class of "uninteresting"
messages will only deprive those who are interested in them.  A major
strength of AIList is that it helps us develop a common vocabulary for
those topics that have not yet reached the textbook stage.

If people would like to split off their own sublists, I will be glad
to help.  That might reduce the number of uninteresting messages
each reader is exposed to, although the total volume of material would
probably be higher.  Narrow lists do tend to die out as their boom and
bust cycles gradually lengthen, but AIList could serve as the channel
by which members could regroup and recruit new members.  The chief
disadvantage of separate lists is that we would lose valuable
cross-fertilization between disciplines.

For the present, I simply ask that members be considerate when
composing messages.  Be concise, preferably stating your main points
in list form for easy reference.  Remember that electronic messages
tend to seem pugnacious, so that even slight sarcasm may arouse
numerous rebuttals and criticisms.  It is unnecessary to marshal
massive support for every claim since you will have the opportunity to
reply to critics.  Also, please keep in mind that AIList (under my
moderatorship) is primarily concerned with AI and pattern recognition,
not psychology, metaphysics, philosophy of science, or any other topic
that has its own major following.  We welcome any material that
advances the progress of intelligent machines, but the hard-core
discussions from other disciplines should be directed elsewhere.

                                        -- Ken Laws

------------------------------

Date: Tue 29 Nov 83 21:09:12-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Dyer's flame

    In the life of this list a number of issues, among them intelligence,
parallelism and AI, defense of AI, rational psychology, and others have
been maligned as "pointless" or whatever. Without getting involved in a
debate on "philosophy" vs. "real research", a quick scan of these topics
shows them to be far from pointless. I regret that Dyer's students have
stopped reading this list; perhaps they should follow his advice of submitting
the right type of article to this list.

    As a side note, I am VERY interested in having people outside of mainstream
AI participate in this list; while one sometimes wades through muddled articles
of little value, this is more than repaid by the fresh viewpoints and
occasional gem that would otherwise never have been found.

    Ken Laws has done an excellent job grouping the articles by interest and
topic; uninterested readers can then skip reading an entire volume, if the
theme is uninteresting. A greater number of articles submitted can only
improve this process; the burden is on those unsatisfied with the content of
this board to submit them. I would welcome submissions of the kind suggested
by Dr. Dyer, and hope that others will follow his advice and try to lead the
board to whatever avenue they think is the most interesting. There's room
here for all of us...

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Tue 29 Nov 83 22:24:14-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Tools

I agree with Michael Dyer's comments on the lack of substantive
material in this list and on the importance of dealing with
new "real" tasks rather than using old solutions of old problems
to show off one's latest tool. However, I feel like adding two
comments:

1. Some people (me included) have a limited supply of "writing energy"
to write serious technical stuff: papers, proposals and the like.
Raving about generalities, however, consumes much less of that energy
per line than the serious stuff. The people who are busily writing
substantive papers have no energy left to summarize them on the net.

2. Very special tools, in particular fortunate situations
("epiphanies"?!) can bring a new and better level of understanding of a
problem, just by virtue of what can be said with the new tool, and
how. Going the other direction, we all know that we need to change our
tools to suit our problems. The paradigmatic relation between subject
and tool is for me the one between classical physics and mathematical
analysis, where tool and subject are intimately connected but yet
distinct. Nothing of the kind has yet happened in AI (which shouldn't
surprise us, seeing how long it took to develop that other
relationship...).

Note: Knowing of my involvement with Prolog/logic programming, some
reader of this might be tempted to think "Ahah! what he is really
driving at is that logic/Horn clauses/Prolog [choose one] is that kind
of tool for AI. Let me nip that presumption in the bud; these tool
addicts are dangerous!" Gentle reader, save your flame! Only time will
show whether anything of the kind is the case, and my private view on
the subject is sufficiently complicated (confused?) that if I could
disentangle it and write about it clearly I would have a paper rather
than a net message...

Fernando Pereira

------------------------------

Date: Wed 30 Nov 83 11:58:56-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: jargon

I understand Dyer's comments on what he calls the tool/content distinction.
But it seems to me that the content distinctions he rightly thinks are
important can often be expressed in terms of tools, and that it would be
clearer to do so.  He talked about handling one's last trip to the restaurant
differently from the last time one is in love.  I agree that this is an
important distinction to make.  I would like to see the difference expressed
in "tools", e.g., "when handling a restaurant trip (or some similar class of
events) our system does a chronological search down its list of events, but
when looking for love, it does a best first search on its list of personal
relationships."  This is clearer and communicates more than saying the system
has a "love-MOP" and a "restaurant-script".  This is only a made-up example
-- I am not saying Mr. Dyer used the above words or that he does not explain
things well.  I am just trying to construct a non-personal example of the
kind of thing to which I object, but that occurs often in the literature.

------------------------------

Date: Wed, 30 Nov 83 13:47 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: McCarthy and 'mental' states

In the December Psychology Today John McCarthy has a short article that
raises a fairly contentious point.

In his article he talks about how it is not necessarily a bad thing that
people attribute "human" or what he calls 'mental' attributes to complex
systems. Thus when someone anthropomorphises the actions of his/her
car, boat, or terminal, one is engaging in a legitimate form of description
of a complex process.

Indeed he argues further that while currently most computer programs
can still be understood by their underlying mechanistic properties,
eventually complex expert systems will only be capable of being described
by attributing 'mental' states to them.

                                 ----

I think this is the proliferation of jargon and verbiage that
Ralph Johnson noted is associated with
a large segment of AI work. What has happened is not a discovery or
emulation of cognitive processes, but a break-down of certain weak
programmers' abilities to describe the mechanical characteristics of
their programs. They then resort to arcane languages and to attributing
'mental' characteristics to what are basically fuzzy algorithms that
have been applied to poorly formalized or poorly characterized problems.
Once the problems are better understood and are given a more precise
formal characterization, one no longer needs "AI" techniques.

                                        - Steven Gutfreund

------------------------------

Date: 28 Nov 83 23:04:58-PST (Mon)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Re: Clarifying my 'AI Challange' - (nf)
Article-I.D.: uiucdcs.4190

re: The Great Promises of AI

Beware the promises of used car salesmen.  The press has stories to
sell, and so do the more extravagant people within AI.  Remember that
many of these people had to work hard to convince grantmakers that AI
was worth their money, back in the days before practical applications
of expert systems began to pay off.

It is important to distinguish the promises of AI from the great
fantasies that have been speculated by the media (and some AI
researchers) in a fit of science fiction.  AI applications will
certainly be diverse and widespread (thanks no less to the VLSI
people).  However, I hope that none of us really believes that machines
will possess human general intelligence any time soon.  We bandy about
such stuff hoping that when ideas fly, at least some of them will be
good ones.  The reality is that nobody sees a clear and brightly lit
path from here to super-intelligent robots.  Rather we see hundreds of
problems to be solved.  Each solution should bring our knowledge and
the capabilities of our programs incrementally forward.  But let's not
kid ourselves about the complexity of the problems.  As it has already
been pointed out, AI is tackling the hard problems -- the ones for
which nobody knows any algorithms.

------------------------------

Date: Wed, 30 Nov 83 10:29 PST
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: AI Challenge

  Tom Dietterich:
  Your view of "knowledge representations" as being identical with data
  structures reveals a fundamental misunderstanding of the knowledge vs.
  algorithms point. . .Why, I'll bet there's not a single AI program that
  uses leftist-trees or binomial queues!

  Sanjai Narain:
  We at Rand have ROSS. . .One implementation of ROSS uses leftist trees for
  maintaining event queues. Since these queues are in the innermost loop
  of ROSS's operation, it was only sensible to make them as efficient as
  possible. We think we are doing AI.

Sanjai, you take the letter but not the spirit of Tom's reflection. I
don't think any AI researcher would object to improving the efficiency
of her program, or using traditional computer science knowledge to help.
But - look at your own description of ROSS development! Clearly you
first conceptualized ROSS ("queues are the innermost loop") and THEN
worried about efficiency in implementing your conceptualization ("it was
only sensible to make them as efficient as possible"). Traditional
computer science can shed much light on implementation issues, but has
in practice been of little direct help in the conceptualization phase
(except occasionally by analogy and generalization). All branches of
computer science share basic interests such as how to represent and use
knowledge, but AI differs in the GRAIN SIZE of the knowledge it
considers.  It would be very desirable to have a unified theory of
computer science that provides ideas and tools along the continuum of
knowledge grain size; but we are not quite there, yet. Until that time,
perceiving the different branches of computer science as contributing
useful knowledge to different levels of implementation (e.g. knowledge
level, data level, register transfer level, hardware level) is probably
the best integration our short term memories can handle.

Chris Tong

------------------------------

Date: 28 Nov 83 22:25:35-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: RJ vs AI: Science vs Engineering? - (nf)
Article-I.D.: uiucdcs.4187

In response to Johnson vs AI, and Tom Dietterich's defense:

The emergence of the knowledge-based perspective is only the beginning of
what AI has achieved and is working on. Obvious corollaries: knowledge
acquisition and extraction, representation, inference engines.

Some rather impressive results have been obtained here. One with which I
am most familiar is work being done at Edinburgh by the Machine Intelligence
Research Unit on knowledge extraction via induction from user-supplied
examples (the induction program is commercially available). A paper by
Shapiro (Alen) & Niblett in Computer Chess 3 describes the beginnings of the
work at MIRU. Shapiro has only this month finished his PhD, which effectively
demonstrates that human experts, with the aid of such induction programs,
can produce knowledge bases that surpass the capabilities of any expert
as regards their completeness and consistency. Shapiro synthesized a
totally correct knowledge base for part of the King-and-Pawn against
King-and-Rook chess endgame, and even that relatively small endgame
was so complex that, though it was treated in the chess literature, the
descriptions provided by human experts consisted largely of gaps. Impressively,
3 chess novices managed (again with the induction program) to achieve 99%
correctness in this normally difficult problem.
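The induction described above belongs to the ID3 family of algorithms that grew out of the Edinburgh work.  The following Python fragment is a minimal sketch of that family only; the chess attributes and class labels are invented for illustration and are not Shapiro's actual knowledge base:

```python
# Minimal decision-tree induction in the ID3 style: pick the attribute
# with the highest information gain, split, and recurse.  The endgame
# attributes below are hypothetical stand-ins, not real chess data.
import math
from collections import Counter

def entropy(examples):
    counts = Counter(cls for _, cls in examples)
    total = len(examples)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def induce(examples, attributes):
    classes = {cls for _, cls in examples}
    if len(classes) == 1:
        return classes.pop()                       # leaf: a single class
    if not attributes:                             # no tests left: majority
        return Counter(c for _, c in examples).most_common(1)[0][0]
    def gain(a):
        parts = {}
        for attrs, cls in examples:
            parts.setdefault(attrs[a], []).append((attrs, cls))
        rem = sum(len(p) / len(examples) * entropy(p) for p in parts.values())
        return entropy(examples) - rem
    best = max(attributes, key=gain)
    branches = {}
    for value in {attrs[best] for attrs, _ in examples}:
        subset = [(a, c) for a, c in examples if a[best] == value]
        branches[value] = induce(subset, [a for a in attributes if a != best])
    return (best, branches)                        # internal node

def classify(tree, attrs):
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[attrs[attr]]
    return tree

examples = [
    ({"kings_close": "yes", "rook_pinned": "no"},  "win"),
    ({"kings_close": "yes", "rook_pinned": "yes"}, "draw"),
    ({"kings_close": "no",  "rook_pinned": "no"},  "draw"),
    ({"kings_close": "no",  "rook_pinned": "yes"}, "draw"),
]
tree = induce(examples, ["kings_close", "rook_pinned"])
print(classify(tree, {"kings_close": "yes", "rook_pinned": "no"}))  # win
```

The point of the exercise is the one made above: the induced tree is an explicit, human-readable rule, which is what makes novice-supplied examples usable.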

        The issue: even novices are better at articulating knowledge
        by means of examples than experts are at articulating the actual
        rules involved, *provided* that the induction program can represent
        its induced rules in a form intelligible to humans.

The long-term goal and motivation for this work is the humanization of
technology, namely the construction of systems that not only possess expert
competence, but are capable of communicating their reasoning to humans.
And we had better get this right, lest we get stuck with machines that run our
nuclear plants in ways that are perhaps super-smart but incomprehensible ...
until a crisis happens, when suddenly the humans need to understand what the
machine has been doing until now.

The problem: lack of understanding of human cognitive psychology. More
specifically, how are human concepts (even for these relatively easy
classification tasks) organized? What are the boundaries of 'intelligibility'?
Though we are able to build systems that function, in some ways, like a human
expert, we do not know much about what distinguishes brain-computable processes
from general algorithms.

But we are learning. In fact, I am tempted to define this as one criterion
distinguishing knowledge-based AI from other computing: the absolute necessity
of having our programs explain their own processing. This is close to demanding
that they also process in brain-compatible terms. In any case we will need to
know what the limits of our brain-machine are, and in what forms knowledge
is most easily apprehensible to it. This brings our end of AI very close to
cognitive psychology, and threatens to turn knowledge representation into a
hard science -- not just

        What does a system need, to be able to X?

but     How does a human brain produce behavior/inference X, and how do
        we implement that so as to preserve maximal man-machine compatibility?

Hence the significance of the work by Shapiro, mentioned above: the
intelligibility of his representations is crucial to the success of his
knowledge-acquisition method, and the whole approach provides some clues on
how a humane knowledge representation might be scientifically determined.

A computer is merely a necessary weapon in this research. If AI has made little
obvious progress it may be because we are too busy trying to produce useful
systems before we know how they should work. In my opinion there is too little
hard science in AI, but that's understandable given its roots in an engineering
discipline (the applications of computers). Artificial intelligence is perhaps
the only "application" of computers in which hard science (discovering how to
describe the world) is possible.

We might do a favor both to ourselves and to psychology if knowledge-based AI
adopted this idea. Of course, that would cut down drastically on the number of
papers published, because we would have some very hard criteria about what
constituted a tangible contribution.  Even working programs would not be
inherently interesting, no matter what they achieved or how they achieved it,
unless they contributed to our understanding of knowledge, its organization
and its interpretation. Conversely, working programs would be necessary only
to demonstrate the adequacy of the idea being argued, and it would be possible
to make very solid contributions without a program (as opposed to the flood of
"we are about to write this program" papers in AI).

So what are we: science or engineering? If both, let's at least recognize the
distinction as being valuable, and let's know what yet another expert system
proves beyond its mere existence.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

End of AIList Digest
********************
 4-Dec-83 23:06:49-PST,13008;000000000001
Mail-From: LAWS created at  4-Dec-83 23:06:01
Date: Sun  4 Dec 1983 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #109
To: AIList@SRI-AI


AIList Digest             Monday, 5 Dec 1983      Volume 1 : Issue 109

Today's Topics:
  Expert Systems & VLSI - Request for Material,
  Programming Languages - Productivity,
  Editorial Policy - Anonymous Messages,
  Bindings - Dr. William A. Woods,
  Intelligence,
  Looping Problem,
  Pattern Recognition - Block Modeling,
  Seminars - Programs as Predicates & Explainable Expert System
----------------------------------------------------------------------

Date: Sun, 4 Dec 83 17:59:53 PST
From: Tulin Mangir <tulin@UCLA-CS>
Subject: Request for Material

      I am preparing a tutorial and a current bibliography, for IEEE,
of the work in the area of expert system applications to CAD and computer aided
testing as well as computer aided processing. Specific emphasis is
on LSI/VLSI design, testing and processing. I would like this
material to be as complete and as current as we can all make. So, if you
have any material in these areas that you would like me to include
in the notes, ideas about representation of structure, knowledge,
behaviour of digital circuits, etc., references you know of,
please send me a msg. Thanks.

Tulin Mangir <cs.tulin@UCLA-cs>
(213) 825-2692
      825-4943 (secretary)

------------------------------

Date: 29 Nov 83 22:25:19-PST (Tue)
From: sri-unix!decvax!duke!mcnc!marcel@uiucdcs.UUCP (marcel )@CCA
Subject: Re: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4197

And now a plug from the logic programming people: try prolog for easy
debugging. Though it may take a while to get used to its modus operandi,
it has one advantage that is shared by no other language I know of:
rule-based computing with a clean formalism. Not to mention the ease
of implementing concepts such as "for all X satisfying P(X) do ...".
The end of cumbersome array traversals and difficult boolean conditions!
Well, almost. Not to mention free pattern matching. And I wager that
the programs will be even shorter in Prolog, primarily because of these
considerations. I have written 100-line Prolog programs which were
previously coded as Pascal programs of 2000 lines.
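For readers without a Prolog system at hand, the "for all X satisfying P(X) do ..." idiom can be approximated in an ordinary procedural language; the helper below is my own illustrative analogue, not anything from the programs mentioned above:

```python
# A rough procedural analogue of Prolog's "for all X satisfying P(X)
# do Q(X)" idiom.  In Prolog this is built into the language; here it
# must be spelled out once as a helper (names are hypothetical).
def forall(candidates, p, q):
    """Apply action q to every X in candidates for which p(X) holds."""
    for x in candidates:
        if p(x):
            q(x)

doubled = []
forall(range(10), lambda x: x % 2 == 0, lambda x: doubled.append(2 * x))
print(doubled)  # [0, 4, 8, 12, 16]
```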

Sorry, I just couldn't resist the chance to be obnoxious.

------------------------------

Date: Fri, 2 Dec 83 09:47 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Lisp "productivity"

"A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems."

A fair test of what?  A fair test of which language yields the greatest
productivity when applied to the particular mix of test problems, I
would think.  Clearly (deepfelt theological convictions to the contrary)
there is NO MOST-PRODUCTIVE LANGUAGE.  It depends on the problem set; I
like structured languages so I do my scientific programming in Ratfor,
and when I had to do it in Pascal it was awful, but for a different type
of problem Pascal would be just fine.

Mark

------------------------------

Date: 30 Nov 83 22:49:51-PST (Wed)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Lisp Productivity & Anonymous Messages

  Article-I.D.: uiucdcs.4245

  The most incredible programming environment I have worked with to date is
  that of InterLisp.  The graphics-based trace and break packages on Xerox's
  InterLisp-D (not to mention the Lisp editor, file package, and the
  programmer's assistant) are, to say the least, addictive.  Ease of debugging
  has been combined with power to yield an environment in which program
  development/debugging is easy, fast and productive.  I think other languages
  have a long way to go before someone develops comparable environments for
  them.  Of course, part of this is due to the language (i.e., Lisp) itself,
  since programs written in Lisp tend to be easy to conceptualize and write,
  short, and readable.

[I will pass this message along to the Arpanet AIList readers,
but am bothered by its anonymous authorship.  This is hardly an
incriminating message, and I see no reason for the author to hide.
I do not currently reject anonymous messages out of hand, but I
will certainly screen them strictly.  -- KIL]

------------------------------

Date: Thu 1 Dec 83 07:37:04-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Press Release RE: Dr. William A. Woods

                [Reprinted from the SU-SCORE bboard.]

As of September 16, Dr. Woods has been Chief Scientist directing all research
in AI and related technologies for Applied Expert Systems, Inc., Five Cambridge
Center, Cambridge, Mass 02142  (617)492-7322  net address Woods@BBND (same as
before)
HL

------------------------------

Date: Fri, 2 Dec 83 09:57:14 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: a new definition of intelligence

Your intelligence is directly proportional to the time it takes
you to bounce back after you're replaced by an <intelligent> computer.

As I'm not an economist, I won't argue on how intelligent we are...
Put in another way, is an expert that builds a machine that substitutes
him/er intelligent? If s/he is not, is the machine?

        Adolfo
              ///

------------------------------

Date: 1 Dec 83 20:37:31-PST (Thu)
From: decvax!bbncca!jsol @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: bbncca.365

Can a method be formulated for deciding whether or not you are on the right
track?  Yes.  It's called interaction.  Ask someone you feel you can trust
whether or not you are getting anywhere, and to offer any advice to help you
get where you want to go.

Students do it all the time; they come to their teachers and ask them to
help them.  Looping programs could decide that they have looped for as long
as they care to and then reality-check themselves.  An algorithm to do this
is available if anyone wants it (read that to mean I will produce one).
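One cheap form of the suggested reality check (my own sketch, not JSol's promised algorithm) is to give the computation an iteration budget and hand control back to an outside party when it is exhausted:

```python
# Run a step function until a termination test succeeds or an
# iteration budget runs out -- at which point the program "asks for
# help" instead of looping forever.  All names here are hypothetical.
def run_with_budget(step, state, done, budget=1000):
    for _ in range(budget):
        if done(state):
            return ("finished", state)
        state = step(state)
    return ("asked_for_help", state)  # looped long enough; check in

# A computation that would never terminate on its own:
print(run_with_budget(lambda s: s, 0, lambda s: False))
```

This does not, of course, solve the halting problem; it merely replaces "halts" with "halts within my patience", which is the point being made above.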
--
[--JSol--]

JSol@Usc-Eclc/JSol@Bbncca (Arpa)
JSol@Usc-Eclb/JSol@Bnl (Milnet)
{decvax, wjh12, linus}!bbncca!jsol

------------------------------

From: Bibbero.PMSDMKT
Reply-to: Bibbero.PMSDMKT
Subject: Big Brother and Block Modeling, Warning

               [Reprinted from the Human-Nets Digest.]

  [This application of pattern recognition seems to warrant mention,
  but comments on the desirability of such analysis should be directed
  to Human-Nets@RUTGERS. -- KIL]

The New York Times (Nov 20, Sunday Business Section) carries a warning
from two Yale professors against a new management technique that can
be misused to snoop on personnel through sophisticated mathematical
analysis of communications, including computer network usage.
Professors Scott Boorman, a Yale sociologist, and Paul Levitt,
research mathematician at Yale and Harvard (economics) who authored
the article also invented the technique some years ago.  Briefly, it
consists of computer-intensive analysis of personnel communications to
divide them into groups or "blocks" depending on whom they communicate
with, whom they copy on messages, whom they phone, and whose calls they
don't return.  Blocks of people so identified can be classified as
dissidents, potential traitors or "Young Turks" about to split off
their own company, company loyalists, promotion candidates and so
forth.  "Guilt by association" is built into the system since members
of the same block may not even know each other but merely copy the
same person on memos.
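The "guilt by association" mechanism can be seen in a toy form below.  This is my own drastic simplification of blockmodeling (the Boorman-Levitt technique clusters approximately similar communication patterns; here two people land in the same block only when their contact sets are identical), with invented names throughout:

```python
# Toy blocking: group people by who they send messages to.  Members
# of a block need not know each other -- they merely share a pattern.
from collections import defaultdict

def blocks(contacts):
    """contacts: person -> set of people they send messages to."""
    groups = defaultdict(list)
    for person, outbox in contacts.items():
        groups[frozenset(outbox)].append(person)
    return sorted(sorted(g) for g in groups.values())

contacts = {
    "ann":  {"boss", "carl"},
    "bob":  {"boss", "carl"},   # same pattern as ann -> same block
    "carl": {"boss"},
}
print(blocks(contacts))  # [['ann', 'bob'], ['carl']]
```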

The existence of an informal organization as a powerful directing
force in corporations, over and above the formal organization chart,
has been recognized for a long time.  The block analysis method
permits an "x-ray" penetration of these informal organizations
through use of computer on-line analysis which may act, per the
authors, as "judge and jury."  The increasing usage of electronic
mail, voice storage and forward systems, local networks and the like
makes clandestine automation of this kind of snooping simple, powerful,
and almost inevitable.  The authors cite as evidence of misuse the high
degree of interest in the method by iron curtain government agencies.
An early success (late 60's) was also demonstrated in a Catholic
monastery where it averted organizational collapse by identifying
members as loyalists, "Young Turks," and outcasts.  Currently,
interest is high in U.S. corporations, particularly in the internal
audit departments seeking to identify dissidents.

As the authors warn, this revolution in computers and information
systems brings us closer to George Orwell's state of Oceania.

------------------------------

Date: 1 Dec 1983 1629-EST
From: ELIZA at MIT-XX
Subject: Seminar Announcement

                 [Reprinted from the MIT-AI bboard.]


Date:  Wednesday, December 7th, 1983

Time:  Refreshments 3:30 P.M.
       Seminar      3:45 P.M.

Place: NE43-512A (545 Technology Square, Cambridge)


                    PROGRAMS ARE PREDICATES
                          C. A. R. Hoare
                        Oxford University

    A program is identified with the strongest predicate
    which describes every observation that might be made
    of a mechanism which executes the program.  A programming
    language is a set of programs expressed in a limited
    notation, which ensures that they are implementable
    with adequate efficiency, and that they enjoy desirable
    algebraic properties.  A specification S is a predicate
    expressed in arbitrary mathematical notation.  A program
    P meets this specification if

                            P ==> S .

    Thus a calculus for the derivation of correct programs
    is an immediate corollary of the definition of the
    language.

    These theses are illustrated in the design of two simple
    programming languages, one for sequential programming and
    the other for communicating sequential processes.

Host:  Professor John V. Guttag

------------------------------

Date: 12/02/83 09:17:19
From: ROSIE at MIT-ML
Subject: Expert Systems Seminar

                             [Forwarded by SASW@MIT-MC.]

                          DATE:    Thursday, December 8, 1983
                          TIME:    2.15 p.m.  Refreshments
                                   2.30 p.m.  Lecture
                          PLACE:   NE43-AI Playroom


                          Explainable Expert Systems

                                Bill Swartout
                      USC/Information Sciences Institute


Traditional methods for explaining programs provide explanations by converting
the code of the program to English.  While such methods can sometimes
adequately explain program behavior, they cannot justify it.  That is, such
systems cannot tell why what the system is doing is reasonable.  The problem
is that the knowledge required to provide these justifications was used to
produce the program but is itself not recorded as part of the code and hence
is unavailable.  This talk will first describe the XPLAIN system, a previous
research effort aimed at improving the explanatory capabilities of expert
systems.  We will then outline the goals and research directions for the
Explainable Expert Systems project, a new research effort just starting up at
ISI.

The XPLAIN system uses an automatic programmer to generate a consulting
program by refinement from abstract goals.  The automatic programmer uses two
sources of knowledge: a domain model, representing descriptive facts about the
application domain, and a set of domain principles, representing
problem-solving knowledge, to drive the refinement process forward.  As XPLAIN
creates an expert system, it records the decisions it makes in a refinement
structure.  This structure is then used to provide explanations and
justifications of the expert system.
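The refinement-with-recording idea can be sketched in a few lines.  This is my own minimal reconstruction of the mechanism described above, not XPLAIN's code, and the goal names are hypothetical:

```python
# Refine abstract goals into concrete actions, recording each
# refinement decision so it can later be replayed as a justification.
# The goal names below are invented for illustration.
refinements = {
    "treat_toxicity":    ["check_serum_level", "adjust_dose"],
    "check_serum_level": ["order_lab_test"],
}

def refine(goal, trace):
    steps = refinements.get(goal)
    if steps is None:
        return [goal]                    # already a concrete action
    trace.append((goal, steps))          # record the decision
    plan = []
    for s in steps:
        plan.extend(refine(s, trace))
    return plan

trace = []
plan = refine("treat_toxicity", trace)
print(plan)  # ['order_lab_test', 'adjust_dose']
for goal, steps in trace:
    print(f"{goal} was refined into {steps}")
```

The trace, not the final plan, is what makes justification possible: it preserves the "why" that an ordinary compiled program discards.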

Our current research focuses on three areas.  First, we want to extend the
XPLAIN framework to represent additional kinds of knowledge such as control
knowledge for efficient execution.  Second, we want to investigate the
compilation process that moves from abstract to specific knowledge.  While it
does seem that human experts compile their knowledge, they do not always use
the resulting specific methods.  This may be because the specific methods
often contain compiled-in assumptions which are usually (but not always)
correct.  Third, we intend to use the richer framework provided by XPLAIN for
enhanced knowledge acquisition.

HOST:  Professor Peter Szolovits

------------------------------

End of AIList Digest
********************
 6-Dec-83 20:41:40-PST,20774;000000000001
Mail-From: LAWS created at  6-Dec-83 20:38:18
Date: Tue  6 Dec 1983 20:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #110
To: AIList@SRI-AI


AIList Digest           Wednesday, 7 Dec 1983     Volume 1 : Issue 110

Today's Topics:
  AI and Manufacturing - Request,
  Bindings - HPP,
  Programming Languages - Environments & Productivity,
  Vision - Cultural Influences on Perception,
  AI Jargon - Mental States of Machines,
  AI Challange & Expert Systems,
  Seminar - Universal Subgoaling
----------------------------------------------------------------------

Date: 5 Dec 83 15:14:26 EST  (Mon)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: AI and Automated Manufacturing

I and some colleagues at University of Maryland are doing a literature
search on the use of AI techniques in Automated Manufacturing.
The results of the literature search will comprise a report to be
sent to the National Bureau of Standards as part of a research
contract.  We'd appreciate any relevant information any of you may
have--especially copies of papers or technical reports.  In
return, I can send you (on request) copies of some papers I have
published on that subject, as well as a copy of the literature
search when it is completed.  My mailing address is

                Dana S. Nau
                Computer Science Dept.
                University of Maryland
                College Park, MD 20742

------------------------------

Date: Mon 5 Dec 83 08:27:28-PST
From: HPP Secretary <HPP-SECRETARY@SUMEX-AIM.ARPA>
Subject: New Address for HPP

  [Reprinted from the SU-SCORE bboard.]

The HPP has moved.  Our new address is:

    Heuristic Programming Project
    Computer Science Department
    Stanford University
    701 Welch Road, Bldg. C
    Palo Alto, CA 94304

------------------------------

Date: Mon, 5 Dec 83 09:43:51 PST
From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...

What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments?  Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples of the form:

        I was trying to do or solve X
        Here is how my environment helped me OR
        This is what I need and don't yet have

It would also be nice to see some issues of AIList dedicated to presenting
1 or 2 paragraph abstracts of current work being pursued by readers and
contributors to this list.  How about it Ken?

        [Sounds good to me.  It would be interesting to know
        whether progress in AI is currently held back by conceptual
        problems or just by the programming effort of building
        large and user-friendly systems.  -- KIL]

Seth Goldman

------------------------------

Date: Monday, 5 December 1983 13:47:13 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: marcel on "lisp productivity question"

        I just thought I should mention that production system languages
share all the desirable features of Prolog mentioned in the previous
message, particularly being "rule-based computing with a clean formalism".
The main differences with the OPS family of languages is that OPS uses
primarily forward inference, instead of backwards inference, and a slightly
different matching mechanism.  Preferring one over the other depends, I
suspect, on whether you think in terms of proofs or derivations.

------------------------------

Date: Mon, 5 Dec 83 10:23:17 pst
From: evans@Nosc (Evan C. Evans)
Subject: Vision & Such

Ken Laws in AIList Digest 1:99 states:  an  adequate  answer [to
the question of why computers can't see yet] requires a guess
at how it is that the human vision system can work in all cases.
I cannot answer Ken's question, but perhaps I  can provide some
useful input.

        language shapes culture    (Sapir-Whorf hypothesis)
        culture  shapes vision     (see following)
        vision   shapes language   (a priori)

The influence of culture on perception (vision) takes many forms.
A statistical examination (unpublished) of the British newspaper
game "Where's the ball?" is worth consideration.  This game has
been appearing for some time in British, Australian, New Zealand,
& Fijian papers.  So far as I know, it has not yet made its
appearance in U.S. papers.  The game is played thus:
        A photograph of some common sport involving a ball is
published with the ball erased from the picture & the question,
where's the ball?  Various members of the readership send in their
guesses, & the one closest to the ball's actual position in the
unmodified photo wins.  Some time back the responses to several
rounds of this game were subjected to statistical analysis.  This
analysis showed that there were statistically valid differences
associated with the cultural background of the participants.  This
finding was particularly striking in Fiji, with a resident
population comprising several very different cultural groups.
Ball placement by the different groups tended to cluster at
significantly different locations in the picture, even for a game
like soccer that was well known & played by all.  It is
unfortunate that this work (not mine) has not been published.  It
does suggest two things: a.) a cultural influence on vision &
perception, & b.) a powerful means of conducting experiments to
learn more about this influence.  For instance, this same research
was elaborated into various TV displays designed to discover where
children of various age groups placed an unseen object to which
an arrow pointed.  The children responded enthusiastically to
this new TV game, giving their answers by means of a light pen.
Yet statistically significant amounts of data were collected
efficiently & painlessly.
        I've constructed the loop above to suggest that none of
the three -- vision, language, & culture -- should be studied out
of context.

E. C. Evans III

------------------------------

Date: Sat 3 Dec 83 00:42:50-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

Steven Gutfreund's criticism of John McCarthy is unjustified.  I
haven't read the article in "Psychology Today", but I am familiar with
the notion put forward by JMC and condemned by SG.  The question can
be put in simple terms: is it useful to attribute mental states and
attitudes to machines? The answer is that our terms for mental states
and attitudes ("believe", "desire", "expect", etc...) represent a
classification of possible relationships between world states and the
internal (inaccessible) states of designated individuals. Now, for
simple individuals and worlds, for example small finite automata, it
is possible to classify the world-individual relationships with simple
and tractable predicates. For more complicated systems, however, the
language of mental states is likely to become essential, because the
classifications it provides may well be computationally tractable in
ways that other classifications are not. Remember that individuals of
any "intelligence" must have states that encode classifications of
their own states and those of other individuals. Computational
representations of the language of mental states seem to be the only
means we have to construct machines with such rich sets of states that
can operate in "rational" ways with respect to the world and other
individuals.
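
Pereira's point can be made concrete with a toy of my own (all names below are invented for illustration; this is not from JMC's or FP's work): even for a tiny finite automaton, a "mental" predicate is just a tractable classification of the machine's internal states relative to the world.

```python
# Toy illustration (hypothetical names): a thermostat as a finite automaton.
# "Mental" vocabulary is a classification of the relation between world
# states and the machine's otherwise opaque internal states.

TRANSITIONS = {
    ("idle", "cold"): "heating",
    ("idle", "warm"): "idle",
    ("heating", "cold"): "heating",
    ("heating", "warm"): "idle",
}

def step(internal_state, world_state):
    """Advance the automaton one step on an observation of the world."""
    return TRANSITIONS[(internal_state, world_state)]

def believes_room_is_cold(internal_state):
    """A mental-state predicate: a compact summary of internal state.

    For this tiny machine the predicate is trivial; the claim above is
    that for large systems such summaries stay tractable where exhaustive
    state descriptions do not."""
    return internal_state == "heating"

state = step("idle", "cold")
assert believes_room_is_cold(state)
assert not believes_room_is_cold(step(state, "warm"))
```

For a four-state machine the predicate buys nothing; the argument is that its cost stays roughly constant as the underlying state space explodes.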

SG's comment is analogous to the following criticism of our use of the
terms like "execution", "wait" or "active" when talking about the
states of computers: "it is wrong to use such terms when we all know
that what is down there is just a finite state machine, which we
understand so well mathematically."

Fernando Pereira

------------------------------

Date: Mon 5 Dec 83 11:21:56-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: complexity of formal systems

  From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
  They then resort to arcane languages and to attributing 'mental'
  characteristics to what are basically fuzzy algorithms that have been applied
  to poorly formalized or poorly characterized problems.  Once the problems are
  better understood and are given a more precise formal characterization, one
  no longer needs "AI" techniques.

I think Professor McCarthy is thinking of systems (possibly not built yet)
whose complexity comes from size and not from imprecise formalization.  A
huge AI program has lots of knowledge, all of it may be precisely formalized
in first-order logic or some other well understood formalism, this knowledge
may be combined and used by well understood and precise inference algorithms,
and yet because of the (for practical purposes) infinite number of inputs and
possible combinations of the individual knowledge formulas, the easiest
(best? only?) way to describe the behavior of the system is by attributing
mental characteristics.  Some AI systems approaching this complexity already
exist.  This has nothing to do with "fuzzy algorithms" or "poorly formalized
problems", it is just the inherent complexity of the system.  If you think
you can usefully explain the practical behavior of any well-formalized system
without using mental characteristics, I submit that you haven't tried it on a
large enough system (e.g. some systems today need a larger address space than
that available on a DEC 2060 -- combining that much knowledge can produce
quite complex behavior).

------------------------------

Date: 28 Nov 83 3:10:20-PST (Mon)
From: harpo!floyd!clyde!akgua!sb1!sb6!bpa!burdvax!sjuvax!rbanerji@Ucb-
      Vax
Subject: Re: Clarifying my "AI Challange"
Article-I.D.: sjuvax.157

        [...]
        I am reacting to Johnson, Helly and Dietterich.  I really liked
[Ken Laws'] technical evaluation of Knowledge-based programming. Basically
similar to what Tom also said in defense of Knowledge-based programming
but KIL said it much clearer.
        On one aspect, I have to agree with Johnson about expert systems
and hackery, though. The only place there is any attempt on the part of
an author to explain the structure of the knowledge base(s) is in the
handbook. But I bet that as the structures are changed by later authors
for various justified and unjustified reasons, they will not be clearly
explained except in vague terms.
        I do not accept Dietterich's explanation that AI papers are hard
to read because of terminology; or because what they are trying to do
are so hard. On the latter point, we do not expect that what they are
DOING be easy, just that HOW they are doing it be clearly explained:
and that the definition of clarity follow the lines set out in classical
scientific disciplines. I hope that the days are gone when AI was
considered some sort of superscience answerable to none. On the matter
of terminology, papers (for example) on algebraic topology have more
terminology than AI: terminology developed over a longer period of time.
But if one wants to and has the time, he can go back, back, back along
lines of reference and to textbooks and be assured he will have an answer.
In AI, about the only hope is to talk to the author and unravel his answers
carefully and patiently and hope that somewhere along the line one does not
get "well, there is a hack there... it is kind of long and hard to explain:
let me show you the overall effect"
        In other sciences, hard things are explained on the basis of
previously explained things. These explanation trees are much deeper
than in AI; they are so strong and precise that climbing them may
be hard, but never hopeless.
        I agree with Helly in that this lack is due to the fact that no
attempt has been made in AI to have workers start with a common basis in
science, or even in scientific methodology. It has suffered in the past
because of this. When existing methods of data representation and processing
in theorem proving were found inefficient, the AI culture developed this
self image that its needs were ahead of logic: notwithstanding the fact
that the techniques they were using were representable in logic and that
the reason for their seeming success was in the fact that they were designed
to achieve efficiency at the cost (often high) of flexibility. Since
then, those words have been "eaten": but at considerable cost. The reason
may well be that the critics of logic did not know enough logic to see this.
In some cases, their professors did--but never cared to explain what the
real difficulty in logic was. Or maybe they believed their own propaganda.
        This lack of uniformity of background came out clearly when Tom said
that because of AI work people now clearly understood the difference between
the subset of a set and the element of a set. This difference has been well
known at least since early this century if not earlier. If workers in AI
did not know it before, it is because of their reluctance to know the meaning
of a term before they use it. This has also often come from their belief
that precise definitions will rob their terms of their richness (not realising
that once they have interpreted their terms by a program, they have a precise
definition, only written in a much less comprehensible way: set theorists
never had any difficulty understanding the difference between subsets and
elements). If they were trained, they would know the techniques that are
used in Science for defining terms.
        I disagree with Helly that Computer Science in general is unscientific.
There has always been a precise mathematical basis of Theorem proving (AI,
actually) and in computation and complexity theory. It is true, however, that
the traditional techniques of experimental research have not been used in
AI at all: people have tried hard to use it in software, but seem to
be having difficulties.
        Would Helly disagree with me if I say that Newell and Simon's work
in computer modelling of psychological processes has been carried out
with at least the amount of scientific discipline that psychologists use?
I have always seen that work as one of the success stories in AI.  And
at least some psychologists seem to agree.

        I agree with Tom that AI will have to keep going even if someone
proves that P=NP. The reason is that many AI problems are amenable to
N^2 methods already: except that N is too big. In this connection I have
a question, in case someone can tell me. I think Rabin has a theorem
that given any system of logic and any computable function, there is
a true statement which takes longer to prove than that function predicts.
What does this say about the relation between P and NP, if anything?
        Too long already!

                                ..allegra!astrovax!sjuvax!rbanerji

------------------------------

Date: 1 Dec 83 13:51:36-PST (Thu)
From: decvax!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Expert Systems
Article-I.D.: ncsu.2420

Are expert systems new? Different?  Well, how about an example.  Time
was, to run a computer system, one needed at least one operator to care
and feed for the system.  This is increasingly handled by sophisticated
operating systems.  If so, is an operating system an "expert system"?

An OS is usually developed using a style of programming which is quite
different from that of wimpy, unskilled, unenlightened applications
programmers.  It would be very hard to build an operating system in the
applications style.  (I claim).  The people who developed the style and
practice it to build systems are not usually AI people although I would
wager the personality profiles would be quite similar.

Now, that is, I think, a major point.  Are there different types of people in
Physics as compared to Biology?  I would say so, having seen some of each.
Further, biologists do research in ways that seem different (again, this is
purely idiosyncratic evidence) from the way physicists do.  Is it that one
group knows how to do science better, or are the fields just so different,
or are the people attracted to each just different?

Now, suppose a team of people got together and built an expert system which
was fully capable of taking over the control of a very sophisticated
(previously manual, by highly trained people) inventory, billing and
ordering system.  I claim that this is at least as complex as diagnosis
of and dosing of particular drugs (e.g. mycin).  My expert system
was likely written in Cobol by people doing things in quite different ways
from AI or systems hackers.

One might want to argue that the productivity was much lower, that the
result was harder to change and so on.  I would prefer to see this in
figures, on proper comparisons.  I suspect that the complexity of the
commercial software I mentioned is MUCH greater than the usual problem
attacked by AI people, so that the "productivity" might be comparable,
with the extra time reflecting the complexity.  For example, designing
the reports and generating them for a large complex system (and doing
a good job)  may take a large fraction of the total time, yet such
reporting is not usually done in the AI world.  Traces of decisions
and other discourse are not the same.  The latter is easier I think, or
at least it takes less work.

What I'm getting at is that expert systems have been around for a long
time, its only that recently AI people have gotten in to the arena. There
are other techniques which have been applied to developing these, and
I am waiting to be convinced that the AI people have a priori superior
strategies.  I would like to be so convinced and I expect someday to
be convinced, but then again, I probably also fit the AI personality
profile so I am rather biased.
----GaryFostel----

------------------------------

Date: 5 Dec 1983 11:11:52-EST
From: John.Laird at CMU-CS-ZOG
Subject: Thesis Defense

                 [Reprinted from the CMU-AI bboard.]

Come see my thesis defense: Wednesday, December 7 at 3:30pm in 5409 Wean Hall

                        UNIVERSAL SUBGOALING

                             ABSTRACT

A major aim of Artificial Intelligence (AI) is to create systems that
display general problem solving ability.  When problem solving, knowledge is
used to avoid uncertainty over what to do next, or to handle the
difficulties that arise when uncertainty cannot be avoided.  Uncertainty
is handled in AI problem solvers through the use of methods and subgoals,
where a method specifies the behavior for avoiding uncertainty in pursuit
of a goal, and a subgoal allows the system to recover from a difficulty once
it arises.  A general problem solver should be able to respond to every task
with appropriate methods to avoid uncertainty, and when difficulties do
arise, the problem solver should be able to recover by using an appropriate
subgoal.  However, current AI problem solvers are limited in their generality
because they depend on sets of fixed methods and subgoals.

In previous work, we investigated the weak methods and proposed that a
problem solver does not explicitly select a method for a goal, with the
inherent risk of selecting an inappropriate method.  Instead, the problem
solver is organized so that the appropriate weak method emerges during
problem solving from its knowledge of the task.  We called this organization
a universal weak method and we demonstrated it within an architecture,
called SOAR.  However, we were limited to subgoal-free weak methods.

The purpose of this thesis is to develop a problem solver where subgoals
arise whenever the problem solver encounters a difficulty in performing the
functions of problem solving.  We call this capability universal subgoaling.
In this talk, I will describe and demonstrate an implementation of universal
subgoaling within SOAR2, a production system based on search in a problem
space.  Since SOAR2 includes both universal subgoaling and a universal weak
method, it is not limited by a fixed set of subgoals or methods.  We provide
two demonstrations of this: (1) SOAR2 creates subgoals whenever difficulties
arise during problem solving, (2) SOAR2 extends the set of weak methods that
emerge from the structure of a task without explicit selection.
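
As an outside illustration only (my own toy, not SOAR or SOAR2): the impasse-driven flavor of the abstract can be sketched as a searcher that, when its knowledge fails to select a single operator, treats resolving the tie as a goal handled by the same machinery rather than by a fixed method.

```python
# Toy sketch of impasse-driven subgoaling (invented example, not SOAR2).
# Operators are precondition/result pairs; when more than one applies,
# the solver has hit an impasse and resolves it by further search
# rather than by a fixed tie-breaking method.

def solve(state, goal, operators, seen=frozenset()):
    """Return a list of operator names reaching `goal`, or None."""
    if state == goal:
        return []
    seen = seen | {state}
    candidates = [op for op in operators
                  if op["pre"] == state and op["post"] not in seen]
    # More than one candidate means the solver's knowledge does not
    # select an operator: an impasse.  Universal subgoaling makes
    # resolving that impasse a goal in its own right; here the
    # recursive look-ahead below plays the role of that subgoal.
    for op in candidates:
        rest = solve(op["post"], goal, operators, seen)
        if rest is not None:
            return [op["name"]] + rest
    return None

OPS = [
    {"name": "A->B", "pre": "A", "post": "B"},
    {"name": "A->C", "pre": "A", "post": "C"},   # dead end
    {"name": "B->G", "pre": "B", "post": "G"},
]
assert solve("A", "G", OPS) == ["A->B", "B->G"]
```

The real architecture of course puts production memory, preferences, and problem spaces behind each of these steps; the sketch only shows where a subgoal would arise.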

------------------------------

End of AIList Digest
********************
10-Dec-83 15:21:29-PST,16132;000000000001
Mail-From: LAWS created at 10-Dec-83 15:15:30
Date: Sat 10 Dec 1983 14:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #111
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Dec 1983     Volume 1 : Issue 111

Today's Topics:
  Call for Papers - Special Issue of AJCL,
  Linguistics - Phrasal Analysis Paper,
  Intelligence - Purpose of Definition,
  Expert Systems - Complexity,
  Environments - Need for Sharable Software,
  Jargon - Mental States,
  Administrivia - Spinoff Suggestion,
  Knowledge Representation - Request for Discussion
----------------------------------------------------------------------

Date: Thu 8 Dec 83 08:55:34-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Special Issue of AJCL

         American Journal of Computational Linguistics

The American Journal of Computational Linguistics is planning a
special issue devoted to the Mathematical Properties of Linguistic
Theories.  Papers are hereby requested on the generative capacity of
various syntactic formalisms as well as the computational complexity
of their related recognition and parsing algorithms.  Articles on the
significance (and the conditions for the significance) of such results
are also welcome.  All papers will be subjected to the normal
refereeing process and must be accepted by the Editor-in-Chief, James
Allen.  In order to allow for publication in Fall 1984, five copies of
each paper should be sent by March 31, 1984 to the special issue
editor,

C. Raymond Perrault                Arpanet: Rperrault@sri-ai
SRI International                  Telephone: (415) 859-6470
EK268
Menlo Park, CA 94025.

Indication of intention to submit would also be appreciated.

------------------------------

Date: 8 Dec 1983 1347-PST
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis paper


Over a month ago, I announced that I'd be submitting
a paper on phrasal analysis to COLING.  I apologize
to all those who asked for a copy for not getting it
to them yet.  COLING acceptance date is April 2,
so this may be the earliest date at which I'll be releasing
papers.  Please do not lose heart!

Some preview of the material might interest AILIST readers:

The paper is entitled "Conceptual Grammar", and discusses
a grammar that uses syntactic and 'semantic' nonterminals.
Very specific and very general information about language
can be represented in the grammar rules.  The grammar is
organized into explicit levels of abstraction.
The emphasis of the work is pragmatic, but I believe it
represents a new and useful approach to Linguistics as
well.

Conceptual Grammar can be viewed as a systematization of the
knowledge base of systems such as PHRAN (Wilensky and Arens,
at UC Berkeley).  Another motivation for a conceptual grammar is
the lack of progress in language understanding using syntax-based
approaches.  A third motivation is the lack of intuitive appeal
of existing grammars -- existing grammars offer no help in manipulating
concepts the way humans might.  Conceptual Grammar is
an 'open' grammar at all levels of abstraction.  It is meant
to handle special cases, exceptions to general rules, idioms, etc.

Papers on the implemented system, called VOX, will follow
in the near future.  VOX analyzes messages in the Navy domain.
(However, the approach to English is completely general).

If anyone is interested, I can elaborate, though it is
hard to discuss such work in this forum.  Requests
for papers (and for abstracts of UCI AI Project papers)
can be sent by computer mail, or 'snail-mail' to:

        Amnon Meyers
        AI Project
        Department of Computer Science
        University of California
        Irvine, CA  92717

PS: A paper has already been sent to CSCSI.  The papers emphasize
    different aspects of Conceptual Grammar.  A paper on VOX as
    an implementation of Conceptual Grammar is planned for AAAI.

------------------------------

Date: 2 Dec 83 7:57:46-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Re: Rational Psych (and science)
Article-I.D.: hou2g.121

It is true that psychology is not a "science" in the way a physicist
defines "science". Of course, a physicist would be likely to bend
his definition of "science" to exclude psychology.

The situation is very much the same as defining "intelligence".
Social "scientists" keep tightening their definition of intelligence
as required to exclude anything which isn't a human being.  While
AI people now argue over what intelligence is, when an artificial system
is built with the mental ability of a mouse (the biological variety!)
in no time all definitions of intelligence will be bent to include it.

The real significance of a definition is that it clarifies the *direction*
in which things are headed.  Defining "intelligence" in terms of
adaptability and self-consciousness is evidence of a healthy direction
to AI.

                                               Jim

------------------------------

Date: Fri 9 Dec 83 16:08:53-PST
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Biologists, physicists, and report generating programs

I'd like to ask Mr. Fostel how biologists "do research in ways that seem
different than physicists".  It would be pretty exciting to find that
one or both of these two groups do science in a way that is not part of
standard scientific method.

He also makes the following claim:

   ... the complexity of the commercial software I mentioned is
   MUCH greater than the usual problem attacked by AI people...

With the example that:

   ... designing the reports and generating them for a large complex
   system (and doing a good job) may take a large fraction of the total
   time, yet such reporting is not usually done in the AI world.

This claim is rather absurd.  While I will not claim that deciding on
the best way to present a large amount of data is a trivial task, the
point is that report generating programs have no knowledge about data
presentation strategies.  People who do have such knowledge spend hours
and hours deciding on a good scheme and then HARD CODING such a scheme
into a program.  Surely one would not claim that a program consisting
solely of a set of WRITELN (or insert your favorite output keyword)
statements has any complexity at all, much less intelligence or
knowledge?  Just because a program takes a long time to write doesn't
mean it has any complexity, in terms of control structures or data
structures.  And in fact this example is a perfect proof of this
conjecture.

------------------------------

Date: 2 Dec 83 15:27:43-PST (Fri)
From: sri-unix!hplabs!hpda!fortune!amd70!decwrl!decvax!duke!mcnc!shebs
      @utah-cs.UUCP (Stanley Shebs)
Subject: Re: RE: Expert Systems
Article-I.D.: utah-cs.2279

A large data-processing application is not an expert system because
it cannot explain its action, nor is the knowledge represented in an
adequate fashion.  A "true" expert system would *not* consist of
algorithms as such.  It would consist of facts and heuristics organized
in a fashion to permit some (relatively uninteresting) algorithmic
interpreter to generate interesting and useful behavior. Production
systems are a good example.  The interpreter is fixed - it just selects
rules and fires them.  The expert system itself is a collection of rules,
each of which represents a small piece of knowledge about the domain.
This is of course an idealization - many "expert systems" have a large
procedural component.  Sometimes the existence of that component can
even be justified...
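
The fixed-interpreter/rule-set split is easy to make concrete.  Below is a minimal sketch (an invented diagnosis example, not MYCIN or any real system): the interpreter knows nothing about the domain and only selects and fires rules; every piece of domain knowledge lives in the rule list.

```python
# Minimal production system (invented example, not any real system).
# The interpreter is fixed and domain-free: it selects applicable rules
# and fires them until quiescence.  All domain knowledge lives in RULES.

RULES = [
    # (name, condition on working memory, facts the rule asserts)
    ("dead-battery", lambda m: m.get("battery") == "dead",
     {"diagnosis": "replace battery"}),
    ("no-fuel", lambda m: m.get("fuel") == "empty",
     {"diagnosis": "refuel"}),
]

def run(memory, rules):
    """Fixed interpreter: fire rules until no rule adds anything new."""
    fired = True
    while fired:
        fired = False
        for name, condition, adds in rules:
            new = {k: v for k, v in adds.items() if memory.get(k) != v}
            if new and condition(memory):
                memory.update(new)
                fired = True
    return memory

memory = {"engine": "won't start", "battery": "dead"}
assert run(memory, RULES)["diagnosis"] == "replace battery"
```

Adding a fact about the domain means adding a rule, not editing the interpreter, which is the organizational point at issue.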

                                                stan shebs
                                                utah-cs!shebs

------------------------------

Date: Wed, 7 Dec 1983  05:39 EST
From: LEVITT%MIT-OZ@MIT-MC.ARPA
Subject: What makes AI crawl

    From: Seth Goldman <seth@UCLA-CS>
    Subject: Programming environments are fine, but...

    What are all of you doing with your nifty, adequate, and/or brain-damaged
    computing environments?  Also, if we're going to discuss environments, it
    would be more productive I think to give concrete examples...
            [Sounds good to me.  It would be interesting to know
            whether progress in AI is currently held back by conceptual
            problems or just by the programming effort of building
            large and user-friendly systems.  -- KIL]

It's clear to me that, despite a relative paucity of new "conceptual"
AI ideas, AI is being held back entirely by the latter "programming
effort" problem, AND by the failure of senior AI researchers to
recognize this and address it directly.  The problem is regressive:
since programming problems are SO hard, the senior faculty typically
give up programming altogether and lose touch with the problems.

Nobody seems to realize how close we would be to practical AI, if just
a handful of the important systems of the past were maintained and
extended, and if the most powerful techniques were routinely applied
to new applications - if an engineered system with an ongoing,
expanding knowledge base were developed.  Students looking for theses
and "turf" are reluctant to engineer anything familiar-looking.  But
there's every indication that the proven techniques of the 60's/early
70's could become the core of a very smart system with lots of
overlapping knowledge in very different subjects, opening up much more
interesting research areas - IF the whole thing didn't have to be
(re)programmed from scratch.  AI is easy now, showing clear signs of
diminishing returns; CS/software engineering is hard.

I have been developing systems for the kinds of analogy problems music
improvisors and listeners solve when they use "common sense"
descriptions of what they do/hear, and of learning by ear.  I have
needed basic automatic constraint satisfaction systems
(Sutherland'63), extensions for dependency-directed backtracking
(Sussman'77), and example comparison/extension algorithms
(Winston'71), to name a few.  I had to implement everything myself.
When I arrived at MIT AI there were at least 3 OTHER AI STUDENTS
working on similar constraint propagator/backtrackers, each sweating
out his version for a thesis critical path, resulting in a draft
system too poorly engineered and documented for any of the other
students to use.  It was idiotic.  In a sense we wasted most of our
programming time, and would have been better off ruminating about
unfamiliar theories like some of the faculty.  Theories are easy (for
me, anyway).  Software engineering is hard.  If each of the 3 ancient
discoveries above was an available module, AI researchers could have
theories AND working programs, a fine show.
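
The core of the constraint machinery being reinvented is in fact small.  Here is a minimal local-propagation sketch in the Sutherland tradition (my own toy with invented names; real systems add retraction and dependency-directed backtracking on top of this loop):

```python
# Toy local-propagation constraint network (invented sketch; real systems
# layer retraction and dependency-directed backtracking on top).  A
# constraint fires whenever enough of its cells are known to deduce the rest.

def adder(cells, a, b, total):
    """Constraint a + b = total: deduce any one cell from the other two."""
    va, vb, vt = cells.get(a), cells.get(b), cells.get(total)
    if va is not None and vb is not None and vt is None:
        cells[total] = va + vb
        return True
    if va is not None and vt is not None and vb is None:
        cells[b] = vt - va
        return True
    if vb is not None and vt is not None and va is None:
        cells[a] = vt - vb
        return True
    return False

def propagate(cells, constraints):
    """Run constraints to quiescence."""
    while any(c(cells) for c in constraints):
        pass
    return cells

# Chain x + y = s, s + z = t.  Propagation is direction-free: knowing
# x, y, z determines s and t, but knowing x, z, t would determine y.
cells = {"x": 1, "y": 2, "z": 4}
propagate(cells, [lambda c: adder(c, "x", "y", "s"),
                  lambda c: adder(c, "s", "z", "t")])
assert cells["s"] == 3 and cells["t"] == 7
```

Each of the three systems named above would be a module of roughly this shape plus bookkeeping, which is exactly what made the duplicated theses feel wasteful.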

------------------------------

Date: Thu, 8 Dec 83 11:56 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states of machines

I have no problem with using anthropomorphic (or "mental") descriptions of
systems as a heuristic for dealing with difficult problems. One such
trick I especially approve of is Seymour Papert's "body syntonicity"
technique. The basic idea is to get young children to understand the
interaction of mathematical concepts by getting them to enter into a
turtle world and become an active participant in it, and to use this
perspective for understanding the construction of geometric structures.

What I am objecting to is that I sense that John McCarthy is implying
something more in his article: that human mental states are no different
than the very complex systems that we sometimes use mental descriptions
as a shorthand to describe.

I would refer to Ilya Prigogine's 1977 Nobel Prize-winning work in chemistry on
"Dissipative Structures" to illustrate the foolishness of McCarthy's
claim.

Dissipative structures can be explained to some extent to non-chemists by means
of the termite analogy. Termites construct large rich and complex domiciles.
These structures sometimes are six feet tall and are filled with complex
arches and domed structures (it took human architects many thousands of
years to come up with these concepts). Yet if one watches termites at
the lowest "mechanistic" level (one termite at a time), all one sees
is a termite randomly placing drops of sticky wood pulp in random spots.

What Prigogine noted was that there are parallels in chemistry. Where random
underlying processes spontaneously give rise to complex and rich ordered
structures at higher levels.

If I accept McCarthy's argument that complex systems based on finite state
automata exhibit mental characteristics, then I must also hold that termite
colonies have mental characteristics, Douglas Hofstadter's Aunt Hillary also
has mental characteristics, and that certain colloidal suspensions and
amorphous crystals have mental characteristics.

                                                - Steven Gutfreund
                                                  Gutfreund.umass@csnet-relay

  [I, for one, have no difficulty with assigning mental "characteristics"
  to inanimate systems.  If a computer can be "intelligent", and thus
  presumably have mental characteristics, why not other artificial
  systems?  I admit that this is Humpty-Dumpty semantics, but the
  important point to me is the overall I/O behavior of the system.
  If that behavior depends on a set of (discrete or continuous) internal
  states, I am just as happy calling them "mental" states as calling
  them anything else.  To reserve the term mental for beings having
  volition, or souls, or intelligence, or neurons, or any other
  intuitive characteristic seems just as arbitrary to me.  I presume
  that "mental" is intended to contrast with "physical", but I side with
  those seeing a physical basis to all mental phenomena.  Philosophers
  worry over the distinction, but all that matters to me is the
  behavior of the system when I interface with it.  -- KIL]

------------------------------

Date: 5 Dec 83 12:08:31-PST (Mon)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxnn!pyuxmm!cbdkc1!cbosgd!osu-db
      s!lum @ Ucb-Vax
Subject: Re: defining AI, AI research methodology, jargon in AI
Article-I.D.: osu-dbs.426

Perhaps Dyer is right.  Perhaps it would be a good thing to split net.ai/AIList
into two groups, net.ai and net.ai.d, ala net.jokes and net.jokes.d.  In one
the AI researchers could discuss actual AI problems, and in the other, philo-
sophers could discuss the social ramifications of AI, etc.  Take your pick.

Lum Johnson (cbosgd!osu-dbs!lum)

------------------------------

Date: 7 Dec 83 8:27:08-PST (Wed)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: New Topic (technical) - (nf)
Article-I.D.: tekcad.155

        OK, some of you have expressed a dislike for "non-technical, philo-
sophical, etc." discussions on this newsgroup. So for those of you who are
tired of this, I pose a technical question for you to talk about:

        What is your favorite method of representing knowledge in a KBS?
Do you depend on frames, atoms of data jumbled together randomly, or something
in between? Do you have any packages (for public consumption which run on
machines that most of us have access to) that aid people in setting up knowledge
bases?

        I think that this should keep this newsgroup talking at least partially
technically for a while. No need to thank me. I just view it as a public ser-
vice.

                                        From the truly menacing,
   /- -\                                but usually underestimated,
    <->                                 Frank Adrian
                                        (tektronix!tekcad!franka)

------------------------------

End of AIList Digest
********************
14-Dec-83 10:18:34-PST,18110;000000000001
Mail-From: LAWS created at 14-Dec-83 10:17:18
Date: Wed 14 Dec 1983 10:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #112
To: AIList@SRI-AI


AIList Digest           Wednesday, 14 Dec 1983    Volume 1 : Issue 112

Today's Topics:
  Memorial Fund - Carl Engelman,
  Programming Languages - Lisp Productivity,
  Expert Systems - System Size,
  Scientific Method - Information Sciences,
  Jargon - Mental States,
  Perception - Culture and Vision,
  Natural Language - Flame
----------------------------------------------------------------------

Date: Fri 9 Dec 83 12:58:53-PST
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: Carl Engelman Memorial Fund

                      CARL ENGELMAN MEMORIAL FUND

        Carl Engelman, one of the pioneers in artificial intelligence
research, died of a heart attack at his home in Cambridge, Massachusetts,
on November 26, 1983.  He was the creator of MATHLAB, a program developed
in the 1960s for the symbolic manipulation of mathematical expressions.
His objective there was to supply the scientist with an interactive
computational aid of a "more intimate and liberating nature" than anything
available before. Many of the ideas generated in the development of MATHLAB
have influenced the architecture of other systems for symbolic and algebraic
manipulation.

        Carl graduated from the City College of New York and then earned
an MS Degree in Mathematics at the Massachusetts Institute of Technology.
During most of his professional career, he worked at The MITRE Corporation
in Bedford, Massachusetts.  In 1973 he was on leave as a visiting professor
at the Institute of Information Science of the University of Turin, under a
grant from the Italian National Research Council.

        At the time of his death Carl was an Associate Department Head
at MITRE, responsible for a number of research projects in artificial
intelligence.  His best known recent work was KNOBS, a knowledge-based
system for interactive planning that was one of the first expert systems
applied productively to military problems.  Originally developed for the
Air Force, KNOBS was then adapted for a Navy system and is currently being
used in two NASA applications.  Other activities under his direction
included research on natural language understanding and automatic
programming.

        Carl published a number of papers in journals and books and gave
presentations at many conferences.  But he also illuminated every meeting
he attended with his incisive analysis and his keen wit.  While he will
be remembered for his contributions to artificial intelligence, those
who knew him personally will deeply miss his warmth and humor, which he
generously shared with so many of us.  Carl was particularly helpful to
people who had professional problems or faced career choices; his paternal
support, personal sponsorship, and private intervention made significant
differences for many of his colleagues.

        Carl was a member of the American Association for Artificial
Intelligence, the American Institute of Aeronautics and Astronautics, the
American Mathematical Society, the Association for Computational
Linguistics, and the Association for Computing Machinery and its Special
Interest Group on Artificial Intelligence.

        Contributions to the "Carl Engelman Memorial Fund" should be
sent to Judy Clapp at The MITRE Corporation, Bedford, Massachusetts 01730.
A decision will be made later on how those funds will be used.

------------------------------

Date: Tue, 13 Dec 83 09:49 PST
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: re: lisp productivity question

Jonathan Slocum (University of Texas at Austin) has a large natural
language translation program (thousands of lines of Interlisp) that was
originally in Fortran.  The compression that he got was 16.7:1.  Also, I
once wrote a primitive production rule system in both Pascal and
Maclisp.  The Pascal version was over 2000 lines of code and the Lisp
version was about 200 or so.  The Pascal version also was not as
powerful as the Lisp version because of Pascal's strong data typing and
dynamic allocation scheme.

-- Kirk

------------------------------

Date: 9 Dec 83 19:30:46-PST (Fri)
From: decvax!cca!ima!inmet!bhyde @ Ucb-Vax
Subject: Re: RE: Expert Systems - (nf)
Article-I.D.: inmet.578

I would like to add to Gary's comments.  There are also issues of
scale to be considered.  Many of the systems built outside of AI
are orders of magnitude larger.  I was amazed to read that at one
point the largest OPS production system, a computer game called Haunt,
had so very few rules in it.  A compiler written using a rule based
approach would have 100 times as many rules.  How big are the
AI systems that folks actually build?

The engineering component of large systems obscures the architectural
issues involved in their construction.  I have heard it said that
AI isn't a field, it is a stage of the problem solving process.

It seems telling that the ARPA 5-year speech recognition project
was successful not with Hearsay (I gather that after it was too late
it did manage to meet the performance requirements), but with Harpy.  Now,
Harpy was very much like a signal processing program.  The "beam search"
mechanisms it used are very different from the popular approaches of
the AI community.  In the end it seems that it was an act of engineering,
with little insight into the nature of knowledge gained.

The issues that caused AI and the rest of computing to split a few
decades ago seem almost quaint now.  Allen Newell has a pleasing paper
about these.  Only the importance of an interpreter-based program
development environment seems to continue.  Can you buy a work station
capable of sharing files with your 360 yet?

[...]
                                ben hyde

------------------------------

Date: 10 Dec 83 16:33:59-PST (Sat)
From: decvax!ittvax!dcdwest!sdcsvax!davidson @ Ucb-Vax
Subject: Information sciences vs. physical sciences
Article-I.D.: sdcsvax.84

I am responding to an article claiming that psychology and computer
science aren't sciences.  I think that the author is seriously confused
by his preferred usage of the term ``science''.  The sciences based on
mathematics, information processing, etc., which I will here call
information sciences, e.g., linguistics, computer science, information
science, cognitive science, psychology, operations research, etc., have
very different methods of operation from sciences based upon, for
example, physics.  Since people often view physics as the prototypical
science, they become confused when they look at information sciences.
This is analogous to the confusion of the early grammarians who tried
to understand English from a background in Latin:  They decided that
English was primitive and in need of fixing, and proceeded to create
Grammar schools in which we were all supposed to learn how to speak
our native language properly (i.e., with intrusions of Latin grammar).

If someone wants to have a private definition of the word science to
include only some methods of operation, that's their privilege, as
long as they don't want to try to use words to communicate with other
human beings.  But we shouldn't waste too much time defining terms,
when we could be exploring the nature and utility of the methodologies
used in the various disciplines.  In that light, let me say something
about the methodologies of two of the disciplines as I understand and
practice them, respectively.

Physics:  There is here the assumption of a simple underlying reality,
which we want to discover through elegant theorizing and experimenting.
Compared to other disciplines, e.g., experimental psychology, many of
the experimental tools are crude, e.g., the statistics used.  A theoretical
psychologist would probably find the distance that often separates physical
theory from experiment to be enormous.  This is perfectly alright, given
the (assumed) simple nature of underlying reality.

Computer Science:  Although in any mathematically based science one
might say that one is discovering knowledge; in many ways, it makes
better sense in computer science to say that one is creating as much
as discovering.  Someone will invent a new language, a new architecture,
or a new algorithm, and people will abandon older languages, architectures
and algorithms.  A physicist would find this strange, because these objects
are no less valid for having been surpassed (the way an outdated physical
theory would be), but are simply no longer interesting.

Let me stop here, and solicit some input from people involved in other
disciplines.  What are your methods of investigation?  Are you interested
in creating theories about reality, or creating artificial or abstract
realities?  What is your basis for calling your discipline a science,
or do you?  Please do not waste any time saying that some other discipline
is not a science because it doesn't do things the way yours does!

-Greg

------------------------------

Date: Sun, 11 Dec 83 20:43 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states

Ken Laws in his little editorializing comment on my last note seems to
have completely missed the point. Whether FSA's can display mental
states is an argument I leave to others on this list. However, John
McCarthy's definition allows ant hills and colloidal suspensions to
have mental states.

------------------------------

Date: Sun, 11 Dec 1983 15:04:10 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
      Mgr.)
Subject: Culture and Vision

    Several people have recently been bringing up the question of the
effects of culture on visual perception.  This problem has been around
in anthropology, folkloristics, and (to some extent) in sociolinguistics
for a number of years.  I've personally taken a number of graduate courses
that focussed on this very topic.
     Individuals interested in this problem (or, more precisely, group of
problems) should look into the Society for the Anthropology of Visual
Communication (SAVICOM) and its journal.  You'll find that the terminology
is often unfamiliar, but the concerns are similar.  The society is based
at the University of Pennsylvania's Annenberg School of Communications,
and is formally linked with such relevant groups as the American Anthro-
pological Assn.
     Folks who want more info, citations, etc. on this can also contact
me personally by netmail, as I'm not sure that this is sufficiently
relevant to take up too much of AI's space.
     Dave Axler
     (Axler.Upenn-1100@Rand-Relay)


[Extract from further correspondence with Dave:]

     There is a thing called "Visual Anthropology", on the
other hand, which deals with the ways that visual tools such as film, video,
still photography, etc., can be used by the anthropologist.  The SAVICOM
journal occasionally has articles dealing with the "meta" aspects of visual
anthropology, causing it, at such times, to be dealing with the anthropology
of visual anthropology (or, at least, the epistemology thereof...)

                                     --Dave Axler

------------------------------

Date: Mon 12 Dec 83 21:16:43-PST
From: Martin Giles <MADAGIL@SU-SIERRA.ARPA>
Subject: A humanities view of computers and natural language

The following is a copy of an article on the Stanford Campus report,
7th December, 1983, in response to an article describing research at
Stanford.  The University has just received a $21 million grant for
research in the fields of natural and computer languages.

                                Martin

[I have extracted a few relevant paragraphs from the following 13K-char
flame.  Anyone wanting the full text can contact AIList-Request or FTP
it from <AILIST>COHN.TXT on SRI-AI.  I will delete it after a few weeks.
-- KIL]


  Mail-From: J.JACKSON1 created at 10-Dec-83 10:29:54
  Date: Sat 10 Dec 83 10:29:54-PST
  From: Charlie Jackson  <J.JACKSON1@LOTS-A>
  Subject: F; (Gunning Fog Index 20.18); Cohn on Computer Language Study
  To: bboard@LOTS-A

  Following is a letter found in this week's Campus Report that proves
  Humanities profs make as good flames as any CS hacker.  Charlie

        THE NATURE OF LANGUAGE IS ALREADY KNOWN WITHOUT COMPUTERS

  Following is a response from Robert Greer Cohn, professor of French, to
the Nov. 30 Campus Report article on the study of computer and natural
language.

        The ambitious program to investigate the nature of language in
connection with computers raises some far-reaching questions.  If it is
to be a sort of Manhattan project, to outdo the Japanese in developing
machines that "think" and "communicate" in a sophisticated way, that is
one thing, and one may question how far a university should turn itself
towards such practical, essentially engineering, matters.  If on the
other hand, they are serious about delving into the  nature of languages
for the sake of disinterested truth, that is another pair of shoes.
        Concerning the latter direction: no committee ever instituted
has made the kind of breakthrough individual genius alone can
accomplish. [...]
        Do they want to know the nature of language?  It is already
known.
        The great breakthrough came with Stephane Mallarme, who as Edmund
Wilson (and later Hugh Kenner) observed, was comparable only to Einstein
for revolutionary impact.  He is responsible more than anyone, even
Nietzsche, for the 20th-century /episteme/, as most French first-rank
intellectuals agree (for example, Foucault, in "Les mots et les choses";
Sartre, in his preface to the "Poesies"; Roland Barthes, who said in his
"Interview with Stephen Heath," "All we do is repeat Mallarme";
Jakobson; Derrida; countless others).
        In his "Notes" Mallarme saw the essence of language as
"fiction," which is to say it is based on paradox.  In the terms of
Darwin, who describes it as "half art, half instinct," this means that
language, as related to all other reality (hypothetically nonlinguistic,
experimental) is "metaphorical" -- as we now say after Jakobson -- i.e.
above and below the horizontal line of on-going, spontaneous,
comparatively undammed, life-flow or experience; later, as the medium
of whatever level of creativity, it bears this relation to the
conventional and rational real, sanity, sobriety, and so on.
        In this sense Chomsky's view of language as innate and
determined is a half-truth and not very inspired.  He would have been
better off if he had read and pondered, for example, Pascal, who three
centuries ago knew that "nature is itself only a first 'custom'"; or
Shakespeare: "The art itself is nature" (The Winter's Tale).
        [...]

        But we can't go into all the aspects of language here.
        In terms of the project:  since, on balance, it is unlikely the
effects will go the way of elite French thought on the subject, there
remains the probability that they will try to recast language, which is
at its best creatively free (as well as determined at its best by
organic totality, which gives it its ultimate meaning, coherence,
harmony), into the narrow mold of the computer, even at /its/ best.
        [...]

        COMPUTERS AND NEWSPEAK

        In other words, there is no way to make a machine speak anything
other than newspeak, the language of /1984/.  They may overcome that
flat dead robotic tone that our children enjoy -- by contrast, it gives
them the feeling that they are in command of life -- but the thought and
the style will be spiritually inert.  In that sense, the machines, or
the new language theories, will reflect their makers, who, in harnessing
themselves to a prefabricated goal, a program backed by a mental arms
race, will have been coopted and dehumanized.  That flat (inner or
outer) tone is a direct result of cleaving to one-dimensionality, to the
dimension of the linear and "metonymic," the dimension of objectivity,
of technology and science, uninformed and uninspired by the creatively
free and whole-reflecting ("naive") vertical, or vibrant life itself.
        That unidimensionality is visible in the immature personalities
of the zealots who push these programs:  they are not much beyond
children in their Frankenstein eagerness to command the frightening
forces of the psyche, including sexuality, but more profoundly, life
itself, in its "existential" plenitude involving death.
        People like that have their uses and can, with exemplary "tunnel
vision," get certain jobs done (like boring tunnels through miles of
rock).  A group of them can come up with /engineering/ breakthroughs in
that sense, as in the case of the Manhattan project.  But even that
follows the /creative/ breakthroughs of the Oppenheimers and Tellers and
Robert D. (the shepherd in France) and is a rather pedestrian endeavor
under the management of some colonel.
        When I tried to engage a leader of the project in discussion
about the nature of language, he refused, saying, "The humanities and
sciences are farther apart than ever," clearly welcoming this
development.  This is not only deplorable in itself; far worse,
according to the most accomplished mind on /their/ side of the fence in
this area, this man's widely-hailed thinking is doomed to a dead end
because of its "unidimensionality!"
        This is not the place to go into the whole saddening bent of
our times and the connection with totalitarianism, which is "integrated
systems" with a vengeance.  But I doubt that this is what our founders
had in mind.

------------------------------

End of AIList Digest
********************
16-Dec-83 10:09:38-PST,15021;000000000001
Mail-From: LAWS created at 16-Dec-83 10:07:31
Date: Fri 16 Dec 1983 10:02-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #113
To: AIList@SRI-AI


AIList Digest            Friday, 16 Dec 1983      Volume 1 : Issue 113

Today's Topics:
  Alert - Temporal Representation & Fuzzy Reasoning
  Programming Languages - Phrasal Analysis Paper,
  Fifth Generation - Japanese and U.S. Views,
  Seminars - Design Verification & Fault Diagnosis
----------------------------------------------------------------------

Date: Wed 14 Dec 83 11:21:47-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Alert - Temporal Representation & Fuzzy Reasoning

Two articles in the Nov. issue of CACM (just arrived) may be of
special interest to AI researchers:


"Maintaining Knowledge about Temporal Intervals," by James F. Allen
of the U. of Rochester, is about representation of temporal information
using only intervals -- no points.  While this work does not lead to a
fully general temporal calculus, it goes well beyond state space and
date line systems and is more powerful and efficient than event chaining
representations.  I can imagine that the approach could be generalized
to higher dimensions, e.g., for reasoning about the relationships of
image regions or objects in the 3-D world.
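For readers who haven't seen Allen's calculus, here is a minimal sketch
(mine, not from the article) of the thirteen interval relations it
manipulates.  Allen's representation is purely symbolic and never touches
numeric endpoints; endpoints are used below only to illustrate how the
relations partition the possible configurations of two intervals:

```python
def allen_relation(a, b):
    """Classify the Allen relation of interval a = (start, end) to b."""
    (a1, a2), (b1, b2) = a, b
    assert a1 < a2 and b1 < b2, "intervals must have positive extent"
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    # remaining cases: proper overlap in one direction or the other
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (3, 5)))   # meets
print(allen_relation((1, 4), (2, 6)))   # overlaps
```

Allen's contribution is reasoning with (possibly ambiguous) sets of these
relations and propagating their composition, without ever computing the
endpoint comparisons above.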


"Extended Boolean Information Retrieval," by Gerald Salton, Edward A. Fox,
and Harry Wu, presents a fuzzy logic or hierarchical inference method for
dealing with uncertainties when evaluating logical formulas.  In a
formula such as ((A and B) or (B and C)), they present evidential
combining formulas that allow for:

  * Uncertainty in the truth, reliability, or applicability of the
    primitive terms A and B;

  * Differing importance of establishing the primitive term instances
    (where the two B terms above could be weighted differently);

  * Differing semantics of the logical connectives (where the two
    "and" connectives above could be threshold units with different
    thresholds).

The output of their formula evaluator is a numerical score.  They use
this for ranking the pertinence of literature citations to a database
query, but it could also be used for evidential reasoning or for
evaluating possible worlds in a planning system.  For the database
query system, they indicate a method for determining term weights
automatically from an inverted index of the database.

The weighting of the Boolean connectives is based on the infinite set
of Lp vector norms.  The connectives and[INF] and or[INF] are the
ones of standard logic; and[1] and or[1] are equivalent and reduce
formula evaluation to a simple weighted summation; intermediate
connective norms correspond to "mostly" gates or weighted neural
logic models.  The authors present both graphical illustrations and
logical theorems about these connectives.
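The p-norm connectives are easy to sketch.  The following is my own
reconstruction of the standard extended Boolean operators (function names
are mine, and the paper's exact formulation may differ in detail): or[p]
and and[p] interpolate between a weighted summation at p = 1 and the
strict max/min of ordinary logic as p grows.

```python
# Sketch of p-norm connectives in the extended Boolean model.
# ws: term weights, xs: term scores in [0, 1], p >= 1.

def or_p(ws, xs, p):
    """p-norm OR: weighted mean at p = 1, approaches max as p grows."""
    num = sum(w**p * x**p for w, x in zip(ws, xs))
    den = sum(w**p for w in ws)
    return (num / den) ** (1.0 / p)

def and_p(ws, xs, p):
    """p-norm AND: weighted mean at p = 1, approaches min as p grows."""
    num = sum(w**p * (1 - x)**p for w, x in zip(ws, xs))
    den = sum(w**p for w in ws)
    return 1 - (num / den) ** (1.0 / p)

# At p = 1, AND and OR coincide (the simple weighted summation noted
# above); at large p they approach the min/max of standard logic.
print(round(or_p([1, 1], [0.9, 0.2], 1), 4))    # 0.55
print(round(and_p([1, 1], [0.9, 0.2], 1), 4))   # 0.55
```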

                                        -- Ken Laws

------------------------------

Date: 14 Dec 83 20:05:25-PST (Wed)
From: hplabs!hpda!fortune!phipps @ Ucb-Vax
Subject: Re: Phrasal Analysis Paper/Programming Languages Applications ?
Article-I.D.: fortune.1981

Am I way off base, or does this look as if the VOX project
would be of interest to programming languages (PL) researchers ?
It might be interesting to submit to the next
"Principles of Programming Languages" (POPL) conference, too.

As people turn from traditional programming languages
(is Ada really pointing the way of the future ? <shudder !>) to other ways
(query languages and outright natural language processing)
to obtain and manipulate information and codified knowledge,
I believe that AI and PL people will find more overlap in their ends,
although probably not their respective interests, approaches, and style.
This institutionalized mutual ignorance doesn't benefit either field.
One of these days, AI people and programming languages people
ought to heal their schism.

I'd certainly like to hear more of VOX, and would cheerfully accept delivery
of a copy of your paper (US Mail (mine): PO Box 2284, Santa Clara CA 95055).
My apologies for using the net for a reply, but he's unreachable
thru USENET, and I wanted to make a general point anyhow.

-- Clay Phipps

--
   {allegra,amd70,cbosgd,dsd,floyd,harpo,hollywood,hpda,ihnp4,
    magic,megatest,nsc,oliveb,sri-unix,twg,varian,VisiA,wdl1}
   !fortune!phipps

------------------------------

Date: 12 Dec 83 15:29:10 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: New Generation computing: Japanese and U.S. views

  [The following is a direct submission to AIList, not a reprint.
  It has also appeared on the Stanford bboards, and has generated
  considerable discussion there.  I am distributing this and the
  following two reprints because they raise legitimate questions
  about the research funding channels available to AI workers.  My
  distribution of these particular messages should not be taken as
  evidence of support for or against military research. -- KIL]

from Japan:

  "It is necessary for each researcher in the New Generation Computer
technology field to work for world prosperity and the progress of
mankind.

  "I think it is the responsibility of each researcher, engineer and
scientist in this field to ensure that KIPS [Knowledge Information
Processing System] is used for good, not harmful, purposes.  It is also
necessary to investigate KIPS's influence on society concurrent with
KIPS's development."

  --Tohru Moto-Oka, University of Tokyo, editor of the new journal "New
Generation Computing", in the journal's founding statement (Vol. 1, No.
1, 1983, p. 2)



and from the U.S.:

  "If the new generation technology evolves as we now expect, there will
be unique new opportunities for military applications of computing.  For
example, instead of fielding simple guided missiles or remotely piloted
vehicles, we might launch completely autonomous land, sea, and air
vehicles capable of complex, far-ranging reconnaissance and attack
missions.  The possibilities are quite startling, and suggest that new
generation computing could fundamentally change the nature of future
conflicts."

  --Defense Advanced Research Projects Agency, "Strategic Computing:
New Generation Computing Technology: A Strategic Plan for its
Development and Application to Critical Problems in Defense,"  28
October 1983, p. 1

------------------------------

Date: 13 Dec 83 18:18:23 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. views

                [Reprinted from the SU-SCORE bboard.]

My juxtaposition of quotations is intended to demonstrate the difference
in priorities between the Japanese and U.S. "next generation" computer
research programs.  Moto-Oka is a prime mover behind the Japanese
program, and DARPA's Robert Kahn is a prime mover behind the American
one.  Thus I consider the quotations comparable.

To put it bluntly:  the Japanese say they are developing this technology
to help solve human and social problems.  The Americans say they are
developing this technology to find more efficient ways of killing
people.

The difference in intent is quite striking, and will undoubtedly produce
a "next-generation" repetition of an all too familiar syndrome.  While
the U.S. pours yet more money and scientific talent into the military
sinkhole, the Japanese invest their monetary and human capital in
projects that will produce profitable industrial products.

Here are a couple more comparable quotes, both from IEEE Spectrum, Vol.
20, No. 11, November 1983:

  "DARPA intends to apply the computers developed in this program to a
number of broad military applications...
  "An example might be a pilot's assistant that can respond to spoken
commands by a pilot and carry them out without error, drawing upon
specific aircraft, sensor, and tactical knowledge stored in memory and
upon prodigious computer power.  Such capability could free a pilot to
concentrate on tactics while the computer automatically activated
surveillance sensors, interpreted radar, optical, and electronic
intelligence, and prepared appropriate weapons systems to counter
hostile aircraft or missiles....
  "Such systems may also help in military assessments on a battlefield,
simulating and predicting the consequences of various courses of
military action and interpreting signals acquired on the battlefield.
This information could be compiled and presented as sophisticated
graphics that would allow a commander and his staff to concentrate on
the larger strategic issues, rather than having to manage the enormous
data flow that will[!] characterize future battles."
    --Robert S. Cooper and Robert E. Kahn, DARPA, page 53.

  "Fifth generation computer systems are expected to fulfill four
major roles:  (1) enhancement of productivity in low-productivity areas,
such as nonstandardized operations in smaller industries;  (2)
conservation of national resources and energy through optimal energy
conversion; (3) establishment of medical, educational, and other kinds
of support systems for solving complex social problems, such as the
transition to a society made up largely of the elderly;  and (4)
fostering of international cooperation through the machine translation
of languages."
    --Tohru Moto-Oka, University of Tokyo, page 46


Which end result would *you* rather see?

/Ron

------------------------------

Date: Tue 13 Dec 83 21:29:22-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Comparable quotes

                [Reprinted from the SU-SCORE bboard.]

     The goals of an effort funded by the military will be different
than those of an effort aimed at trade dominance.  Intel stayed out of
the DoD VHSIC program because the founder of Intel felt that concentrating
on fast, expensive circuits would be bad for business.  He was right.
The VHSIC program is aimed at making a few hundred copies of an IC for
a few thousand each.  Concentration on that kind of product will bankrupt
a semiconductor company.
     We see the same thing in AI.  There is getting to be a mini-industry
built around big expensive AI systems on big expensive computers.  Nobody
is thinking of volume.  This is a direct consequence of the funding source.
People think in terms of keeping the grants coming in, not selling a
million copies.  If money came from something like MITI, there would be
pressure to push forward to a volume product just to find out if there
is real potential for the technology in the real world.  Then there would
be thousands of people thinking about the problems in the field, not
just a few hundred.
     This is diverging from the main thrust of the previous flame, but
think about this and reply.  There is more here than another stab at the
big bad military.

------------------------------

Date: Tue 13 Dec 83 10:40:04-PST
From: Sumit Ghosh <GHOSH@SU-SIERRA.ARPA>
Subject: Ph.D. Oral Examination: Special Seminar

             [Reprinted from the SU-SCORE bboard.]


   ADA Techniques for Implementing a Rule-Based Generalised Design Verifier

                     Speaker: Sumit Ghosh

                    Ph.D. Oral Examination
             Monday, 19th Dec '83. 3:30pm. AEL 109


This thesis describes a top-down, rule-based design verifier implemented in
the language ADA.  During verification of a system design, a designer needs
several different kinds of simulation tools, such as functional simulation,
timing verification, fault simulation, etc.  Often these tools are implemented
in different languages on different machines, thereby making it difficult to
correlate results from the different kinds of simulations.  Also, the system
design must be described anew for each kind of simulation, implying a
substantial overhead.  The rule-based approach enables one to create different
kinds of simulations, within the same simulation environment, by linking the
appropriate type of models with the system nucleus.  This system also features
zooming, whereby certain subsections of the system design (described at a high
level) can be expanded at a lower level, at run time, for a more detailed
simulation.  The expansion process is recursive and should be extended down to
the circuit level.  At the present implementation stage, zooming extends to
gate-level simulation.  Since only those modules that show a discrepancy (or
require detailed analysis) during simulation are simulated in detail, the
zoom technique implies a substantial reduction in complexity and CPU time.
This thesis further contributes towards a functional deductive fault simulator
and a generalised timing verifier.

------------------------------

Date: Mon 12 Dec 83 12:46-EST
From: Philip E. Agre <AGRE%MIT-OZ@MIT-MC.ARPA>
Subject: Walter Hamscher at the AI Revolving Seminar

                 [Reprinted from the MIT-AI bboard.]

AI Revolving Seminar
Walter Hamscher

Diagnostic reasoning for digital devices with static storage elements

Wednesday 14 December 83 4PM
545 Tech Sq 8th floor playroom


We view diagnosis as a process of reasoning from anomalous observations to a
set of components whose failure could explain the observed misbehaviors.  We
call these components "candidates."  Diagnosing a misbehaving piece of
hardware can be viewed as a process of generating, discriminating among, and
refining these candidates.  We wish to perform this diagnosis by using an
explicit representation of the hardware's structure and function.

Our candidate generation methodology is based on the notions of dependency
directed backtracking and local propagation of constraints.  This
methodology works well for devices without storage elements such as
flipflops.  This talk presents a representation for the temporal behavior of
digital devices which allows devices with storage elements to be treated
much the same as combinatorial devices for the purpose of candidate
generation.
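As a toy illustration of candidate generation by local propagation (my own
example; the talk's machinery, with dependency-directed backtracking and
temporal behavior, is far richer): predict every wire's value assuming all
components work, then walk upstream from each wire whose prediction
conflicts with an observation.

```python
# Toy model-based diagnosis sketch; circuit and names are illustrative.

def propagate(circuit, inputs):
    """Predict each wire's value assuming every component works."""
    values = dict(inputs)
    for name, fn, in_wires, out_wire in circuit:
        values[out_wire] = fn(*[values[w] for w in in_wires])
    return values

def candidates(circuit, inputs, observations):
    """Components upstream of any wire whose prediction conflicts."""
    predicted = propagate(circuit, inputs)
    bad_wires = {w for w, v in observations.items() if predicted[w] != v}
    suspects = set()
    changed = True
    while changed:                    # transitive upstream walk
        changed = False
        for name, fn, in_wires, out_wire in circuit:
            if out_wire in bad_wires and name not in suspects:
                suspects.add(name)
                bad_wires |= set(in_wires)
                changed = True
    return suspects

inv = lambda x: 1 - x
circuit = [("inv1", inv, ["a"], "b"), ("inv2", inv, ["b"], "c")]
print(sorted(candidates(circuit, {"a": 0}, {"c": 1})))  # ['inv1', 'inv2']
```

Either inverter failing could explain the anomaly, so both are candidates;
discrimination and refinement would then narrow the set.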

However, the straightforward adaptation requires solutions to subproblems
that are severely underconstrained.  This in turn leads to an overly
conservative and not terribly useful candidate generator.  There exist
mechanism-oriented solutions such as value enumeration, propagation of
variables, and slices; we review these and then demonstrate what domain
knowledge can be used to motivate appropriate uses of those techniques.
Beyond this use of domain knowledge within the current representation, there
are alternative perspectives on the problem which offer some promise of
alleviating the lack of constraint.

------------------------------

End of AIList Digest
********************
18-Dec-83 12:04:33-PST,17282;000000000001
Mail-From: LAWS created at 18-Dec-83 12:01:20
Date: Sun 18 Dec 1983 11:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #114
To: AIList@SRI-AI


AIList Digest            Sunday, 18 Dec 1983      Volume 1 : Issue 114

Today's Topics:
  Intelligence - Confounding with Culture,
  Jargon - Mental States,
  Scientific Method - Research Methodology
----------------------------------------------------------------------

Date: 13 Dec 83 10:34:03-PST (Tue)
From: hplabs!hpda!fortune!amd70!dual!onyx!bob @ Ucb-Vax
Subject: Re: Intelligence = culture
Article-I.D.: onyx.112

I'm surprised that there have been no references  to  culture  in
all of these "what is intelligence?" debates...

The simple fact of the matter is, that "intelligence" means  very
little  outside  of  any specific cultural reference point.  I am
not referring just to culturally-biased vs. non-culturally-biased
IQ tests, although that's a starting point.

Consider someone raised from infancy in the jungle  (by  monkeys,
for  the  sake  of the argument). What signs of intelligence will
this person show?  Don't expect them  to  invent  fire  or  stone
axes;  look  how  long it took us the first time around. The most
intelligent thing that person could do would be on par with  what
we  see chimpanzees doing in the wild today (e.g. using sticks to
get ants, etc).

What I'm driving at is that there  are  two  kinds  of  "intelli-
gence"; there is "common sense and ingenuity" (monkeys, dolphins,
and a few people), and there is  "cultural  methodology"  (people
only).

Cultural methodologies include  all  of  those  things  that  are
passed  on  to  us  as a "world-view", for instance the notion of
wearing clothes, making fire, using arithmetic to figure out  how
many  people  X  bags of grain will feed, what spices to use when
cooking, how to talk (!), all of these things were at one time  a
brilliant  conception  in  someone's mind. And it didn't catch on
the first time around. Probably not  the  second  or  third  time
either.  But eventually someone convinced other people to try his
idea, and it became part of that culture. And  using  that  as  a
context  gives  other  people  an  opportunity  to bootstrap even
further. One small step for a man, a giant leap for his culture.

When we think about intelligence and get impressed by how wonder-
ful  it  is, we are looking at its application in a world stuffed
to the gills with prior context that is indispensable to everything
we think about.

What this leaves us with is people trying to define and measure a
hybrid  of  common  sense  and culture without noticing that what
they are interested in is actually two different things, plus the
inter-relations between those things; no wonder the issue
seems so murky.

For those who may be interested, general systems theory,  general
semantics,  and  epistemology  are  some fascinating related sub-
jects.

Now let's see some letters about what "common sense" is  in  this
context,  and about applying that common sense to (cultural) con-
texts. (How recursive!)

------------------------------

Date: Tue, 13 Dec 83 11:24 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states

I am very intrigued by Fernando Pereira's last comment:

    Sorry, you missed the point that JMC and then I were making. Prigogine's
    work (which I know relatively well) has nothing to say about systems
    which have to model in their internal states equivalence classes of
    states of OTHER systems. It seems to me impossible to describe such
    systems unless certain sets of states are labeled with things
    like "believe(John,have(I,book))". That is, we start associating
    classes of internal states to terms that include mentalistic
    predicates.

I may be missing the point, since I am not sure what "model in their internal
states equivalence classes of states of OTHER systems" means. But I think
what you are saying is that `reasoning systems' that encode in their state
information about the states of other systems (or their own) are not
covered by Ilya Prigogine's work.

I think you are engaging in a leap of faith here. What is the basis
for believing that any sort of encoding of the state of other systems is
going on here? I don't think even the philosophical guard phrase
`equivalence class' protects you in this case.

To continue in my role of sceptic: if you claim that you are constructing
systems that model their internal state (or other systems' internal states)
[or even an equivalence class of those states], then I will claim that
my Linear Programming model of a computer parts inventory is also
exhibiting `mental reasoning', since it is modeling the internal states
of that computer parts inventory.

This means that Prigogine's work is operative in the case of FSA-based
`reasoning systems', since they can do no more modeling of the internal
state of another system than a colloidal suspension, or an inventory
control system built by an operations research person.


                                - Steven Gutfreund
                                  Gutfreund.umass@csnet-relay

------------------------------

Date: Wed 14 Dec 83 17:46:06-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

The only reason I have to believe that a system encodes in its states
classifications of the states of other systems is that the systems we
are talking about are ARTIFICIAL, and therefore this is part of our
design. Of course, you are free to say that down at the bottom our
system is just a finite-state machine, but that's about as helpful as
making the same statement about the computer on which I am typing this
message when discussing how to change its time-sharing resource
allocation algorithm.

Besides this issue of convenience, it may well be the case that
certain predicates on the states of other or the same system are
simply not representable within the system. One does not even need to
go as far as incompleteness results in logic: in a system which has
means to represent a single transitive relation (say, the immediate
accessibility relation for a maze), no logical combination can
represent the transitive closure (accessibility relation) [example due
to Bob Moore]. Yet the transitive closure is causally connected to the
initial relation in the sense that any change in the latter will lead
to a change in the former. It may well be the case (SPECULATION
WARNING!) that some of the "mental state" predicates have this
character, that is, they cannot be represented as predicates over
lower-level notions such as states.
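
Bob Moore's maze example can be made concrete with a minimal sketch (the
toy maze relation below is assumed for illustration, not taken from the
discussion): although no first-order formula over the immediate
accessibility relation defines its transitive closure, the closure is
easy to compute, and any change in the base relation propagates to it.

```python
def transitive_closure(edges):
    """Semi-naive closure of a set of (a, b) pairs."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        new_pairs = set()
        for (a, b) in closure:
            for (c, d) in closure:
                # chain a -> b and b -> d into the derived pair a -> d
                if b == c and (a, d) not in closure:
                    new_pairs.add((a, d))
        if new_pairs:
            closure |= new_pairs
            changed = True
    return closure

maze = {(1, 2), (2, 3)}                 # immediate accessibility in a maze
reach = transitive_closure(maze)        # gains the derived pair (1, 3)
# A change in the base relation is causally reflected in the closure:
wider = transitive_closure(maze | {(3, 4)})   # also gains (1,4) and (2,4)
```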

-- Fernando Pereira

------------------------------

Date: 12 Dec 83 7:20:10-PST (Mon)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Mental states of machines
Article-I.D.: dciem.548

Any discussion of the nature and value of mental states in either
humans or machines should include consideration of the ideas of
J.G.Taylor (no relation). In his "Behavioral Basis of Perception"
(Yale University Press, 1962), he sets out mathematically a basis
for changes in perception/behaviour dependent on transitions into
different members of "sets" of states. These "sets" look very like
the mental states referenced in the earlier discussion, and may
be tractable in studies of machine behaviour. They also tie in
quite closely with the recent loose talk about "catastrophes" in
psychology, although they are much better specified than the analogists'
models. The book is not easy reading, but it is very worthwhile, and
I think the ideas still have a lot to offer, even after 20 years.

Incidentally, in view of the mathematical nature of the book, it
is interesting that Taylor was a clinical psychologist interested
initially in behaviour modification.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 1983 1042-PST
From: HALL.UCI-20B@Rand-Relay
Subject: AI Methods

After listening in on the communications concerning definitions
of intelligence, AI methods, AI results, AI jargon, etc., I'd
like to suggest an alternate perspective on these issues.  Rather
than quibbling over how AI "should be done," why not take a close
look at how things have been and are being done?  This is more of
a social-historical viewpoint, admitting the possibility that
adherents of differing methodological orientations might well
"talk past each other" - hence the energetic argumentation over
issues of definition.  In this spirit, I'd like to submit the
following for interested AILIST readers:

         Toward a Taxonomy of Methodological
    Perspectives in Artificial Intelligence Research

                 Rogers P. Hall
               Dennis F. Kibler

                   TR  108
                September 1983

      Department of Information and Computer Science
         University of California, Irvine
             Irvine, CA   92717

                    Abstract

    This paper is an attempt to explain the apparent confusion of
efforts in the field of artificial intelligence (AI) research in
terms of differences between underlying methodological perspectives
held by practicing researchers.  A review of such perspectives
discussed in the existing literature will be presented, followed by
consideration of what a relatively specific and usable taxonomy of
differing research perspectives in AI might include.  An argument
will be developed that researchers should make their methodological
orientations explicit when communicating research results, both as
an aid to comprehensibility for other practicing researchers and as
a step toward providing a coherent intellectual structure which can
be more easily assimilated by newcomers to the field.

The full report is available from UCI for a postage fee of $1.30.
Electronic communications are welcome:

    HALL@UCI-20B
    KIBLER@UCI-20B

------------------------------

Date: 15 Dec 1983 9:02-PST
From: fc%usc-cse%USC-ECL@MARYLAND
Subject: Re: AIList Digest   V1 #112 - science

        In my mind, science has always been the practice of using the
'scientific method' to learn. In any discipline, this is used to some
extent, but in a pure science it is used in its purest form. This
method seems to be founded in the following principles:

1       The observation of the world through experiments.

2       Attempted explanations in terms of testable hypotheses - they
        must explain all known data, predict as yet unobserved results,
        and be falsifiable.

3       The design and use of experiments to test predictions made by these
        hypotheses in an attempt to falsify them.

4       The abandonment of falsified hypotheses and their replacement
        with more accurate ones - GOTO 2.
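
The four steps above amount to a loop (the "GOTO 2").  A minimal sketch
of that loop, with a toy hypothesis family assumed purely for
illustration:

```python
def falsification_loop(observations, propose):
    """Steps 1-4: observe, hypothesize, test, replace on falsification."""
    data = [observations[0]]              # 1: observation of the world
    hypothesis = propose(data)            # 2: a testable hypothesis
    revisions = 0
    for datum in observations[1:]:        # 3: experiments test predictions
        data.append(datum)
        if not hypothesis(datum):         # prediction falsified
            hypothesis = propose(data)    # 4: replace it -- GOTO 2
            revisions += 1
    return hypothesis, revisions

# Toy hypothesis family: "every value is at most the largest seen so far";
# a larger observation falsifies it and forces a revision.
propose = lambda data: (lambda x, bound=max(data): x <= bound)
hypothesis, revisions = falsification_loop([3, 1, 4, 1, 5], propose)
```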

        Experimental psychology is indeed a science if viewed from this
perspective. So long as hypotheses are made and predictions tested with
some sort of experiment, the crudity of the statistics is comparable to
that of the statistical models physics used before it advanced to its
current state. Computer science (or whatever you call it) is also a
science in the sense that our understanding of computers is based on
prediction and experimentation. Anyone who says you don't experiment
with a computer hasn't tried it.

        The big question is whether mathematics is a science. I guess
it is, but somehow any system in which you only falsify or verify based
on the assumptions you made leaves me a bit concerned. Of course we are
context bound in any other science, and can't often see the forest for
the trees, but on the other hand, accidental discovery based on
experiments with results which are unpredictable under the current theory
is not really possible in a purely mathematical system.

        History is probably not a science in the above sense because,
although there are hypotheses with possible falsification, there is
little chance of performing an experiment in the past. Archeological
findings may be thought of as an experiment of the past, but I think
this sort of experiment is of quite a different nature than those that
are performed in other areas I call science. Archeology by the way is
probably a science in the sense of my definition not because of the
ability to test hypotheses about the past through experimental
diggings, but because of its constant development and experimental
testing of theory with regard to the way nature changes things over time.
The ability to determine the type of wood burned in an ancient fire and
the year in which it was burned is based on the scientific process that
archeologists use.

                        Fred

------------------------------

Date: 13 Dec 83 15:13:26-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Information sciences vs. physical sciences
Article-I.D.: dciem.553

*** This response is routed to net.philosophy as well as the net.ai
where it came from. Responders might prefer to edit net.ai out of
the Newsgroups: line before posting.


    I am responding to an article claiming that psychology and computer
    science aren't sciences.  I think that the author is seriously confused
    by his preferred usage of the term ``science''.


I'm not sure, but I think the article referenced was mine. In any case,
it seems reasonable to clarify what I mean by "science", since I think
it is a reasonably common meaning. By the way, I do agree with most of
the article that started with this comment, that it is futile to
define words like "science" in a hard and fast fashion. All I want
here is to show where my original comment comes from.

"Science" has obviously a wide variety of meanings if you get too
careful about it, just as does almost any word in a natural language.
But most meanings of science carry some flavour of a method for
discovering something that was not known by a method that others can
repeat. It doesn't really matter whether that method is empirical,
theoretical, experimental, hypothetico-deductive, or whatever, provided
that the result was previously uncertain or not obvious, and that at
least some other people can reproduce it.

I argued that psychology wasn't a science mainly on the grounds that
it is very difficult, if not impossible, to reproduce the conditions
of an experiment on most topics that qualify as the central core of
what most people think of as psychology. Only the grossest aspects
can be reproduced, and only the grossest characterization of the
results can be stated in a way that others can verify. Neither do
theoretical approaches to psychology provide good prediction of
observable behaviour, except on a gross scale. For this reason, I
claimed that psychology was not a science.

Please note that in saying this, I intend in no way to downgrade the
work of practicing psychologists who are scientists. Peripheral
aspects, and gross descriptions are susceptible to attack by our
present methods, and I have been using those methods for 25 years
professionally. In a way it is science, but in another way it isn't
psychology. The professional use of the word "psychology" is not that
of general English. If you like to think what you do is science,
that's fine, but remember that the definition IS fuzzy. What matters
more is that you contribute to the world's well-being, rather than
what you call the way you do it.
--

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 83 20:01:52-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: fortune.1978

I have to throw my two bits in:

The essence of science is "prediction". The missing step in the classic
hypothesis-experiment-analysis paradigm presented above is that
"hypothesis" should be read as "theory-prediction".

That is, no matter how well the hypothesis explains the current data, it
can only be tested on data that has NOT YET BEEN TAKEN.

Any sufficiently overdetermined model can account for any given set of data
by tweaking the parameters. The trick is, once calculated, do those parameters
then predict as yet unmeasured data, WITHOUT CHANGING the parameters?
("Predict" means "within a reasonable/acceptable confidence interval
when tested with the appropriate statistical methods".)
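
The point about overdetermined models can be made concrete.  In this
minimal sketch (data and models assumed purely for illustration), a
saturated polynomial accounts perfectly for the data in hand by tweaking
its parameters, yet a plain two-parameter line predicts the as yet
unmeasured point far better:

```python
def lagrange(points):
    """Saturated model: one parameter per datum, exact fit to all of it."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def least_squares_line(points):
    """Two-parameter straight line, ordinary least squares."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return lambda x: my + slope * (x - mx)

# Noisy observations of (roughly) y = 2x; the point x = 5 is NOT YET TAKEN.
data = [(0, 0.1), (1, 2.2), (2, 3.9), (3, 6.1)]
saturated = lagrange(data)        # zero training error, wild extrapolation
line = least_squares_line(data)   # small training error, good prediction
```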

Why am I throwing this back into "ai"? Because (for me) the true test
of whether "ai" has or will become a "science" is when its theories and
hypotheses can successfully predict (cf. above) the behaviour of existing
"natural" intelligences (whatever you mean by that: man/horse/porpoise/ant/...).

------------------------------

End of AIList Digest
********************
20-Dec-83 21:56:56-PST,12199;000000000001
Mail-From: LAWS created at 20-Dec-83 21:56:22
Date: Tue 20 Dec 1983 21:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #115
To: AIList@SRI-AI


AIList Digest           Wednesday, 21 Dec 1983    Volume 1 : Issue 115

Today's Topics:
  Neurophysics - Left/Right-Brain Citation Request,
  Knowledge Representation,
  Science & Computer Science & Expert Systems,
  Science - Definition,
  AI Funding - New Generation Computing
----------------------------------------------------------------------

Date: 16 Dec 83 13:10:45-PST (Fri)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Left / Right Brain
Article-I.D.: ubc-visi.571

From: Marc Majka <majka@ubc-vision.UUCP>

I have heard endless talk, and read endless numbers of magazine-grade
articles about left / right brain theories.  However, I have not seen a
single reference to any scientific evidence for these theories. In fact,
the only reasonably scientific discussion I heard stated quite the opposite
conclusion about the brain:  That although it is clear that different parts
of the brain are associated with specific functions, there is no logical
(analytic, mathematical, deductive, sequential) / emotional (synthetic,
intuitive, inductive, parallel) pattern in the hemispheres of the brain.

Does anyone on the net have any references to any studies that have been
done concerning this issue?  I would appreciate any directions you could
provide.  Perhaps, to save the load on this newsgroup (since this is not an
AI question), it would be best to mail directly to me.  I would be happy to
post a summary to this group.

Marc Majka - UBC Laboratory for Computational Vision

------------------------------

Date: 15 Dec 83 20:12:46-PST (Thu)
From: decvax!wivax!linus!utzoo!watmath!watdaisy!rggoebel @ Ucb-Vax
Subject: Re: New Topic (technical) - (nf)
Article-I.D.: watdaisy.362

Bob Kowalski has said that the only way to represent knowledge is
using first order logic.   ACM SIGART Newsletter No. 70, February 1980
surveys many of the people in the world actually doing representation
research, and few of them agree with Kowalski.   Is there anyone out
there than can substantiate a claim for actually ``representing'' (what
ever that means) ``knowledge?''   Most of the knowledge representation
schemes I've seen are really deductive information description languages
with quasi-formal extensions.   I don't have a good definition of what
knowledge is...but ask any mathematical logician (or mathematical
philosopher) what they think about calling something like KRL a
knowledge representation language.

Randy Goebel
Logic Programming Group
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1

------------------------------

Date: 13 Dec 83 8:14:51-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!wjh12!foxvax1!brunix!jah @ Ucb-Vax
Subject: Re: RE: Expert Systems
Article-I.D.: brunix.5992

I don't understand what the "size" of a program has to do with anything.
The notion that size is important seems to support the idea that the
word "science" in "computer science" belongs in quote marks.  That is,
that CS is just a bunch of hacks anyhow.
 The theory folks, whom I think most of us would call computer scientists,
write almost no programs.  Yet, I'd say their contribution to CS is
quite important (who analyzed the sorting algorithm you used this morning?)
 At least some parts of AI are still Science (with a capital "S").  We are
exploring issues involving cognition and memory, as well as building the
various programs that we call "expert systems" and the like.  Pople's
group, for example, is examining how it is that expert doctors come to
make diagnoses.  Pople is interested in the computer application, but
also in the understanding of the underlying process.


 Now, while we're flaming, let me also mention that some AI programs have
been awfully large.  If you are into the "bigger is better" mentality, I
suggest a visit to Yale and a view of some of the language programs there.
How about FRUMP, which in its 1978 version took up three processes, each
using over 100K of memory; the source code was several hundred pages, and
it contained word definitions for over 10,000 words.  A little bigger
than Haunt?

  Pardon all this verbiage, but I think AI has shown itself both on
the scientific level, by contributions to the field of psychology
(and linguistics, for that matter) and to the state of the art in
computer technology, and on the engineering level, by designing and
building some very large programs and some new programming techniques
and tools.

  -Jim Hendler

------------------------------

Date: 19 Dec 1983 15:00-EST
From: Robert.Frederking@CMU-CS-CAD.ARPA
Subject: Re: Math as science

        Actually, my library's encyclopedia says that mathematics isn't
a science, since it doesn't study phenomena, but rather is "the
language of science".  Perhaps part of the fuzziness about
AI-as-science is that we are creating most of the phenomena we are
studying, and the more theoretical components of what we are doing look
a lot like mathematical logic, which isn't a science.

------------------------------

Date: Mon, 19 Dec 1983 10:21:47 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Defining "Science"

     For better or worse, there really isn't such a thing as a prototypical
science.  The meaning of the word 'science' has always been different in
different realms of discourse:  what the "average American" means by the term
differs from what a physicist means, and neither of them would agree with an
individual working in one of the 'softer' fields.
     This is not something we want to change, in my view.  The belief that
there must be one single standardized definition for a very general term is
not a useful one, especially when the term is one that does not describe an
explicit, material thing (e.g., blood, pencil, etc.).  Abstract terms are
always dependent on the social context of their use for their definition; it's
just that academics often forget (or fail to note) that contexts other than
their own fields exist.
     Even if we try and define science in terms of its usage of the "scientific
method," we find that there's no clear definition.  If you've yet to read it,
I strongly urge you to take a look at Kuhn's "The Structure of Scientific
Revolutions," which is one of the most important books written about science.
He looks at what the term has meant, and does mean, in various disciplines
at various periods, and examines very carefully how the definitions were, in
reality, tied to other socially-defined notions.  It's a seminal work in the
study of the history and sociology of science.
     The social connotations of words like science affect us all every day.
In my personal opinion, one of the major reasons why the term 'computer
science' is gaining popularity within academia is that it dissociates the
field from engineering.  The latter field has, at least within most Western
cultures, a social stigma of second-class status attached to it, precisely
because it deals with mundane reality (the same split, of course, comes up
twixt pure and applied mathematics).  A good book on this, by the way, is
Samuel Florman's "The Existential Pleasures of Engineering"; his more recent
volume, "Blaming Technology", is also worth your time.
--Dave Axler

------------------------------

Date: Fri 16 Dec 83 17:32:56-PST
From: Al Davis <ADavis at SRI-KL>
Subject: Re: AIList Digest V1 #113


In response to the general "gee, the Japanese are good guys and the
Americans are schmucks and warmongers" view, and as a member of
one of the planning groups that wrote the DARPA SC plan, I offer the
following questions for thought:

1.  If you were Bob Kahn and were trying to get funding to permit
continued growth of technology under the Reagan administration, would
you ask for $750 million and say that you would do things in such a
way as to prevent military use?

2.  If it were not for DARPA how would we be reading and writing all
this trivia on the ARPAnet?

3.  If it were not for DARPA how many years (hopefully fun, productive,
and challenging) would have been fundamentally different?

4.  Is it possible that the Japanese mean "Japanese society" when they
target programs for "the good of ?? society"?

5.  Is it really possible to develop advanced computing technology that
cannot be applied to military problems?  Can lessons of
destabilization of the US economy be learned from the automobile,
steel, and TV industries?

6.  It is obvious that the Japanese are quick to take, copy, etc. in
terms of technology and profit.  Have they given much back? Note:  I like
my Sony TV and Walkman as much as anybody does.

7.  If DARPA is evil then why don't we all move to Austin and join MCC
and promote good things like large corporate profit?

8.  Where would AI be if DARPA had not funded it?

Well the list could go on, but the direction of this diatribe is
clear.  I think that many of us (me too) are quick to criticize and
slow to look past the end of our noses.  One way to start to improve
society is to climb down off the &%^$&^ ivory tower ourselves.  I for
one have no great desire to live in Japan.

                                                Al Davis

                                                ADAVIS @ SRI-KL

------------------------------

Date: Tue, 20 Dec 1983  09:13 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: New Generation computing: Japanese and U.S. motivations

Ron,

I believe that you have painted a misleading picture of a complex situation.

From talking to participants involved, I believe that MITI is
funding the Japanese Fifth Generation Project primarily for commercial
competitive advantage.  In particular they hope to compete with IBM
more effectively than as plug-compatible manufacturers.  MITI also
hopes to increase Japanese intellectual prestige.  Congress is funding
Strategic Computing to maintain and strengthen US military and
commercial technology.  A primary motivation for strengthening the
commercial technology is to meet the Japanese challenge.

------------------------------

Date: 20 Dec 83 20:41:06 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. motivations

Are we really in disagreement?

It seems pretty clear from my quotes, and from numerous writings on the
subject, that the Japanese intend to use the Fifth Generation Project to
strengthen their position in commercial markets.  We don't disagree
there.

It also seems clear that, as you say, "Congress is funding a project
called Strategic Computing to maintain and strengthen US military and
commercial technology."  That should be parsed as "Military technology
first, with hopes of commercial spinoff."

If you think that's a misleading distortion, read the DARPA Strategic
Computing Report.  Pages 21 through 29 contain detailed specifications
of the requirements of three specific military applications.   There is
no equivalent specification of non-military application
requirements--only a vague statement on page 9 that commercial spinoffs
will occur.  Military requirements and terminology permeate the entire
report.

If the U.S. program is aimed at military applications, that's what it
will produce.  Any commercial or industrial spinoff will be incidental.
If we are serious about strengthening commercial computer technology,
then that's what we should be aiming for.  As you say, that's certainly
what the Japanese are aiming for.

Isn't it about time that we put our economic interests first, and the
military second?

/Ron

------------------------------

End of AIList Digest
********************
22-Dec-83 19:42:33-PST,10328;000000000001
Mail-From: LAWS created at 22-Dec-83 19:41:19
Date: Thu 22 Dec 1983 19:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #116
To: AIList@SRI-AI


AIList Digest            Friday, 23 Dec 1983      Volume 1 : Issue 116

Today's Topics:
  Optics - Request for Camera Design,
  Neurophysiology - Split Brain Research,
  Expert Systems - System Size,
  AI Funding - New Generation Computing,
  Science - Definition
----------------------------------------------------------------------

Date: Wed, 21 Dec 83 14:43:29 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: REFERENCES FOR SPECIALIZED CAMERA DESIGN USING FIBER OPTICS

        In a conventional TV camera, the image falls upon a staring
array of transducers.  The problem is that it is very difficult to
get very close to the focal point of the optical system using this
technology.
        I am looking for a design of a camera imaging system
that projects the light image onto a fiber optic bundle.
The optical fibers are used to transport the light falling upon
each pixel away from the camera focal point so that the light
may be quantized.
        I'm sure that such a system has already been designed, and
I would greatly appreciate any references that would be appropriate
to this type of application.  I need to computer model such a system,
so the pertinent optical physics and related information would be
MOST useful.
        If there are any of you that might be interested in this
type of camera system, please contact me.  It promises to provide
the degree of resolution which is a constraint in many vision
computations.

                                Visually yours,
                                Philip Kahn

------------------------------

Date: Wed 21 Dec 83 11:38:36-PST
From: Richard F. Lyon <DLyon at SRI-KL>
Subject: Re: AIList Digest V1 #115

  In reply to <majka@ubc-vision.UUCP> on left/right brain research:

    Most of the work on split brains and hemispheric specialization
has been done at Caltech by Dr. Roger Sperry and colleagues.  The 1983
Caltech Biology annual report has 5 pages of summary results, and 11
recent references by Sperry's group.  Previous year annual reports
have similar amounts.  I will mail copies if given an address.
        Dick Lyon
        DLYON@SRI-KL

------------------------------

Date: Wednesday, 21 December 1983 13:48:54 EST
From: John.Laird@CMU-CS-H
Subject: Haunt and other production systems.

A few facts about production systems.

1. Haunt consists of 1500 productions and requires 160K words of memory on a
KL10. (So FRUMP is a bit bigger than Haunt.)

2. Expert systems (R1, XSEL and PTRANS) written in a similar language
consist of around 1500-2500 productions.

3. An expert system to perform VLSI design (DAA) consists of around 200
productions.

------------------------------

Date: 19 Dec 83 17:37:56-PST (Mon)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Re: Humanistic Japanese vs. Military Americans
Article-I.D.: dartvax.536

Does anyone know of any groups doing serious AI in the U.S. or Europe
that emulate the Japanese attitude?

--Lorien

------------------------------

Date: Wed 21 Dec 83 13:04:21-PST
From: Andy Freeman <ANDY@SU-SCORE.ARPA>
Subject: Re: AIList Digest   V1 #115

"If the U.S. program is aimed at military applications, that's what it
will produce.  Any commercial or industrial spinoff will be
incidental."

It doesn't matter what DoD and the Japanese project aim for.  We're
not talking about spending a million on designing bullets but about
something much more like the space program.  The meat of that
specification was "American on Moon with TV camera," but look what else
happened.  Also, the goal was very low volume, but many of the
products aren't.

Hardware, which is probably the majority of the specification, could
be where the crossover will be greatest.  Even if they fail to get "a
lisp machine in every tank", they'll succeed in making one for an
emergency room.  (Camping gear is a recent example of something
similar.)  Yes, they'll be able to target software applications, but
at least the tools, skills, and people move.  What distinguishes a US
Army database system anyway?

I can understand the objection that the DoD shouldn't have "all those
cycles", but that isn't one of the choices.  (How they are to be used
is, but not through the research.)  The new machines are going to be
built - if nothing else the DoD can use Japanese ones.  Even if all
other things were equal (and I don't think the economic ones are), why
should they have all the fun?

-andy

------------------------------

Date: Wednesday, 21 December 1983, 19:27-EST
From: Hewitt at MIT-MC
Subject: New Generation Computing: Japanese and U.S. motivations

Ron,

For better or worse, I do not believe that you can determine what will
be the motivations or structure of either the MITI Fifth Generation
effort or the DARPA Strategic Computing effort by citing chapter and
verse from the two reports which you have quoted.

/Carl

------------------------------

Date: Wed, 21 Dec 83 22:55:04 EST
From: BRINT <abc@brl-bmd>
Subject: AI Funding - New Generation Computing

It seems to me that intelligent folks like AIList readers
should realize that the only reason Japan can fund peaceful
and humanitarian research to the exclusion of
military projects is that the United States provides the
military protection and security guarantees (out of our own
pockets) that make this sort of thing possible.

(I believe Al Davis said it well in the last Digest.)

------------------------------

Date: 22 Dec 83 13:52:20 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Strategic Computing: Defense vs Commerce

Yes, it is a sad fact about American society that a project like
Strategic Computing will only be funded if it is presented as a
defense issue rather than a commercial/economic one.  (How many people
remember that the original name for the Interstate Highway system had
the word "Defense" in it?)  This is something we can and
should work to change, but I do not believe that it is the kind of
thing that can be changed in a year or two.  So, we are faced with the
choice of waiting until we change society, or getting the AI work done
in a way that is not perfectly optimal for producing
commercial/economic results.

It should be noted that achieving the military goals will require very
large advances in the underlying technology that will certainly have
very large effects on non-military AI.  It is not just a vague hope
for a few spinoffs.  So while doing it the DOD way may not be optimal,
it is not horrendously sub-optimal.

There is, of course, a moral issue of whether we want the military to
have the kinds of capabilities implied by the Strategic Computing
plan.  However, if the answer is no then you cannot do the work under
any funding source.  If the basic technology is achieved in any way,
then the military will manage to use it for their purposes.

------------------------------

Date: 18 Dec 83 19:47:50-PST (Sun)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: uiucdcs.4598

    The definitions of Science that were offered, in defense of
    "computer Science" being a science, were just irrelevant.
    A field can lay claim to Science, if it uses the "scientific method"
    to make advances, that is:

    Hypotheses are proposed.
    Hypotheses are tested by objective experiments.
    The experiments are objectively evaluated to prove or
            disprove the hypotheses.
    The experiments are repeatable by other people in other places.

                                    - Keremath,  care of:
                                      Robison
                                      decvax!ittvax!eosp1
                                      or:   allegra!eosp1


I have to disagree. Your definition of `science' excludes at least one
thing that almost certainly IS a science: astronomy. The major problem
here is that most astronomers (all extra-solar astronomers) simply cannot
do experiments, which is why they call it `observational astronomy.'

I would guess what is needed is three (at least) flavors of science:

        1) experimental sciences: physics, chemistry, biology, psychology.
        Any field that uses the `scientific method.'

        2) observational sciences: astronomy, sociology, etc. Any field that,
        for some reason or another, must be satisfied with observing
        phenomena, and cannot perform experiments.

        3) ? sciences: mathematics, some cs, probably others. Any field that
        explores the universe of the possible, as opposed to the universe of
        the actual.

What should the ? be? I don't know. I would tend to favor `logical,' but
something tells me a lot of people will object.

        <mike

------------------------------

Date: 21 Dec 1983 14:36-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #115

        The reference to Kuhn's 'The Structure of Scientific Revolutions'
is appreciated, but if you take a good look at the book itself, you
will find it severely lacking in scientific practice. Besides being
palpably inconsistent, Kuhn's book claims several facts about history
that are not correct, and uses them in support of his arguments. One of
his major arguments is that historians rewrite the facts, and he himself
acted in this manner, rewriting facts to support his contentions. He defined
a term 'paradigm' inconsistently, and even though it is in common use
today, it has not been consistently defined yet. He also made several
other inconsistent definitions, and has even given up this view of
science (if you bother to read the other papers written after his book).

    It just goes to show you, you shouldn't believe everything you read,
                                        Fred

------------------------------

End of AIList Digest
********************
29-Dec-83 23:53:23-PST,17170;000000000001
Mail-From: LAWS created at 29-Dec-83 23:51:45
Date: Thu 29 Dec 1983 23:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #117
To: AIList@SRI-AI


AIList Digest            Friday, 30 Dec 1983      Volume 1 : Issue 117

Today's Topics:
  Reply - Fiber Optic Camera,
  Looping Problem - Loop Detection and Classical Psychology,
  Logic Programming - Horn Clauses, Disjunction, and Negation,
  Alert - Expert Systems & Molecular Design,
  AI Funding - New Generation Discussion,
  Science - Definition
----------------------------------------------------------------------

Date: 23 Dec 1983 11:59-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: fiber optic camera?

The University of Pittsburgh Observatory is experimenting with just
such an imaging system in one of their major projects, trying to
(indirectly) observe planetary systems around nearby stars.  They claim
that the fiber optics provide so much more resolution than the
photography they used before that they may well succeed.  Another major
advantage to them is that they have been able to automate the search;
no more days spent staring at photographs.

--david

------------------------------

Date: Fri 23 Dec 83 12:01:07-EST
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Loop detection and classical psychology

I wonder if we've been incorrectly thinking of the brain's loop detection
mechanism as a sort of monitor process sitting above a train of thought,
and deciding when the latter is stuck in a loop and how to get out of it.
This approach leads to the problem of who monitors the monitor, ad
infinitum.  Perhaps the brain detects loops in *hardware*, by classical
habituation.  If each neuron is responsible for one production (more or
less), then a neuron involved in a loop will receive the same inputs so
often that it will get tired of seeing those inputs and fire less
frequently (return a lower certainty value), breaking the loop.  The
detection of higher level loops such as "Why am I trying to get this PhD?"
implies that there is a hierarchy of little production systems (or
whatever), one for each chunk of knowledge.  [Next question - how are
chunks formed?  Maybe there's a low-level explanation for that too, having
to do with classical conditioning....]
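[The habituation idea can be sketched as a toy program.  Python is used
anachronistically here, and every name, decay rate, and threshold below
is an arbitrary illustrative choice, not a claim about real neurons:

```python
# Toy sketch of loop-breaking by habituation: a "production" returns a
# lower certainty each time it sees the same inputs, so a repeating
# train of thought eventually gives up on its own, with no monitor
# process sitting above it.  The decay rate 0.34 and the firing
# threshold 0.5 are arbitrary illustrative values.

def habituating_production(memory, inputs, decay=0.34):
    """Return a certainty that drops as the same inputs recur."""
    seen = memory.get(inputs, 0)
    memory[inputs] = seen + 1
    return max(0.0, 1.0 - decay * seen)

memory = {}
steps = 0
# The same "thought" keeps firing until habituation breaks the loop.
while habituating_production(memory, ("why", "phd")) > 0.5:
    steps += 1
print(steps)  # -> 2: certainty falls below threshold after a few repeats
```

Note that no outer monitor decides the loop is a loop; the exit comes
purely from local state inside the habituating unit, which is the point.]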

BTW, I thought of this when I read some word or other so often that it
started looking funny; that phenomenon has gotta be a misfeature of loop
detection.  Some neuron in the dictionary decides it's been seeing that damn
word too often, so it makes its usual definition less certain; the parse
routine that called it gets an uncertain definition back and calls for
help.
                        --Mike Rubin <Rubin@Columbia-20>

------------------------------

Date: 27 Dec 1983 16:30:08-PST
From: marcel.uiuc@Rand-Relay
Subject: Re: a trivial reasoning problem?

This is an elaboration of why a problem I submitted to the AIList seems
to be unsolvable using regular Horn clause logic, as in Prolog. First I'll
present the problem (of my own devising), then my comments, for your critique.

        Suppose you are shown two lamps, 'a' and 'b', and you
        are told that, at any time,

                1. at least one of 'a' or 'b' is on.
                2. whenever 'a' is on, 'b' is off.
                3. each lamp is either on or off.

        WITHOUT using an exhaustive generate-and-test strategy,
        enumerate the possible on-off configurations of the two
        lamps.

If it were not for the exclusion of dumb-search-and-filter solutions, this
problem would be trivial. The exclusion has left me baffled, even though
the problem seems so logical. Check me on my thinking about why it's so
difficult.

1. The first constraint (one or both lamps on) is not regular Horn clause
   logic. I would like to be able to state (as a fact) that

        on(a) OR on(b)

   but since regular Horn clauses are restricted to at most one positive
   literal I have to recode this. I cannot assert two independent facts
   'on(a)', 'on(b)' since this suggests that 'a' and 'b' are always both
   on. I can however express it in regular Horn clause form:

        not on(b) IMPLIES on(a)
        not on(a) IMPLIES on(b)

   As it happens, both of these are logically equivalent to the original
   disjunction. So let's write them as Prolog:

        on(a) :- not on(b).
        on(b) :- not on(a).

   First, this is not what the disjunction meant. These rules say that 'a'
   is provably on only when 'b' is not provably on, and vice versa, when in
   fact 'a' could be on no matter what 'b' is.

   Second, a question   ?- on(X).  will result in an endless loop.

   Third, 'a' is not known to be on except when 'b' is not known to be on
   (which is not the same as when 'b' is known to be off). This sounds as
   if the closed-world assumption might let us get away with not being able
   to prove anything (if we can't prove something we can always assume its
   negation). Not so. We do not know ANYTHING about whether 'a' or 'b' are
   on OR off; we only know about constraints RELATING their states. Hence
   we cannot even describe their possible states, since that would require
   filling in (by speculative hypothesis) the states of the lamps.

   What is wanted is a non-regular Horn clause, but some of the nice
   properties of Logic Programming (eg completeness and consistency under the
   closed-world assumption, alias a reasonable negation operator) do not apply
   to non-regular Horn clauses.

2. The second constraint (whenever 'a' is on, 'b' is off) shares some of the
   above problems, and a new one. We want to say

        on(a) IMPLIES not on(b),   or    not on(b) :- on(a).

   but this is not possible in Prolog; we have to say it in what I feel to
   be a rather contrived manner, namely

        on(b) :- on(a), !, fail.

   Unfortunately this makes no sense at all to a theoretician. It is trying
   to introduce negative information, but under the closed-world assumption,
   saying that something is NOT true is just the same as not saying it at all,
   so the clause is meaningless.

   Alternative: define a new predicate off(X) which is complementary to on(X).
   That is the conceptualization suggested by the third problem constraint.

3.      off(X) :- not on(X).
        on(X)  :- not off(X).

   This idea has all the problems of the first constraint, including the
   creation of another endless loop.

It seems this problem is beyond the capabilities of present-day logic
programming. Please let me know if you can find a solution, or if you think
my analysis of the difficulties is inaccurate.
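For comparison, the excluded dumb-search-and-filter solution really is
trivial.  Here is a sketch (in Python, anachronistically, since the point
is independent of language; the names are mine, not part of the problem):

```python
from itertools import product

# Brute-force enumeration of the lamp configurations - exactly the
# generate-and-test strategy the problem statement rules out, shown
# only to make the contrast with the Horn clause attempts concrete.
def lamp_configs():
    configs = []
    # Constraint 3 (each lamp is either on or off) is built into the
    # domain over which we generate candidates.
    for a, b in product(["on", "off"], repeat=2):
        at_least_one_on = (a == "on") or (b == "on")      # constraint 1
        a_on_implies_b_off = (a != "on") or (b == "off")  # constraint 2
        if at_least_one_on and a_on_implies_b_off:
            configs.append((a, b))
    return configs

print(lamp_configs())  # -> [('on', 'off'), ('off', 'on')]
```

The two surviving models are precisely the states the pure Horn clause
encoding cannot enumerate without speculative hypotheses about each lamp.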

                                        Marcel Schoppers
                                        U of Illinois at Urbana-Champaign
                                        {pur-ee|ihnp4}!uiucdcs!marcel

------------------------------

Date: Mon 26 Dec 83 22:15:06-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: High Technology Articles

The January issue of High Technology has a fairly good introduction
to expert systems for commercial applications.  As usual for this
magazine, there are corporate names and addresses and product
prices.  The article mentions that there are probably fewer than
200 "knowledge engineers" in the country, most at universities
and think tanks; an AI postdoc willing to go into industry, but
with no industry experience, can command $70K.

The business outlook section is not the usual advice column
for investors, just a list of some well-known AI companies.  The
article is also unusual in that it bases a few examples of knowledge
representation and inference on the fragment BIRD IS-A MAMMAL.


Another interesting article is "Designing Molecules by Computer".
Several approaches are given, but one seems particularly pertinent
to the recent AIList discussion of military AI funding.  Du Pont
researchers are studying how a drug homes in on its receptor site.
They use an Army program that generates line-of-sight maps for
TV-controlled antitank missiles to "fly" a drug in and observe how its
ability to track its receptor site on the enzyme surface is influenced
by a variety of force fields and solvent interactions.  A different
simulation with a similar purpose uses robotic software for assembling
irregular components to "pick up" the drug and "insert" it in the
enzyme.

                                        -- Ken Laws

------------------------------

Date: 23 December 1983 21:41 est
From: Dehn at MIT-MULTICS (Joseph W. Dehn III)
Subject: "comparable" quotes

Person at University of Tokyo, editor of a scientific/engineering
journal, says computers will be used to solve human problems.

Person at DARPA says computers will be used to make better weapons
("ways of killing people").

Therefore, Japanese are humane, Americans are warmongers.

Huh?

What is somebody at DARPA supposed to say is the purpose of his R&D
program?  As part of the Defense Department, that agency's goal SHOULD
be to improve the defense of the United States.  If they are doing
something else, they are wasting the taxpayer's money.  There are
undoubtedly other considerations involved in DARPA's activities,
bureaucratic, economic, scientific, etc., but, nobody should be
astonished when an official statement of purpose states the official
purpose!

Assuming the nation should be defended, and assuming that advanced
computing can contribute to defense, it makes sense for the national
government to take an interest in advanced computing for defense.  Thus,
the question should not be, "why do Americans build computers to kill
people", but rather why don't they, like the Japanese, ALSO, and
independent of defense considerations (which are, as has been pointed
out, different in Japan), build computers "to produce profitable
industrial products"?

Of course, before we try to solve this puzzle, we should first decide
that there is something to be solved.  Is somebody suggesting that
because there are no government or quasi-government statements of
purpose, that Americans are not working on producing advanced and
profitable computer products?  What ARE all those non-ARPA people doing
out there in netland, anyway?  Where are IBM's profits coming from?

How can we meaningfully compare the "effort" being put into computer
research in Japan and the U.S.?  Money? People?  How about results?
Which country has produced more working AI systems (you pick the
definition of "working" and "AI")?

                           -jwd3

------------------------------

Date: 29 Dec 1983 09:11:34-PST
From: Mike Brzustowicz <mab@aids-unix>
Subject: Japan again.

Just one more note.  Not only do we supply Japan's defense, but by treaty
they cannot supply their own (except for a very small national guard-type
force).

------------------------------

Date: 21 Dec 83 19:49:32-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: eosp1.466

I disagree -  astronomy IS an experimental science.  Even before the
age of space rockets, some celebrated astronomical experiments were
performed.  In astronomy, as in all sciences, one observes,
makes hypotheses, and then tries to verify the hypotheses by
observation.  In chemistry and physics, a lot of attention is paid
to setting up an experiment, as well as observing the experiment;
in astronomy (geology as well!), experiments consist mostly
of observation, since there is hardly anything that people are capable
of setting up.  Here are some pertinent examples:

(1) An experiment to test a theory about the composition of the sun has
been going on for several years.  It consists of an attempt to trap
neutrinos from the sun in a pool of chlorine underground.  The number
of neutrinos detected has been about 1/4 of what was predicted, leading
to new suggestions about both the composition of the sun,
and (in particle physics) the physical properties of neutrinos.

(2) An experiment to verify Einstein's theory of relativity,
particularly the hypothesis that the presence of large masses curves
space (general relativity) -- measurements of stars' apparent
positions near the sun, during an eclipse, were shifted by an amount
consistent with Einstein's theory.

Obviously, Astronomical experiments will seem to lie half in the realm
of physics, since the theories of physics are the tools with which we
try to understand the skies.

Astronomers and physicists, please help me out here; I'm neither.
In fact, I don't even believe in neutrinos.

                                - Keremath,  care of:
                                  Robison
                                  decvax!ittvax!eosp1
                                  or:   allegra!eosp1

------------------------------

Date: Thu, 29 Dec 83 15:44 EST
From: Hengst.WBST@PARC-MAXC.ARPA
Subject: Re: AIList Digest   V1 #116

The flaming on the science component of computer science intrigues me
because it parallels some of the 1960's and 1970's discussion about the
science component of social science. That particular discussion, to
which Thomas Kuhn also contributed, also has not yet reached closure
which leaves me with the feeling that science might best be described as
a particular form of behavior by practitioners who possess certain
qualifications and engage in certain rituals approved by members of the
scientific tribe.

Thus, one definition of science is that it is whatever it is that
scientists do in the name of science ( a contextual and social
definition). Making coffee would not be scientific activity but reading
a professional book or entertaining colleagues with stimulating thoughts
and writings would be. From this perspective, employing the scientific
method is merely a particular form of engaging in scientific practice
without judging the outcome of that scientific practice. Relying upon
the scientific method by unlicensed practitioners would not result in
science but in lay knowledge. This means that authoritative statements
by members of scientific community are automatically given a certain
truth value. "Professor X says this", "scientific study Y demonstrates
that . . ." should all be considered as scientific statements because
they are issued as authorative statements in the name of science. This
interpretation of science discounts the role of Edward Teller as a
credible spokesman in the area of nuclear weapons policy in foreign
affairs.

The "licensing" of the practitioners derives from the formalization of
the training and education in the particular body of knowledge: eg. a
university degree is a form of license. Scientific knowledge can
differentiate itself from other forms of knowledge on the basis of
attempts (but not necessarily success) at formalization. Physical
sciences study phenomena which lend themselves to better quantification
(they do have better metrics!) and higher levels of formalization. The
deterministic bodies of knowledge of the physical sciences allow for
better prediction than the heavily probabilistic bodies of knowledge of
the social sciences, which facilitate explanation more than prediction.
I am not sure if a lack of predictive power or lack of availability of
the scientific method (experimental design in its many flavors) makes
anyone less a scientist. The social sciences are rich in description and
insight which in my judgment compensates for a lack of hierarchical,
deductive formal knowledge.

From this point of view computer science is science if it involves
building a body of knowledge with attempts at formulating rules in some
consistent and verifiable manner by a body of trained practitioners.
Medieval alchemy also qualifies due to its apprenticeship program (rules
for admitting members) and its rules for building knowledge.
Fortunately, we have better rules now.

Acco

------------------------------

Date: Thu 29 Dec 83 23:38:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Philosophy of Science Discussion

I hate to put a damper on the discussion of Scientific Method,
but feel it is my duty as moderator.  The discussion has been
intelligent and entertaining, but has strayed from the central
theme of this list.  I welcome discussion of appropriate research
techniques for AI, but discussion of the definition and philosophy
of science should be directed to Phil-Sci@MIT-OZ.  (Net.ai members
are free to discuss whatever they wish, of course, but I will
not pass further messages on this topic to the ARPANET readership.)

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************