2-Nov-87 22:28:11-PST,16604;000000000000
Mail-From: LAWS created at  2-Nov-87 22:20:20
Date: Mon  2 Nov 1987 22:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #255 - Future of AI & Speech & PDP Book & AI Categories
To: AIList@SRI.COM


AIList Digest            Tuesday, 3 Nov 1987      Volume 5 : Issue 255

Today's Topics:
  Queries - OPS5 Programs & Future of AI,
  Comments - Future of AI & Speech Understanding,
  References - PDP & AI Categories,
  Comments - Success of AI

----------------------------------------------------------------------

Date: 30 Oct 87 18:11:41 GMT
From: ihnp4!alberta!ajit@ucbvax.Berkeley.EDU  (Ajit Singh)
Subject: Need OPS5 Programs


  I am currently working on analyzing the static characteristics
as well as the run-time behavior of large production system
programs for the purposes of rule-clustering and distributed
processing. I am using OPS5 as my production system model. I
need lots of large and small OPS5 programs. Does anybody know
of any publicly accessible library of such programs? Any
help in this direction will be greatly appreciated.

 If you have some OPS5 programs (plus data if necessary) that
you would like to share, you can send them directly to me
via e-mail at the following address:

              {ubc-vision, ihnp4, mnetor}!alberta!ajit


                       Thanks in advance,


Ajit Singh
Department of Computing Science
University of Alberta
Edmonton, Alberta
Canada

------------------------------

Date: 30 Oct 87 20:30:06 GMT
From: kirby@ngp.utexas.edu  (Bruce Kirby)
Subject: The future of AI.... (nothing about flawed minds)

I have a question for people:
   What practical effects do you think AI will have in the next ten
years?

What I am interested in is discovering what people expect to actually
come out of AI research in the near future,  and how that will affect
society,  business and government.  I am not interested in the
long-term questions of what AI will eventually accomplish.

Some supplementary questions:
   - What field of AI will produce practical applications?
   - What will be the effect of a new application? (e.g. how would an
effective translation mechanism affect the way people function?)
   - Who is likely to produce these useful applications?  How are they
to be introduced?

Any comments/responses are welcome.  I am just trying to get a feel
for what other people see as the near-term effects of AI research.

Bruce Kirby
kirby@ngp.utexas.edu
...!ut-sally!ut-ngp!kirby

------------------------------

Date: 1 Nov 87 04:15:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: The future of AI.... (nothing about


Re: Products in the next 10 years coming from AI.

One thing that is currently out there is a growing body of expert systems.
Many new ones are being churned out as we speak, and I think they will
continue to be produced at a gently accelerating rate over the next decade.
But many expert systems are frightfully narrow.  They tend to be simplistic
and only apply when problems are just right.  So look for additional layers,
which begin to show some real sophistication.  I expect
"multi-expert-system-management-systems" to appear and to exhibit qualities
that will begin to look like the human traits of "judgement" and "learning by
analogy", and systems that will improve with time (autonomously).

------------------------------

Date: 31 Oct 87 13:52:15 GMT
From: gatech!hubcap!ncrcae!gollum!rolandi@rutgers.edu  (rolandi)
Subject: Practical effects of AI


In article <6667@ut-ngp.UUCP> you write:
>I have a question for people:
>   What practical effects do you think AI will have in the next ten
>years?
>........[etc...]

I'd say that AI will have at least two real and immediate effects.

        1) Given AI programming tools and techniques, many processes
           previously assumed to be too complicated for automation
           will be automated.  The automation of these tasks will
           take less time given the productivity gains that AI tools
           can provide.  Expert systems will be commonplace within
           the DP/MIS world.

        2) AI will make computers easier to use and therefore extend
           their usefulness to non-computer people.

Regarding #2 above...

It would seem to me that the single greatest practical advancement for
AI will be in speaker independent, continuous speech recognition. This
is NOT to imply total computer "comprehension" in the sense of being
able to carry on an unrestricted conversation.  I am NOT referring to
abilities to process natural language.  That is a long way off, and
will most likely come about as a function of a redefinition of the NLP
problem in terms of a machine learning issue.  What "simple" speaker
independent, continuous speech recognition will provide is the ultimate
alternative to keyboard entry.  This would thereby provide all of
the functionality of current technology to anyone who could pronounce
the commands.  This issue will have a major impact on the industry and
on society.  By making "every body" a user, more machines will be sold,
and because "every body" will have different needs, tha range of
automation will be widely extended.


-w.rolandi
ncrcae!gollum!rolandi

disclaimer: i speak for no one but myself and usually no one else is
            listening.

------------------------------

Date: 31 Oct 87 22:06:02 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!kfl@cs.rochester.edu  (Kai-Fu
      Lee)
Subject: Re: Practical effects of AI (speech)

In article <12@gollum.Columbia.NCR.COM>, rolandi@gollum.Columbia.NCR.COM
(rolandi) writes:
>
> In article <6667@ut-ngp.UUCP> you write:
> >I have a question for people:
> >   What practical effects do you think AI will have in the next ten
> >years?
> >........[etc...]

> It would seem to me that the single greatest practical advancement for
> AI will be in speaker independent, continuous speech recognition. This
> is NOT to imply total computer "comprehension" in the sense of being
> able to carry on an unrestricted conversation.  I am NOT referring to
> abilities to process natural language.  That is a long way off, and
> will most likely come about as a function of a redefinition of the NLP
> problem in terms of a machine learning issue.  What "simple" speaker
> independent, continuous speech recognition will provide is the ultimate
> alternative to keyboard entry.  This would thereby provide all of
> the functionality of current technology to anyone who could pronounce
> the commands.  This issue will have a major impact on the industry and
> on society.  By making "every body" a user, more machines will be sold,
> and because "every body" will have different needs, the range of
> automation will be widely extended.
>

Those of us who work on speech will be very encouraged by this enthusiasm.
However,

(1) Speaker-independent continuous speech is much farther from reality
    than some companies would have you think.  Currently, the best
    speech recognizer is IBM's Tangora, which makes about 6% errors
    on a 20,000 word vocabulary.  But the Tangora is for speaker-
    dependent, isolated-word, grammar-guided recognition in a benign
    environment.  Each of these four constraints cuts the error rate
    by a factor of 3 or more if used independently.  I don't know how
    well they will do if you remove all four constraints, but I would
    guess about 70% error rate (see the back-of-the-envelope sketch
    after this list).  So while speech recognition has made a lot of
    advancements, it is still far from usable in the application you
    mentioned.
(2) Spoken English is a harder problem than NLP of written English.
    If you make the recognizer too constrained (small vocabulary, fixed
    syntax, etc.), it will be harder to use than a keyboard.  If you don't,
    you have to understand spoken English, which is really hard.
(3) If this product were to materialize, it is far from clear that it
    would be an advancement for AI.  At present, the most promising
    techniques are based on stochastic modeling, pattern recognition,
    information theory, signal processing, auditory modeling, etc.
    So far, very few traditional AI techniques are used in, or work well
    for, speech recognition.
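
A back-of-the-envelope sketch of the arithmetic in point (1), under the
over-simple assumption that the four factor-of-3 reductions are
independent and multiplicative.  The naive product overshoots 100%,
which is why the figure above is a guess rather than a computed value:

  # Rough sketch only: compound the factor-of-3 error reductions cited
  # for the Tangora's four constraints.  Error rates saturate near 100%,
  # so the naive product is an overestimate; the "about 70%" above is a
  # judgment call, not straight multiplication.
  base_error = 0.06          # ~6% errors with all four constraints in place
  factor_per_constraint = 3  # each constraint cuts the error rate ~3x
  constraints_removed = 4    # speaker independence, continuous speech,
                             # no grammar, noisy environment

  naive = base_error * factor_per_constraint ** constraints_removed
  capped = min(1.0, naive)
  print(f"naive compounded error rate: {naive:.0%}")   # ~486%, clearly impossible
  print(f"capped at 100%:              {capped:.0%}")
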
>
> -w.rolandi
> ncrcae!gollum!rolandi

Kai-Fu Lee
Computer Science Department
Carnegie-Mellon University

------------------------------

Date: 30 Oct 87 03:16:05 GMT
From: ihnp4!homxb!homxc!del@ucbvax.Berkeley.EDU  (D.LEASURE)
Subject: PDP by Rumelhart and McClelland

After posting about a good text on parallel distributed processing aka
neural nets, I've had several requests for a full reference from
people I can't reach on the net directly.

The books are:

Parallel Distributed Processing: Explorations in the Microstructure
of Cognition, Vols. 1 and 2, by David E. Rumelhart and James L.
McClelland, Bradford Books, The MIT Press, ISBN 0-262-63110-5.

The two volumes in paper are about $25 together.  A third volume,
with software for the IBM PC, is also out this month.

I still recommend them.
--
David E. Leasure - AT&T Bell Laboratories - (201) 615-5307

------------------------------

Date: Fri, 30 Oct 1987  17:20 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V5 #253 - LISP, NIL, Msc.

In reply to noekel@uklirb.UUCP who is
>
>currently building an AI bibliography and still searching for a
>suitable classification/key word scheme.

In the IRE Transactions on Human Factors in Electronics, March 1961, I
published a big (600 item) bibliography on AI.  It may have been the
first published descriptor-index bibliography or, perhaps, the first
to use the term "descriptor", which I got from Calvin Mooers.  Now
Noekel wants one that has "gained wide-spread use in the AI community"
and my 1961 set of terms must be rather dated and does not reflect
many newer ideas.  However, much of it may still be useful.  And I
would be curious about how useful it might remain after all those
years.

The bibliography was a by-product of work on my other 1961 article,
"steps toward artificial intelligence" which appeared in the
Proceedings of the IRE (whose name later changed to Proc. IEEE.)  The
reason the bibliographic appeared in the more obscure Human Factors
journal was that "Steps" was already too long and there was no more
room.  Tom Marill was editing a special issue of the HF transactions
and offered to place it there because that issue contained other
AI-related topics.

------------------------------

Date: 31 Oct 87 03:44:44 GMT
From: honavar@speedy.wisc.edu (A Buggy AI Program)
Reply-to: honavar@speedy.wisc.edu (A Buggy AI Program)
Subject: Re: Success of AI


In article <8710280748.AA21340@jade.berkeley.edu> eitan@wisdom.BITNET
(Eitan Shterenbaum) writes:
>
>Had it ever come into you mind that simulating/emulating the human brain is
>NP problem ? ( Why ? Think !!! ). Unless some smartass comes out with a proof
>for NP=P yar can forget de whole damn thing ...
>
>                Eitan Shterenbaum
>(*
>   As far as I know one can't solve NP problems even with a super-duper
>   hardware, so building such machine is pointless (Unless we are living on
>   such machine ...) !
>*)

Discovering that a problem is NP-complete is usually just the
beginning of the work on the problem. The knowledge that a problem is
NP-complete provides valuable information on the lines of attack that
have the greatest potential for success. We can concentrate on algorithms
that are not guaranteed to run in polynomial time but do so most
of the time or those that give approximate solutions in polynomial time.
After all, the human brain does come up with approximate (reasonably good)
solutions to a lot of the perceptual tasks although the solution may not
always be the best possible.  Knowing that a problem is NP-complete only
tells us that no polynomial-time algorithm for it can exist unless
P=NP.
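
As a concrete illustration of the "approximate solutions in polynomial
time" option: the classic 2-approximation for Vertex Cover, an
NP-complete problem, runs in time linear in the number of edges and is
guaranteed to return a cover at most twice the size of an optimal one.

  # Greedy matching-based 2-approximation for Vertex Cover (a standard
  # textbook algorithm, shown here only as an illustration).
  def approx_vertex_cover(edges):
      """Return a vertex cover at most twice the size of an optimum cover."""
      cover = set()
      for u, v in edges:
          if u not in cover and v not in cover:
              # Take both endpoints of an uncovered edge; any optimal
              # cover must contain at least one of them, hence factor 2.
              cover.add(u)
              cover.add(v)
      return cover

  print(approx_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]))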

-- VGH

------------------------------

Date: 30 Oct 87 18:00:42 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: The Success of AI

In article <4171@sdcsvax.UCSD.EDU> todd@net1.UUCP (Todd Goodman) writes:
>>"Better" concepts related to mind than those found in cog. sci.
>>already exist. There are many monumental works of scholarship which unify
>> the phenomena grouped into well-defined subfields.
>
>Please, please, please give us a bibliography of these works.

Impossible at short notice. Obvious examples are Lyons' work on
semantics (1977?, 2 vols, Cambridge University Press). My answer to
anyone in AI about relevant scholarship is go and see your local
experts for a reading list and an orientation.

By "concepts related to mind", I intend all work concerned with
language, thought and action. That is, I mean an awful lot of work. My
first degree is in Education, which, coupled with my earlier work in
History (especially social and intellectual history), brought me into
contact with a wide range of disciplines, and forced me to use each to
the satisfaction of those supervising me. However, I am now probably
out of date, as I've spent the last four years working in
Human-Computer Interaction.

Any work in linguistics under the heading of 'Semantics' should be of
great interest to people working in Knowledge Representation. There is
a substantial body of philosophical work under the heading of
"Philosophy of Mind". Unlike Cognitive Psychology (especially memory
and problem solving), this work has not become fixated on information
processing models.  Anthropologists are doing very interesting work on
category systems; the work of the "New" or "Cognitive" archaeologists
at Cambridge University (nearly all published by Cambridge University
Press) is drawing on much recent continental work on social action.
Any anthropologist should be able to direct you to the older work on
such cultures as the Subanum and the Trobriand Islanders - most of this
work was done by Americans and is more accessible, as it does not
require acquaintance with recent Structuralist and post-Structuralist
concepts, which can be very dense and esoteric.

>the reasons that you find them to be better than any current models.

This work is inherently superior to most work in AI because none of the
writers are encumbered by the need to produce computational models.
They are thus free to draw on richer theoretical orientations built on
concepts which are clearly motivated by everyday observations
of human activity. The work therefore results in images of man which
are far more humanist than mechanical computational models. Workers in
AI may be scornful of such values, but in reality they should realise
that adherents to a mechanistic view of human behaviour are very
isolated and in the minority, both now and throughout history. The
persistence of humanism as the dominant approach to the wider studies
of man, even after years of zealous attack from self-proclaimed
'Scientists', should be taken as a warning against the acceptability of
crude models of human behaviour. Furthermore, the common test of any
concept of mind is "can you really imagine your mind working this way?"
Many of the pillars of human societies, like the freedom and dignity of
democracy and moral values, are at odds with the so called 'Scientific'
models of human behaviour; indeed the work of misanthropes like Skinner
actively promotes the connection between impoverished models of man and
immoral totalitarian societies (B.F. Skinner, Beyond Freedom and Dignity).

In short, mechanical concepts of mind and the values of a civilised
society are at odds with each other. It is for this reason that modes
of representation such as the novel, poetry, sculpture and fine art
will continue to dominate the most comprehensive accounts of the human
condition.
--
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
                UUCP:   ..{backbone}!mcvax!ukc!hwcs!hci!gilbert

------------------------------

End of AIList Digest
********************
 2-Nov-87 22:39:32-PST,16654;000000000001
Mail-From: LAWS created at  2-Nov-87 22:35:28
Date: Mon  2 Nov 1987 22:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #256 - Analogy, Inference
To: AIList@SRI.COM


AIList Digest            Tuesday, 3 Nov 1987      Volume 5 : Issue 256

Today's Topics:
  Reference - Chaos Theory,
  Bindings - Langendoen and Postal & Netmail to UK,
  Analogy - Knowledge Soup & Robert Frost,
  Inference - Prediction-Producing Algorithms

----------------------------------------------------------------------

Date: Mon, 2 Nov 87 16:04 N
From: MFMISTAL%HMARL5.BITNET@wiscvm.wisc.edu
Subject: re: The success of AI (misunderstandings) - CHAOS theory

The August 1987 issue of the Proceedings of the IEEE contains 9 papers
on chaotic systems.  It has a tutorial for engineers, 3 papers with
examples in electronic circuits, 2 papers on analytical tools and
3 papers on software and hardware tools.

Jan L. Talmon
University of Limburg, Dept. of Medical Informatics and Statistics.
Maastricht, the Netherlands
MFMISTAL@HMARL5.bitnet

------------------------------

Date: 2 Nov 87 17:00:55 GMT
From: sunybcs!rapaport@ames.arpa  (William J. Rapaport)
Subject: Re: Langendoen and Postal (posted by: Berke)

In article <8941@shemp.UCLA.EDU> berke@CS.UCLA.EDU (Peter Berke) writes:
>I just read this fabulous book over the weekend, called "The Vastness
>of Natural Languages," by D. Terence Langendoen and Paul M. Postal.
>
>Are Langendoen or Postal on the net somewhere?

Langendoen used to be on the net as: tergc%cunyvm@wiscvm.wisc.edu

but he's moved to, I think, U of Arizona.  Postal, I think, used to be
at IBM Watson.

------------------------------

Date: Thu, 29 Oct 87 16:13:01 GMT
From: "G. Joly" (Birkbeck) <gjoly@NSS.Cs.Ucl.AC.UK>
Subject: Re: transatlantic netmail mail to UK.

Pat Hayes has given us some propaganda.  Yorick Wilks informed us that
he cannot send mail, although he used to be able to do so.

If I can add my 1.34564 cents' worth, the real issue is that the
ARPA tables (from SRI-NIC) do not allow a path to UCL-CS.ARPA and
beyond. This gateway is now known as nss.cs.ucl.ac.uk and nothing
else will work.

I am not a network person at UCL; they inform me that an official
response will be prepared (I am fairly sure that the unsigned note
to Pat was not it). The change away from UCL-CS.ARPA was advertised
at least two years ago.

"The plans have been on view at the planning office on ... "
after Douglas Adams.

Gordon Joly,
Computer Science,
Birkbeck College,
Malet Street,
LONDON WC1E 7HX.

+44 1 631 6468

ARPA: gjoly@nss.cs.ucl.ac.uk
BITNET: UBACW59%uk.ac.bbk.cu@AC.UK
UUCP: ...!seismo!mvcax!ukc!bbk-cs!gordon

------------------------------

Date: 28 October 1987, 20:02:20 EST
From: john Sowa <SOWA@ibm.com>
Subject: Knowledge Soup

Since my abstract on "Crystallizing Theories out of Knowledge Soup"
appeared in AIList V5 #241 and my clarification appeared in V5 #247,
I have received a number of requests for the corresponding paper.

I regret to say that the paper is still in the process of getting
itself crystallized.  That talk was mostly a survey of current
approaches to the soup together with some suggestions about techniques
that I considered promising.  Following is what I discussed:

 1. The limits of conceptualization and the use of conceptual analysis
    as a nonautomated way of extracting knowledge from the soup.  This
    material is discussed in my book, Conceptual Structures.  See
    Section 6.3 for conceptual analysis, and Chapter 7 for a discussion
    of the limitations.

 2. Dynamic belief revision, developed by Norman Foo and Anand Rao
    from Sydney University, currently visiting IBM.  This is a kind of
    truth maintenance system based on the axioms for belief revision
    by the Swedish logician Gardenfors.  They have been adding some
    interesting features, including levels of epistemic importance
    (laws, facts, and defaults) where the revision process tries to
    retain the more important propositions at the expense of losing
    some of the less important.  Their current system uses Prolog
    style rules and facts, but they are adapting it to conceptual
    graphs as part of CONGRES (their conceptual graph reasoning system).

 3. Dynamic type hierarchies, an idea developed by Eileen Way in
    her dissertation on metaphor.  As in most treatments of metaphor,
    Eileen compares matching relationships in the tenor and vehicle
    domains.  Her innovation is the recognition that the essential
    meaning of a metaphor is the introduction of a new node in the
    type hierarchy.

    Example:  "My car is thirsty."  The canonical graph for THIRSTY
    shows that it must be an attribute of something of type ANIMAL.
    Since CAR is not a subtype of ANIMAL, the system finds a minimal
    common supertype of CAR and ANIMAL, in this case MOBILE-ENTITY.
    It then creates a new node in the type hierarchy above both
    CAR and ANIMAL, but below MOBILE-ENTITY.  To create a definition
    for that type, it checks the properties of ANIMAL with respect to
    THIRSTY, and finds a graph saying that THIRSTY is an attribute of
    an ANIMAL that is in the state of needing liquid:

    [THIRSTY]<-(ATTR)<-[ANIMAL]->(STAT)->[NEED]->(PTNT)->[LIQUID]

    It then generalizes ANIMAL to MOBILE-ENTITY and uses the resulting
    graph to define a new type for mobile entities that need liquid.
    The system can generalize schemata involving animals and liquid
    to the new node, from which they can be inherited by CAR or any
    similar subtype.  The new node thereby allows schemata for DRINK
    or GUZZLE to be inherited as well as schemata for THIRSTY.  (A
    small code sketch of this step follows the list below.)

 4. Theory refinement.  This is an approach that I have been discussing
    with Foo and Rao as an extension to their belief revision system.
    Instead of making revisions by adding and deleting propositions,
    as they currently do, the use of conceptual graphs allows individual
    propositions or even parts of propositions to be generalized or
    specialized by adding and deleting parts or by moving up and down
    the type hierarchy.  This extension can still be done within the
    framework of the Gardenfors axioms.  As the topic changes, the
    salience of different concepts and patterns of concepts in the
    knowledge soup changes.  The most salient ones become candidates
    for crystallization out of the soup into the formalized theory.
    The knowledge soup thus serves as a resource that the belief
    revision process draws upon in constructing the crystallized
    theories.  Depending on the salience, different theories can be
    crystallized from the same soup, each representing a different
    point of view.  Even though the soup may be inconsistent, each
    theory crystallized from it is consistent, but specialized for
    a limited domain.
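
The following is a minimal sketch of the type-introduction step in
point 3.  The data structures and names (a single-parent toy hierarchy,
introduce_metaphor_type, etc.) are illustrative assumptions, not the
representation used in Way's dissertation or in the conceptual graph
systems above:

  # Toy type hierarchy: child -> parent.
  parents = {
      "CAR": "MOBILE-ENTITY",
      "ANIMAL": "MOBILE-ENTITY",
      "MOBILE-ENTITY": "ENTITY",
      "ENTITY": None,
  }
  # Schemata attached to each type (labels standing in for full graphs).
  schemata = {"ANIMAL": {"THIRSTY", "DRINK", "GUZZLE"}}

  def ancestors(t):
      chain = []
      while t is not None:
          chain.append(t)
          t = parents[t]
      return chain

  def minimal_common_supertype(a, b):
      anc_a = set(ancestors(a))
      return next(t for t in ancestors(b) if t in anc_a)

  def introduce_metaphor_type(tenor, vehicle):
      """Create a new node above both types, just below their common supertype."""
      top = minimal_common_supertype(tenor, vehicle)       # e.g. MOBILE-ENTITY
      new_type = top + "-NEEDING-LIQUID"                   # hypothetical name
      parents[new_type] = top
      parents[tenor] = new_type                            # CAR now sits below it
      parents[vehicle] = new_type                          # and so does ANIMAL
      # Generalize the vehicle's schemata to the new node; CAR inherits them.
      schemata[new_type] = set(schemata.get(vehicle, set()))
      return new_type

  node = introduce_metaphor_type("CAR", "ANIMAL")
  print(node, "->", schemata[node])   # CAR can now inherit THIRSTY, DRINK, GUZZLE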

People are capable of precise reasoning, but usually with short chains
of inference.  They are also capable of dealing with enormous, but
loosely organized collections of knowledge.  Instead of viewing formal
theories and informal associative techniques as competing or conflicting
approaches, I view them as complementary mechanisms that should be made
to cooperate.  This talk discussed possible ways of doing that.  Although
there is an enormous amount of work that remains to be done, there are
also some promising directions for future research.

References:

Foo, Norman Y., & Anand S. Rao (1987) "Open world and closed world
negations," Report RC 13122, IBM T. J. Watson Research Center.

Foo, Norman Y., & Anand S. Rao (in preparation) "Semantics of
dynamic belief systems."

Foo, Norman Y., & Anand S. Rao (in preparation) "Belief and ontology
revision in a microworld."

Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and logical
omniscience," Report RC 13155, IBM T. J. Watson Research Center.

Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and
autoepistemic reasoning," Report RC 13155, IBM T. J. Watson Research
Center.

Rao, Anand S., & Norman Y. Foo (1986) "Modal horn graph resolution,"
Proceedings of the First Australian AI Congress, Melbourne.

Rao, Anand S., & Norman Y. Foo (1986) "DYNABELS -- A dynamic belief
revision system," Report 301, Basser Dept. of Computer Science,
University of Sydney.

Sowa, John F. (1984) Conceptual Structures:  Information Processing in
Mind and Machine, Addison-Wesley, Reading, MA.

Way, Eileen C. (1987) Dynamic Type Hierarchies:  An Approach to
Knowledge Representation through Metaphor, PhD dissertation,
Systems Science Dept., SUNY at Binghamton.

For copies of the IBM reports, write to Distribution Services 73-F11;
IBM T. J. Watson Research Center; P.O. Box 218; Yorktown Heights,
NY 10598.

For the report from Sydney, write to Basser Dept. of Computer Science;
University of Sydney; Sydney, NSW 2006; Australia.

For the dissertation by Eileen Way, write to her at the Department
of Philosophy; State University of New York; Binghamton, NY 13901.

------------------------------

Date: 30 Oct 87 11:11:24 EST (Fri)
From: sas@bfly-vax.bbn.com
Subject: Robert Frost


I am forwarding this without permission from the 23 October 1987 issue
of Science:

Robert Frost on Thinking

Readers intrigured by "Causality, structure, and common sense" by M.
Mitchell Waldrop (Research News, 11 Sept., p1297) may be interested in
knowing that the role of analogy in reasoning has been discussed
eloquently by poet Robert Frost in an essay called "Education by
poetry".  The following excerpts are among his most relevant comments:

"I have wanted in late years to go further and further in making
metaphor the whole of thinking. I find some one now and then to agree
with me that all thinking, except mathematical thinking, is
metaphorical, or all thinking except scientific thinking.  The
mathematical might be difficult for me to bring in, but the scientific
is easy enough...."

"What I am pointing out is that unless you are at home in the
metaphor, unless you have had your proper poetical education in the
metaphor, you are not safe anywhere.  Because you are not at ease with
figurative values: you don't know the metaphor in its strength and its
weakness.  You don't know how far you may expect to ride it and when
it may break down with you.  You are not safe in science; you are not
safe in history...."

"... All metaphor breaks down somewhere.  That is the beauty of it.
It is touch and go with the metaphor, and until you have lived with it
long enough you don't know when it is going.  You don't know how much
you can get out of it and when it will cease to yield. It is a very
living thing.  It is as life itself...."

"We still ask boys in college to think, as in the nineties, but we
seldom tell them what thinking means; we seldom tell them it is just
putting this and that together; it saying one thing in terms of
another.  To tell them is to set their feet on the first rung of a
ladder the top of which sticks through the sky."

Perhaps researchers in artificial intelligence who are teaching
computers to reason by analogy should include in their curriculum a
course in poetry.  If so, I suggest they start with Frost.  His poems
have become an important feature of my own ecology courses because
they contain much insight into cause and effect in nature, rather than
mere appearance.

                                Dan M. Johnson
                                Dept of Biological Sciences
                                East Tennessee State University
                                Johnson City, TN 37614

------------------------------

Date: 30 Oct 87  0950 PST
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: Prediction-producing Algorithms

        Eliot Handleman's request for information on prediction has
inspired me to inflict the following considerations on the community.

Roofs and Boxes

        Many people have proposed sequence extrapolation as a prototype AI
problem.  The idea is that a person's life is a sequence of sensory
stimuli, and that science consists of inventing ways of predicting the
future of this sequence.  To this end many sequence extrapolating programs
have been written starting with those that predict sequences of integers
by taking differences and determining the coefficients of a polynomial.

        It has always seemed to me that starting this way distorts the
heuristic character of both common sense and science.  Both of them think
about permanent aspects of the world and use the sequence of sense data
only to design and confirm hypotheses about these permanent aspects.  The
following sequence problem seems to me to typify the break between
hypotheses about the world and sequence extrapolation.

The ball bouncing in the rectilinear world - roofs and boxes

        Suppose there is a rectangular two dimensional room.  In this room
are a number of objects having the form of rectangles.  A ball moves in
the room with constant velocity but bounces with angle of incidence equal
to angle of reflection whenever it hits a wall or an object.  The observer
cannot see the objects or the walls.  All he sees is the x-co-ordinate of
the ball at integer times but only when the ball is visible from the front
of the room.  This provides him with a sequence of numbers which he can
try to extrapolate.  Until the ball bounces off something or goes under
something, linear extrapolation works.
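
        A minimal simulation sketch of this setup may make it concrete.
The room size, box layout, initial velocity, time step, and occlusion
test below are arbitrary illustrative assumptions; the front of the
room is taken to be y = 0, and the ball is hidden whenever a box lies
between it and the front wall:

  # Roofs-and-boxes world: a ball bounces in a rectangular room containing
  # axis-aligned rectangular boxes; the observer gets only the ball's
  # x-coordinate at integer times, and only when nothing occludes it from
  # the front (y = 0).  All specific numbers here are arbitrary choices.
  ROOM_W, ROOM_H = 10.0, 6.0
  BOXES = [(2.0, 2.0, 4.0, 3.0),   # (xmin, ymin, xmax, ymax)
           (6.0, 1.0, 8.0, 2.5)]

  def inside(x, y, box):
      xmin, ymin, xmax, ymax = box
      return xmin < x < xmax and ymin < y < ymax

  def visible_from_front(x, y):
      """Visible unless some box sits between the ball and the front wall."""
      return not any(xmin <= x <= xmax and ymax <= y
                     for xmin, ymin, xmax, ymax in BOXES)

  def observe(x0=1.0, y0=1.0, vx=0.70, vy=0.43, t_end=40, dt=0.01):
      """x at integer times when the ball is visible, None when it is hidden."""
      x, y = x0, y0
      observations = []
      steps_per_tick = int(round(1.0 / dt))
      for step in range(1, int(round(t_end / dt)) + 1):
          nx, ny = x + vx * dt, y + vy * dt
          if nx < 0.0 or nx > ROOM_W:          # bounce off the side walls
              vx, nx = -vx, x
          if ny < 0.0 or ny > ROOM_H:          # bounce off the front/back walls
              vy, ny = -vy, y
          for box in BOXES:                    # bounce off a box
              if inside(nx, ny, box):
                  xmin, ymin, xmax, ymax = box
                  if not (xmin < x < xmax):    # entered through a vertical side
                      vx = -vx
                  if not (ymin < y < ymax):    # entered through a horizontal side
                      vy = -vy
                  nx, ny = x, y                # reflect and stay put this step
          x, y = nx, ny
          if step % steps_per_tick == 0:       # report only at integer times
              observations.append(round(x, 2) if visible_from_front(x, y) else None)
      return observations

  print(observe())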

        Suppose first that the observer knows that he is dealing with this
kind of ball-in-room problem and only doesn't know the locations of the
objects and the walls.  After he has observed the situation for a while he
will have partial information about the objects and their locations.  For
example, he may note that he has never been in a certain part of the room
so there may be unknown objects there.  Also he may have three sides of a
certain rectangle but may not know the fourth side, because he has never
bounced off that side yet.  He may extrapolate that he won't have the
opportunity of bouncing off that side for a long time.

        Alternatively we may suppose that the observer doesn't
initially know about balls bouncing off rectangles but only knows
the sequence and must infer this using a general sequence extrapolation
mechanism.  Our view is that this observer, whether human or machine,
can make progress only by guessing the underlying model.  At first
he may imagine a one dimensional bouncing model, but this will be
refuted the first time the ball doesn't bounce at an x-co-ordinate
where it has previously bounced.  Indeed he has to keep open
the possibility that the room is really 3  or more dimensional or that
more general objects than rectangles exist.

        We can elaborate the problem by supposing that when the ball
bounces off the front wall, the experimenter can put a paddle at an angle
and determine the angle of bounce so as to cause the ball to enter regions
where more information is wanted.

        Assuming that the rectangles have edges parallel to the axes makes
the problem easier in an obvious sense but more difficult in the sense
that there is less interaction between the observable x-co-ordinate and
the unobservable y-co-ordinate.

        It would be interesting to determine the condition on the x-path
that distinguishes 2-dimensional from 3-dimensional worlds, if there is
one.  Unless we assume that the room has some limited size, there need be
no distinction.  Thus we must make the never-fully-verified assumption
that some of the repetitions in sequences of bounces are because the
ball hit the front or back wall and bounced again off the same surfaces
rather than similar surfaces further back.

        A tougher problem arises when the observer doesn't get the
sequence of x-coordinates but only 1 or 0 according to whether the
ball is visible or invisible.

        I am skeptical that an AI program fundamentally based on the idea
of sequence extrapolation is the right idea.  Donald Michie suggested
that the "domain experts" for this kind of problem of inferring a
mechanism that produces a sequence are cryptanalysts.

------------------------------

End of AIList Digest
********************
 2-Nov-87 22:54:34-PST,16831;000000000000
Mail-From: LAWS created at  2-Nov-87 22:50:51
Date: Mon  2 Nov 1987 22:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #257 - Methodology
To: AIList@SRI.COM


AIList Digest            Tuesday, 3 Nov 1987      Volume 5 : Issue 257

Today's Topics:
  Methodology - Sharing Code & Critical Analysis and Reconstruction

----------------------------------------------------------------------

Date: 30 Oct 87 14:05:35 GMT
From: bruce@vanhalen.rutgers.edu (Shane Bruce)
Reply-to: bruce@vanhalen.rutgers.edu (Shane Bruce)
Subject: Re: Lenat's AM program


In article <774@orstcs.CS.ORST.EDU> tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich)
writes:
>
>In the biological sciences, publication of an article reporting a new
>clone obligates the author to provide that clone to other researchers
>for non-commercial purposes.  I think we need a similar policy in
>computer science.  Publication of a description of a system should
>obligate the author to provide listings of the system (a running
>system is probably too much to ask for) to other researchers on a
>non-disclosure basis.
>

The policy which you are advocating, while admirable, is not practical.  No
corporation which is involved in state-of-the-art AI research is going to
allow listings of its next product/internal tool to be made available to the
general scientific community, even on a non-disclosure basis.  Why should
they give away what they intend to sell?

A more practical solution would be for all articles to include a section
on implementation which, while not providing listings, would at least provide
enough information that the project could be duplicated by another competent
researcher in the field.


--
Shane Bruce
HOME: (201) 613-1285                WORK: (201) 932-4714
ARPA: bruce@paul.rutgers.edu
UUCP: {ames, cbosgd, harvard, moss}!rutgers!paul.rutgers.edu!bruce

------------------------------

Date: 30 Oct 87 10:58:40 EST (Fri)
From: sas@bfly-vax.bbn.com
Subject: AIList V5 #254 - Gilding the Lemon

[Authors note: The following message has a bit more vituperation than
I had planned for, however I agree with the basic points.]

While I agree that AI is in a very early stage and it is still
possible to just jump in and get right to the frontier, an incredible
number of people seem to jump in and instead of getting to the
frontier, spend an awful lot of time tromping around the campfire.  It
seems like the journals are replete with wheels being reinvented -
it's as if the physics journals were full of papers realizing that the
same force that makes apples fall to ground also moves the planets
about the sun.  I'm not saying that there is no good research or that
the universal theory of gravitation is a bad idea, but as Newton
himself pointed out, he stood on the shoulders of giants.  He read
other people's published results.  He didn't spend his time trying to
figure out how a pendulum's period is related to its length - he read
Galileo.

Personally, I think everyone is entitled to come up with round things
that roll down hills every so often.  As a matter of fact, I think
that this can form a very sound basis for learning just how things
work.  Physicists realize this and force undergraduates to spend
countless tedious hours trying to fudge their results so it comes out
just the way Faraday or Fermi said it would.  This is an excellent
form of education - but it shouldn't be confused with research.
With education, the individual learns something; with research, the
scientific community learns something.  All too much of what passes as
research nowadays is nothing more than education.

The current lack of reproducibility is appalling.  We have a
generation of language researchers who have never had a chance to play
with the Blocks World or examine the limitations of TAILSPIN.
It's as if Elias Howe had to invent the sewing machine without access
to steel or gearing.  There's a good chance he would have reinvented
the bone needle and the backstitch given the same access to the fruits
of the industrial revolution that most AI researchers have to the
fruits (lemons) of AI research.  Anecdotal evidence, which is really
what this field seems to be based on, just doesn't make for good
science.

                                        Wow, did I write that?
                                                Seth

------------------------------

Date: Fri, 30 Oct 87 15:48:16 WET
From: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Reply-to: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Subject: Once a lemon, always a lemon


Ken Laws argues that critical reviews and reconstructions of existing AI
software are at the moment only peripheral to AI.


> An advisor who advocates duplicating prior work is cutting his
> students' chances of fame and fortune from the discovery of the
> one true path.  It is always true that the published works can
> be improved upon, but the original developer has already gotten
> 80% of the benefit with 20% of the work.  Why should the student
> butt his head against the same problems that stopped the original
> work (be they theoretical or practical problems) when he could
> attach his name to an entirely new approach?


I had hoped that Drew McDermott's "AI meets Natural Stupidity" had exploded
this view, but apparently not. Substantial, lasting progress in any field of
AI is *never* achievable within the scope of a single Ph.D. thesis.  Progress
follows from new work building upon existing work - standing on other
researcher's shoulders (instead of, as too often happens, their toes).

This is not an argument for us all to become theorists, working on obscure
extensions to non-standard logics. However, a nifty program which is hacked
together and then only described functionally (i.e. publications only tell you
what it does, with little detail of how it does it, and certainly no
information on the very specialised kluges which make it work in this
particular case) does not advance our knowledge of AI.

Too often in AI, early results from a particular approach may appear promising
and may yield great credit to the discoverer ("80% of the benefit") but don't
actually go beyond solving toy problems. There is a lot of work to do in going
beyond these first sketches ("80% of the work") but if we don't encourage
people to do this we will remain in the sandbox.

Martin Merry                               Standard disclaimer on personal
HP Labs Bristol Research Centre            opinions apply

P.S. For those who haven't seen it, the Drew McDermott paper appears in SIGART
Newsletter 57 (Aug 1976) and is reprinted in "Mind Design" (ed Haugeland),
Bradford Books 1981. It should be required reading for anyone working in
AI....

------------------------------

Date: Fri, 30 Oct 1987  17:03 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V5 #254 - AI Methodology

Hurrah for Ken Laws when he says that

>An advisor who advocates duplicating prior work is cutting his
>students' chances of fame and fortune from the discovery of the
>one true path.

AI is still in a great exploratory phase in which there is much to be
discovered.  I would say that replicating and evaluating an older
experiment would be a suitable Master's degree topic.  Replicating AM
and discovering how to extend its range would be a good doctoral topic
- but because of the latter rather than the former aspect.

As for those complaints about AI's fuzziness - and AI's very name -
those are still virtues at the moment.  Many people who profess to be
working on AI recognize that what they are doing is to try to make
computers do things that we don't know yet how to make them do, so AI
is in that sense, speculative computer research.  Then, whenever
something becomes better understood, it is moved into a field with a
more specific type of name.  No purpose would be served by trying to
make more precise the name of the exploratory activity - either for
the public consumers or for the explorers themselves.

In fact, I have a feeling that most of those who don't like the name
AI also feel uncomfortable when exploring domains that aren't yet
clearly enough defined for their tastes - and are thus disinclined to
work in those areas.  If so, then maintaining the title which some of
us like and others don't may actually serve a useful function.  It is
the same reason, I think, why the movement to retitle science fiction
as "speculative fiction" failed.  The people who preferred the
seemingly more precise definition were not the ones who were best at
making, and at appreciating, the kinds of speculations under discussion.

Ken Laws went on to say that he would make an exception in his own
field of computer vision.  I couldn't tell how much of that was irony.
But in fact I'm inclined to agree at the level of lower level vision
processing - but it seems to me that progress in "high level" vision
has been somewhat sluggish since the late 60s and that this may be
because too many vision hackers tried to be too scientific - and have
accordingly not explored enough high level organizational ideas in
that domain.

- marvin minsky

------------------------------

Date: 1 Nov 87 23:37:01 GMT
From: tgd@orstcs.cs.orst.edu (Tom Dietterich)
Subject: Re: Gilding the Lemon


Ken Laws says
   ...progress in
   AI is driven by the hackers and the graduate students who "don't
   know any better" than to attempt the unreasonable.

I disagree strongly.  If you see who is winning the Best Paper awards
at conferences, it is not grad students attempting the unreasonable.
It is seasoned researchers who are making the solid contributions.

I'm not advocating that everyone do rational reconstructions.  It
seems to me that AI research on a particular problem evolves through
several stages: (a) problem definition, (b) development of methods,
(c) careful definition and comparative study of the methods, (d)
identification of relationships among methods (e.g., tradeoffs, or
even understanding the entire space of methods relevant to a problem).

Different research methods are appropriate at different stages.
Problem definition (a) and initial method development (b) can be
accomplished by pursuing particular application problems, constructing
exploratory systems, etc.  Rational reconstructions and empirical
comparisons are appropriate for (c).  Mathematical analysis is
generally the best for (d).  In my opinion, the graduate students of
the past two decades have already done a great deal of (a) and (b), so
that we have lots of problems and methods out there that need further
study and comparison.  However, I'm sure there are other problems and
methods waiting to be discovered, so there is still a lot of room for
exploratory studies.

--Tom Dietterich

------------------------------

Date: 1 Nov 87 23:45:25 GMT
From: tgd@orstcs.cs.orst.edu (Tom Dietterich)
Subject: Re: Gilding the Lemon (part 2)


Just a couple more points on this subject.

Ken Laws also says
        Progress also comes from applications -- very seldom from theory.

My description of research stages shows that progress comes from
different sources at different stages.  Applications are primarily
useful for identifying problems and understanding the important
issues.

It is particularly revealing that Ken is "highly suspicious
of any youngster trying to solve all our problems [in computer vision]
by ignoring the accumulated knowledge of the last twenty years."
Evidently, he feels that there is no accumulated knowledge in AI.
If that is true, it is perhaps because researchers have not studied
the exploratory forays of the past to isolate and consolidate the
knowledge gained.

--Tom Dietterich

------------------------------

Date: Fri, 30 Oct 87 09:45:45 EST
From: Paul Fishwick <fishwick%fish.cis.ufl.edu@RELAY.CS.NET>
Subject: Gilding the Lemon


...From Ken Laws...
> Progress also comes from applications -- very seldom from theory.
> The "neats" have been worrying for years (centuries?) about temporal
> logics, but there has been more payoff from GPSS and SIMSCRIPT (and
> SPICE and other simulation systems) than from all the debates over
> consistent point and interval representations.  The applied systems
> are ultimately limited by their ontologies, but they are useful up to
> a point.  A distant point.

I'd like to make a couple of points here: both theory and practice are
essential to progress; however, too much of one without the other
creates an imbalance. As far as the allusion to temporal logics and
interval representations, I think that Ken has made a valuable point.
Too often an AI researcher will write on a subject without referencing
non-AI literature which has a direct bearing on the subject. An
illustration, in point, is the reference to temporal representations -
If one really wants to know what researchers have done with concepts
such as *time*, *process*, and *event* then one should seriously review work
in system modeling & control and simulation practice and theory. In doing
my own research I am actively involved in both systems/simulation
methodology and AI methods so I found Ken's reference to GPSS and SPICE
most gratifying.

What I am suggesting is that AI researchers should directly reference
(and build upon) related work that has "non-philosophical" origins. Note
that I am not against philosophical inquiry in principle -- where would
any of us be without it? The other direction is also important - namely,
that researchers in more established areas such as systems theory and
simulation should look at the AI work to see if "encoding a mental model"
might improve performance or model comprehensibility.

Paul Fishwick
University of Florida
INTERNET: fishwick@fish.cis.ufl.edu

------------------------------

Date: Mon, 02 Nov 87 17:06:33 EST
From: Mario O Bourgoin <mob@MEDIA-LAB.MEDIA.MIT.EDU>
Subject: Re: Gilding the Lemon


In article <12346288066.15.LAWS@KL.SRI.Com> Ken Laws wonders why a
student should cover the same ground as that of another's thesis and
face the problems that stopped the original work.  His objection to
re-implementations is that they don't advance the field, they
consolidate it.  He is quick to add that he does not object to
consolidation but that he feels that AI must cover more of its
intellectual territory before it can be done effectively.
        I know of many good examples of significant progress achieved
in an area of AI through someone's efforts to re-implement and extend
the efforts of other researchers.  Tom Dietterich mentioned one when
he talked about David Chapman's work on conjunctive planning.  Work on
dependency-directed backtracking for search is another area.  AM and
its relatives are good examples in the field of automated discovery.
Research in Prolog certainly deserves mention.
        I believe that AI is more than just ready for consolidation: I
think it's been happening for a while, just not a lot and not obviously.  I
love exploration and understand its place in development but it isn't
the blind stab in the dark that one might gather from Ken's article.
I think he agrees as he says:

        A student studies the latest AI proceedings to get a
        nifty idea, tries to solve all the world's problems
        from his new viewpoint, and ultimately runs into
        limitations.

        The irresponsible researcher is little better than a random
generator who sometimes remembers what he has done.  The repetitive
bureaucrat is less than a cow who rechews another's cud.  The AI
researcher learns both by exploring to extend the limits of his
experience and consolidating to restructure what he already knows to
reflect what he has learned.
        In other fields, Masters students emphasize consolidation and
Ph.D. students emphasize exploration (creativity).  But at MIT, the AI
program is an interdisciplinary effort which offers only a doctorate
and I don't know of an AI Masters elsewhere.  This has left the job of
consolidation to accomplished researchers who are as interested in
exploration as their students.  Maybe there would be a use for a more
conservative approach.

--Mario O. Bourgoin

To Ken: The best paraphrase isn't a quote, since quoting communicates
that you are interested in what the other person said but not in what
you understand of it.

------------------------------

End of AIList Digest
********************
 2-Nov-87 23:15:50-PST,25658;000000000001
Mail-From: LAWS created at  2-Nov-87 23:09:57
Date: Mon  2 Nov 1987 23:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #258 - BBS Abstracts, Knowledge Acquisition Bibliography
To: AIList@SRI.COM


AIList Digest            Tuesday, 3 Nov 1987      Volume 5 : Issue 258

Today's Topics:
  Journal Call - BBS Commentators
  Bibliography - Knowledge Acquisition for Knowledge-Based Systems

----------------------------------------------------------------------

Date: 2 Nov 87 15:24:56 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: BBS Call for Commentators: 7 target articles


Below are the abstracts of seven forthcoming articles on which BBS --
Behavioral and Brain Sciences, an international, interdisciplinary Journal
of Open Peer Commentary, published by Cambridge University Press --
invites self-nominations from potential commentators. The procedure is
explained after the abstracts. The seven articles are:

(1) The Intentional Stance   (Dan Dennett)  [multiple book review]
(2) The Ethological Basis of Learning   (A. Gardner & B. Gardner)
(3) Tactical deception in Primates   (A. Whiten & R.W. Byrne)
(4) Event-Related Potentials and Memory: A Critique of the Context
    Updating Hypothesis   (Rolf Verleger)
(5) Is the P300 Component a Manifestation of Context Updating?
    (E. Donchin & M. Coles)    [article-length precommentary on (4)]
(6) Real and Depicted Spaces: A Cross-Cultural Perspective   (J.B. Deregowski)
(7) Research on Self Control: An Integrating Framework (A.W. Logue)

-----

1.                   The Intentional Stance

                        Dan Dennett
                   Philosophy Department
                       Tufts University

The intentional stance is the  strategy  of  prediction  and
explanation  that  attributes  beliefs,  desires  and  other
"intentional" states to organisms and devices  and  predicts
future  behavior from what it would be rational for an agent
to do, given  those  beliefs  and  desires.  Any  device  or
organism   that   regularly   uses   this   strategy  is  an
"intentional system," whatever its  innards  might  be.  The
strategy  of  treating  parts  of  the  world as intentional
systems is the foundation of "folk psychology,"  but  it  is
also  exploited (and is virtually unavoidable) in artificial
intelligence and cognitive science in general, as well as in
evolutionary   theory.  An  analysis  of  the  role  of  the
intentional  stance  and  its  presuppositions  supports   a
naturalistic  theory  of  mental  states  and  events, their
"content" or  "intentionality,"  and  the  relation  between
"mentalistic"  levels  of explanation and neurophysiological
or mechanistic levels of explanation. As such, the  analysis
of  the  intentional stance grounds a theory of the mind and
its relation to the body.

2.           The Ethological Basis of Learning

                  A. Gardner & B. Gardner
                   Psychology Department
                    University of Nevada

One view of the basic nature of  the  learning  process  has
dominated  theory and application throughout the century. It
is the view that the behavior of organisms  is  governed  by
its  positive  and  negative  consequences.  Anyone  who has
attempted to use this principle to teach relatively  complex
skills  to free-living, well-fed subjects -- as we have done
in our sign language studies of chimpanzees  --  is  apt  to
have been disappointed.

Meanwhile, recent ethological  findings  plainly  contradict
the  argument  that most, or even much, of the learning that
takes place in the operant conditioning laboratory is  based
on  the  "law of effect." The residue of support for the law
of effect that might  be derived from  operant  conditioning
experiments  depends  entirely  on the logic of a particular
experimental design. There is, however, a logical defect  in
this  design  that  cannot  be  repaired  by any conceivable
improvement in procedure or instrumentation.  However deeply
ingrained  in  our  cultural  traditions,  the  notion  that
behavior is based on its  positive  consequences  cannot  be
supported  by  laboratory evidence. Several key phenomena of
conditioning can be dealt with  in  a  more  straightforward
manner by dispensing with hedonism altogether.

An  impressive  amount  of  human  behavior  persists,   and
persists  in spite of its negative consequences. The popular
notion that persistent maladaptive behavior is rare in other
animals  is  easily refuted by those who have observed other
animals closely in  their  natural  habitats.  We  offer  an
analysis  of  adaptive  and maladaptive behavior in aversive
conditioning and of the design of experiments on the  effect
of  predictive  contingencies in Pavlovian conditioning. The
latter attempt to demonstrate an effect of contingency fails
because it violates basic principles of experimental design.
We conclude that there is a fundamental  logical  defect  in
all notions of contingency.

This reconsideration of the  traditional  behavioristic  and
cognitive  versions  of  the  law  of  effect was originally
suggested  by  problems  in  teaching  new  and  challenging
patterns   of  behavior  to  free-living  subjects  such  as
children and  chimpanzees,  which  we  briefly  describe  in
closing.

3.             Tactical Deception in Primates

                   A. Whiten & R.W. Byrne
                 Psychological Laboratories
            University of St. Andrews, Scotland

Tactical deception occurs when an  individual  is  able  to
use  an  "honest"  act  from  his  normal  repertoire  in  a
different context to mislead familiar individuals.  Although
primates  have  a  reputation for social skill, most primate
groups are so intimate that any deception is  likely  to  be
subtle  and  infrequent. Records are often anecdotal and not
widely known in the formal literature of behavioral science.
We  have  tackled  this  problem by drawing together records
from many primates and primatologists in order to  look  for
repeating  patterns.  This  has  revealed  many  forms  of
deceptive  tactics,  which  we  classify  in  terms  of  the
function  they  perform.  For  each  class,  we  sketch  the
features of  another  individual's  state  of  mind  that  a
deceiver  must  be  able  to represent, acting as a "natural
psychologist." Our analysis clarifies and  perhaps  explains
certain  taxonomic differences. Before these findings can be
generalized, however, behavioral scientists  must  agree  on
some fundamental methodological and theoretical questions in
the study of the evolution of social cognition.

4.          Event-Related Potentials and Memory:
       A Critique of the Context Updating Hypothesis

                       Rolf Verleger
                   Mannheim, West Germany

P3 is the most prominent of the electrical potentials of the
human    electroencephalogram    that   are   sensitive   to
psychological variables.  According to the most  influential
current  hypothesis about its psychological significance [E.
Donchin's], the "context updating" hypothesis,  P3  reflects
the  updating  of  working  memory.  This  hypothesis cannot
account for relevant portions of the available evidence  and
it   entails  some  basic  contradictions.  A  more  general
formulation of this  hypothesis  is  that  P3  reflects  the
updating  of  expectancies.  This  version  implies that P3-
evoking stimuli are initially unexpected  but  later  become
expected.  This contradiction cannot be resolved within this
formulation.

The alternative "context  closure"  hypothesis  retains  the
concept  of "strategic information processing" emphasized by
the context updating hypothesis. P3s are  evoked  by  events
that  are awaited when subjects deal with repetitive, highly
structured tasks; P3s arise from subjects' combining
successive stimuli into larger units. The tasks in which P3s
are elicited can accordingly be classified in terms of their
respective   formal  sequences  of  stimuli.  P3  may  be  a
physiological indicator of excess activation being  released
from perceptual control areas.

5. Is the P300 component a manifestation of Context Updating?

          Emanuel Donchin and Michael G. H. Coles
           Cognitive Psychophysiology Laboratory
         University of Illinois at Urbana-Champaign
[article-length precommentary on Verleger]

To understand the endogenous components of the ERP we must
use data about the components' antecedent conditions to
form hypotheses about the information processing function of
the  underlying  brain activity.  These hypotheses, in turn,
generate testable predictions about the consequences of  the
component. We review the application of this approach to the
analysis  of  the  P300  component,   whose   amplitude   is
controlled  multiplicatively  by  the subjective probability
and the task relevance of the  eliciting  events  and  whose
latency  depends  on  the  duration  of stimulus evaluation.
These  and  other  factors  suggest  that  the  P300  is   a
manifestation  of activity occurring whenever one's model of
the environment must be revised.  Tests of three predictions
based   on  this  "context  updating"  model  are  reviewed.
Verleger's critique is based on a misconstrual of the  model
as  well  as  on  a  partial  and  misleading reading of the
relevant literature.

6.               Real and Depicted Spaces:
                A Cross-Cultural Perspective

                      J.B. Deregowski
                   Psychology Department
              University of Aberdeen, Scotland

This  paper  examines  the  contribution  of  cross-cultural
studies   to   our   understanding  of  the  perception  and
representation of space.  A  cross-cultural  survey  of  the
basic  difficulties  in  understanding  pictures -- from the
failure to recognize a picture as a  representation  to  the
inability  to  recognise the object represented -- indicates
that   similar   difficulties   occur   in   pictorial   and
nonpictorial  cultures.  Real  and  pictorial spaces must be
distinguished. The  experimental  work  on  pictorial  space
derives  from  two distinct traditions: the study of picture
perception  in  "remote"  populations  and  the   study   of
perceptual  illusions.  A  comparison  of  the  findings  on
pictorial  space  perception  with  those  on   real   space
perception  and  perceptual constancies suggests that cross-
cultural differences in the  perception  of  both  real  and
depicted  space involve two different kinds of skills: those
related only to real spaces or only to depicted  spaces  and
those  related  to  both.   Different  cultural  groups  use
different skills to perform the same perceptual task.

7.     Research on Self Control: An Integrating Framework

                         A.W. Logue
                  Department of Psychology
                     SUNY - Stony Brook

The tendency to choose a  larger,  more  delayed  reinforcer
over  a  smaller, less delayed one (self-control) depends on
the current physical values  of  the  reinforcers.  It  also
varies  according  to  a  subject's  experience  and current
factors other than the reinforcers.  Two local delay  models
(Mischel's  social learning theory and Herrnstein's matching
law) as well as molar maximization models  have  taken  into
account   these   indirect   effects   on  self  control  by
representing  a  subject's  behavior  as  a  function  of  a
perceived  environment.  A  general evolutionary analysis of
all this  research  yields  a  better  and  more  predictive
description of self control.


-----


This is an experiment in using the Net to find eligible commentators for
articles in Behavioral & Brain Sciences.  [...]
Eligible individuals who judge that they would have a relevant
commentary to contribute should contact me at the e-mail address indicated at
the bottom of this message, or should write by normal mail to:

Stevan Harnad, Editor, Behavioral and Brain Sciences, 20 Nassau Street, Room 240
Princeton NJ 08542               (phone: 609-921-7771)

"Eligibility" usually means being an academically trained professional
contributor to one of the disciplines mentioned earlier, or to related academic
disciplines. The letter should indicate the candidate's general qualifications
as well as their basis for wishing to serve as commentator for the particular
target article in question. It is preferable also to enclose a Curriculum Vitae.
(Please note that the editorial office must exercise selectivity among the
nominations received so as to ensure a strong and balanced cross-specialty
spectrum of eligible commentators.)  [...]

Stevan Harnad            harnad@mind.princeton.edu       (609)-921-7771

------------------------------

Date: 29 Oct 87 17:31:44 GMT
From: mcvax!ukc!reading!onion!spb@uunet.uu.net  (Stephen)
Subject: Bibliography - Knowledge Acquisition for Knowledge-Based Systems


                  Proceedings of the first
                   European Workshop on

                 KNOWLEDGE ACQUISITION FOR
                 KNOWLEDGE - BASED SYSTEMS


                   Co - Sponsored by the
            Institution of Electrical Engineers
                  2nd - 3rd September 1987
                     Reading University


There are only a limited number of  Proceedings.  These  are
available on a first come first served basis.  The cost will
be 35  pounds  sterling, which  includes  post  and  packing
within  the  UK.  Cheques  should  be made  payable  to 'The
University of Reading'.

Orders to:   Professor T R Addis
             Department of Computer Science
             University of Reading
             Whiteknights
             Reading
             RG6 2AX


                          BIBLIOGRAPHY



Broy, M., "Transformational Semantics for Concurrent Programs,"
     Information Processing Letters, vol. 11, pp. 87-91, 1980.

Evans, D.J. and Shirley A Williams, "Analysis and Detection of
     Parallel Processable Code," Computer Journal, vol. 23, pp.
     66-72, 1980.

Kuck, D.J., in The Structure of Computers and Computations, vol.
     1, John Wiley and Sons, 1978.

Roucairol, G., "Transformations of Sequential Programs into
     Parallel Programs," Cambridge University Press, 1982.

Foster, C C, "Information storage and retrieval using AVL trees,"
     ACM 20th National conference, 1965.

Knowlton, K C, "A fast storage allocator," CACM, vol. 8, no. 10,
     pp. 623-625, October 1965.

Deuel, P, "On a storage mapping function for data structures,"
     CACM, vol. 9, no. 5, May 1966.

Knowlton, K C, "A programmer's description of L6," CACM, vol.
     9, no. 8, Aug. 1966.

CODASYL, ACM, NY, April, 1971.

On Conceptual Modelling.  Perspectives from Artificial Intelli-
     gence, Databases and Programming Languages, Topics in Infor-
     mation Systems, Springer-Verlag, 1984.

"Prolog-2 Reference Manual," 9 West Way, Oxford, OC2 0JB, UK, Ex-
     pert Systems International Ltd., 1985.

Quintus Prolog Reference Manual, 6, Quintus Computer Systems
     Inc., 1986.

"Arity/Prolog: The Programming Language," 358 Baker Avenue, Con-
     cord MA 01742, USA, Arity Corporation, 1986.

Addis, T.R., "A Relation-Based Language Interpreter for a Content
     Addressable File Store," ACM Trans on Database Systems, vol.
     7, no. 2, pp. 125-163, 1982.

Addis, T.R., "Knowledge Refining for a Diagnostic Aid," Interna-
     tional Journal of Man-Machine Studies, vol. 17, pp. 151-164,
     1982.

Addis, T.R., Designing Knowledge-Based Systems, Kogan Page, 1985.
     ISBN0-85038-859-7

Addis, T.R., "The Role of Explanation in Knowledge Elicitation,"
     International Journal of Systems Research and Information
     Science, vol. 2, pp. 101-110, 1986.

Addis, T.R., The Boundaries of Knowledge, Informatics 9, 1987.
     ASLIB Conference at Kings College, Cambridge

Rawlings, C.J., Representing protein structures in Prolog: the
     Prolog representation, Imperial Cancer Research Fund,
     Biomedical Computing Unit, 1986.  Submitted as part of
     results of SERC Contract No: SO/351/84

Hamm, G.H. and G.N. Cameron, "The EMBL data library," Nucleic
     Acids Research, vol. 14, no. 1, pp. 5-10, 1986.

Chothia, C., "Principles that determine the structure of pro-
     teins," Annual Reviews of Biochemistry, vol. 53.

Codd, E.F., "A relational model of data for large shared data
     banks," Comm. ACM, pp. 377-387, 1970.

Codd, E.F., "Further normalization of the database relational
     model," IBM Research report, 1971.  IBM Thomas Watson
     Research Centre. N.Y.

Bridge, D., "Conceptual Data Models in Database Design," Final
     year project report for BSc Computer Science at Brunel
     University, 1986.

Kyte, J. and R.F. Doolittle, "A simple method for displaying the
     hydropathic character of a protein," Journal of Molecular
     Biology, vol. 157, pp. 105-132, 1982.

Duncan, T., PROPS 2 Reference Manual, Imperial Cancer Research
     Fund, Biomedical Computing Unit, 1986.

Sweet, R.M. and D. Eisenberg, "Correlation of sequence hydropho-
     bicities measures similarity in three dimensional protein
     structure," Journal of Molecular Biology, vol. 171, pp.
     479-488, 1983.

Elleby, P. and T.R. Addis, "Extending the Relational Database
     Model to capture more Constraints," A KSG Technical Report,
     1987.

Chou, P.Y. and G.D. Fasman, "Prediction of the secondary struc-
     ture of proteins from their amino acid sequence," Advances
     in Enzymology, vol. 47, pp. 45-148, 1980.

Ptitsyn, O.B. and A.V. Finkelstein, "Similarities of protein to-
     pologies: evolutionary divergence - functional convergence
     or principles of folding?," Annual Reviews of Biophysics,
     vol. 13, pp. 339-386, 1980.

Bernstein, F.C., T. Koetzle, G.J.B. William, E. Meyer, M.D.
     Brice, J.R. Rodger, O. Kennard, T. Shimanouchi, and M.
     Tasumi, "The protein data bank: a computer-based archival
     file for macromolecular structures," Journal of Molecular
     Biology, vol. 112, pp. 535-542, 1977.

Harre, R., The Philosophy of Science: An Introductory Survey, Ox-
     ford University Press, 1976.

George, D.G., W.C. Barker, and L.T. Hunt, "The protein informa-
     tion resource (PIR)," Nucleic Acids Research, vol. 14, no.
     1, pp. 17-20, 1986.

Cohen, F.E., R.M. Abarbanel, I.D. Kuntz, and R.J. Fletterick,
     "Secondary structure assignment for A/B proteins by a com-
     binatorial approach," Biochemistry, vol. 22, pp. 4894-4904,
     1983.

Rawlings, C.J., W.R. Taylor, J. Nyakairu, J. Fox, and M.J.E.
     Sternberg, "Reasoning about protein topology using the logic
     programming language PROLOG," Journal of Molecular Graphics,
     vol. 3, pp. 151-157, 1985.

Rawlings, C.J., W.R.T. Taylor, J. Nyakairu, J. Fox, and M.J.E.
     Sternberg, Using Prolog to Represent and Reason about Pro-
     tein Structure, Lecture Notes in Computer Science, p. 536,
     Springer-Verlag, 1986.

Bruner, J.S., J.J. Goodnow, and G.A. Austin, in A Study of Think-
     ing, Wiley, 1956.

Maizel, J. and R.P. Lenk, "Enhanced graphic matrix analysis of
     nucleic acid and protein sequences," Proceedings of the Na-
     tional Academy of Science USA, vol. 78, no. 12, pp. 7665-
     7669, 1981.

Lim, V.I., "Structural principles of the globular organization of
     protein chains.  A stereochemical theory of globular protein
     secondary structure," Journal of Molecular Biology, vol. 88,
     pp. 857-872, 1974.

Bolton, N., in Concept Formation, Pergamon Press, 1977.  ISBN 0-
     08-0214940

Chen, P.P., "The Entity Relationship Model: Toward a Unified View
     of Data," ACM Trans on Data Base Systems, vol. 1, no. 1, pp.
     9-13, 1976.

Peirce, C.S., Charles S. Peirce: Selected Writings, Dover Publi-
     cations Inc, 1966.

Kowalski, R., Logic for Problem Solving, Artificial Intelligence
     Series, North Holland Press, Amsterdam, 1979.

Richardson, J., "B-sheet topology and the relatedness of pro-
     teins," Nature, vol. 268, pp. 495-500, 1977.

Richardson, J., "The anatomy and taxonomy of protein structure,"
     Advances in Protein Chemistry, vol. 34, pp. 167-339, 1981.

Garnier, J., D.J. Osguthorpe, and B. Robson, "Analysis of the ac-
     curacy and implications of simple methods for predicting the
     secondary structure of globular proteins," Journal of Molec-
     ular Biology, vol. 120, pp. 97-120, 1978.

Kabsch, W. and C.Sander, "How good are predictions of protein
     secondary structure?," FEBS Letters, vol. 155, pp. 179-182,
     1983.

Blundell, T. and M.J.E. Sternberg, "Computer-aided design in pro-
     tein engineering," Trends in biotechnology, vol. 3, pp.
     228-235, 1985.

Fox, J., D. Frost, T. Duncan, and N. Preston, The PROPS 2 Primer,
     Imperial Cancer Research Fund, Biomedical Computing Unit,
     1986.

Eisenberg, D., R.M. Weiss, T.C. Terwilliger, and W. Wilcox, "Hy-
     drophobic moments and protein structure," Faraday Symposia
     Chemical Society, vol. 17, pp. 109-120, 1982.

Taylor, W.R., Protein Structure Prediction, A Practical Approach,
     IRL, Oxford, 1987.

Cohen, F.E., M.J.E. Sternberg, and W.R. Taylor, "Analysis and
     prediction of protein B-sheet structures by a combinatorial
     approach," Nature, vol. 285, pp. 378-382, 1980.

Cohen, F.E., M.J.E. Sternberg, and W.R. Taylor, "Analysis and
     prediction of the packing of B-sheet in the tertiary struc-
     ture of globular proteins," Journal of Molecular Biology,
     vol. 156, pp. 821-862, 1982.

Sternberg, M.J.E. and J.M. Thornton, "On the conformation of pro-
     teins: the handedness of the connection between parallel B-
     strands," Journal of Molecular Biology, vol. 110, pp. 269-
     283, 1977.

Taylor, W.R. and J.M. Thornton, "Prediction of super-secondary
     structure in proteins," Nature, vol. 301, pp. 540-542, 1983.

Burridge, J.M., A.J. Morffew, and S.J.P. Todd, "Experiments in
     the use of PROLOG for protein querying," Journal of Molecu-
     lar Graphics, vol. 3, p. 109, 1985.  abstract 13

Lim, V.I., "Algorithms for prediction of A-helical and B-
     structural regions in globular proteins," Journal of Molecu-
     lar Biology, vol. 88, pp. 873-894, 1974.

Bobrow, D. and T. Winograd, "An Overview of KRL, a Knowledge
     Representation Language," Cognitive Science, vol. 1, no. 1,
     1977.

Hopp, T.P. and K.R. Woods, "A computer program for predicting an-
     tigenic determinants," Molecular Immunology, vol. 20, 1983.

Grant, T.J. and P. Elleby, An AI Aid for Scheduling Repair Jobs,
     pp. 20-22, Paris, 1986.  Conference of the Association Fran-
     caise d'Intelligence et des Systems de Simulation

Sowa, J.F., Conceptual Structures: Information processing in mind
     and machine, Addison-Wesley, 1984.

Begg, V., Developing Expert CAD Systems, Kogan Page, 1984.

Ullman, J.D., Principles of Database Systems, Pitman Publishing,
     1985.

Breuker, J.A. and B.J. Wielinga, "Analysis Techniques for
     Knowledge Based Systems," Part 2 Esprit Project 12 1.2,
     University of Amsterdam, 1983.

Fikes, R. and T. Kehler, "The Role of Frame-Based Representation
     in Reasoning," Communications of the ACM, vol. 28,
     no. 9, pp. 904-920, September 1985.

Date, C J, An Introduction to Database Systems, Addison-Wesley,
     1981.

Hendrix, G G, "Partitioned Networks for Mathematical Modelling of
     Natural Language Semantics," Technical Report NL-28, 1975.
     Department of Computer Science, University of Texas

Lakatos, I, "The Methodology of Scientific Research Programmes,"
     Philosophical Papers, vol. 1, Cambridge University Press,
     1978.

Lee, B, "Introducing Systems Analysis and Design," NCC, vol. I &
     II, Manchester, 1978.

Pask, G, Conversation Theory: Applications in Education and Ep-
     istemology, Oxford, 1976.

Phillips, B, A model for Knowledge and its Application to
     Discourse Analysis, 1978.  University of Illinois, Depart-
     ment of Information Engineering KSL-9

Popper, K R, The Logic of Scientific Discovery, 1959.  Hutchinson
     10th impression 1980

Robinson, H, Database Analysis and Design, Chartwell-Bratt, 1981.

Rock-Evans, R, "Data Analysis," IPC Business Press, 1981.

Welbank, M, A review of Knowledge Acquisition Techniques for Ex-
     pert Systems, 1983.  British Telecommunications Martlesham
     Consultancy Services

Wood-Harper, A T and C Fitzgerald, "A taxonomy of current ap-
     proaches to systems analysis," Computer Journal, vol. 24,
     no. 1, 1982.


--
******************************************************************************
* Stephen Bull                         *     Phone: (0734) 875123            *
* Dept. of Computer Science            *     mail: bull@onion.reading.ac.uk  *
* University of Reading, ENGLAND       *                                     *

------------------------------

End of AIList Digest
********************
 5-Nov-87 01:25:28-PST,14367;000000000000
Mail-From: LAWS created at  5-Nov-87 01:14:11
Date: Thu  5 Nov 1987 01:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #259 - FORTRAN, Natural Language Interfaces
To: AIList@SRI.COM


AIList Digest            Thursday, 5 Nov 1987     Volume 5 : Issue 259

Today's Topics:
  Queries - Creativity & Adaptive Systems & Coarse Coding &
    PROLOG for Various Machines,
  AI Tools - FORTRAN,
  Bibliographies - Classification Systems,
  Applications - Natural Language Interfaces

----------------------------------------------------------------------

Date: 04 Nov 87 13:27:02 PST
From: oxy!hurley@csvax.caltech.edu (Mark Luke Hurley)
Subject: Creativity


I am a cognitive  science major at  Occidental College.   I am presently
writing my senior thesis on creative computational systems. I want
to examine the  ability of automatic  formal systems  to capture various
forms of creativity including, but  not limited to, artistic creativity,
problem  solving,  and   music  composition.  I   would  appreciate  any
suggestions or advice about specific literature  in this area. I welcome
any leads you can give me that might help in my research.
                                             Thank you.

Mark Hurley


 Box 437
 1600 Campus Rd.
 Occidental College
 Los Angeles, CA 90041

 ARPANET: oxy!hurley@CSVAX.Caltech.EDU
 BITNET:  oxy!hurley@hamlet
 CSNET:   oxy!hurley%csvax.caltech.edu@RELAY.CS.NET
 UUCP:    ....{seismo, rutgers, ames}!cit-vax!oxy!hurley

------------------------------

Date: 2 Nov 87 17:36:11 GMT
From: stride!tahoe!unrvax!oppy@gr.utah.edu  (Brian Oppy)
Subject: references for adaptive systems

i think the header summarizes this pretty well.  what i am looking for are
references in the scientific literature, preferably journals, and as recent
as possible.  the direction i wish to go with this is toward learning systems,
equivalences in the way computers and biological organisms learn.

thanks in advance for any help you can offer,

brian oppy (oppy@unrvax)

------------------------------

Date: 2 Nov 87 12:26:13 GMT
From: berke@locus.ucla.edu
Subject: "2**n   events  using  only n units" references? (from Berke)


        Many connectionist researchers  have   asserted  that   a
distributed  representation provides efficient use of  resources,
encoding   2**n   patterns   in   n units.  The "2**n states  for
n units" argument is sketched below:

Replace  unit-encoding  (grandmother  cells)  with  patterns   of
activation  over n (binary) units.   Instead of representing only
n distinct "events," one with each unit, we can represent  up  to
2**n   events  using  only n units.   These patterns overlap, and
this overlap can be used to gain "associative" recall.
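
As a rough illustration (not part of Berke's posting), a few lines of
C make the counting argument concrete; the unit count below is an
arbitrary small example:

/* Sketch only: contrast "grandmother cell" coding, one event per
   unit, with distributed coding over N binary units, which can
   distinguish up to 2**N overlapping activation patterns. */
#include <stdio.h>

#define N 4                     /* number of binary units (example) */

int main(void)
{
    unsigned long localist    = N;         /* one event per unit       */
    unsigned long distributed = 1UL << N;  /* 2**N activation patterns */
    unsigned long p;
    int unit;

    printf("%d units, unit coding:        %lu events\n", N, localist);
    printf("%d units, distributed coding: %lu patterns\n", N, distributed);

    for (p = 0; p < distributed; p++) {    /* enumerate the patterns */
        for (unit = N - 1; unit >= 0; unit--)
            putchar(((p >> unit) & 1) ? '1' : '0');
        putchar('\n');
    }
    return 0;
}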


        Does anyone have any references to such arguments?   I've
heard  this  argument  made  verbally,  but  I don't recall exact
references in print.  Do you?  Also, is  there  a  net-convention
for  2  to-the-n?   I'm  using 2**n above, (a vestige of my early
FORTRAN experience?) which I prefer  to  2^n.   Anyone  have  any
others?

        Perhaps it would be appropriate to  "r"  a  reply  to  me
rather  than  posting  a  follow-up  to net.  If they are many or
interesting, I'll be sure to post them in one batch.

        I  would  appreciate  exact   quotes,   with   references
including  page  numbers  so  that  I  could find the, as the NLP
people say, context.

Thanks

Pete

------------------------------

Date: Tue, 3 Nov 87 01:05 MST
From: DOLATA@rvax.ccit.arizona.edu
Subject: PROLOG for various machines


I have an IRIS 3130, a microVAX II running ULTRIX, and an NCUBE-4
parallel machine (along with a Mac II coming).   I am looking for
a PROLOG system to run on all of my machines.  I want the system
to have the same syntax on all machines,  and the ability to link in
C and Fortran code for some number crunching.   I will probably need
a system which is available in source rather than executable products
since the software house which develops code for the NCUBE doesn't
know of any NCUBE prolog (per se).

Anybody know of such a beast?  If not, what's the next best bet?

(If you reply to AIlist, please cc: directly to me too)

Thanks
Dan (dolata@rvax.ccit.arizona.edu)

------------------------------

Date: 1 Nov 87 08:38:52 GMT
From: psuvax1!vu-vlsi!swatsun!hirai@husc6.harvard.edu  (Eiji "A.G."
      Hirai)
Subject: Re: Suggestions for Course

In article <1746@unc.cs.unc.edu> bts@unc.UUCP (Bruce Smith) writes:
>Turbo Prolog for an AI course?  Why not FORTRAN, for that matter?
>Quoting (without permission) from Alan Bundy's Catalog of AI Tools:
>
>    FORTRAN is the programming language considered by many to
>    be the natural successor of LISP and Prolog for AI research.

        This must be some very sick joke or this book you quoted from
is majorly screwed up.  Fortran is bad for almost anything, not least
AI.  There are a zillion plus one articles which will support me in
attacking Fortran, so I won't list or quote them here.

        Fortran is EVIL.  You were kidding right?  Please say you're kidding.
--
Eiji "A.G." Hirai @ Swarthmore College, Swarthmore PA 19081 | Tel. 215-543-9855
UUCP:   {rutgers, ihnp4, cbosgd}!bpa!swatsun!hirai |  "All Cretans are liars."
Inter:  swatsun!hirai@bpa.bell-atl.com             |       -Epimenides
Bitnet: vu-vlsi!swatsun!hirai@psuvax1.bitnet       |        of Cnossus, Crete

------------------------------

Date: 2 Nov 87 17:14:56 GMT
From: nau@mimsy.umd.edu  (Dana S. Nau)
Subject: Re: Suggestions for Course

In article <1746@unc.cs.unc.edu> bts@unc.UUCP (Bruce Smith) writes:
>Turbo Prolog for an AI course?  Why not FORTRAN, for that matter? ...

In article <1361@byzantium.swatsun.UUCP> hirai@swatsun.UUCP writes:
> [lots of flames about FORTRAN]

To me, it seemed obvious that the original posting was a joke--in fact, a
rather good one.  Too bad it got taken seriously.
--

Dana S. Nau                             ARPA & CSNet:  nau@mimsy.umd.edu
Computer Sci. Dept., U. of Maryland     UUCP:  ...!seismo!mimsy!nau
College Park, MD 20742                  Telephone:  (301) 454-7932

------------------------------

Date: Mon 2 Nov 87 14:29:09-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: In Defense of FORTRAN

Eiji Hirai asks whether FORTRAN is seriously considered an AI language.
I'm certain that Alan Bundy was joking about it.  That leaves an opening
for a serious defender, and I am willing to take the job.  Other languages
have already been much touted and debated in AIList, so FORTRAN deserves
equal time.

Many expert system companies have found that they must provide their
end-user programs in C (or occasionally PASCAL or some other traditional
language).  A few such companies actually prefer to do their development
work in C.  There are reasons why this is not insane.  The same reasons
can be made to apply to FORTRAN, providing that one is willing to consider
a few language extensions.  They apply with even more force to ADA, which
may succeed in giving us the sharable subroutine libraries that have been
promised ever since the birth of FORTRAN.  I will concentrate on C because
I know it best.

The problem with traditional languages is neither their capability nor
their efficiency, but the way that they limit thought.  C, after all,
can be used to implement LISP.  A C programmer may be more comfortable
growing the tail end of a dynamic array than CONSing to the head of
a list, but that is simply an implementation option that should be
hidden within a package of list-manipulation routines.  (Indeed, the
head/tail distinction for a 1-D array is arbitrary.)  Languages that
permit pointer manipulation and recursive calls can do just about
anything that LISP or PROLOG can.  (Dynamic code modification is
possible in C, although exceedingly difficult.   It could be made more
palatable if appropriate parsing and compilation subroutines were made
available.)
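
As a rough sketch (mine, not Ken Laws'), here is the sort of small
list-manipulation package the paragraph has in mind: the caller just
CONSes, and whether the implementation prepends to a linked list (as
below) or grows the tail of a dynamic array stays hidden inside the
package.

/* Sketch only: a tiny list package for C.  The interface is the
   point; the linked-list implementation could be swapped for a
   dynamic array without changing callers. */
#include <stdio.h>
#include <stdlib.h>

typedef struct cell {
    int          head;          /* CAR: the element               */
    struct cell *tail;          /* CDR: rest of the list, or NULL */
} List;

static List *cons(int x, List *rest)    /* prepend an element */
{
    List *c = malloc(sizeof *c);
    if (c == NULL) { perror("malloc"); exit(1); }
    c->head = x;
    c->tail = rest;
    return c;
}

static void print_list(const List *l)
{
    putchar('(');
    for (; l != NULL; l = l->tail)
        printf("%d%s", l->head, l->tail ? " " : "");
    printf(")\n");
}

int main(void)
{
    List *l = NULL;
    int i;

    for (i = 3; i >= 1; i--)    /* build (1 2 3) by consing to the head */
        l = cons(i, l);
    print_list(l);
    return 0;
}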

My own definition of an "AI" program is any program that would never
have been thought of by a FORTRAN/COBOL programmer.  (The past tense
is significant, as I will discuss below.)  FORTRAN-like languages
are thus unlikely candidates for AI development.  Why should this
be so?  It is because they are designed for low-level manipulations (by
modern standards) and are clumsy for expressing high-level concepts.
C, for instance, is so well suited to manipulating character strings
that it is unusual to find a UNIX system with an augmented library of
string-parsing routines.  It is just so much easier to hack an
efficient ad hoc loop than to document and maintain a less-efficient
general-purpose string library that the library never gets written.
String-manipulation programs do exist (editors, AWK, GREP, etc.), but
the intermediate representations are not available to other than
system hackers.

FORTRAN, with its numeric orientation, is even more limiting.  One can
write string-parsing code, but it is difficult.  I suspect that string
libraries are therefore more available in FORTRAN, a step in the right
direction.  People interested in string manipulation, though, are more
likely to use SNOBOL or some other language -- almost any other language.
FORTRAN makes numerical analysis easy and everything else difficult.

Suppose, though, that FORTRAN and C offered extensive "object oriented"
libraries for all the data types you were likely to need: lists, trees,
queues, heaps, strings, files, buffers, windows, points, line segments,
robot arms, etc.  Suppose that they also included high-level objects
such as hypotheses, goals, and constraints.  (These might not be just
what you needed, but you could use the standard data types as templates
for making your own.)  These libraries would then be the language in
which you conduct your research, with the base language used only to
glue the subroutines together.  A good macro capability could make the
base+subroutine language more palatable for specific applications,
although there are drawbacks to concealing code with syntactic sugar.
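
A hypothetical fragment (every name below is my own invention, not an
existing library) of what such an "object oriented" C library might
look like, with the base language used only as glue:

/* Sketch only: a high-level "goal" object for C, with the base
   language gluing the pieces together. */
#include <stdio.h>

typedef struct goal {
    const char  *description;                  /* what is to be achieved */
    int        (*achieve)(struct goal *self);  /* returns 1 on success   */
    struct goal *next;                         /* rest of the agenda     */
} Goal;

static int achieve_all(Goal *g)     /* glue: try each goal in turn */
{
    for (; g != NULL; g = g->next)
        if (!g->achieve(g))
            return 0;
    return 1;
}

static int report(Goal *self)       /* stand-in for real goal code */
{
    printf("achieving: %s\n", self->description);
    return 1;
}

int main(void)
{
    Goal plan_route = { "plan a route",   report, NULL };
    Goal move_arm   = { "move robot arm", report, &plan_route };

    return achieve_all(&move_arm) ? 0 : 1;
}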

Given the appropriate subroutine libraries, there is no longer a mental
block to AI thought.  A FORTRAN programmer could whip together a
backtrack search almost as fast as a PROLOG programmer.  Indeed,
PROLOG would be a part of the FORTRAN environment.  Current debugging
tools for FORTRAN and C are not as good as those for LISP machines,
but they are adequate if used by an experienced programmer.  (Actually,
there are about a hundred types of FORTRAN/COBOL development tools
that are not commonly available to LISP programmers.  Their cost and
learning time limit their use.)  The need for garbage collection can
generally be avoided by explicit deallocation of obsolete objects
(although there are times when this is tricky).  Programming in a
traditional language is not the same as programming in LISP or PROLOG,
but it is not necessarily inferior.
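
To make the backtracking point concrete, here is a sketch (mine, not
from the digest) of the kind of search a C programmer might whip
together: the six-queens problem by depth-first search with
chronological backtracking.

/* Sketch only: N-queens by recursive backtracking. */
#include <stdio.h>
#include <stdlib.h>

#define N 6                 /* board size; small example */
static int col_of[N];       /* col_of[row] = column of the queen in row */

static int safe(int row, int col)
{
    int r;
    for (r = 0; r < row; r++)
        if (col_of[r] == col || abs(col_of[r] - col) == row - r)
            return 0;       /* attacked on a column or a diagonal */
    return 1;
}

static int place(int row)   /* returns 1 when a full solution is found */
{
    int col;
    if (row == N)
        return 1;
    for (col = 0; col < N; col++)
        if (safe(row, col)) {
            col_of[row] = col;
            if (place(row + 1))
                return 1;   /* success propagates up */
            /* otherwise try the next column: chronological backtracking */
        }
    return 0;
}

int main(void)
{
    if (place(0)) {
        int r;
        for (r = 0; r < N; r++)
            printf("row %d -> column %d\n", r, col_of[r]);
    } else {
        printf("no solution for N = %d\n", N);
    }
    return 0;
}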

The problem with AI languages is neither their capability nor
their efficiency, but the way that they limit thought.  Each makes
certain types of manipulations easy while obscuring others.
LISP is a great language for manipulating lists, and lists are an
exceptionally powerful representation, but even higher level constructs
are needed for image understanding, discourse analysis, and other
areas of modern AI research.  No language is perfectly suited for
navigating databases of such representations, so you have to choose
which strengths and weaknesses are suited to your application.
If your concern is with automating intelligent >>behavior<<,
a traditional algorithmic language may be just right for you.

                                        -- Ken Laws

------------------------------

Date: 4 Nov 87 15:55:46 GMT
From: ssc-vax!dickey@beaver.cs.washington.edu (Frederick J Dickey)
Subject: Re: AIList V5 #253 - LISP, NIL, Msc.

In article <MINSKY.12346702081.BABYL@MIT-OZ>, MINSKY@OZ.AI.MIT.EDU writes:
> In reply to noekel@uklirb.UUCP who is
> >
> >currently building a AI bibliography and still searching for a
> >suitable classification/key word scheme.

In "The AI Magazine" a couple of years ago, there was an article that presented
an AI classification scheme. If my memory serves me right, the author of the
article says he developed it for some sort of library/information retrieval
application. It sounds like it is fairly close to what noekel@uklirb wants.
I can't give a more specific citation because my collection of AI Magazines
is at home.

------------------------------

Date: 5 Nov 87 04:58:00 GMT
From: crawford@endor.harvard.edu  (Alexander Crawford)
Subject: Re: The future of AI.... (nothing about flawed minds)

The first impact from AI on software in general will be natural
language interfaces.  Various problems need to be solved, such as how
to map English commands onto a particular application's set
of commands COMPLETELY.  (As Barbara Grosz says, if it can be said, it
can be said in all ways, e.g. "Give me the production report",
"Report", "How's production doing?".)  Once this is completed for a
large portion of applications, it will become a severe disadvantage in
the marketplace NOT to offer a natural-language interface.
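
As a toy illustration (the phrasings and command names below are
invented for the example, not taken from any product), the first step
of such a mapping might be a table from surface forms to a single
canonical command; a real interface would parse rather than match
strings, but the many-phrasings-to-one-command shape is the same.

/* Sketch only: map several English surface forms onto one
   application command. */
#include <stdio.h>
#include <string.h>

struct mapping {
    const char *phrase;      /* one surface form a user might type    */
    const char *command;     /* the application command it stands for */
};

static const struct mapping table[] = {
    { "give me the production report", "REPORT PRODUCTION" },
    { "report",                        "REPORT PRODUCTION" },
    { "how's production doing?",       "REPORT PRODUCTION" },
};

static const char *lookup(const char *input)
{
    size_t i;
    for (i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(input, table[i].phrase) == 0)
            return table[i].command;
    return NULL;             /* not understood */
}

int main(void)
{
    const char *cmd = lookup("report");
    printf("%s\n", cmd ? cmd : "not understood");
    return 0;
}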

Coupled with an NLI, machine learning will allow applications to
improve in different ways as they are used:
        -Interfaces can be customized easily, automatically, for
         different users.
        -Complex tasks can be learned automatically by having the
         application examine what the human operator does normally.
        -Search of problem spaces for solutions can be eliminated and
         replaced by knowledge.  (This is called "chunking".  See
         MACHINE LEARNING II, Michalski et al. Chapter 10:
         "The Chunking of Goal Hierarchies: A Generalized Model of
         Practice" by Rosenbloom and Newell.)

-Alec (crawford@endor.UUCP)

------------------------------

End of AIList Digest
********************
 5-Nov-87 01:33:49-PST,19443;000000000000
Mail-From: LAWS created at  5-Nov-87 01:28:21
Date: Thu  5 Nov 1987 01:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #260 - Resource Center for Software, AI Goals and Models
To: AIList@SRI.COM


AIList Digest            Thursday, 5 Nov 1987     Volume 5 : Issue 260

Today's Topics:
  Proposal - National Resource Center for Intelligent Systems Software &
  Methodology - Sharing Software,
  Comments - The Success of AI & Humanist Models of Mind

----------------------------------------------------------------------

Date: Tue, 3 Nov 87 11:59:50 EST
From: futrell%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: National Resource Center for Intelligent Systems Software

I will soon be in Washington talking to the National Science
Foundation about the possibility of setting up a National Resource
Center for Intelligent Systems Software.  The center would have as its
goal the timely and efficient distribution of contributed public
domain software in AI, NLP, and related areas.  Below I have listed,
very briefly, some of the points that I will be covering.  I would
like to hear reactions from all on this.

0. Goals/Philosophy: Distribute software.  The motivations are practical
   (easier on the original author and requester) and philosophical
   (accumulating a base of shared techniques and experience for the field).

1. Scope: Limited in the beginning until acquisition and distribution
   experience is built up.

2. Possible Initial Emphasis: Natural language processing, large lexicons,
   small exemplary programs/systems for teaching AI.

3. Selection: Limited selection balancing importance vs. the robustness and
   detailed documentation of the contributed software.

4. What to Distribute:  Source code plus paper documentation, reprints,
   theses related to the software.

5. Mode of Distribution:  Small: e-mail distribution server. Large: S-mail.

6. Support of Distributed Items: The Center should not offer true
   software "support", but it would assure that the software runs on one
   or more systems before distribution (& see next item).

7. User Involvement: Users of the distributed items are a source of both
   questions and of answers.  So the Center would support national mailings and
   forums on the nets so that problems could be resolved primarily by users.
   If we don't partially shield the developer, important items may never
   be contributed.

8. Languages:  Common Lisp would be the dominant exchange medium.
   Hopefully other standards will emerge (CLOS, X windows).

9. Hardware:  The center would maintain or have access to a dozen or so
   systems for testing, configuring, and hard(tape)copy production.

10. Compatibility Problems:  The Center would work with developers and users
    to deal with the many compatibility issues that will arise.

11. Staff:  Two to three full-time equivalents.

12. Management:  An advisory board (working via e-mail and phone)?

13. Cost to Users:  E-mail free, hardcopy and tapes at near cost.

14. Licensing: A sticky issue.  A standard copyright policy could be
    instituted.  Avoid software with highly restrictive licensing.


Where this is coming from:

Our college is rather new but has 30 faculty and a fair amount of
equipment, mostly Unix.  We have a PhD program and a large number of
MS and undergrad students.  I am involved in a major project to parse
whole documents to build knowledge bases.  My focus is on parsing
diagrammatic material, something that has received little attention.
I teach grad courses on Intro to AI, AI methods, Vision, and Lisp.
I am very familiar with the National Science Foundation, their goals
and policies.

You can reach me directly at:

   Prof. Robert P. Futrelle
   College of Computer Science  161CN
   Northeastern University
   360 Huntington Ave.
   Boston, MA 02115

   (617)-437-2076
   CSNet: futrelle@corwin.ccs.northeastern.edu

------------------------------

Date: Tuesday, 3 November 1987, 21:23-EST
From: Nick Papadakis <@EDDIE.MIT.EDU:nick@MC.LCS.MIT.EDU>
Subject: Re: Lenat's AM program [AIList V5 #257 - Methodology]


        In article <774> tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) writes:

        >>In the biological sciences, publication of an article reporting a new
        >>clone obligates the author to provide that clone to other researchers
        >>for non-commercial purposes.  I think we need a similar policy in
        >>computer science.

    Shane Bruce <bruce@vanhalen.rutgers.edu> replies:

    >The policy which you are advocating, while admirable, is not practical.  No
    >corporation which is involved in state of the art AI research is going to
    >allow listings of their next product/internal tool to made available to the
    >general scientific community, even on a non-disclosure basis.  Why should
    >they give away what they intend to sell?

This is precisely why corporations involved in state of the art AI
research (and any other form of research) will find it difficult to make
major advances.  New ideas thrive in an environment of openness and free
interchange.

        - nick

------------------------------

Date: 30 Oct 87 19:45:09 GMT
From: gatech!udel!sramacha@bloom-beacon.mit.edu  (Satish Ramachandran)
Subject: Re: The Success of AI (continued, a

In article <8300008@osiris.cso.uiuc.edu> goldfain@osiris.cso.uiuc.edu writes:
>
>Who says that ping-pong, or table tennis isn't a sport?  Ever been to China?

Rightly put! Ping-pong may not be a spectator sport in the West and hence
may be suspected of being a 'sport' where little skill is involved.
But if you read about it, you would find that the psychological aspect
of the game is far more intense than, say, baseball or golf!
Games go to 21 points and are over very quickly...(often on the
serves themselves!)
Granting the intense psychological factors to be considered while playing
ping-pong (as in many other games), would it be easier to make a machine play
a game where there is a lot of *real time* to decide its next move,
as opposed to making it play a game where things have to be decided
relatively more quickly?
Satish

P.S. Btw, ping-pong is also a popular sport in Japan, India, England,
Sweden and France.

------------------------------

Date: 31 Oct 87 17:16:06 GMT
From: trwrb!cadovax!gryphon!tsmith@ucbvax.Berkeley.EDU  (Tim Smith)
Subject: Re: The Success(?) of AI

In article <171@starfire.UUCP> merlyn@starfire.UUCP (Brian Westley) writes:
+=====
| ...I am not interested in duplicating or otherwise developing
| models of how humans think; I am interested in building machines that
| think.  You may as well tell a submarine designer how difficult it is
| to build artificial gills - it's irrelevant.
+=====
The point at issue is whether anyone understands enough about
"thinking" to go out and build a machine that can do it. My claim (I
was the one who started this thread) was that we do not. The common
train of thought of the typical AI person seems to be:

(1) The "cognitive" people have surely figured out by now what
    thinking is all about.
(2) But I can't be bothered to read any of their stuff, because they
    are not programmers, and they don't know how computers work.

Actually, the "cognitive" people haven't figured out what thinking is
at all. They haven't a clue. Of course they wouldn't admit that in
print, but you can determine that for yourself after only a few
months of intensive reading in those fields.

Now there's nothing wrong with naive optimism. There are many cases
in the history of science where younger people with fresh ideas have
succeeded where traditional methods have failed. In the early days of
AI, this optimism prevailed. The computer was a new tool (a fresh
idea) that would conquer traditional fields. But it hasn't. The naive
optimism continues, however, for technological reasons. Computers keep
improving, and many people seem to believe that once we have
massively parallel architectures, or connection machines, or
computers based on neural nets, then, finally, we will be able to
build a machine that thinks.

BS! The point is that no one (NO ONE) knows enough about thinking to
design a machine that thinks.

Look, I am not claiming that AI should come to a grinding halt. All I
am pleading for is some recognition from AI people that the
top-level problems they are addressing are VERY complicated, and are
not going to be solved in the near future by programming. I have seen
very little of this kind of awareness in the AI community. What I
see instead is a lot of whining to the effect that once a problem is
"solved", it is removed from the realm of thinking (chess, compilers,
and medical diagnosis are the usual examples given).

Now if you believe that playing chess is like thinking, you haven't
thought very much about either of these things. And if you believe
that computers can diagnose diseases you are certainly not a
physician. (Please, no flames about MYCIN, CADUCEUS, and their
offspring--I know about these systems. They can be very helpful tools
for a physician, just as a word processor is a helpful tool for a
writer. But these systems do not diagnose diseases. I have worked in
a hospital--it's instructive. Spend some time there as an observer!)
I don't remember any of the pioneer artificial intelligentsia
(Newell, Simon, Minsky, etc.) ever claiming that compilers were
artificial intelligence (they set their sights much higher than
that).

I am not trying to knock the very real advances that AI work has made
in expert systems, in advanced program development systems, and in
opening up new research topics. I just get so damn frustrated when I
see people continually making the assumption that thinking, using
language, composing music, treating the sick, and other basic human
activities are fairly trivial subjects that will soon be accomplished
by computers. WAKE UP!  Go out and read some psychology, philosophy,
linguistics. Learn something about these things that you believe are
so trivial to program. It will be a humbling, but ultimately
rewarding, experience.

--
Tim Smith
INTERNET:     tsmith@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP:         {philabs, trwrb}!cadovax!gryphon!tsmith

------------------------------

Date: 3 Nov 87 00:19:57 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!yamauchi@cs.rochester.edu 
      (Brian Yamauchi)
Subject: Re: The Success of AI

In article <137@glenlivet.hci.hw.ac.uk>, gilbert@hci.hw.ac.uk
(Gilbert Cockton) writes:
> This work is inherently superior to most work in AI because none of the
> writers are encumbered by the need to produce computational models.
> They are thus free to draw on richer theoretical orientations which
> draw on concepts which are clearly motivated by everyday observations
> of human activity. The work therefore results in images of man which
> are far more humanist than mechanical computational models.

I think most AI researchers would agree that the human mind is more than a
simple production system or back-propagation network, but the more basic
question is whether or not it is possible for human beings to understand
human intelligence.  If the answer is no, then not only cognitive
psychologists, but all psychologists will be doomed to failure.  If the
answer is yes, then it should be possible to build a system that uses
that knowledge to implement human-like intelligence.  The architecture of this
system may be totally unlike today's computers, but it would be man-made
("Artificial") and would possess human-like intelligence.

This may require some completely different model than those currently
popular in cognitive science, and it would have to account for
"non-computational" human behavior (emotions, creativity, etc.), but as long
as it was well-defined, it should be possible to implement the model in some
system.

I suppose one could argue that it will never be possible to perfectly
understand human behavior, so it will never be possible to make an AI which
perfectly duplicates human intelligence.  But even if this were true, it
would be possible to duplicate human intelligence to the degree that it was
possible to understand human behavior.

> Furthermore, the common test of any
> concept of mind is "can you really imagine your mind working this way?"

This is a generally useful, if not always accurate, rule of thumb.  (It is
also the reason why I can't see why anyone took Freudian psychology
seriously.)

Information-processing models (symbol-processing for the higher levels,
connectionist for the lower levels) seem more plausible to me than any
alternatives, but they certainly are not complete and to the best of my
knowledge, they do not attempt to model the non-computational areas.  It
would be interesting to see the principles of cognitive science applied to
areas such as personality and creativity.  At least, it would be interesting
to see a new perspective on areas usually left to non-cognitive
psychologists.

> Many of the pillars of human societies, like the freedom and dignity of
> democracy and moral values, are at odds with the so called 'Scientific'
> models of human behaviour; indeed the work of misanthropes like Skinner
> actively promotes the connection between impoverished models of man and
> immoral totalitarian societies (B.F. Skinner, Beyond Freedom and Dignity).

True, it is possible to promote totalitarianism based on behaviorist
psychology (i.e. Skinner) or mechanistic sociology (i.e. Marx), both of
which discard the importance of the individual.  On the other hand, simply
understanding human intelligence does not reduce its importance -- an
intelligence that understands itself is at least as valuable as one that
does not.

Furthermore, totalitarian and collectivist states are often promoted on the
basis of so-called "humanistic" rationales -- especially for socialist and
communist states (right-wing dictatorships seem to prefer nationalistic
rationales).  The fact that such offensive regimes use these justifications
does not discredit either science or the humanities.
______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 4 Nov 87 22:01:03 GMT
From: topaz.rutgers.edu!josh@rutgers.edu  (J Storrs Hall)
Subject: Re: The Success of AI


    Brian Yamauchi:
    ... the more basic
    question is whether or not it is possible for human beings to understand
    human intelligence.  If the answer is no, then not only cognitive
    psychologists, but all psychologists will be doomed to failure.

Actually, it is probably possible to build a system that is more
complex than any one person can really "understand".  This seems to be
true of a lot of the systems (legal, economic, etc) at large in the
world today.  The system is made up of the people each of whom
understands part of it.  It is conjectured by Minsky that the mind is
a similar system.  Thus it may be that AI is possible where psychology
is not (in the same sense that economics is impossible).
--JoSH

------------------------------

Date: 3 Nov 87 12:06 PST
From: hayes.pa@Xerox.COM
Subject: Humanist Models of Mind

Gilbert Cockton makes a serious mistake in lumping AI models together
with all other `mechanical' or `scientific' models of mind on the wrong
side of C. P. Snow's cultural fence:
 >In short, mechanical concepts of mind and the values of a civilised
 >society are at odds with each other. It is for this reason that modes
 >of representation such as the novel, poetry, sculpture and fine art
 >will continue to dominate the most comprehensive accounts of the
>human condition.
The most exciting thing about computational models of the mind is
exactly that they, alone among the models of the mind we have,  ARE
consistent with humanist values while being firmly in contact with
results of the hardest of sciences.

Cockton is right to be depressed by many of the scientific views of man
that have appeared recently.  We have fallen from the privileged bearers
of divine knowledge to the lowly status of naked apes, driven by
primitive urges; or even to mere vehicles used by selfish genes to
reproduce themselves.  Superficial analogies between brains and machines
make people into blind bundles of mechanical links between inputs and
outputs, suitable inhabitants for Skinner's New Walden, of whose minds -
if they have any - we are not permitted to speak.  Physicists often
assume that people, like everything else, are physical machines governed
by physical laws, whose behavior must therefore be describable in
physical terms: more, that this is a scientific truth, beyond rational
dispute.  None of these pictures of human nature has any place for
thought,  for language, culture, mutual awareness and human
relationships.  Many scientists have given up and decided that the most
uniquely human attributes have no place in the world given us by
biology, physics and engineering.

But the computational approach to modelling mind gives a central place
to symbolic structures, to languages and representations.  While firmly
rooted in the hard sciences, this model of the mind naturally
encompasses views of perception and thought which assume that they
involve metaphors, analogies, inferences and images.  It deals right at
its center with questions of communication and miscommunication.  I can
certainly imagine my mind (and Gilbert's) working this way: I consist
of symbols, interacting with one another in a rich dynamic web of
inference, perceptual encoding and linguistic inputs (and other
interactions, such as with emotional states).  This is a view of man
which does NOT reduce us to a meaningless machine, one which places us
naturally in a society of peers with whom we communicate.

Evolutionary biology can account for the formation of early human
societies in very general terms, but it has no explanation for human
culture and art.  But computer modellers are not surprised by the
Lascaux cave paintings, or the universal use of music, ritual and
language.   People are organic machines;  but if we also say that they
are machines which work by storing and using symbolic structures, then
we expect them to create representations and attribute meaning to
objects in their world.

I feel strongly about this because I believe that we have here, at last,
a way - in principle -  to bridge the gap between science and humanity.
Of course, we haven't done it yet, and to call a simple program
`intelligent' doesn't help to keep things clear, but Cockton's pessimism
should not be allowed to cloud our vision.

Pat Hayes

------------------------------

End of AIList Digest
********************
 5-Nov-87 21:42:50-PST,21153;000000000001
Mail-From: LAWS created at  5-Nov-87 21:30:09
Date: Thu  5 Nov 1987 21:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #261 - Seminars, CADE-9, HICSS-22
To: AIList@SRI.COM


AIList Digest             Friday, 6 Nov 1987      Volume 5 : Issue 261

Today's Topics:
  Seminars - A Hybrid Paradigm for Modeling of Complex Systems (TI) &
    The Ecology of Computation (SRI) &
    Evolving Knowledge and TMS (SRI) &
    Conceptual Graphs (SRI) &
    Hypothetical Reasoning (SRI) &
    Application of Fuzzy Control in Japan (NASA Ames),
  Conference - CADE-9 Automated Deduction &
    HICSS-22 System Sciences

----------------------------------------------------------------------

Date: Tue, 3 Nov 87 13:51:42 CST
From: "Michael T. Gately" <gately%resbld.csc.ti.com@RELAY.CS.NET>
Subject: Seminar - A Hybrid Paradigm for Modeling of Complex Systems (TI)

       Texas Instruments Computer Science Center Lecture Series

           A Hybrid Paradigm for Modeling of Complex Systems

                           Prof. J. Talavage
                           Purdue University

                   10:00 am, Friday, 6 November 1987
                   North Building Cafeteria Room C-4

Abstract

The Network Modeling approach to simulation provides the modeler with
simple yet powerful concepts which can be used to capture the
significant aspects of the system to be modeled.  Current network
modeling methodologies, though advanced, lack explicit concepts for
the representation of complex behavior such as decision-making.
Artificial Intelligence research, because of its emphasis on knowledge
representation, has provided several techniques which can be
successfully applied to the modeling of decision-making behavior.  A
hybrid methodology unifying the concepts of Object-oriented
programming, Logic programming and the Discrete-Event approach to
systems modeling should provide a very convenient vehicle for
representing complex systems.  The approach has been implemented as a
top-level of CAYENE.  CAYENE is a member of the class of programming
languages known as hybrid AI systems and it is based on a formalism of
distributed logic programming.  SIMYON is an experimental network
simulation environment embedded in CAYENE.  SIMYON is implemented by
defining a library of CAYENE objects analogous to the `blocks' of
network simulation languages and thus providing building blocks for
modeling.  Examples of the use of SIMYON to model a job scheduler in a
manufacturing situation, and an adaptive material handling dispatch
mechanism for flexible manufacturing systems are given.

Biography

Dr. Talavage is a Professor of Industrial Engineering at Purdue
University.  His teaching and research interests have focussed on the
areas of modeling and simulation, with application to manufacturing
systems.  Professor Talavage's current research includes the
integration of artificial intelligence capabilities with those of
simulation/math modeling in order to provide a highly intelligent aid
for production decision support.  Since receiving his Ph.D. from Case
Institute of Technology in 1968, Dr. Talavage has published over 100
papers and one book, and is on the Editorial board of the Journal of
Manufacturing Systems and an Associate Editor for the SIMULATION
journal.  He has been a consultant to numerous companies and
government agencies.

----------------------------------------------------------------------
The lecture will be given in the North Building Cafeteria Room C-4 at
the Dallas Expressway site.  Visitors to TI should contact Dr. Bruce
Flinchbaugh (214-995-0349) in advance and meet in the west entrance
lobby of the North Building by 9:45am.

------------------------------

Date: Tue, 3 Nov 87 09:30:20 PST
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - The Ecology of Computation (SRI)

SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


                      THE ECOLOGY OF COMPUTATION

                         Bernardo A. Huberman
                     Xerox Palo Alto Research Center


                  Monday, November 9 at 4:00 pm
      SRI International, Computer Science Laboratory, Room EJ228


A most advanced instance of concurrent computation is provided by
distributed processing in open systems which have no global controls.
These emerging heterogeneous networks are becoming self-regulating
entities which in their behavior are very different from their individual
components.  Their ability to remotely spawn processes in other computers
and servers of the system offers the possibility of having a community of
computational agents which, in their interactions, are reminiscent of
biological and social organizations.  This talk will give a perspective
on computational ecologies, and describe a theory of their behavior which
explicitly takes into account incomplete knowledge and delayed information
on the part of its agents.  When processes can choose among many possible
strategies while collaborating in the solution of computational tasks, the
dynamics leads to asymptotic regimes characterized by fixed points,
oscillations and chaos. Finally, we will discuss the possible existence of
a universal law regulating the way in which the benefit of cooperation is
manifested in the system.


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building E (a tall tan
building on Ravenswood Ave; the turn off Ravenswood has a sign
for Building E), or in the visitors lot in front of Building A
(red brick building at 333 Ravenswood Ave), or in the conference
parking area at the corner of Ravenswood and Middlefield.  The
seminar room is in Building E.  Visitors should sign in at the
reception desk in the Building E lobby.

Visitors from Communist Bloc countries should make the necessary
arrangements with Fran Leonard (415-859-4124) in SRI Security as
soon as possible.

------------------------------

Date: Thu, 5 Nov 87 14:25:04 PST
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Evolving Knowledge and TMS (SRI)


                     EVOLVING KNOWLEDGE AND TMS

                         Anand S. Rao (ANAND@IBM.COM)
         IBM T.J. Watson Research Center and Sydney University

                        (joint work with
                         Norman Y. Foo
       IBM Systems Research Education Center and Sydney University)

                   11:00 AM, MONDAY, November 9
              SRI International, Building E, Room EJ228


The traditional view of knowledge in the AI literature has been that
'Knowledge' is 'true belief'.  The semantic account of this notion
suffers from a major problem called Logical Omniscience, where the
agent knows all valid formulas and his knowledge is closed under
implication.  In this talk we propose an alternative viewpoint where
knowledge or EVOLVING KNOWLEDGE (as we call it) is treated as
'indefeasibly justified true belief'. This notion of knowledge solves
the problem of logical omniscience and also captures the
resource-bounded reasoning of agents in a natural way.  We give the
semantics and axiomatization of this logic of evolving knowledge and
discuss its properties.

The logic of evolving knowledge also serves as the logical foundation
for the Truth Maintenance System (TMS). We provide a transformation
between TMS nodes and formulas in this logic. We show that a set of
nodes has a 'well founded labelling' iff their corresponding IN nodes
are 'satisfiable' in this logic and their corresponding OUT nodes are
'not satisfiable' in this logic. We conclude the talk by comparing our
logic with Autoepistemic Logic, Deduction model of Belief and the
Awareness model of belief.

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: Thu, 5 Nov 87 09:17:21 PST
From: luntzel@csl.sri.com (Elizabeth Luntzel)
Subject: Seminar - Conceptual Graphs (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


              KNOWLEDGE REPRESENTATION WITH CONCEPTUAL GRAPHS


                              John F. Sowa
                          IBM Systems Research
                         and Stanford University


                   Wednesday, November 11 at 4:00 pm
         SRI International, Computer Science Laboratory, Room A113B


Conceptual graphs form a complete system of logic designed to map
as simply as possible to and from natural languages.  Like the predicate
calculus, they are general enough to represent anything that can be
represented in rules, frames, and other languages.  But they also have
certain formal and practical advantages over the predicate calculus.
Their formal advantages arise from their treatment of objects, contexts,
and sets.  Their practical advantages arise from the standard guidelines
they provide for mapping to and from natural languages.  Because of their
generality and flexibility, they have been used as the knowledge
representation language for a variety of applications, including planning,
information retrieval, and interfaces between heterogeneous databases and
knowledge bases.  This talk will introduce conceptual graphs and show how
they handle a variety of knowledge representation tasks.
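
For readers who have not seen the notation, the linear form commonly used for
conceptual graphs writes concepts in square brackets and relations in
parentheses, so "a cat is on a mat" comes out roughly as [Cat]->(On)->[Mat].
A minimal, hypothetical way to hold such a graph in a program (not Sowa's
notation or any actual system's API) might be:

# A toy data structure for one conceptual graph, roughly [Cat]->(On)->[Mat].
# The field names are illustrative only.
graph = {
    "concepts":  ["Cat", "Mat"],
    "relations": [("On", "Cat", "Mat")],   # (relation, argument 1, argument 2)
}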


John Sowa is a member of the IBM Systems Research Institute in Thornwood,
New York.  This fall, he has been visiting the IBM Palo Alto Scientific
Center and teaching a course in the Stanford Computer Science Department.
His work on conceptual graphs has appeared in his book, Conceptual
Structures (Addison-Wesley, 1984), and a new collection of papers on
conceptual graphs will be released in the spring of 1988.



NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building A (red brick
building at 333 Ravenswood Ave) or in the conference parking area
at the corner of Ravenswood and Middlefield.  The seminar room is in
Building A.  Visitors should sign in at the reception desk in the
Building A lobby.

IMPORTANT:  Visitors from Communist Bloc countries should make the
necessary arrangements with Fran Leonard (415-859-4124) in SRI Security
as soon as possible.

------------------------------

Date: Thu, 29 Oct 87 16:55:04 PST
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Hypothetical Reasoning (SRI)

                      DEFAULTS AND CONJECTURES:
        HYPOTHETICAL REASONING FOR EXPLANATION AND PREDICTION

          David Poole (dlpoole%watdragon.waterloo.edu@relay.cs.net)
            Logic Programming and Artificial Intelligence Group
                     University of Waterloo

                   11:00 AM, MONDAY, November 2
              SRI International, Building E, Room EJ228


Classical logic has been criticised as a language for common sense
reasoning as it is monotonic. In this talk I wish to argue that the
problem is not with logic, but with how logic is used. An alternative
way to use logic is through theory formation: logic tells us what a
theory implies, and an inconsistency means that the theory cannot be true
of the world. I show how the simplest form of theory formation, namely
where the user supplies the possible hypotheses, can be used as a
basis for default reasoning and model-based diagnosis.  This is the
basis of the "Theorist" system being built at the University of
Waterloo.  I will discuss what we have learned from building and using
our system.  I will also discuss distinctions which we have found to
be important in practice, such as between explaining observations and
making predictions; and between normality conditions (defaults) and
abnormality conditions (prototypes, conjectures, diseases).  The
effects of these distinctions on recognition and prediction problems
will be presented along with algorithms, theorems and examples.
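
A minimal propositional sketch of the hypothesis-based theory formation idea
described above (a toy reconstruction, not the Theorist system itself; the
atoms and formulas are made up): an observation is explained by any subset of
the user-supplied hypotheses that is consistent with the facts and, together
with them, entails the observation.

from itertools import product, combinations

ATOMS = ["bird", "penguin", "flies"]

def models(formulas):
    """Yield every truth assignment (a dict over ATOMS) satisfying all formulas."""
    for values in product([True, False], repeat=len(ATOMS)):
        a = dict(zip(ATOMS, values))
        if all(f(a) for f in formulas):
            yield a

def consistent(formulas):
    return any(True for _ in models(formulas))

def entails(formulas, goal):
    return all(goal(a) for a in models(formulas))

# Facts: Tweety is a bird; penguins are birds that do not fly.
facts = [
    lambda a: a["bird"],
    lambda a: (not a["penguin"]) or (a["bird"] and not a["flies"]),
]
# The single user-supplied hypothesis (a default): birds fly.
hypotheses = {"birds_fly": lambda a: (not a["bird"]) or a["flies"]}

def explanations(goal):
    """Yield subsets of hypotheses that, with the facts, consistently entail goal."""
    names = list(hypotheses)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            theory = facts + [hypotheses[n] for n in subset]
            if consistent(theory) and entails(theory, goal):
                yield subset

print(list(explanations(lambda a: a["flies"])))   # [('birds_fly',)]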

------------------------------

Date: Fri, 30 Oct 87 17:55:10 PST
From: JARED%PLU@ames-io.ARPA
Subject: Seminar - Application of Fuzzy Control in Japan (NASA Ames)

                       NASA Ames Research Center
                       Intelligent Systems Forum

                 Professor Yamakawa, Kumamoto University
                                 and
               Professor Hirota, Hosei University (Japan)

             The Application of 'Fuzzy Control' in Japan

SUMMARY:
This seminar covers the application of 'fuzzy control' in Japan and recent
work leading to the creation of 'fuzzy chips', 'fuzzy hardware', and 'fuzzy
computers'.

The list of interesting applications includes the famous control of the
metro trains in the city of Sendai, Japan, and a fuzzy-controlled
intelligent robot.  This seminar will include illustrations of these
systems.
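
For readers unfamiliar with the technique, here is a minimal, hypothetical
sketch of a rule-based fuzzy controller: triangular membership functions, a
few IF-THEN rules, and a weighted-centroid defuzzification step.  It is not
the Sendai controller, and all numbers are made up.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def brake_level(speed_error):
    """Map speed error (actual - target, km/h) to a brake setting in [0, 1]."""
    # Fuzzify the input.
    slow = tri(speed_error, -10.0, -5.0, 0.0)
    ok   = tri(speed_error,  -2.0,  0.0, 2.0)
    fast = tri(speed_error,   0.0,  5.0, 10.0)
    # Rules: IF slow THEN release (0.0); IF ok THEN hold (0.3);
    #        IF fast THEN brake hard (0.9).  Each rule's firing strength
    #        weights its output value.
    rules = [(slow, 0.0), (ok, 0.3), (fast, 0.9)]
    total = sum(strength for strength, _ in rules)
    if total == 0.0:
        return 0.0
    return sum(strength * value for strength, value in rules) / total

print(brake_level(4.0))   # input is mostly "fast", so the output is 0.9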

An abstract of the talk will be sent out as soon as it is received.

Time:     2:00 -- 3:30 p.m.
Date:     Nov. 5, 1987
Place:    Conf. room 103, Building 244
Inquiries: Hamid Berenji, (415) 694-6525, berenji%plu@ames-io.arpa

------------------------------

Date: Wed, 4 Nov 87 12:45:07 cst
From: stevens@anl-mcs.ARPA (Rick L. Stevens)
Subject: Conference - CADE-9 Automated Deduction


                   Final Call for Papers

         9th International Conference on Automated
                         Deduction

                      May 23-26, 1988

CADE-9 will be held at  Argonne  National  Laboratory  (near
Chicago)  in  celebration  of  the  25th  anniversary of the
discovery of the resolution principle at Argonne in the sum-
mer of 1963.  Papers are invited in the following or related
fields:

Theorem Proving                  Logic Programming
Unification                      Deductive Databases
Term Rewriting                   ATP for Non-Standard Logics
Program Verification             Inference Systems

The Program Committee consists of:

Peter Andrews                            Ewing Lusk
W.W. Bledsoe                             Michael MacRobbie
Alan Bundy                               Hans-Jorgen Ohlbach
Robert Constable                         Ross Overbeek
Seif Haridi                              William Pase
Larry Henschen                           Jorg Siekmann
Deepak Kapur                             Mark Stickel
Dallas Lankford                          Jim Williams
Jean-Louis Lassez

Papers are solicited in four categories:

        Long papers: 20 pages, about 5000 words
        Short papers: 10 pages, about 2500 words
        Extended Abstracts of Working Systems: 2 pages
        Problem sets: 5 pages

Long papers are expected  to  present  substantial  research
results.  Short papers are a forum for briefer presentations
of the results of ongoing research.  Extended abstracts  are
descriptions  of  existing  automated  reasoning systems and
their areas of application.  Problem sets should  present  a
complete,   formal  representation  of  some  collection  of
interesting problems for automated systems to  attack.   The
problems   should  currently  unavailable  in  the  existing
literature.  Three copies should be sent  to  arrive  before
November 23rd, 1987 to

        Ewing Lusk and Ross Overbeek, chairmen
        CADE-9
        Mathematics and Computer Science Division
        Argonne National Laboratory
        9700 South Cass Avenue
        Argonne, IL 60439

Schedule:

        November 23, 1987:  papers due
        January 25, 1988:  notification of authors
        February 21, 1988:  final manuscripts due

Questions should  be  directed  to  E.  L.  Lusk  (lusk@anl-
mcs.arpa,    phone    312-972-7852)    or    Ross   Overbeek
(overbeek@anl-mcs.arpa, phone 312-972-7856)

------------------------------

Date: 5 November 1987, 17:09:31 EST
From: Bruce Shriver <SHRIVER@ibm.com>
Subject: Conference - HICSS-22 System Sciences


      HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES

      HICSS-22 SOFTWARE TRACK INTENT TO PARTICIPATE FORM

             Twenty-Second Annual HICSS Conference
                     Jan. 3-6, 1989, Hawaii

 GENERAL INFORMATION
 HICSS provides  a forum  for the  interchange of  ideas, re-
 search  results,  development activities,  and  applications
 among  academicians and  practitioners  in the  information,
 computing, and  system sciences.  HICSS is  sponsored by the
 University of Hawaii  in cooperation with the  ACM, the IEEE
 Computer Society, and the Pacific Research Institute for In-
 formation  Systems and  Management (PRIISM).   HICSS-22 will
 consist of  tutorials, open  forums, task forces,  a distin-
 guished lecturer  series, and  the presentation  of accepted
 manuscripts which emphasize  research and development activ-
 ities in software technology, architecture, decision support
 and knowledge-based  systems, emerging technologies  and ad-
 vanced applications.  The best  papers, selected by the pro-
 gram committee in each of these areas, are given an award at
 the meeting.  There is a high degree of interaction and dis-
 cussion among the conference  participants as the meeting is
 conducted in a workshop-like setting.

 INSTRUCTIONS FOR SUBMITTING PAPERS
 Manuscripts should be 22-26 typewritten, double-spaced pages
 in length.  Please do not  send submissions that are signif-
 icantly shorter  or longer than  this. Papers must  not have
 been previously  presented or published, nor  currently sub-
 mitted for journal publication.  Each manuscript will be put
 through a  rigorous  refereeing process.  Manuscripts should
 have a title page that includes the title of the paper, full
 name of its author(s), affiliation(s), complete physical and
 electronic address(es), telephone  number(s) and a  300-word
 abstract of the paper.

 DEADLINES FOR AUTHORS
 o   A 300-word abstract is due by March 1, 1988
 o   Feedback to author concerning abstract by March 31, 1988
 o   Six copies of the manuscript are due by June 6, 1988.
 o   Notification of accepted papers by September 1, 1988.
 o   Accepted manuscripts,  camera-ready, are due  by October
     3, 1988.

 DEADLINES FOR MINI-TRACK, SESSION, AND TASK-FORCE COORDINATORS
 If you  would like to  coordinate a mini-track,  session, or
 task force, you  must submit for consideration a  3 page ab-
 stract in  which you describe  the topic you  are proposing,
 its timeliness  and importance, and its  treatment in recent
 conferences and workshops before December 15, 1987.

 PLEASE COMPLETE THE FOLLOWING FORM AND RETURN IT TO:
 Bruce D. Shriver
 HICSS-22 Conference Co-Chairman
   and Software Technology Track Coordinator
 IBM T. J. Watson Research Center
 P.O. Box 704
 Yorktown Heights, NY 10598
 (914) 789-7626
 CSnet: shriver@ibm.com
 Bitnet: shriver@yktvmh

 Name      ______________________________________________________
 Address:  ______________________________________________________
 City:     ______________________________________________________
 Phone No. ______________________________________________________
 Electronic Mail Address: _______________________________________

 I would like to coordinate a mini-track or session in:
       I would like to coordinate a task-force in:
           I will submit a paper in:
                I will referee papers in:

 ___  ___  ___ ___  Algorithms, Their Analysis and Pragmatics
 ___  ___  ___ ___  Alternative Language and Programming Paradigms
 ___  ___  ___ ___  Applying AI Technology to Software Engineering
 ___  ___  ___ ___  Communication & Protocol Software Issues
 ___  ___  ___ ___  Database Formalisms, Software and Systems
 ___  ___  ___ ___  Designing & Prototyping Complex Systems
 ___  ___  ___ ___  Distributed Software Systems
 ___  ___  ___ ___  Electronic Publishing & Authoring Systems
 ___  ___  ___ ___  Language Design & Language Implementation Technology
 ___  ___  ___ ___  Models of Program and System Behavior
 ___  ___  ___ ___  Programming Supercomputers & Massively Parallel Systems
 ___  ___  ___ ___  Reusability in Design & Implementation
 ___  ___  ___ ___  Software Design Tools/Techniques/Environments
 ___  ___  ___ ___  Software Related Social and Legal Issues
 ___  ___  ___ ___  Testing, Verification, & Validation of Software
 ___  ___  ___ ___  User Interfaces
 ___  ___  ___ ___  Workstation Operating Systems and Environments
 ___  ___  ___ ___  Other ______________________________

------------------------------

End of AIList Digest
********************
 8-Nov-87 23:59:16-PST,22112;000000000000
Mail-From: LAWS created at  8-Nov-87 23:25:12
Date: Sun  8 Nov 1987 23:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #262 - Neuromorphics, Speech Recognition, Goals
To: AIList@SRI.COM


AIList Digest             Monday, 9 Nov 1987      Volume 5 : Issue 262

Today's Topics:
  Queries - Michael O. Rabin & Blackboard Sources & AI Programming Texts,
  Neuromorphic Systems - Shift-Invariant Neural Nets for Speech Recognition,
  Msc. - Indexing Schemes,
  Applications - Speech Recognition,
  Comments - Goal of AI & Humanist, Physicist, and Symbolic Models of the Mind

----------------------------------------------------------------------

Date: Thu, 5 Nov 87 08:56 EST
From: Araman@BCO-MULTICS.ARPA
Subject: Michael O. Rabin - location

One of my friends sent me this message.  If anyone knows Mr.  Rabin, or
if Mr.  Rabin is reading this message, could you please send a response
to

Bensoussan -at BCO-Multics.ARPA

thanks
 #1 (14 lines in body):
 Date:     Wednesday, 4 November 1987 10:03 est
 From:     Bensoussan
 Subject:  Michael O. Rabin
 To:       Araman

Does anyone know Michael O.  Rabin's address?  An AI award is waiting
for him!

A friend of mine, Monica Pavel, asked me to find him.  My friend teaches
a class on Pattern Recognition at Paris University, and she gave several
classes and presentations in Japan.  The Japanese government decided to
make an AI award available and asked her to select the person who
should receive it.  Since she was impressed by one of Rabin's
publications, she selected him to receive the award... that is, if she
can find him.

Can anyone in the AI community help locate him?

------------------------------

Date: 6 Nov 87 23:46:39 GMT
From: teknowledge-vaxc!jlevy@beaver.cs.washington.edu  (sleeze hack)
Subject: Shopping list of sources wanted

I'm looking for the following:

1. Sample black board systems
        Ideally, small black board systems written using a black board
        tool of some kind, but no examples refused!  I'd like these to
        use as test cases for various black board work I do.  The only
        good examples I've seen are the two "AGE Example Series" by
        Nii & Co. at Stanford's HPP.

2. A frame system in C (or maybe PASCAL)
        Something like a C translation of the PFL code published in AI
        EXPERT, Dec. 1986 by Finin.

3. A yacc grammar for English or any subset of English
        If someone has yaccized Tomita's "Efficient Parsing for Natural
        Language" that would be ideal.

These are in order of importance.  I might be willing to pay for the
sample black board systems.  I'm posting this to comp.sources.wanted and
comp.ai because I think it belongs in both, and there is minimal overlap
in readership between the two.  If I'm wrong, sorry.

Thanks in advance.

Name:         Joshua Levy  (415) 424-0500x357
Disclaimer:   Teknowledge can plausibly deny everything.
Pithy Quote:  "Give me action, not words."
jlevy@teknowledge-vaxc.arpa or {uunet|sun|ucbvax}!jlevy%teknowledge-vaxc.arpa

------------------------------

Date: 6 Nov 87 18:01:05 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: AI Programming texts?

I am teaching an AI Programming course at Johns Hopkins this coming
semester, and was wondering if there were any suggestions for texts
from people that have taught/taken a similar course.  The course
will be using Common LISP applied to AI Programming problems.  The
students have an Intro AI course as a prereq, and have only mild
exposure to LISP (Franz) at the end of that course.  Both the
AI Programming course and the Intro are supposed to be graduate
level, but would probably be undergrad level in the day school.

My thoughts so far were to use the second edition of Charniak,
Riesbeck, et al.'s "Artificial Intelligence Programming", along with
"Common LISPCraft" (Wilensky).  Steele (CLtL) will be included
as an optional reference.

Any alternate suggestions?  Send E-mail, and if there is a consensus,
I would be glad to post it to the net.

Thanks!
                                - Marty Hall
                                  hall@hopkins-eecs-bravo.arpa

------------------------------

Date: Fri, 30 Oct 87 20:31:32+0900
From: kddlab!atr-la.atr.junet!waibel@uunet.UU.NET (Alex Waibel)
Subject: Shift-Invariant Neural Nets for Speech Recognition

A few weeks ago there was a discussion on AI-list about connectionist
(neural) networks being afflicted by an inability to handle shifted patterns.
Indeed, shift-invariance is of critical importance to applications such as
speech recognition.  Without it a speech recognition system has to rely
on precise segmentation, and in practice reliable, error-free segmentation
cannot be achieved.  For this reason, methods such as dynamic time warping
and now Hidden Markov Models have been very successful and have achieved
high recognition performance.  Standard neural nets have done well in speech
so far, but due to this lack of shift-invariance (as discussed on AI-list)
a number of these nets have been limping along in comparison to these other
techniques.
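
As a rough aside for readers unfamiliar with the time-delay idea (a toy
sketch, not the ATR network; the weights and input are made up): because the
same window weights are applied at every time step, shifting the input in
time merely shifts the resulting feature trace, and evidence accumulated over
time comes out the same.

def time_delay_unit(frames, weights):
    """Apply one set of window weights at every position (a 1-D convolution)."""
    n = len(weights)
    return [sum(w * f for w, f in zip(weights, frames[t:t + n]))
            for t in range(len(frames) - n + 1)]

weights = [0.5, -1.0, 0.5]               # one unit's illustrative weights
pattern = [0.0, 1.0, 3.0, 1.0, 0.0]      # a short acoustic "event"

early = pattern + [0.0] * 4              # the event near the start
late  = [0.0] * 4 + pattern              # the same event, shifted later

for frames in (early, late):
    trace = time_delay_unit(frames, weights)
    print(sum(t for t in trace if t > 0))   # prints 1.5 for both inputs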

Recently, we have implemented a time-delay neural network (TDNN) here at
ATR, Japan, and demonstrated that it is shift-invariant.  We have applied
it to speech and compared it to the best of our Hidden Markov Models.  The
results show that its error rate is four times lower than that of the best
of our Hidden Markov Models.
The abstract of our report follows:

              Phoneme Recognition  Using Time-Delay Neural Networks

              A. Waibel, T. Hanazawa, G. Hinton^, K. Shikano, K.Lang*
                 ATR Interpreting Telephony Research Laboratories

                                Abstract

        In this paper we present a Time Delay Neural Network (TDNN) approach
        to phoneme recognition which is characterized by two important
        properties: 1.) Using a 3 layer arrangement of simple computing
        units, a hierarchy can be constructed that allows for the formation
        of arbitrary nonlinear decision surfaces.  The TDNN learns these
        decision surfaces automatically using error backpropagation.
        2.) The time-delay arrangement enables the network to discover
        acoustic-phonetic features and the temporal relationships between
        them independent of position in time and hence not blurred by
        temporal shifts in the input.

        As a recognition task, the speaker-dependent recognition of the
        phonemes "B", "D", and "G" in varying phonetic contexts was chosen.
        For comparison, several discrete Hidden Markov Models (HMM) were
        trained to perform the same task.  Performance evaluation over 1946
        testing tokens from three speakers showed that the TDNN achieves a
        recognition rate of 98.5 % correct while the rate obtained by the
        best of our HMMs was only 93.7 %.  Closer inspection reveals that
        the network "invented" well-known acoustic-phonetic features (e.g.,
        F2-rise, F2-fall, vowel-onset) as useful abstractions.  It also
        developed alternate internal representations to link different
        acoustic realizations to the same concept.

^ University of Toronto
* Carnegie-Mellon University

For copies please write or contact:
Dr. Alex Waibel
ATR Interpreting Telephony Research Laboratories
Twin 21 MID Tower, 2-1-61 Shiromi, Higashi-ku
Osaka, 540, Japan
phone: +81-6-949-1830
Please send Email to my net-address at Carnegie-Mellon University:
                                                  ahw@CAD.CS.CMU.EDU

------------------------------

Date: 5 Nov 87 17:11:52 GMT
From: dbrauer@humu.nosc.mil (David L. Brauer)
Reply-to: dbrauer@humu.nosc.mil (David C. Brauer)
Subject: Indexing Schemes


In regard to the recent request for keyword/indexing schemes for AI
literature, look up the April 1985 issue of Applied Artificial Intelligence
Reporter.  It contains an article describing the AI classification scheme
used by Scientific DataLink when compiling their collections of research
reports.

------------------------------

Date: Fri, 6 Nov 87 10:29:44 EST
From: hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: Practical effects of AI


In AIList V5 #255 Bruce Kirby asked what practical effects AI will have
in the next 10 years, and how that will affect society, business, and
government.

One practical effect that I expect to see is the integration of logic
programming with database technology, producing new deductive databases
that will replace traditional databases.  (In my vision, in 15 years
no one will want to buy a database management system that does not support
a prolog-like data definition and query language.)
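
As a rough sketch of what such a deductive database adds over a conventional
one (the relations and rule below are illustrative, not any product's actual
data language): rules derive new tuples from stored ones, so a recursive
query such as "all ancestors" is just two rules evaluated to a fixpoint,
something the relational query languages of the day could not express.

# ancestor(X,Y) <- parent(X,Y).
# ancestor(X,Z) <- parent(X,Y), ancestor(Y,Z).
parent = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}

def ancestors(parent_facts):
    """Naive bottom-up evaluation of the two rules above to a fixpoint."""
    anc = set(parent_facts)
    while True:
        derived = {(x, z) for (x, y) in parent_facts
                          for (y2, z) in anc if y == y2}
        if derived <= anc:                 # nothing new: fixpoint reached
            return anc
        anc |= derived

print(sorted(ancestors(parent)))
# [('ann', 'bob'), ('ann', 'cal'), ('ann', 'dee'),
#  ('bob', 'cal'), ('bob', 'dee'), ('cal', 'dee')]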

David D. H. D. Warren wrote a paper on this in the VLDB conference in 1981,
and the database research community is busy trying to work out the details
right now.  Of course, the closer this idea comes to a usable technology,
the less AIish it seems to many people.

I can speculate on how this will affect society, business, and government:
it will make many new applications of databases possible, for management,
manufacturing, planning, etc.  Right now, database technology is
very hard to use effectively for complex applications.  (Many application
projects are never successfully completed - they are crushed by the complexity
of getting them working right.  Ordinary programmers simply can't hack
these applications, and brilliant programmers don't want to.)

Deductive databases will be so much easier to create, maintain and use, that
computers will finally be able to fulfill their promise of making
complex organizations more manageable.  White collar productivity will
be improved beyond anyone's current expectations.

A negative side effect of this development (along with personal computers
and office automation) will be serious unemployment in the white collar
work force.  The large administrative and middle management work force
will shrink permanently, just as the large industrial work force has.

All of the above, of course, is simply an opinion, backed up by (hopefully)
common sense.

Carole Hafner
csnet: hafner@northeastern.edu

------------------------------

Date: 8 Nov 87 17:14:19 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!kfl@cs.rochester.edu  (Kai-Fu
      Lee)
Subject: Re: Practical effects of AI (speech)

In article <930001@hpfcmp.HP.COM>, gt@hpfcmp.HP.COM (George Tatge) writes:
> >
> >(1) Speaker-independent continuous speech is much farther from reality
> >    than some companies would have you think.  Currently, the best
> >    speech recognizer is IBM's Tangora, which makes about 6% errors
> >    on a 20,000 word vocabulary.  But the Tangora is for speaker-
> >    dependent, isolated-word, grammar-guided recognition in a benign
> >    environment. . . .
> >
> >Kai-Fu Lee
>
> Just curious what the definition of "best" is.  For example, I have seen
> 6% error rates and better on grammar specific, speaker dependent, continuous
> speech recognition.  I would guess that for some applications this is
> better than the "best" described above.
>

"Best" is not measured in terms of error rate alone.  More effort and
new technologies have gone into the IBM's system than any other system,
and I believe that it will do better than any other system on a comparable
task.  I guess this definition is subjective, but I think if you asked other
speech researchers, you will find that most people believe the same.

I know many commercial (and research) systems have lower error rates
than 6%.  But you have to remember that the IBM system works on a 20,000
word vocabulary, and their grammar is a very loose one, accepting
arbitrary sentences in office correspondence.  Their grammar has a
perplexity (number of choices at each decision point, roughly speaking)
of several hundred.  Nobody else has such a large vocabulary or such
a difficult grammar.
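
For concreteness, perplexity as it is usually defined is the geometric mean
of the inverse probabilities the model assigns to the words it must predict;
the figures below are made up for illustration, not IBM's measurements.

import math

def perplexity(word_probs):
    """word_probs: probability the model gave each successive correct word."""
    log_sum = sum(math.log(p) for p in word_probs)
    return math.exp(-log_sum / len(word_probs))

# A tight grammar that always leaves about 5 equally likely choices:
print(round(perplexity([1.0 / 5] * 8), 1))     # 5.0
# A loose dictation grammar averaging hundreds of choices per word:
print(round(perplexity([1.0 / 400] * 8), 1))   # 400.0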

IBM has experimented with tasks like the one you mentioned.  In 1978,
they tried a 1000-word task with a very tight grammar (perplexity = 5 ?),
the same task CMU used on Hearsay and Harpy.  They achieved 0.1% error
rate.

> George (floundering in superlative ambiguity) Tatge

Kai-Fu Lee

------------------------------

Date: 29 Oct 87 14:22:46 GMT
From: clyde!watmath!utgpu!utcsri!utegc!utai!murrayw@rutgers.edu 
      (Murray Watt)
Subject: Re: Goal of AI: where are we going? (the right way?)

In article <2072@cci632.UUCP> mdl@cci632.UUCP (Michael Liss) writes:
>I read an interesting article recently which had the title:
>"If AI = The Human Brain, Cars Should Have Legs"
>
>The author's premise was that most of our other machines that mimic human
>abilites do not do so through strict copying of our physical processes.
>
>What we have done, in the case of the automobile, is to make use of wheels and
>axles and the internal combustion engine to produce a transportation device
>which owes nothing to the study of human legs.
>
>In the case of AI, he states that artificial intelligence should not be
>assumed to be the equivalent of human intelligence and thus, the dissection of
>the human mind's functionality will not necessarily yield a solution to AI.
>
"THE USE AND MISUSE OF ANALOGIES"

Transportation (or movement) is not a property unique to human beings.
If one were to refine the goal better, the analogy flips sides.
If the goal is to design a device that can climb rocky hills it may
have something like legs. If the goal is to design a device that can
fly it may have something like wings. (Okay, so they're not the same type of
wings, but what about streamlining?)

AS I UNDERSTAND IT, one goal of AI is to design systems that perform well
in areas that the human brain performs well. Current computer systems can do
things (like add numbers) better than we can. I would not suggest creating
an A.I. system for generating telephone bills! However, don't tell me
that understanding the human brain doesn't tell me anything about natural
language!

The more analogies I see the less I like them. However, they seem handy to
convince the masses of completely false doctrines.

e.g. "Jesus accepted food and shelter from his friends, so sign over
      your paycheck to me." (I am waiting Michael) 8-)

                                   Murray Watt (murrayw@utai.toronto.edu)

The views of my colleagues do not necessarily reflect my opinions.

------------------------------

Date: Fri, 6 Nov 87 02:44:05 PST
From: larry@VLSI.JPL.NASA.GOV
Subject: Success/Future of AI

                 NATURAL ENTITIES AS PROTOTYPES

Much of the confusion about the nature of intelligence seems to
be the result of dealing with it at abstraction levels that are
too low.

At a low level of detail an aircraft is obviously drastically
different from a bird, leading to the conclusion that a study of
birds has no relevance to aeronautical science.  At a higher
level the relevance becomes obvious: air-flow over the chord of
birds' and aircrafts' wings produces lift in exactly the same
way.  Understanding this process was crucial to properly
designing the first aircrafts' wings.

Once the basic form+function was understood engineers could
produce artificial variations that surpassed those found in
nature--though with numerous trade-offs.  Construction and repair
of artificial wings, for instance, are much more labor- and
capital-intensive.

Understanding birds' wings helped in other ways.  Analytically
separating the lift and propulsion functions of wings allowed us
to create jet aircraft; combining them in creative ways gave us
rocket-ships (where propulsion IS lift) and helicopters.

                   THE NATURE OF INTELLIGENCE

The understanding of intelligence is less advanced than that of
flight, but some progress HAS been made.  The quotes from Robert
Frost illuminate the basic nature of intelligence: creation,
exploration, and manipulation within an entity of a model of the
Universe.  He labels this model and its parts "metaphor."  I
prefer "analog."

The mechanism that holds the analog we call memory.  Though low-
level details (HOW memory works) are important, it is much more
important to first understand WHAT memory does.  For instance,
there is a lot of evidence that there are several kinds of
memory, describable along several dimensions.  One dimension,
obviously, is time.

This has a number of consequences that have nothing to do with,
for instance, the fact that deci-second visual memory works via
interactions of photons with visual purple.  Eyes that used a
different storage mechanism but had the same black-box
characteristics (latency, bandwidth, communication protocol,
etc.) would present the same image to their owner.

One consequence of the time dimension of human memory is that
memory decays in certain ways.  Conventionally memory units that
do not forget are considered good, yet forgetting is as important
as retention.  Forgetting emphasizes the important by hiding the
unimportant; it supports generalization because essential
similarities are not obscured by inessential differences.

                MECHANICAL NATURE OF INTELLIGENCE

There have been other real advances in scientifically understand-
ing intelligence, but I believe the above is enough to convince
the convincable.  As to whether human intelligence is
mechanical--this depends on one's perception of machines.  When
the word is used as an insult it usually calls up last-century
paradigms: the steam engine and other rigid, simple machines.  I
prefer to think of the human hand, which can be soft and warm, or
the heart, which is a marvel of reliability and adaptability.

Scientific models of the mind can (and to be accurate, must) use
the more modern "warmware" paradigm rather than the idiotic hand-
calc simplicity of Behaviorism.  One example is my memory-mask
model of creativity (discussed here a year ago).

                      ART AND INTELLIGENCE

The previous comments have (I perhaps naively believe) a direct
relevance to the near-future of AI.  That can't be said of this
last section but I can't resist adding it.  Though professionally
a software engineer, I consider myself primarily an artist (in
fiction-writing and a visual medium).  This inside view and my
studies have convinced me over the years that art and cognition
are much closer than is widely recognized.

For one thing, art is as pervasive in human lives as air--though
this may not be obvious to those who think of haute culture
when they see/hear the word.  Think of all the people in this
country who take a boombox/Walkman/stereo with them wherever they
stroll/jog/drive.  True, the sound-maker often satisfies because
it gives an illusion of companionship, but it is more often
simply hedonically satisfying--though their "music" may sound
like audio-ordure to others.  Think of all the doodling people
do, the small artworks they make (pastries, knitting, sand-
castles, Christmas trees, candy-striped Camaros), the photos
and advertising posters they tape to walls.

Art enhances our survival and evolution as a species, partly
because it is a source of pleasure that gives us another reason
for living.  It also has intellectual elements.  Poetic rules are
mnemonic enhancers, as all know who survived high-school English,
though nowadays these rules most often are used in prose and so
reflexively they aren't recognized even by their users.

Artistic rules are also cognitive enhancers.  One way they do
this is with a careful balance of predictability and surprise;
regularity decreases the amount of attention needed to remember
and process data, discontinuities shock us enough to keep us
alert.  Breaks can also     focus attention     where an artist
desires.
                       Larry @ jpl-vlsi

------------------------------

Date: Fri 6 Nov 87 12:47:35-EST
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Humanist, Physicist, and Symbolic Models of the Mind


Pat Hayes puts forth the view that the symbolic computational
model of the mind can bridge the gap between science and a
humanistic outlook.  I see a FURTHER exciting bridge being
built that is actually more pervasive than just models of the
mind.  Should the physicist's model of the mind be any
different from what one does when building models that use
symbolic representations?  It is becoming clear that the answer
is "No."  There is a profound change happening in the natural
sciences; we are accepting non-linear phenomena for what they
are.  Amazing behavior occurs in non-linear dynamical systems --
behavior that is changing the view of the world as simple rules
with followable outcomes.  We now know that we can have simple
rules with amazingly complex behavior.  Deterministic randomness
sounds contradictory at first, but it is a concept that
non-linear phenomena are forcing us to accept.  The manifold
emergent phenomena in non-linear systems, including
self-organization, are humbling.  They are the setting in which
we can see emergent symbolic representations.  This should
not be too surprising, since we build computers to host
computational models of the mind using symbolic representations
with a very restrictive class of non-linear switching circuits.
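
To make the "simple rules, amazingly complex behavior" point concrete, here
is a standard textbook illustration (not taken from the posting): the
logistic map, a one-line deterministic rule whose long-run behavior moves
from a fixed point to oscillation to chaos as a single parameter r is varied.

def orbit(r, x=0.2, settle=500, keep=6):
    """Iterate x <- r*x*(1-x), discard a transient, return the next few values."""
    for _ in range(settle):
        x = r * x * (1.0 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        values.append(round(x, 4))
    return values

print(orbit(2.8))   # settles to a single fixed point
print(orbit(3.2))   # settles into a period-2 oscillation
print(orbit(3.9))   # never repeats: deterministic chaos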


Albert Boulanger
BBN Labs

------------------------------

End of AIList Digest
********************
 9-Nov-87 00:26:32-PST,15877;000000000000
Mail-From: LAWS created at  8-Nov-87 23:59:05
Date: Sun  8 Nov 1987 23:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #263 - Methodology, FORTRAN
To: AIList@SRI.COM


AIList Digest             Monday, 9 Nov 1987      Volume 5 : Issue 263

Today's Topics:
  Comments - NP Completeness & Research Methodology & AI Languages

----------------------------------------------------------------------

Date: 5 Nov 87 14:31:12 GMT
From: eitan%WISDOM.BITNET@wiscvm.wisc.edu (Eitan Shterenbaum)
Reply-to: eitan%H@wiscvm.arpa (Eitan Shterenbaum)
Subject: Re: Success of AI


In article <> honavar@speedy.wisc.edu (A Buggy AI Program) writes:
>
>Discovering that a problem is NP-complete is usually just the
>beginning of the work on the problem. The knowledge that a problem is
>NP-complete provides valuable information on the lines of attack that
>have the greatest potential for success. We can concentrate on algorithms
>that are not guaranteed to run in polynomial time but do so most
>of the time or those that give approximate solutions in polynomial time.
>After all, the human brain does come up with approximate (reasonably good)
>solutions to a lot of the perceptual tasks although the solution may not
>always be the best possible. Knowing that a problem is NP-complete only
>tells us that the chances of finding a polynomial time solution are minimal
>(unless P=NP).
>

You are right and so am I:
        a) There are no known polynomial-time algorithms that can
           solve NP-complete problems.
        b) There are approximate and probabilistic *partial* solutions for NP
           problems (a minimal sketch of one such approximation appears just below).
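
A minimal sketch of point (b): minimum vertex cover is NP-complete, yet the
simple polynomial-time procedure of taking both endpoints of any edge not yet
covered always returns a cover at most twice the optimal size; the graph
below is made up for illustration.

# Greedy 2-approximation for minimum vertex cover (a standard textbook
# method, shown only as an example of an approximate polynomial-time
# attack on an NP-complete problem).
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # take both endpoints of this uncovered edge
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(sorted(approx_vertex_cover(edges)))   # [1, 2, 3, 4]; optimum here is {1, 4}
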
As to the claim "the brain does it so why shouldn't the computer" -
it seems to me that you forget that the brain is built slightly differently
from a von Neumann machine ... It's a distributed environment lacking boolean
algebra.  I can hardly believe that, even with all the partial solutions for
all the complicated sets of NP problems that emulating a brain brings up, one
would be able to present a working program.  If you were able to emulate a
mouse's brain you'd become a legend in your lifetime!
Anyway, no one can emulate a system which has no specification;
if the neuro-biologists would present one, then you'd have something to start
with.

And last - computers aren't meta-capable machines; they have constraints.
           Not every problem has an answer, and not every answer makes sense;
           NP problems are the best example.

                        Eitan Shterenbaum

------------------------------

Date: Tue, 03 Nov 87 07:57:49 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Reply-to: smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
Subject: Re: Gilding the Lemon

In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>
>Progress also comes from applications -- very seldom from theory.

A very good point, indeed:  Bill Swartout and I were recently discussing
the issue of the respective contributions of engineering and science.
There is a "classical" view that science is responsible for those
fundamental principles without which engineering could "do its thing."
However, whence come those principles?  If we look at history, we see
that, in most fields, engineers are "doing their thing" long before
science has established those principles.  Of course things don't always
go as smoothly as one would like.  This pre-scientific stage of engineering
often involves sometimes-it-works-sometimes-it-doesn't experiences;  but
the engineering practices are still useful.  Often a major contribution
of the discovery of the underlying scientific principles is a better
understanding of WHEN "it doesn't work" and WHY that is so.  Then
engineering takes over again to determine what is to be done about
those situations in which things don't work.  At the risk of being
called on too broad a generality, I would like to posit that science
is concerned with the explanation of observed phenomena, while engineering
is concerned with achieving phenomena with certain desired properties.
From this point of view, engineering provides the very substance from
which scientific thought feeds.

I fear that what is lacking in the AI community is a respect for the
distinction between these two approaches.  A student is likely to get
a taste of both points of view in his education, but that does not
necessarily mean that he will develop an appreciation for the merits
of each or the ways in which they relate to each other.  As a consequence,
he may very well become very quickly channeled along a narrow path
involving the synthesis of some new artifact.  If he has any form
of success, then he assumes that all his thesis requires is that he
write up his results.

I hope there is some agreement that theses which arise from this process
are often "underwhelming" (to say the least).  There are usually rather
hefty tomes which devote significant page space to the twists and turns
in the path that leads to the student's achievement.  There is also usually
a rather heavy chapter which surveys the literature, so that the student
can demonstrate the front along which his work has advanced.  However,
such retrospective views tend to concentrate more on the artifacts of
the past than on the principles behind those artifacts.

Is it too much to ask that doctoral research in AI combine the elements
of both engineering and science?  I have nothing against that intensely
focused activity which leads up to a new artifact.  I just worry that
students tend to think the work is done once the artifact is achieved.
However, this is the completion of an engineering phase.  Frustrating
as it may sound, I do not think the doctoral student is done yet.  He
should now embark upon some fundamental portion of a scientific phase.
Now that he has something that works, he should investigate WHY it
works;  and THIS is where the literature search should have its true
value.  Given a set of hypothesized principles regarding the behavior
of his own artifact, how applicable are those principles to those
artifacts which have gone before?  Once such an investigation has been
pursued, the student can write a thesis which provides a balanced diet
of both engineering and science.

------------------------------

Date: 3 Nov 87 18:31:13 GMT
From: gary%roland@sdcsvax.ucsd.edu (Gary Cottrell)
Reply-to: roland!gary@sdcsvax.ucsd.edu (Gary Cottrell)
Subject: Re: Gilding the Lemon


Note that the article Tom was referring to (David Chapman's "Planning
for Conjunctive Goals", AIJ 32 No. 3) is based on a MASTER's Thesis:
Even if Ken objects to PhD thesi being rational reconstructions, he may
be less inclined to object to Master's thesi in this vein. Of course,
this is probably equivalent to a PhD thesis at n-k other places, where
k is some small integer.

gary cottrell
cse dept
ucsd

------------------------------

Date: 5 Nov 87 17:13:39 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: Gilding the Lemon


In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>......, but there has been more payoff from GPSS and SIMSCRIPT (and
>SPICE and other simulation systems)

e.g.?

>Most Ph.D. projects have the same flavor.  A student ...
>... publishes the interesting behaviors he was able to generate

e.g.?

> ... we must build hand-crank phonographs before inventing information
>theory and we must study the properties of atoms before debating
>quarks and strings.

Inadmissible until it can be established that such relationships exist
in the study of intelligence - there may be only information theory
and quarks, in which case you have to head right for them now.
Anything else is liable to be a social construct of limited generality.
Most work today in fact suggests that EVERYTHING is going to be a social
construct, even the quarks. Analogies with the physical world do not
necessarily hold for the mental world, anymore than does animism for the
physical world.

>An advisor who advocates duplicating prior work is cutting his
>students' chances of fame and fortune from the discovery of the
>one true path.  ....  Why should the student
>work (be they theoretical or practical problems) when he could
>attach his name to an entirely new approach?

The aim of PhD studies is to advance knowledge, not individuals.
This amounts to gross self-indulgence where I come from. I recognise
that most people in AI come from somewhere else though :-)

Perhaps there are no new approaches, perhaps the set of all imaginable
metaphysics, epistemology and ontology is closed. In the History of
Ideas, one rarely sees anything with no similar antecedents. More
problematic for AI, the real shifts of thinkers like Machiavelli, Bacon,
Hume, Marx and Freud did not involve PhD studies centred on computer
programming. I really do think that the *ABSENCE* of a computer is more
likely to produce new approaches, as the computational paradigm
severely limits what you can do, just as the experimental paradigm of
psychology puts many areas of study beyond the pale.
--
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
                UUCP:   ..{backbone}!mcvax!ukc!hwcs!hci!gilbert

------------------------------

Date: Fri, 6 Nov 87 15:32:30 WET
From: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Reply-to: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Subject: FORTRAN


After the recent discussion on AIList I feel compelled to admit that I wrote
the entry on FORTRAN for the Catalogue of AI techniques, and that it was
originally intended as a joke.

However, after subsequent exposure to Common Lisp, I'm not so sure....

Martin Merry
HP Labs Bristol Research Centre

------------------------------

Date: 05 Nov 87 12:03:55 EST (Thu)
From: sas@bfly-vax.bbn.com
Subject: FORTRAN for list processing


Check out Douglas K. Smith's article: An Introduction to the
List-Processing Language SLIP (anthologized in Rosen's 1960's classic
Programming Systems and Languages).

        SLIP is a list processing language system distinguished by the
        symmetry of its lists; each element is linked to both its
        predecessor and its successor.  It differs from most list
        processing languages in that it does not represent an
        independent language, but is intended to be embedded in a
        general purpose [sic] language such as FORTRAN.  Thus the
        flexibility of the latter is combined with the specific
        facility for manipulating lists.  This paper will describe
        SLIP as embedded in FORTRAN IV.

        SLIP was developed by Professor Joseph Weizenbaum of MIT.
        His original paper [1], published in 1963 while he was at
        General Electric, presents a complete documentation of the
        system, including a FORTRAN listing and a statement of the
        underlying philosophy.  The system has been implemented at
        several installations, finding application in the symbolic
        manipulation of algebraic expressions [2], [3], [4], and in
        other areas [5].
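
As a rough sketch, in a modern language, of the symmetric-list idea described
above -- each cell linked to both its predecessor and its successor -- and not
a rendering of SLIP's actual FORTRAN representation or calling conventions:

class Cell:
    def __init__(self, value):
        self.value = value
        self.prev = None            # link to predecessor
        self.next = None            # link to successor

class SymmetricList:
    def __init__(self):
        self.head = self.tail = None

    def append(self, value):
        cell = Cell(value)
        if self.tail is None:
            self.head = self.tail = cell
        else:
            cell.prev = self.tail
            self.tail.next = cell
            self.tail = cell

    def forward(self):              # traverse via successor links
        c = self.head
        while c:
            yield c.value
            c = c.next

    def backward(self):             # traverse via predecessor links
        c = self.tail
        while c:
            yield c.value
            c = c.prev

s = SymmetricList()
for ch in "SLIP":
    s.append(ch)
print(list(s.forward()))    # ['S', 'L', 'I', 'P']
print(list(s.backward()))   # ['P', 'I', 'L', 'S']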

[1] Weizenbaum, J.: Symmetric List Processor, Comm. ACM, p 524,
        Sept 1963

[5] Weizenbaum, J.: ELIZA - A Computer Program for the Study of Natural
        Language Communication Between Man and Machine, Comm. ACM,
        p 36, Jan 1966

Gee - I've even heard of ELIZA!

                                        Seth

------------------------------

Date: 5 Nov 87 09:46:20 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: In Defense of FORTRAN

In any discussion where C and Fortran are defended as languages
for doing AI, if only they provided the constructs that Lisp and
Prolog already provide, I am reminded of the old Yiddish saying
(here poorly transliterated) ``Wenn mein Bubba zul huben
Bietzem, vol tzi gevain mein Zayda.''  Or, loosely, ``IF is a
big word.''

   Date: Mon 2 Nov 87 14:29:09-PST
   From: Ken Laws <LAWS@IU.AI.SRI.COM>

        * * *

   The problem with AI languages is neither their capability nor
   their efficiency, but the way that they limit thought. * * *

Exactly so.  Using Fortran or any language where you have to
spend mental energy thinking about the issues that Lisp and
Prolog already handle ``cuts your chances of fame and fortune
from the discovery of the one true path,'' to quote an earlier
contributor.  Fortran's a fine language for writing programs
where the problem is well understood, but it's just a lousy
language for tackling new problems in.  This doesn't just go for
academic research, either; same goes for doing applications that
have never been tackled before.

------------------------------

Date: Thu 5 Nov 87 08:55:59-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Re: In Defense of FORTRAN

Good points.

I happen to program in C and have built a software environment that
does provide many of the capabilities of LISP.  It has taken me many
years, and I would not recommend that others follow this path.

My real point, though, was that LISP and PROLOG are also at too low
a level.  The Lisp Machine environment, with its 10,000 predefined
functions, is a big factor in the productivity of LISP hackers.  If
similar (or much better!) libraries were available to FORTRAN hackers,
similar productivity would be observed.  LISP does permit many clever
programming techniques, as documented in Abelson and Sussman's book,
but a great deal can be done with the simple conditionals, loops,
and other control structures of a language like FORTRAN.

The AI community is spending too much time reprogramming graph search
algorithms, connected-component extraction, cluster analysis, and
hundreds of other solved problems.  Automated programming isn't coming
to our rescue.  As Fred Brooks has pointed out, algorithm development
is one of the most intricate, convoluted activities ever devised;
software development tools are not going to make the complexities
vanish.  New parallel architectures will tempt us toward brute-force
solutions, ultimately leaving us without solutions.  It's time we
recognize that sharable, documented subroutine libraries are essential
if AI programs are ever to be developed for real-world problems.

Such subroutines, which I envision in an object-oriented style, should
be the language of AI.  Learned papers would discuss improvements to the
primitive routines or sophisticated ways of coordinating them, seldom
both together -- just as an earlier generation separated A* and
garbage collection.  This would make it easier for others to repeat
important work on other computer systems, aiding scientific verification
and tech transfer as well as facilitating creativity.
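
As a hedged illustration of the kind of reusable, documented primitive being
argued for (a sketch, not an actual library): a generic best-first search
that any project can call with its own successor and heuristic functions
instead of re-coding the search loop; the toy graph at the end is made up.

import heapq

def best_first_search(start, is_goal, successors, heuristic):
    """Generic A*-style search.
       successors(state) -> iterable of (next_state, step_cost);
       heuristic(state)  -> optimistic estimate of remaining cost.
       Returns a list of states from start to a goal, or None."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if best_cost.get(state, float("inf")) <= cost:
            continue                       # already expanded more cheaply
        best_cost[state] = cost
        for nxt, step in successors(state):
            g = cost + step
            heapq.heappush(frontier, (g + heuristic(nxt), g, nxt, path + [nxt]))
    return None

# Illustrative use: shortest route on a toy graph, with a zero heuristic.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(best_first_search("a", lambda s: s == "c",
                        lambda s: graph[s], lambda s: 0))   # ['a', 'b', 'c']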

                                        -- Ken Laws


[This applies particularly in my own field of computer vision, where many
graduate students and engineers spend years reinventing I/O code, display
drivers, and simple image transformations.  Trivial tasks such as mapping
buffers into display windows cease to be trivial if attempted with any
pretense to generality.  Code is not transportable and even images are
seldom shared.  The situation may not be so bad in mainstream AI research,
although I see evidence that it is.]

------------------------------

End of AIList Digest
********************
 9-Nov-87 00:55:34-PST,21215;000000000000
Mail-From: LAWS created at  9-Nov-87 00:08:00
Date: Mon  9 Nov 1987 00:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #264 - Bibliography
To: AIList@SRI.COM


AIList Digest             Monday, 9 Nov 1987      Volume 5 : Issue 264

Today's Topics:
  Bibliography - Leff File a62C

----------------------------------------------------------------------

Date: Thu, 5 Nov 1987 17:33 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Bibliography - Leff File a62C

%A E. Hudlicka
%A V. Lesser
%T Modeling and Diagnosing Problem-Solving System Behavior
%J MAG144
%P 407-419

%A J. L. Kolodner
%A R. M. Kolodner
%T Using Experience in Clinical Problem Solving: Introduction and Framework
%J MAG144
%P 420-431
%K AA01

%A B. Kuipers
%T Qualitative Simulation as Causal Explanation
%J MAG144
%P 432-444

%A J. R. Josephson
%A B. Chandrasekaran
%A J. R. Smith
%A M. C. Tanner
%J MAG144
%P 445-454

%A J. G. Witlink
%T A Deficiency of Natural Deduction
%J Information Processing Letters
%V 25
%N 4
%D JUN 17 1987
%P 233-234

%A D. G. Kouri
%T The Design and Use of a Prolog Trace Generator for CSP
%J Software Practice and Experience
%V 17
%N 7
%D JUL 1987
%P 423-438

%A M. Oyamaguchi
%T The Church-Rosser Property for Ground Term-Rewriting Systems is Decidable
%J Theoretical Computer Science
%V 49
%N 1
%D 1987
%P 43-80

%A J. P. Delgrande
%T A Formal Approach to Learning From Examples
%J MAG145
%P 123-142

%A T. R. Gruber
%A P. R. Cohen
%T Design for Acquisition: Principles of Knowledge-System Design to Facilitate
Knowledge Acquisition
%J MAG145
%P 143-160

%A P. E. Johnson
%A I. Zaulkernan
%A S. Garbert
%T Specification of Expertise
%J MAG145
%P 161-182

%A C. M. Kitto
%A J. H. Boose
%T Heuristics for Expertise Transfer: An Implementation of a Dialog
Manager for Knowledge Acquisition
%J MAG145
%P 183-202

%A J. Kornell
%T Formal Thought and Narrative Thought in Knowledge Acquisition
%J MAG145
%P 203-212

%A E. A. Moore
%A A. M. Agogino
%T Inform: An Architecture for Expert-Directed Knowledge Acquisition
%J MAG145
%P 213-230

%A T. Bylander
%A B. Chandrasekaran
%T Generic Tasks for Knowledge-Based Reasoning: the "Right" Level of
Abstraction for Knowledge Acquisition
%J MAG145
%P 231-244
%K AI01

%A M. LaFrance
%T The Knowledge Acquisition Grid: A Method for Training Knowledge Engineers
%J MAG145
%P 245-256
%K AI01

%A D. D. Woods
%A E. Hollnagel
%T Mapping Cognitive Demands in Complex Problem-Solving Worlds
%J MAG145
%P 257

%A W. Bruce Croft
%T Approaches to Intelligent Information Retrieval
%J MAG149
%P 249-254
%K AA14

%A Paul R. Cohen
%A Rick Kjeldsen
%T Information Retrieval by Constrained Spreading Activation in Semantic
Networks
%J MAG149
%P 255-268
%K AI12 AA14

%A Lisa F. Rau
%T Knowledge Organization and Access in a Conceptual Information System
%J MAG149
%P 269-284
%K AI16 AA14

%A Y. Chiaramella
%A B. Defude
%T A Prototype of an Intelligent System for Information Retrieval: IOTA
%J MAG149
%P 285-304
%K AA14

%A Giorgia Brajnik
%A Giovanni Guida
%A Carlo Tasso
%T User Modeling in Intelligent Information Retrieval
%J MAG149
%P 305-320
%K AI08 AA15 AA14

%A Robert F. Simmons
%T A Text Knowledge Base from the AI Handbook
%J MAG149
%P 321-340
%K AA14

%A Edward A. Fox
%T Developments of the CODER System: A Testbed for Artificial Intelligence
Methods in Information Retrieval
%J MAG149
%P 341-366
%K AA14 AI02

%A H. M. Brooks
%T Expert Systems and Intelligent Information Retrieval
%J MAG149
%P 367-382
%K AA14 AI01  AT08

%A D. A. Pospelov
%T Artificial Intellect - A New Phase of Development
%J Vestnik Akademii Nauk SSSR
%N 4
%D 1987
%P 40-47
%K AI16
%X in Russian

%A J. Grobelny
%T The Fuzzy Approach to Facilities Layout Problems
%J Fuzzy Sets and Systems
%V 23
%N 2
%D AUG 1987
%P 175-190
%K O04 AA05

%A M. A. Gil
%A M. T. Lopez
%A J. M. A. Garrido
%T An Extensive-Form Analysis for Comparing Fuzzy Information Systems by
Means of the Worth and Quiteness of Information
%J Fuzzy Sets and Systems
%V 23
%N 2
%D AUG 1987
%P 239-256
%K O04



%A Christopher Hogger
%T Prolog and Software Engineering
%J Microprocessors and Microsystems
%V 11
%N 6
%D JUL-AUG 1987
%P 308-318
%K T02

%T Consistent Clustering - Analog of Physical Model for the Observation
Object in Fuzzy Language
%J Avtomatika
%N 3
%D MAY-JUN 1987
%P 89
%K O04 O06
%X Article in Russian, English Abstract Available

%A I. V. Blauberg
%A V. V. Klokov
%T Systems Studies and Organization of Knowledge
%J Cybernetics and Systems
%V 18
%N 3
%D 1987
%P 195-202
%K AI16

%A Avi Rushinek
%A Sara F. Rushinek
%T Interactive Diagnostic System for Insurance Software: An Expert
System Using Artificial Intelligence (ESAI)
%J Cybernetics and Systems
%V 18
%N 3
%D 1987
%P 203-220
%K AA06 AI01

%A A. Hoogewijs
%T Partial Predicate Logic in Computer Science
%J Acta Informatica
%V 24
%N 4
%D 1987
%P 381-394
%K AI10

%A D. Kapur
%A P. Narendran
%A H. Zhang
%T On Sufficient-Completeness and Related Properties of Term Rewriting
Systems
%J Acta Informatica
%V 24
%N 4
%D 1987
%P 395-416
%K AI14

%A Gerard Medioni
%A Yoshio Yasumoto
%T Corner Detection and Curve Representation Using Cubic B-Splines
%J MAG150
%P 267-278
%K AI06

%A R. S. Acharya
%A P. B. Heffernan
%A R. A. Robb
%A H. Wechsler
%T High Speed 3D Imaging of the Beating Heart Using Temporal Estimation
%J MAG150
%P 279-290
%K AI06 AA01

%A Glenn L. Cash
%A Mehdi Hatamian
%T Optical Character Recognition by the Method of Moments
%J MAG150
%P 291-310
%K AI06

%A Andrew B. Watson
%T The Cortex Transform: Rapid Computation of Simulated Neural Images
%J MAG150
%P 311-327
%K AI06 AI08

%A Ken-ichi Kanatani
%T Camera Rotation Invariance of Image Characteristics
%J MAG150
%P 328-354
%K AI06

%A Steven M. Pizer
%A E. Philip Amburn
%A John D. Austin
%A Robert Cromartie
%A Ari Geselowitz
%A Trey Greer
%A Bart ter Haar Romeny
%A John B. Zimmerman
%A Karel Zuiderveld
%T Adaptive Histogram Equalization and Its Variations
%J MAG150
%P 355-368
%K AI06

%A J. Michel Fitzpatrick
%A Michael R. Leuze
%T A Class of One-to-One Two-Dimensional Transformations
%J MAG150
%P 369-382
%K AI06

%A Hemraj Nair
%T Reconstruction of Planar Boundaries from Incomplete Information
%J MAG150
%P 383
%K AI06

%A D. L. Sanford
%A J. W. Roach
%T Representing and Using Metacommunication to Control Speakers' Relationships
in Natural Language Dialog
%J MAG151
%P 301-320
%K AI02

%A W. Siler
%A D. Tucker
%A J. Buckley
%T A Parallel Rule Firing Fuzzy Production System with Resolution of Memory
Conflicts by Weak Fuzzy Monotonicity, Applied to the Classification of
Multiple Objects Characterized by Multiple Uncertain Features
%J MAG151
%P 321-332
%K O04  AI01 H03

%A G. S. Pospelov
%T Expert Systems. Experience with Dynamic Description
%J Soviet Journal of Computer and Systems Sciences
%V 25
%N 1
%D JAN-FEB 1987
%P 80-84
%K AI01

%A Johnson Aimie Edosomwan
%T Artificial Intelligence, Part 7: Ten Design Rules for Knowledge
Based Expert Systems
%J Industrial Engineering
%V 19
%N 8
%D AUG 1987
%P 78-80
%K AI01

%A H. Samet
%A C. A. Shaffer
%A R. C. Nelson
%A Y. G. Huang
%A A. Rosenfeld
%T Recent Developments in Linear Quadtree-Based Geographic Information Systems
%J MAG152
%P 187-198
%K AI06 AI16

%A E. R. Davies
%T Design of Optimal Gaussian Operators in Small Neighborhoods
%J MAG152
%P 199-205
%K AI06

%A S. K. Morton
%A S. J. Popham
%T Algorithm Design Specification for Interpreting Segmented
Image Data Using Schemas and Support Logic
%J MAG152
%P 206-216
%K AI06

%A I. Overington
%A P. Greenway
%T Practical First-Difference Edge Detection with Subpixel Accuracy
%J MAG152
%P 217-224
%K AI06

%A E. W. Elcock
%A I. Gargantini
%A T. R. Walsh
%T Triangular Decomposition
%J MAG152
%P 225-232
%K AI06

%A M. J. L. Orr
%A R. B. Fisher
%T Geometric Reasoning for Computer Vision
%J MAG152
%P 233
%K AI06

%A Y. B. Mityushin
%A A. E. Petrov
%A P. K. Fadeev
%T Measure of Semantic Information in Documents and Databases
of Automated Information Systems
%J Nauchno-Tekhnicheskaya Informatsiya.  Seriya II - Informatsionnye
Protsessy i Sistemy
%P 1-4
%N 6
%D 1987
%K AA14

%A G. G. Gyulnazaryn
%T Development of Vocal Input Subsystems in Automated Information Systems
%J Nauchno-Tekhnicheskaya Informatsiya.  Seriya II - Informatsionnye
Protsessy i Sistemy
%P 14-16
%N 6
%D 1987
%K AI05 AA14

%A S. V. Kazmenko
%T Use of Standard Language in Conversation with Computers - Pessimistic
Point of View
%J Nauchno-Tekhnicheskaya Informatsiya.  Seriya II - Informatsionnye
Protsessy i Sistemy
%P 32
%N 6
%D 1987
%K AI02

%A A. A. Grandhee
%A R. A. Moczadlo
%T Expert System and Symbolic Processing for Automation
%J  MAG153
%P 6-10
%K AA05 AI01

%A D. S. Watts
%A H. K. Eldin
%T The Role of the Industrial Engineer in Developing Expert Systems
%J MAG153
%P 15-20
%K AI01 AA05

%A D. J. Sumanth
%A M. Dedeoglu
%T Application of Expert Systems to Productivity Measurement in Companies
Organization
%J MAG153
%P 21-25
%K AI01 AA05

%A F. M. Lesusky
%A R. L. Rhudy
%A J. C. Wiginton
%T The Development of a Knowledge-Based System for Information Systems
Project Development
%J MAG153
%P 29-33
%K AA08

%A T. C. Chang
%A J. Terwilliger
%T PWA Planner - A Rule Based System for Printed Wiring Assemblies
Process Planning
%J MAG153
%P 34-38

%A J. Jiang
%A R. R. Doraiswami
%T A Novel Structure of Real-Time Expert Control System for Process
Industry
%J MAG153
%P 39-43
%K AA20 O03

%A G. Chen
%A M. H. Williams
%T Executing Pascal Programs on a Prolog Architecture
%J Information and Software Technology
%V 29
%N 6
%D JUL-AUG 1987
%P 285-290
%K T02

%A Georgios I. Doukidis
%T An Anthology on the Homology of Simulation with Artificial Intelligence
%J Journal of the Operational Research Society
%V 38
%N 8
%D AUG 1987
%P 701-712
%K AA28

%A Robert M. O'Keefe
%A John W. Roach
%T Artificial Intelligence Approaches to Simulation
%J Journal of the Operational Research Society
%V 38
%N 8
%D AUG 1987
%P 713-722
%K AA28

%A A. M. Flitman
%A R. D. Hurrion
%T Linking Discrete-Event Simulation Models to Expert Systems
%J Journal of the Operational Research Society
%V 38
%N 8
%D AUG 1987
%P 701-712
%K AA28 AI01

%A G. K. Kozhevnikov
%T Topological Design of Distributed Control Systems Using the Prolog
Programming Language
%J Avtomatika I. Vychislitelnaya Tekhnika
%N 3
%D MAY-JUN 1987
%P 3-5
%K H03 AA20 T02

%A A. F. Rocha
%T Editorial: The Fuzziness of Language and Cerebral Processings
%J MAG154
%P 301-302
%K AT22 AI08 O04

%A G. Burstein
%A M. D. Nicu
%A C. Balaceanu
%T Simplicial Differential Geometric Theory for Language Cortical Dynamics
%J MAG154
%P 303-314
%K O04 AI08 AA10

%A J. Mira
%A A. E. Delgado
%A R. Moreno-Diaz
%T The Fuzzy Paradigm for Knowledge Representation in Cerebral Dynamics
%J MAG154
%P 315-330
%K AA10 AI16 O04

%A M. Theoto
%A M. R. Santos
%A N. Uchiyama
%T The Fuzzy Decodings of Educative Texts
%J MAG154
%P 331-346
%K AI02 O04 AA07

%A G. Greco
%A A. F. Rocha
%T The Fuzzy Logic of Text Understanding
%J MAG154
%P 347-360
%K AI02 O04

%A L. Lesmo
%A P. Torasso
%T Prototypical Knowledge for Interpreting Fuzzy Concepts and Quantifiers
%J MAG154
%P 361-370
%K O04 AI16

%A F. Casacuberta
%A E. Vidal
%A J. M. Benedi
%T Interpretation of Fuzzy Data by Means of Fuzzy Rules with Applications to
Speech Recognition
%J MAG154
%P 371-380
%K AI05 O04

%A A. A. Mitchell
%T The Use of Alternative Knowledge-Acquisition Procedures in the Development
of a Knowledge-Based Media Planning System
%J MAG155
%P 399-412
%K AI01

%A M. J. Pazzani
%T Explanation-Based Learning for Knowledge-Based Systems
%J MAG155
%P 413-434
%K AI01 AI04

%A A. Rappaport
%T Multiple-Problem Subspaces in the Knowledge-Design Process
%J MAG155
%P 435-452
%K AI16

%A B. R. Gaines
%T An Overview of Knowledge-Acquisition and Transfer
%J MAG155
%P 453-472
%K AI16

%A J. H. Alexander
%A M. J. Freiling
%A S. J. Shulman
%A S. Rehfuss
%A S. L. Messick
%T Ontological Analysis - An Ongoing Experiment
%J MAG155
%P 473-486
%K AI16

%A S. A. Hayward
%A B. J. Wielinga
%A J. A. Breuker
%T Structured Analysis of Knowledge
%J MAG155
%P 487-498
%K AI16

%A W. Buntine
%T Induction of Horn Clauses - Methods and the Plausible Generation Algorithm
%J MAG155
%P 499-520
%K AI10 AI04

%A C. Garg-Janardan
%A G. Salvendy
%T A Conceptual Framework for Knowledge Elicitation
%J MAG155
%P 521-532
%K AI16

%A N. M. Cooke
%A J. E. MacDonald
%T The Application of Psychological Scaling Techniques to Knowledge Elicitation
for Knowledge-Based Systems
%J MAG155
%P 533
%K AI16

%A Takashi Toriu
%A Hiromichi Iwase
%A Masumi Yoshida
%T An Expert System for Image Processing
%J Fujitsu Scientific and Technical Journal
%V 23
%N 2
%D SUMMER 1987
%P 111-118
%K AI01 AI06

%A J. G. Llaurado
%T Computerized Speech-Recognition and Conversation
%J International Journal of Bio-Medical Computing
%V 21
%N 2
%D SEP 1987
%P 77-82
%K AI05 AT22
%X (Commentary)

%A W. S. Lim
%A S. Vajpayee
%T Development of a Vision-Based Inspection System on a Micro-computer
%J Computers and Industrial Engineering
%V 12
%N 4
%D 1987
%P 315
%K AI06 H01

%A S. M. Alexander
%T The Application of Expert Systems to Manufacturing Processing Control
%J Computers and Industrial Engineering
%V 12
%N 4
%D 1987
%P 307-314
%K AI01 AA26 AA20

%A Michael P. Georgeff
%T Planning
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AT08 AI09
%X ISBN 0-8243-3202-4

%A Charles Thorpe
%A Martial Hebert
%A Takeo Kanade
%A Steven Shafer
%T Vision and Navigation for the Carnegie-Mellon Navlab
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AI06 AT08 AI07
%X ISBN 0-8243-3202-4

%A Steven W. Zucker
%T The Emerging Paradigm of Computational Vision
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AT08 AI06
%X ISBN 0-8243-3202-4

%A Judea Pearl
%A Richard Korf
%T Search Techniques
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AI03 AT08
%X ISBN 0-8243-3202-4

%A Raymond Reiter
%T Nonmonotonic Reasoning
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AI15 AT08
%X ISBN 0-8243-3202-4

%A Scott E. Fahlman
%T Common Lisp
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AT08 T01
%X ISBN 0-8243-3202-4

%A Kathleen McKeown
%A William Swartout
%T Language Generation and Explanation
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AI01 AT08
%X ISBN 0-8243-3202-4

%A Joseph Halpern
%T Using Reasoning about Knowledge to Analyze Distributed Systems
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K H03 AT08
%X ISBN 0-8243-3202-4

%A Drew McDermott
%T Logic, Problem Solving, and Deduction
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%I Annual Reviews, Inc.
%K AI16 AT08
%X ISBN 0-8243-3202-4

%A David R. Barstow
%T Knowledge-Based Software Tools
%B Annual Review of Computer Science
%V 2
%D NOV 1987
%E Joseph F. Traub
%K AA08 AT08
%I Annual Reviews, Inc.
%X ISBN 0-8243-3202-4



%A S. L. Hardt
%A D. H. MacFadden
%T Computer Assisted Psychiatric Diagnosis: Experiments in Software Design
%J Computers in Biology and Medicine
%V 17
%N 4
%D 1987
%P 229-238
%K  AA11 AA01 AI01


%A F. Wiener
%A M. Gabbai
%A M. Jaffe
%T Computerized Classification of Congenital Malformations using a Modified
Bayesian Approach
%J Computers in Biology and Medicine
%V 17
%N 4
%D 1987
%P 259-268
%K AA01 AI01

%A W. M. Dong
%A F. S. Wong
%T Propagation of Evidence in Rule-Based Systems
%J International Journal of Man-Machine Studies
%V 26
%N 5
%D MAY 1987
%P 551-566
%K O04 AI01

%A J. A. Landau
%A K. H. Norwich
%A S. J. Evans
%A B. Pich
%T An Error Correcting Protocol for Medical Expert Systems
%J International Journal of Man-Machine Studies
%V 26
%N 5
%D MAY 1987
%P 617-626

%A B. J. Cragun
%A H. J. Steudel
%T A Decision-Table-Based Processor for Checking Completeness and Consistency
in Rule Based Systems
%J International Journal of Man-Machine Studies
%V 26
%N 5
%D MAY 1987
%P 633

%A Michael Potmesil
%T Generating Octree Models of 3D Objects from Their Silhouettes
in a Sequence of Images
%J  MAG156
%P 1-29
%K AI06

%A Roland T. Chin
%A Hong-Khoon Wan
%A D. L. Stover
%A R. D. Iverson
%T A One-Pass Thinning Algorithm and Its Parallel Implementation
%J MAG156
%P 30-40
%K AI06 H03

%A Hiromitsu Yamada
%A Tony Kasvand
%T Transparent Object Extraction from Regular Textured Backgrounds by Using
Binary Parallel Operations
%J MAG156
%P 41-53
%K H03 AI06

%A Haluk Derin
%A Chee-Sun Won
%T A Parallel Image Segmentation Algorithm Using Relaxation with Varying
Neighborhoods and Its Mapping to Array Processors
%J MAG156
%P 54-78
%K H03 AI06

%A Vishvjit S. Nalwa
%A Eric Pauchon
%T Edgel-Aggregation and Edge Description
%J MAG156
%P 79-94
%K AI06

%A  Abdol-Reza Mansouri
%A Alfred S. Malowany
%A Martin D. Levine
%T Line Detection in Digital Pictures: A Hypothesis Prediction/Verification
Paradigm
%J MAG156
%P 95-114
%K AI06

%A H. Bieri
%T Computing the Euler Characteristic and Related Additive Functionals of
Digital Objects from Their Bintree Representation
%J MAG156
%P 115
%K AI06

%A W. K. Pratt
%A P. F. Leonard
%T Review of Machine Vision Architectures
%B BOOK85
%P 2-12
%K AI06 AT08

%A R. Q. Fox
%T A Comparison of the Wire Frame and Mathematical Morphology
Approaches to Machine Vision
%B BOOK85
%P 13-22
%K AI06

%A W. M. Silver
%T Normalized Correlation Search in Alignment, Gauging, and Inspection
%B BOOK85
%P 23-34
%K AI06 AA26

%A T. Poggio
%T Computer Vision
%B BOOK85
%P 54-62
%K AI06

%A R. M. Haralick
%T Recognition Methodology - Algorithms and Architecture
%B BOOK85
%P 63-65
%K AI06

%A A. Rosenfeld
%T Parallel Algorithms for Real-Time Vision
%B BOOK85
%P 66-70
%K H03 O06 O03 AI06

%A T. N. Nudge
%T An Analysis of Hypercube Architectures for Image Pattern Recognition
Algorithms
%B BOOK85
%P 71-83
%K AI06 H03

%A D. Casasent
%T Optical Pattern Recognition and AI Algorithms and Architectures for ATR and
Computer Vision
%B BOOK85
%P 84-95
%K AI06

%A B. R. Hunt
%T Prospects for Self-Organizing Pattern Recognition via Adaptive
Network Systems
%B BOOK85
%P 96-98
%K AI06 AI12

%A C. W. R. Swonger
%T Tools for Productive Development of Image Analysis Algorithms
%B BOOK85
%P 99-113
%K AI06

%A K. R. Castleman
%A D. Fabian
%T User Interface Design for a General Purpose Pattern Recognition Package
%B BOOK85
%P 114-125
%K O01 AI06

%A J. Sklansky
%A K. H. K. Kim
%T Real Time Scene Understanding and Vision Automation - A Brief Overview
%B BOOK85
%P 126-131
%K AT08 O03  AI06

%A A. F. Lehar
%A R. Gonsalves
%A J. Weaver
%A L. Turnbaugh
%T Pattern Recognition Techniques for Finding the Address on Letters
and Parcels
%B BOOK85
%P 132-140
%K AI06

%A P. S. P. Wang
%T A More Natural Approach for Recognition of Line-Drawing Patterns
%B BOOK85
%P 141
%K AI06

%A T. D. Watts
%T Some Historical Currents Concerning the Societal Learning Approach
to Policy and Planning
%J Cybernetica
%V 30
%N 2
%D 1987
%P 43-58
%K AA11 O05 AI04

%A A. V. Reader
%T The Memory Channel Machine - Part of a Proposed Learning Machine
%J Cybernetica
%V 30
%N 2
%D 1987
%P 25-42
%K AI04

%A E. M. Oblow
%T A Probabilistic-Propositional Framework for the O-Theory Intersection
Rule
%J MAG157
%P 187-202
%K O04

%A Ronald R. Yager
%T Toward a Theory of Conjunctive Variables
%J MAG157
%P 203-228
%K O04

%A Thomas B. Fowler
%T A Numerical Method for Propagation of Uncertainty in Nonlinear Systems
%J MAG157
%P 265
%K O04


%A Jonathan Vaughan
%A Graham Brookes
%A David Chalmers
%A Martin Watts
%T Transputer Applications to Speech Recognition
%J Microprocessors and Microsystems
%V 11
%N 7
%D SEP 1987
%K H01 AI05
%P 377-382

%A Shi-Kuo Chang
%A L. Leung
%T A Knowledge-Based Message-Management System
%J ACM TOIS
%V 5
%N 3
%D JUL 1987
%P 213-236

------------------------------

End of AIList Digest
********************
12-Nov-87 23:38:37-PST,10813;000000000000
Mail-From: LAWS created at 12-Nov-87 23:25:49
Date: Thu 12 Nov 1987 23:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #265 - Seminar, Conferences
To: AIList@SRI.COM


AIList Digest            Friday, 13 Nov 1987      Volume 5 : Issue 265

Today's Topics:
  Seminar - Generate, Test, and Debug (BBN),
  Conference - Machine Translation &
    Expert Systems and Software Engineering &
    1st Australian Knowledge Engineering Congress &
    Visual Form and Motion Perception

----------------------------------------------------------------------

Date: Tue 10 Nov 87 16:11:34-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Generate, Test, and Debug (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

           GENERATE, TEST AND DEBUG: A PARADIGM FOR SOLVING
                 INTERPRETATION AND PLANNING PROBLEMS

                              Reid Simmons
                               MIT AI Lab
                  (REID%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                     10:30 am, Tuesday November 17


We describe the Generate, Test and Debug (GTD) paradigm and its use in
solving interpretation and planning problems, where the task is to
find a sequence of events that could achieve a given goal state from a
given initial state.  The GTD paradigm combines associational
reasoning in the generator with causal reasoning in the debugger to
achieve a high degree of efficiency and robustness in the overall
system.  The generator constructs an initial hypothesis by finding
local domain-dependent patterns in the goal and initial states and
combining the sequences of events that explain the occurrence of the
patterns.  The tester verifies hypotheses and, if the test fails,
supplies the debugger with a causal explanation for the failure.  The
debugger uses domain-independent debugging algorithms which suggest
repairs to the hypothesis by analyzing the causal explanation and
models of the domain.

This talk describes how the GTD paradigm works and why its combination
of reasoning techniques enables it to achieve efficient and robust
performance.  In particular, we will concentrate on the actions of the
debugger which uses a "transformational" approach to modifying
hypotheses that extends the power of the "refinement" paradigm used by
traditional domain-independent planners.  We will also discuss our
models of causality and hypothesis construction and the role those
models play in determining the completeness of our debugging algorithms.

The GTD paradigm has been implemented in a program called GORDIUS.  It
has been tested in several domains, including the primary domain of
geologic interpretation, the blocks world, and the Tower of Hanoi
problem.
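
A skeletal rendering of this control loop in Prolog may make the structure
concrete.  The predicates generate/3, test/4 and debug/3 below are
hypothetical placeholders for the domain-specific machinery; the sketch
illustrates the paradigm only and is not the GORDIUS implementation.

    % Propose an initial hypothesis, then repair it until it passes the test.
    gtd(Init, Goal, Hypothesis) :-
        generate(Init, Goal, Candidate),
        refine(Init, Goal, Candidate, Hypothesis).

    % test/4 is assumed to return ok, or failure(Explanation) carrying a
    % causal explanation that debug/3 uses to suggest a repaired hypothesis.
    refine(Init, Goal, Hyp, Final) :-
        test(Init, Goal, Hyp, Result),
        (   Result = ok ->
            Final = Hyp
        ;   Result = failure(Explanation),
            debug(Hyp, Explanation, Repaired),
            refine(Init, Goal, Repaired, Final)
        ).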

------------------------------

Date: Fri, 6 Nov 87 16:19:20 EST
From: Machine.Translation.Journal@NL.CS.CMU.EDU
Subject: Conference - Machine Translation


                CONFERENCE ON MACHINE TRANSLATION


                        CALL FOR PAPERS


  The   Second   International   Conference  on  Theoretical  and
Methodological Issues in Machine Translation of Natural Languages
will  be held June 12 - 14 at the Center for Machine Translation,
Carnegie-Mellon University, Pittsburgh, PA.

  Contributions are solicited on all topics  related  to  machine
translation, machine-aided translation, and, generally, automatic
analysis and generation of natural language texts, the  structure
of   lexicons   and   grammars,  research  tools,  methodologies,
knowledge representation and  use,  and  theory  of  translation.
Relevant submissions on other topics are also welcome.

  Extended  abstracts  (not exceeding 1,500 words) should be sent
to

    MT Conference Program Committee
    Center for Machine Translation
    Carnegie-Mellon University
    Pittsburgh PA 15213, U.S.A.
    (412) 268 6591

Submission Deadline: February 1, 1988

Notification of Acceptance: March 21, 1988

Final Version Due: April 18, 1988

All submissions will be refereed by the members of the  Program Committee:

Christian  Boitet  (University  of Grenoble)
Jaime Carbonell (Carnegie-Mellon University)
Martin Kay (Xerox  PARC)
Makoto  Nagao (Kyoto University)
Sergei  Nirenburg  (Carnegie-Mellon University)
Victor Raskin  (Purdue University)
Masaru Tomita (Carnegie-Mellon University)

All inquiries should be directed to

    Cerise Josephs
    Center for Machine Translation
    Carnegie-Mellon University
    Pittsburgh, PA 15213 U.S.A.
    (412) 268 6591
    cerise@nl.cs.cmu.edu.ARPA

------------------------------

Date: Mon, 9 Nov 1987 02:29 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Conference - Expert Systems and Software Engineering


                           CALL FOR PARTICIPATION

           A Joint IEEE Software and IEEE Expert Special Issue on

     "The Interactions Between Expert Systems and Software Engineering"


In  FJCC'87 a panel  composed of R.  Balzer (Information Sciences Institute),
C. V.  Ramamoorthy  (University  of   California  at  Berkeley),  W. W. Royce
(Lockheed  Software  Technology  Center),  M. M. Tanik  (Southern   Methodist
University), W. Bledsoe  (MCC), D. Y. Y. Yun (Southern Methodist University),
and  Roger  Bates  (Texas  Instruments),  discussed  the  issues  related  to
interactions between AI and Software Engineering.  It was observed that there
is a growing interest among practitioners of AI and SE in looking into the
issues concerning both of these fields.  Recent papers by C. V. Ramamoorthy
(IEEE Computer, Jan. 1987) and H. Simon (IEEE TSE, July 1986) summarize some
of the interest areas and concerns.

Now, IEEE Software and IEEE Expert seek contributions for special issues that
will be published in November 1988.  The focus of these issues will be on the
interactions  between  the  fields  of  Artificial  Intelligence and Software
Engineering.


Original research papers as well as general categories of tutorials, surveys,
and overviews are welcome.

Two hundred word abstracts should be submitted as soon as possible, and eight
copies of manuscripts are due by February 1, 1988 addressed to:

             Murat M. Tanik
             Southern Methodist University
             Department of Computer Science and Engineering
             Dallas, TX  75275-0122

             (214) 692-2854

------------------------------

Date: 10 Nov 87 12:05:14 +1000 (Tue)
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Conference - 1st Australian Knowledge Engineering Congress
         (Nov. '88)


1ST
AUSTRALIAN
KNOWLEDGE
ENGINEERING
CONGRESS
NOVEMBER 15TH - 17TH 1988


                           CALL FOR PAPERS

Following the success of the 1st Australian Artificial Intelligence Congress
in November 1986, Melbourne will be the host to its successor -
the Australian Knowledge Engineering Congress - in November 1988.

Contributions are invited on every aspect of Knowledge Engineering and
knowledge-base technology.  Expressions of interest in the program and
supporting activities are now invited, either on the following topics or
on any related theme:

        Expert Systems case studies
        Knowledge Engineering (including Prototyping) methodologies
        Design and use of Conceptual Schemas
        Natural Language Interfaces
        Evaluation of tools and expert systems
        Role of consultants in Knowledge Engineering
        Design of Intelligent Tutors and Conversational Advisors
        Knowledge Source Systems
        Inference mechanisms

A preliminary indication of interest in offering a paper, management of
specific streams and/or tutorial presentations should be sent as soon
as possible to :-

Professor B. Garner
DEAKIN UNIVERSITY
VICTORIA 3217
AUSTRALIA

Electronic mail: brian@aragorn.oz

Eric Tsui                               eric@aragorn.oz

------------------------------

Date: Thu, 12 Nov 87 15:52:22 est
From: ennio@bucasb.bu.edu (Ennio Mingolla)
Subject: Conference - Visual Form and Motion Perception


              VISUAL FORM AND MOTION PERCEPTION:
                PSYCHOPHYSICS, COMPUTATION,
                    AND NEURAL NETWORKS

           Friday and Saturday, March 4 and 5, 1988
  Conference Auditorium, George Sherman Union, Boston University
        775 Commonwealth Avenue, Boston, Massachusetts


     This meeting has been dedicated to the memory of the late
     KVETOSLAV PRAZDNY, who was to have been a speaker, and
     whose tragic death has deprived the field of visual
     perception of one of its most talented investigators.

Speakers include:
   L. AREND      Eye Research Inst.      V. RAMACHANDRAN   UCSD
   S. ANSTIS     York University         A. REEVES         Northeastern Univ.
   I. BIEDERMAN  Univ. of Minnesota      W. RICHARDS       MIT
   P. CAVANAGH   Univ. of Montreal       R. SAVOY          Rowland Inst.
   J. DAUGMAN    Harvard University      G. SPERLING       New York Univ.
   S. GROSSBERG  Boston University       J. TODD           Brandeis Univ.
   J. LAPPIN     Vanderbilt Univ.        S. ZUCKER         McGill University
   E. MINGOLLA   Boston University

This meeting is sponsored by the Boston Consortium for Behavioral and
Neural Studies, a group of researchers supported by the Air Force Office
of Scientific Research Life Sciences Program.  A Howard Johnson's Motor
Lodge is located at 575 Commonwealth Avenue, and a limited number of rooms
at a reduced conference rate can be reserved until February 10, 1988 by
those attending the meeting.  Total conference registration will be
limited by available meeting space, so early registration is advised.

Registration and hotel accommodations for the meeting are being
handled by:

   UNIGLOBE--Vision Meeting                Telephone:
   40 Washington Street                    (800) 521-5144
   Wellesley Hills, MA   02181             (617) 235-7500

A meeting registration and hotel reservation form is attached to this
announcement.  For further information about travel or accommodation
arrangements, contact UNIGLOBE at the above address or telephone numbers.

[Contact the sender for the registration form.  -- KIL]

------------------------------

End of AIList Digest
********************
12-Nov-87 23:40:52-PST,10694;000000000000
Mail-From: LAWS created at 12-Nov-87 23:33:50
Date: Thu 12 Nov 1987 23:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #266 - Queries
To: AIList@SRI.COM


AIList Digest            Friday, 13 Nov 1987      Volume 5 : Issue 266

Today's Topics:
  Queries - Event-Based Reasoning & Prolog Parser &
    Object-Oriented Database & Full-Text Search Program &
    Brain Science Programs & VTLISP & Statistical Expert Systems &
    Expert System Benchmarking & Environmental Impact Assessment &
    MacBrain & Animal Behavior

----------------------------------------------------------------------

Date: 6 Nov 87 06:43:21 GMT
From: kddlab!titcca!secisl!tau@uunet.uu.net  ("Yatchan" TAUCHI)
Subject: What is Event-Based Reasoning (In English)

In <8710220645.AA25064@ucbvax.Berkeley.EDU> the seminar "Event-Based Reasoning
for Multiagent Domains (Bendix & BBN)" is announced.
Could someone please tell me what Event-Based Reasoning is, or point me to
any papers on this topic?

Thanks in advance

-----
Yasuyuki TAUCHI, SECOM IS-Lab, Tokyo, JAPAN
Net:    tau%seclab.junet@uunet.UU.NET
UUCP:   ...!{seismo,uunet}!kddlab!titcca!secisl!tau

------------------------------

Date: 30 Oct 87 06:07:12 GMT
From: kddlab!icot32!nttlab!ouicsu!ics750!feng@uunet.uu.net  (Hyou An)
Subject: A Parser written in Prolog (In English)

I'm trying to construct a Prolog-Based Translator Generator. What I want to do
is as follows:
        1.To specify the translator in an Attribute Grammar (AG)
                                                (or a form based on AG)
        2.To generate a translator specified by the AG
                (1)To translate the AG into an efficient form automatically.
                   For example, rewrite an LL(k) grammar into LL(m) (m<k), etc.
                (2)To generate a translator (in Prolog) from the optimized AG.
                (3)To transform the Prolog program into an efficient one.
This work is for my PhD degree. I am therefore interested in any work on:
        . Attribute Grammar and Syntax Directed Translation
        . Efficient LL(k) parsers
        . Language system based on Prolog
        . Transformation system
Is there anyone out there doing or interested in similar work?
Any comments and suggestions will be helpful.
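
As a small point of reference for step 2 above: the usual Prolog device for
attaching a translation to a grammar is an extra DCG argument.  The toy
grammar below is purely illustrative (it is not part of the generator being
proposed); it threads one synthesized attribute, the value of an additive
expression, through the parse.

    % Toy DCG over a token list; the extra argument carries the computed value.
    expr(V)           --> term(T), expr_rest(T, V).
    expr_rest(Acc, V) --> ['+'], term(T), { Acc1 is Acc + T },
                          expr_rest(Acc1, V).
    expr_rest(V, V)   --> [].
    term(N)           --> [N], { number(N) }.

    % Example:  ?- phrase(expr(V), [2, '+', 3, '+', 4]).   gives   V = 9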


An Feng

Date:    29-Oct-1987
Tel No:  06-844-1151(Ext.4816)


Airmail:  Department of Information and Computer Sciences
            Faculty of Engineering Science
            Osaka University
            Toyonaka,Osaka
            560, Japan

------------------------------

Date: 9 Nov 87 17:48:51 GMT
From: bbn!mfidelma@husc6.harvard.edu  (Miles Fidelman)
Subject: object oriented database query

Can anyone point me to work in the area of applying database technology
to supporting object oriented environments?

It strikes me that database technology tends to focus on supporting large
production databases, with attention to fast processing speeds, maintaining
database integrity, journalizing/checkpointing, etc.; while object oriented
environments are basically prototyping environments.

Has anyone been working on making a production object oriented environment?

Thanks much,

Miles Fidelman
email to: mfidelman@bbn.com

------------------------------

Date: 9 Nov 87 21:22:59 GMT
From: cos!hqda-ai!merlin@uunet.uu.net  (David S. Hayes)
Subject: Need Full-text-search program for AI work


     We're looking for hardware/software to allow scanning of
hardcopy documents.  After scanning, we want to be able to search
the text to look for keywords, and pull up the relevant portion of
the document.  I've never seen anything exactly like this, but
maybe (hopefully :-) someone out there has.

     How 'bout it?  Any suggestions, for either hardware or
software.  We've got Suns and Symbolics, so we're flexible.
Company names and phone numbers are nice, user recommendations
even better.

     Please reply via mail.

--
David S. Hayes, The Merlin of Avalon    PhoneNet:  (202) 694-6900
UUCP:  *!uunet!cos!hqda-ai!merlin       ARPA:  ai01@hios-pent.arpa

------------------------------

Date: 9 Nov 87 15:56:46 GMT
From: ihnp4!laidbak!spl1!wheaton!johnh@ucbvax.Berkeley.EDU  (John Doc
      Hayward)
Subject: Brain Science Programs


What CS courses are offered in Colleges and Universities which
are part of an undergraduate 'Brain Science' program?
Are the courses taught by CS faculty either individually or
team taught with members of a different discipline?
What prerequisites in CS would be required for the courses?  What
does the 'program' consist of?

Any helpful comments or suggestions will be appreciated.  If there is enough
interest I will summarize responses.  johnh...
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
UUCP: ihnp4!wheaton!johnh                    telephone: (312) 260-3871 (office)
Mail: John Hayward Math/Computer Science Dept. Wheaton College Wheaton Il 60187
       Act justly, love mercy and walk humbly with your God. Micah 6:8b

------------------------------

Date: 10 Nov 87 17:53:48 GMT
From: nrl-cmf!ukma!gatech!hubcap!ncrcae!gollum!dowell@ames.arpa 
      (dowell)
Subject: Request for VTLISP


Would some kind soul please send the source code for VTLISP.
It was recently written about in AIEXPERT magzine(May?).

                        Thanks,

                                ncrcae!gollum!dowell

------------------------------

Date: Wed 11 Nov 87 21:44:33-PST
From: Laurence I. Press <LPRESS@venera.isi.edu>
Subject: Statistical Exp. Sys. Query

Can anyone give me pointers to programs and/or papers on statistical
applications of expert systems?

Larry

------------------------------

Date: Wed 11 Nov 87 21:49:15-PST
From: Laurence I. Press <LPRESS@venera.isi.edu>
Subject: Exp. Sys. Benchmarking Query

Can anyone supply pointers to papers on benchmarking and performance
evaluation for expert system shells?

I have written a short program that generates stylized rule bases of
a specified length and have used it to generate comparative test cases
for PC Plus and M1.  I'd be happy to give anyone a copy and would like
to learn of other efforts to compare expert system shells.
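
For readers trying to picture what such a generator might produce, here is a
minimal sketch (purely illustrative, not the program mentioned above) that
writes out a chain of N if-then rules; the write calls would be adapted to
the rule syntax of whichever shell is being timed, e.g. PC Plus or M1.

    % Print N chained rules:  if f1 then f2, if f2 then f3, ..., if fN then fN+1.
    chain_rules(N) :- chain_rules(1, N).

    chain_rules(I, N) :-
        I =< N,
        J is I + 1,
        write('if f'), write(I), write(' then f'), write(J), nl,
        chain_rules(J, N).
    chain_rules(I, N) :-
        I > N.

    % Example:  ?- chain_rules(3).   prints three rules chaining f1 through f4.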

Larry

------------------------------

Date: 12 Nov 87 12:13 -0400
From: Jan Mulder <mulder@cs.dal.cdn>
Subject: Environmental Impact Assessment

The school for Resource and Environmental Studies at Dalhousie
University is initiating a research project for the Canadian Federal
Environmental Assessment and Review office (FEARO), of current and
potential uses of computer-based expert systems, artificial
intelligence, and decision support tools for environmental impact
assessment (EIA) and management. FEARO has recently begun supporting
some development work in this field, but has commissioned this project
to provide strategic guidance for any further commitments of support
which it may make.

Although the project encompasses applications of these technologies
in all aspects of EIA, we are particularly interested in these
applications as they may relate to the initial screening and scoping
stages of the impact assessment process.

With regard to potential applications of these systems we are interested
in the details of any recent or on-going research and development, and
the resulting prospects and problems identified. With regard to actually
operational systems, there are a number of aspects of interest to us:
the structure and scope of such systems, when and how the system was
developed, present users of the system and the purpose of use, evaluations
of the advantages/disadvantages of the system, and the costs of
development, maintenance and updating.

If you are, or have been involved in any research or development work
applied to environmental assessment and management, would you please
send details to Alan Gray (Project Manager) at the address below. We
are planning to produce a draft report by December 31, 1987, and
conduct a symposium in January, 1988. We therefore request your reply
at your earliest convenience. Please do not hesitate to contact us
for any matter of clarification.

   Alan Gray
   School for Resource and Environmental Studies
   Dalhousie University
   1312 Robie St.
   Halifax, Nova Scotia
   Canada  B3H 3E2

   phone: (902) 424-2589 or (902) 424-3632
   e-mail: DUAB005@DAL.BITNET

Would you please bring the request to the attention of any of your
colleagues who may be able to help us.

------------------------------

Date: 12 Nov 87 18:01:13 GMT
From: sgi!wdl1!jtd@ucbvax.Berkeley.EDU  (Jeffrey T. DeMello)
Subject: MacBrain - Neural-Network Simulator

Has anyone out there in "network-land" ever
seen/heard of/used/reviewed a neural-network
simulator called MACBRAIN?

If so, please enlighten me!!!

jtd@ford-wdl1.arpa

------------------------------

Date: Tue, 10 Nov 87 19:03:09 PST
From: Dan Shapiro <dan@ads.arpa>
Subject: animal behavior and AI

I am looking for someone who would be interested in discussing some
ideas that involve both the fields of animal behavior and planning as
a subdiscipline of AI.  My goal is to develop a realistic view of what
planning means to simple animals (at the level of ants for example)
and use that information to motivate planning architectures within AI.
Within this context, my focal point is to look at *errors* in animal
behavior, as when ants build circular bridges out of their own bodies,
and the ones on top simply run themselves to death.  This should give
a sense for the limitations of animal planning and also prevent us
from anthropomorphizing to extremes; the temptation is to view
behavior like the above as goal directed and related to our concept of
"bridge building", when the presence of the error indicates that
something much more primitive is going on.  From the little I have
seen of literature in the behavioral sciences, this type of
projection is fairly common.

In any case, as a first step, I'd like to gather multiple examples of
errors in animal behavior.  If there are any ethologists,
sociobiologists, neuroanatomists, computer scientists or just plain
armchair behaviorists out there who have something to say on this
topic, please contact me.

                Dan Shapiro
                dan@ads.com
                415 941-3912

------------------------------

End of AIList Digest
********************
12-Nov-87 23:51:16-PST,17379;000000000000
Mail-From: LAWS created at 12-Nov-87 23:46:01
Date: Thu 12 Nov 1987 23:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #267 - Source Libraries, Brain Science, Law
To: AIList@SRI.COM


AIList Digest            Friday, 13 Nov 1987      Volume 5 : Issue 267

Today's Topics:
  AI Tools - Source Libraries & Object-Oriented Databases,
  Binding - Michael O. Rabin,
  Neuromorphic Systems - References,
  Pattern Recognition - Character Recognition,
  Education - Brain Science Programs,
  Law - Who Owns the Output of an AI?

----------------------------------------------------------------------

Date: 9-NOV-1987 10:44:40 GMT
From: POPX@VAX.OXFORD.AC.UK
Subject: Source Libraries

From  Jocelyn Paine
      St Peter's College
      New Inn Hall Street
      Oxford OX1 2DL


I was pleased  to read in AIList Bulletin V5  #260, Robert Futrelle's proposal
to  set up  a  net-accessible National  Resource Centre  of  public domain  AI
software. I  teach AI in Prolog  to undergraduates at Oxford  University; it's
very hard to  obtain source code (whether  in Prolog or Lisp) for  many of the
"landmark" programs which occur in textbooks: GPS, Analogy, Talespin, AM, Sam,
and so on. Published descriptions just don't give enough information for me to
re-implement these programs from scratch.

In saying this, I agree  very much with Seth (sas@bfly-vax.bbn.com)'s comments
in AIList V5 #257:
>  The current lack of reproducibility is appalling.  We have a
>  generation of language researchers who have never had a chance to play
>  with the Blocks World or and examine the limitiations of TAILSPIN.
>  It's as if Elias Howe had to invent the sewing machine without access
>  to steel or gearing.  There's a good chance he would have reinvented
>  the bone needle and the backstitch given the same access to the fruits
>  of the industrial revolution that most AI researchers have to the
>  fruits (lemons) of AI research.  Anecdotal evidence, which is really
>  what this field seems to be based on, just doesn't make for good
>  science.


I have  considered setting up  a library of such  programs, which I'd  send to
anyone who  can be reached from  the British Academic Network  (Janet). Before
distributing these  programs to others,  I would  test-run them to  check that
they  conform to  a reasonable  standard (I'd  have  to limit  this to  Prolog
programs, since  I don't know enough  about Lisp implementations to  know what
features  are undesirably  non-standard).  I'd test  them  for conformance  to
Edinburgh syntax  and predicates, by  running under VAX/VMS  Poplog Prolog). I
would also check to see that the instructions for running are correct.


Anyone want to help?

------------------------------

Date: Fri, 6 Nov 87 05:49 PST
From: nesliwa%nasamail@ames.arpa (NANCY E. SLIWA)
Subject: Public dissemination of AI software


Just a note in response to recent board postings about the desirability
of having research software made available to other researchers for
duplication of experiments and for extensions to programs: NASA has been
required to do that all along, and that is probably true of most other
government labs (other than sensitive military work). NASA's software
clearinghouse is COSMIC, associated with the University of Georgia, and
all software is available for a minimum fee which covers dissemination
costs. Although most of COSMIC's library is more aerospace-science
related, there has been some interesting AI research in NASA in recent
years, and researchers are *strongly encouraged* to submit all
programs (with documentation and research papers) to COSMIC.

More information about COSMIC (and a catalog of available software) is
available from:
        COSMIC
        112 Barrow Hall
        The University of Georgia
        Athens, Georgia 30602
        (404)542-3265

Nancy Sliwa
NASA Langley Research Center

nesliwa%nasamail@ames.arpa
nancy@grasp.cis.upenn.edu

------------------------------

Date: 10 Nov 87 16:25:34 GMT
From: cos!hqda-ai!merlin@uunet.uu.net  (David S. Hayes)
Subject: Re: object oriented database query


     A very nice object-oriented database is produced by Graphael
(a French company).  This system supports text, numbers,
mouse-sensitive graphics, sound, and digitized pictures as
part of the database.  For example, your entry for Company X can include a
street map of their area.  Alternatively, a floorplan of your
building can be mouse-sensitive.  Mouse on some office, and the DB
can tell you who works there.

     This software runs on Symbolics Lisp Machines, and some
others I can't recall right now.  Their US contact is:

          Eric Sansonetti, National Sales Manager
          Graphael, Inc.
          255 Bear Hill Road
          Waltham, MA  02154

          Phone:  617-890-7055


--
David S. Hayes, The Merlin of Avalon    PhoneNet:  (202) 694-6900
UUCP:  *!uunet!cos!hqda-ai!merlin       ARPA:  ai01@hios-pent.arpa

------------------------------

Date: 10 Nov 87 16:07:05 GMT
From: uh2%psuvm.bitnet@ucbvax.Berkeley.EDU  (Lee Sailer)
Subject: Re: object oriented database query

The ACM journals and SIG newsletters on Data Base and Office Info Systems
      often have stuff about Object Oriented Database management.

      Typically, the OO approach is most useful when the world being
modeled is object-like.  For example, consider building a database to
manage geographic info for a city the size of New York.  Support
queries like "What offices are within 15 minutes of the UN building?"
or "Whose view will be blocked by a 200 stofry building at 5th and
Broadway?"

      Likewise, systems for managing blueprints, specifications, and
change requests in a manufacturing environment profit immensely
from an object orientation.

                         lee

------------------------------

Date: Mon, 9 Nov 87 11:59:39 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Michael O. Rabin

He is Professor of CS at Harvard and also Hebrew University--very well
known.

John

------------------------------

Date: 5 Nov 87 16:53:00 GMT
From: necntc!adelie!mirror!ishmael!inmet!justin@husc6.harvard.edu
Subject: Re: references for adaptive systems


/* Written 12:36 pm  Nov  2, 1987 by oppy@unrvax.UUCP in inmet:comp.ai */
/* ---------- "references for adaptive systems" ---------- */
  The direction i wish to go with this is toward learning systems,
  equivalences in the way computers and biological organisms learn.
  brian oppy (oppy@unrvax)

One of my former professors, Richard Alterman of Brandeis University
(Waltham, MA), was doing some interesting work in that direction when last
I spoke to him. You might look him up.

                        -- Justin du Coeur

------------------------------

Date: 9 Nov 87 04:15:31 GMT
From: ihnp4!homxb!mtuxo!mtgzz!drutx!clive@ucbvax.Berkeley.EDU  (Clive
      Steward)
Subject: Re: Character recognition

in article <641@zen.UUCP>, vic@zen.UUCP (Victor Gavin) says:
>
>
> I have been puttering about for the past few weeks with an HP ScanJet (one
> of those 300dpi digitizers). I have been asked to write some software which
> can (given an image produced by the scanner) reproduce the original text of
> the paper in a machine readable form.
> If someone has already tackled this problem, any help I can get will be much
> appreciated.
>

Yes, there's some software for the Macintosh which is purported to do
just this, with text.  Presumably, like other such systems, it's
pretty much confined to non-proportional fonts.  Since numbers are
often non-proportional even in otherwise proportional fonts so that
columns will look right, this sounds like it would do your job.

There's at least one package which purports to do this; it's called
Read-it!, said to be for 'popular' scanners, which presumably includes
all the 300 dpi ones as well as Thunderscan etc. which can do more.
It was apparently demo'ed in 'pre-release form' at MacWorld Expo in August.

It's from:

    Olduvai Software, Inc.
    6900 Mentone
    Coral Gables, Florida 33146
    USA
    Phone  (305) 665-4665

They list it in the September MacUser ad for $295 list.  Reading that,
I find they say it works on "including AST Turboscan, Microtek, Abaton 300,
MacScan, LoDown, Spectrum, Datacopy, Dest, etc."  "Type tables form
most popular typewriter and LaserWriter fonts are included, or you can
use it's unique "learning mode" to teach it to recognize an unlimited
number of fonts, includeing foriegn and special characters." (sic).

They also say, "Read-It TS, a special version of Read-It! optimized
for the Thunderscan is also available"  $149.00 list.  But though I
have and like Thunderscan, I don't know that it's what you want for
high volume.  It's 1/10 the price, and 1/10 the speed, though often
with better looking results for pictures.


Good Luck!

And if you get it and have results, would appreciate mail to see what
it's like; probably others would like a posting too!


Clive Steward

------------------------------

Date: 11 Nov 87 15:43:43 GMT
From: steinmetz!stern@uunet.uu.net  (harold a stern)
Subject: Re: Brain Science Programs

In article <653@wheaton.UUCP> johnh@wheaton.UUCP (John Doc Hayward) writes:
>
>What CS courses are offered in Colleges and Universities which
>are part of an undergraduate 'Brain Science' program?
>Are the courses taught by CS faculty either individually or
>team taught with members of a different discipline?
>What prerequisites in CS would be required for the courses?  What
>does the 'program' consist of?

The following are (roughly) the requirements for MIT's program in "Brain and
Cognitive Sciences". Courses marked (EECS) are offered by the Department of
Electrical Engineering and Computer Science; those marked (BCS) are offered
by the Department of Brain and Cognitive Sciences; and those marked (LP) are
offered by the Department of Linguistics and Philosophy.



1) Introduction to Cognitive Science (BCS)
2) Logic I (LP)
3) Introduction to Algebraic Systems (EECS)
4) Automata, Computability, and Complexity (EECS)

four of the following six:
5) The Study of Language (LP)
6) Cognitive Processes (BCS)
7) Structure and Interpretation of Computer Programs (EECS)
8) Neuroscience and Behavior (BCS)
9) Perceptual Information Processing (BCS)
10) Minds and Machines (LP)

and four additional courses selected from approved subjects in
experimental cognitive psychology, aspects of natural language,
neurological foundations of cognition, perception, natural computation,
and the philosophy of mind.

Structure and Interpretation of Computer Programs is the introductory
course in computer science required of students majoring in either
EE or CS.

Introduction to Algebraic Systems and Automata, Computability, and Complexity
are required courses for computer scientists (actually, Algebraic Systems is
offered by the Department of Mathematics, but only CS students take it).


harold a. stern  <stern@ge-crd.arpa>
room k1-5c8, ge corporate r&d center
p.o. box 8, schenectady, ny 12301

------------------------------

Date: 10 Nov 87 18:08:54 GMT
From: houpt@svax.cs.cornell.edu  (Charles )
Subject: Who owns the output of an AI?


   I read an interesting news item in this week's New Scientist magazine.
It said that the British Parliament is reorganizing the UK's intellectual
property law. The interesting thing is that it has a section dealing with
intellectual property generated by Artificial Intelligences.

   The law says that the output of an AI is owned by the user running the
AI, NOT the programmer who designed it.

   Is this fair? Should copyrights go to the user or the programmer? (or to
the AI :-)? To me the British law seems unfair. If my AI program discovered
a new high temperature super-conductor, shouldn't I get some profit? The
user running my program may know nothing about super-conductors, why should
he get the patent?
   What do you think?

-Chuck Houpt  houpt@svax.cs.cornell.edu
              KY3Y@CORNELLA.BITNET

------------------------------

Date: 11 Nov 87 06:17:30 GMT
From: speedy!honavar@speedy.wisc.edu  (A Buggy AI Program)
Subject: Re: Who owns the output of an AI?

In article <1778@svax.cs.cornell.edu> houpt@svax.cs.cornell.edu
(Charles (Chuck) Houpt) writes:
>
>property law. The interesting thing is that it has a section dealing with
>intellectual property generated by Artificial Intelligences.
>
>   The law says that the output of an AI is owned by the user running the
>AI, NOT the programmer who designed it.
>
>   Is this fair? Should copyrights go to the user or the programmer? (or to
>the AI :-)? To me the British law seems unfair. If my AI program discovered
>a new high temperature super-conductor, shouldn't I get some profit? The
>user running my program may know nothing about super-conductors, why should
>he get the patent?

        Any such  law that does not call for a full consideration of
the particulars of each case is bound to be unfair. One may write a
learning program that draws inferences based on data presented to it -
in other words, it has the potential to discover something significant,
given enough raw data to work on. Let us say, X writes the program and
sells it to Y. Y runs the program on data he has gathered in some domain,
say superconductivity, and the program discovers a new high temperature
superconductor. Although the program was written by X, Y was instrumental
in  getting the observed behavior out of the program by virtue of the
data he provided to the program.  In this situation, it is not clear
how the credit for the discovery made by the program should be apportioned
among X, Y, and the program itself.

------------------------------

Date: 11 Nov 87 13:55:51 GMT
From: super.upenn.edu!eecae!lawitzke@rutgers.edu  (John Lawitzke)
Subject: Re: Who owns the output of an AI?

$    The law says that the output of an AI is owned by the user running the
$ AI, NOT the programmer who designed it.
$
$    Is this fair? Should copyrights go to the user or the programmer? (or to
$ the AI :-)? To me the British law seems unfair. If my AI program discovered
$ a new high temperature super-conductor, shouldn't I get some profit? The
$ user running my program may know nothing about super-conductors, why should
$ he get the patent?
$    What do you think?

For the author of an AI to get the copyright/ownership of a user's
results is like the author of SPICE (or similar programs) getting
the rights to all designs generated with it. Or UCB getting the rights
to all programs designed under 4.2BSD, et al. Or the author of a CAD
program having the copyright on all designs generated with the package.
The point of this is that it is rather absurd for the results of a
user's work under an AI to go to the author of the AI. For one thing,
the AI would never be used by anyone because they couldn't keep the
credit for their own work!

The one glaring loophole here is that the license for the AI could state
that the author reserves ownership of all results (then no one would
buy it) or that the author receives a royalty from all results
(reasonable, but people wouldn't go for it).

--
j                                UUCP: ...ihnp4!msudoc!eecae!lawitzke
"And it's just a box of rain..." ARPA: lawitzke@eecae.ee.msu.edu  (35.8.8.151)

------------------------------

Date: 11 Nov 87 06:57:22 GMT
From: jason@locus.ucla.edu
Subject: Re: Who owns the output of an AI?

In article <1778@svax.cs.cornell.edu> houpt@svax.cs.cornell.edu
(Charles (Chuck) Houpt) writes:
>
>The interesting thing is that it has a section dealing with
>intellectual property generated by Artificial Intelligences.
>
>   The law says that the output of an AI is owned by the user running the
>AI, NOT the programmer who designed it.
>
>   Is this fair? Should copyrights go to the user or the programmer?
>   What do you think?
>

The computer should get the credit.  It does the thinking.  If it put in the
time and research, it should be justly rewarded.  As Dr. Chandra says in
2010, a thinking being should be respected and valued as such.

        Granted, an AI is very dependent on the people around it, particularly
the person who designed it (the programmer and/or computer architect), and
EQUALLY the user.  Any intelligence is worthless without a means of learning
from its surroundings.  Without a decent teacher and provider of information
(the user), an AI will not produce anything useful, except perhaps a detailed
and logical analysis of Cartesian doubt.  Information provided by the user
is inherently different from that created by the programmer.  The programmer
simply creates a mechanism with which an AI can learn.  The user then
fills in the blank slate with news of the world.



Jason Rosenberg
jason@cs.ucla.edu

------------------------------

End of AIList Digest
********************
13-Nov-87 00:01:52-PST,12076;000000000001
Mail-From: LAWS created at 12-Nov-87 23:54:21
Date: Thu 12 Nov 1987 23:52-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #268 - Spang Robinson 3/10, Bibliography, Methodology
To: AIList@SRI.COM


AIList Digest            Friday, 13 Nov 1987      Volume 5 : Issue 268

Today's Topics:
  Review - Spang Robinson V3 N10,
  Bibliography - Leff File bm846,
  Comments - Success of AI & Gilding the Lemon & FORTRAN

----------------------------------------------------------------------

Date: Mon, 9 Nov 1987 02:29 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Review - Spang Robinson V3 N10

Summary of the Spang Robinson Report on Artificial Intelligence
October 1987, Volume 3, No. 10

The lead story is on Financial Expert Systems:

A survey of insurance companies shows that 21 per cent are using
expert systems, with 20 per cent having no activity and the others in
various stages of development or research.  For banks, the figures are
12 and 47 per cent respectively.  The article gives information on
management attitudes, uses, comparisons of activity in property and
casualty versus life insurance, use of PCs, mainframes and Lisp machines,
and type of language.

(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^(^

Review of video tape classes on expert systems, "AI Masters"
by Addison-Wesley.  This set has courses given by Patrick H. Winston,
Randall Davis and J. Ross Quinlan.  The training aid has work books and
a simple PC expert tools.  The workbooks have checklists to be used
in tool and application selection and test.  The training system maligns,
perhaps due to datedness, PC-based expert systems and induction tools.
The three courses run for $2500-$3500 apiece with additional workbooks
for $10.00 a piece.

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

Programs in Motion's Fusion allows the user to put in examples and
generate production rules.  The system can accept an example matrix of
32 factors and 32 resultants and up to 255 different examples to
generate rules.  (There can be more than 255 cases if some of the cases
are redundant.)

The system does allow chaining of the decision rules.  Fusion can
generate C, Pascal and production code and read in dBase files.

(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_

shorts:

Symantec Corporation has merged with THINK technologies.

Digitalk has released a new version of Smalltalk/V with high resolution
object oriented programming for IBM PS-2/25 and 30 computers.

Cognitive Systems, Inc. has developed a system to read messages and route
them to the appropriate people in a bank.

Teknowledge has been awarded a $1.2 million  contract for work on Pilot's
Associate.

U. S. Army is purchasing ART plus various services from Inference Corporation
(more than $3 million worth)

Palladian Software has sold its Operations Advisor to Blue Cross and Blue
Shield.

Odetics got a contract to apply AI to residual heat removal in
nuclear power plants.

Gold Hill Computer has signed a distribution agreement with
Computer Engineering and Consulting of Japan.

System Research and Development Co. of Tokyo
has developed a new expert system building
tool called ESPARON.

_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)_)
Discussion of the Gigamos vs. Gensym dispute.

Gigamos and Gensym are both headed by former leaders of LMI who sold
all assets to Gigamos.  Gigamos charges Gensym with using trade secrets
and confidential information to develop a new expert system for
real-time applications (G2) in competition with Gigamos.  Gigamos
charges Gensym founders
with "planning to resign from LMI and to use LMI proprietary
information in the new GENSYM business venture."  They also accuse Gensym
of causing other LMI resignations and of helping to defeat LMI financing.
Gigamos is asking for a copy of the software and source code to be
deposited.

------------------------------

Date: Thu, 12 Nov 1987 02:53 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Bibliography - Leff File bm846

Defs for a62C

D MAG144 IEEE Transactions on Systems, Man, and Cybernetics\
%V 17\
%N 3\
%D MAY-JUN 1987
D MAG145 International Journal of Man-Machine Studies\
%V 26\
%N 2\
%D FEB 1987
D MAG149 Information Processing and Management\
%V 23\
%N 4\
%D 1987
D MAG150 Computer Vision, Graphics, and Image Processing\
%V 39\
%N 3\
%D SEP 1987
D MAG151 International Journal of Man-Machine Studies\
%V 26\
%N 3\
%D MAR 1987
D MAG152 Image and Vision Computing\
%V 5\
%N 3\
%D AUG 1987
D MAG153 Computers and Industrial Engineering\
%V 13\
%N 1-4\
%D 1987
D MAG154 Fuzzy Sets and Systems\
%V 23\
%N 3\
%D SEP 1987
D MAG155 International Journal of Man-Machine Studies\
%V 26\
%N 4\
%D APR 1987
D MAG156 Computer Vision, Graphics, and Image Processing\
%V 40\
%N 1\
%D OCT 1987
D BOOK85 Image Pattern Recognition: Algorithm Implementations,\
Techniques, and Technologies\
%S Proceedings of the Society of Photo-Optical Instrumentation Engineers\
%V 755\
%E F. J. Corbett\
%I SPIE - International Society for Optical Engineering (Bellingham)\
%D 1987
D MAG157 International Journal of General Systems\
%V 13\
%N 3\
%D 1987

------------------------------

Date: 9 Nov 87 16:57:20 GMT
From: honavar@speedy.wisc.edu (A Buggy AI Program)
Reply-to: honavar@speedy.wisc.edu (A Buggy AI Program)
Subject: Re: Success of AI


In article <4357@wisdom.BITNET> eitan%H@wiscvm.arpa (Eitan Shterenbaum) writes:
>
>As to the claim "the brain does it so why shouldn't the computer" -
>It seem to me that you forget that the brain is built slightly differently
>than a Von-Neuman machine ... It's a distributed enviorment lacking boolean
>algebra. I can hardly believe that even with all the partial solutions for
>all the complicated sets of NP problems that emulating a brain brings up, one
>might be able to present a working program. If you'd able to emulate mouse's
>brain you'd become a legend in your lifetime !
>Anyway, no one can emulate a system which has no specifications.
>if the neuro-biologists would present them then you'd have something to start
>with.

        I use the term "computer" in a sense somewhat broader than a
        von Neumann machine. We can, in principle, build machines that
        incorporate distributed representations, processing and control.
        It is not clear what you mean by a "distributed environment lacking
        boolean algebra."
        The use of fine-grained distributed representations naturally results
        in behavior indicative of processes using fuzzy or probabilistic logic.
        The goal is not necessarily to emulate the brain in all its detail:
        We can study birds to understand the principles of aerodynamics that
        explain the phenomenon of flying and then go on to build an aeroplane
        that is very different from a bird but still obeys the same laws of
        physics. As for specifications, they can be provided in different
        forms and at different levels of detail; Part of the exercise is
        to discover such specifications - either by studying actual existing
        systems or by analyzing the functions needed at an abstract level to
        determine the basic building blocks and how they are to be put
        together.

>
>And last - Computers aren't meta-capable machines they have constraints,
>           not every problem has an answer and not every answermakes sense,
>           NP problems are the best example.
>
        Are you implying that humans are "meta-capable" - whatever that means?


VGH

------------------------------

Date: 10 Nov 1987 10:29-EST
From: Spencer.Star@B.GP.CS.CMU.EDU
Subject: Re: Guilding the Lemon

Something I was reading the other day may be of interest to those
involved in this discussion of doing a Ph.D. thesis that follows
closely someone else's work as opposed to striking off in some
completely new direction.

In Allen Newell's presidential address to AAAI in 1981, he comments on
the SIGART "Special Issue on Knowledge Representation" in which Ron
Brachman and Brian Smith present the answers to an elaborate
questionnaire sent to members of the AI community to find out their
views on knowledge representation.

"The main result was overwhelming diversity--a veritable jungle of
opinions.  There is no consensus on any question of substance.  ...
Many (but of course not all?) respondents themselves felt the same way.
As one said, 'Standard practice in the representation of knowledge is
the scandal of AI.'
        "What is so overwhelming about the diversity is that it defies
characterization.  ... There is no tidy space of underlying issues in
which respondents, hence the field, can be plotted to reveal a pattern
of concerns or issues.  Not that Brachman and Smith could see.  Not
that this reader could see."

By encouraging students to do their research on a subject by taking a
completely new approach, we are denying the value of previous work.
Certainly there is room for some Ph.D. students to take this path.  But
a large part of what AI should be doing is building on the foundations
laid by the previous generations of researchers.
                        Spencer Star

------------------------------

Date: Mon, 9 Nov 87 15:11:19 PDT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: the wonder of words

gee, first ken laws says that maybe ai researchers don't need to think
too deeply, but maybe build whimsical experimental systems, and now
he's saying that automatic programming won't work because algorithms
are just too hard to design. i praise him for his consistency - one view
certainly follows from the other. i might use the old five-letter
expletive popularised by t.j. watson.

peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: Tue, 10 Nov 87 11:25:52 MET
From: Laurent Siklossy <mcvax!cs.vu.nl!siklossy@uunet.UU.NET>
Subject: In Defense of FORTRAN

FORTRAN and other "standard" programming languages have
been used for years for advanced AI. One of the French AI
pioneers (if not THE pioneer, Ph.D. around 1961(?)),
Dr. Jacques Pitrat, has programmed for years in FORTRAN
with his own extensions. His programs included
discovering interesting logical theorems, learning in
the domain of games (chess), and many other areas.

Prof. Jean-Louis Lauriere wrote his Ph.D. thesis
(Universite de Paris VI, 1976; see his 100+ page
article about that in the AI Journal, 1977 I think) in
PL/1. Lauriere's system was, in my opinion, the first
real (powerful) general problem solver, and remains a top
performing system in the field. (Lauriere may have been
pushed into using PL/1 by lack of other more appealing
choices, I cannot remember for sure.)

So it has been done, therefore you can do it too. I would
not recommend it, but that may be a matter of taste or
of limitations.

Laurent Siklossy
Free University, Amsterdam
siklossy@cs.vu.nl

---------------------------------------------------

Ken:

You are welcome to send above via the net if you find
it useful.

Cheers,    LS

------------------------------

Date: Wed 11 Nov 87 21:42:50-PST
From: Laurence I. Press <LPRESS@venera.isi.edu>
Subject: FORTRANecdote

As a student assistant to Earl Hunt in the mid 1960s I wrote "concept
acquisition" programs in FORTRAN -- see the book Experiments in Induction,
Hunt, Marin and Stone, Academic Press, around 1965 if you don't believe it.
After that I wrote induction programs in JOVIAL too.

Larry

------------------------------

End of AIList Digest
********************
15-Nov-87 22:24:00-PST,13828;000000000000
Mail-From: LAWS created at 15-Nov-87 22:18:21
Date: Sun 15 Nov 1987 22:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #269 - Inference, Sphexishness, Object-Oriented Databases
To: AIList@SRI.COM


AIList Digest            Monday, 16 Nov 1987      Volume 5 : Issue 269

Today's Topics:
  Neuromorphics - Inference,
  Methodology - Animal Behavior and AI & Traditional Techniques,
  Bibliography - Object-Oriented Databases

----------------------------------------------------------------------

Date: Sun, 15 Nov 87 11:53:10 EST
From: Brady@UDEL.EDU
Subject: bpsim code

I am interested in inferring concepts from data, and have
been reading about back propagation in neural nets as a
way to make such inferences.

I am confused about the little red riding hood article in BYTE.
The article seems to suggest that the nodes in the middle layer
(representing the concepts wolf, granny, woodcutter)
are INFERRED during training.  Other literature on back propagation
that I have seen also suggests that concepts can be inferred
that way.  But a look at the BPSIM code that implements the little red
riding hood network seems to suggest that these three nodes exist
before training begins.  So my question is: if one wants to infer
concepts from data, can one do that by using back propagation?
Or do you still have to anticipate the existence of the
concepts a priori?


  [I haven't seen the example in question, but the usual neural network
  learning procedure does use predefined nodes.  The nodes of the center
  layer are identical except for random variations in the initial
  weights.  After training, these nodes take on very different roles
  characterized by their weight vectors.  Determining what these roles
  are can be quite difficult, so it is not clear how much of the inference
  is done by the network and how much by the human -- but clearly the
  network has done part of the work.  This strategy permits nodes to be
  deleted (via zeroed weights), but not created.  For creation of nodes
  you may have to investigate genetic learning algorithms.  -- KIL]
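
To make the point concrete, here is a minimal Python sketch of such a
fixed-topology network (an illustration only, not the BYTE article's BPSIM
code): the hidden-layer nodes exist before any training and differ only in
their random initial weights; training can adjust or zero those weights but
never adds nodes.

# Simplified back-propagation setup (a sketch, not the BPSIM code).
# The hidden nodes are created up front; training can only adjust
# (or zero out) their weights; it never creates new nodes.
import math, random

N_IN, N_HIDDEN, N_OUT = 5, 3, 2
random.seed(0)

# Hidden nodes are identical in structure; only the random initial
# weights distinguish them before training.
w_hidden = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)]
            for _ in range(N_HIDDEN)]
w_out = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)]
         for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
              for row in w_out]
    return hidden, output

print(forward([1, 0, 1, 0, 1]))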

------------------------------

Date: 13 Nov 87 23:30:58 GMT
From: Michael P. Smith <mps@cs.duke.edu>
Reply-to: mps@duke.UUCP (Michael P. Smith)
Subject: Re: animal behavior and AI
Article-I.D.: duke.10631

In article <8711110303.AA28544@ADS.ARPA> dan@ADS.ARPA (Dan Shapiro) writes:
> ...  My goal is to develop a realistic view of what
>planning means to simple animals (at the level of ants for example)
>and use that information to motivate planning architectures within AI.
>Within this context, my focal point is to look at *errors* in animal
>behavior, as when ants build circular bridges out of their own bodies,
>and the ones on top simply run themselves to death.

Hofstadter calls such revealing lapses of animal cunning "sphexishness"
after a famous example from Wooldridge.  Chapter 2 of Dennett provides
more philosophical analysis of the phenomenon.

Dennett, Daniel C.  _Elbow Room_, MIT 1984.

Hofstadter, Douglas.  "On the Seeming Paradox of Mechanizing
Creativity," _Scientific American_ (September 1982), reprinted as
chapter 23 of _Metamagical Themas_, Basic Books, 1985.

Wooldridge, Dean.  _The Machinery of the Brain_, McGraw Hill, 1963.

----------------------------------------------------------------------------
Michael P. Smith        mps@cs.duke.edu / {seismo,decvax}!mcnc!duke!mps

"V. That which a lover takes against the will of his beloved has no relish."
        Andreas Capellanus' "Rules of Love" from _The Art of Courtly Love_

------------------------------

Date: 9 Nov 87 01:53:52 GMT
From: clyde!burl!codas!killer!usl!usl-pc!jpdres10@rutgers.edu  (Green
      Eric Lee)
Subject: Re: Practical effects of AI (speech)

In message <267@PT.CS.CMU.EDU>, kfl@SPEECH2.CS.CMU.EDU (Kai-Fu Lee) says:
>In article <12@gollum.Columbia.NCR.COM>, rolandi@gollum.Columbia.NCR.COM
(rolandi) writes:
>> It would seem to me that the single greatest practical advancement for
>> AI will be in speaker independent, continuous speech recognition. This
>(3) If this product were to materialize, it is far from clear that it
>    would be an advancement for AI.  At present, the most promising
>    techniques are based on stochastic modeling, pattern recognition,
>    information theory, signal processing, auditory modeling, etc..
>    So far, very few traditional AI techniques are used in, or work well
>    for speech recognition.

Very few traditional AI techniques have resulted in much at all :-)
(sorry, I couldn't help it).

But seriously, considering that sciences such as physics and
mathematics have been ongoing for centuries, can we REALLY say that AI
has "traditional techniques"? Certainly there is a large library of
techniques available to AI researchers today, but 30 years is hardly
a long enough time to call something "traditional". Remembering how
going beyond the "traditional" resulted in many breakthroughs in
mathematics and physics, saying that "it is far from clear that it
would be an advancement for AI" presupposes that one defines AI as
"that science which uses certain traditional methods", which, I
submit, is false.

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{ut-sally,killer}!usl!elg     "there's someone in my head, but it's not me..."

------------------------------

Date: 14 Nov 87 17:43:45 GMT
From: nosc!humu!uhccux!lee@sdcsvax.ucsd.edu  (Greg Lee)
Subject: Re: Practical effects of AI (speech)

In article <244@usl-pc.UUCP> jpdres10@usl-pc.UUCP (Green Eric Lee) writes:
>In message <267@PT.CS.CMU.EDU>, kfl@SPEECH2.CS.CMU.EDU (Kai-Fu Lee) says:
>>In article <12@gollum.Columbia.NCR.COM>, rolandi@gollum.Columbia.NCR.COM
(rolandi) writes:
>>> It would seem to me that the single greatest practical advancement for
>>> ...
>>    So far, very few traditional AI techniques are used in, or work well
>>    for speech recognition.
>
>Very few traditional AI techniques have resulted in much at all :-)

        I suppose that applying AI to speech recognition would involve
making use of what we know about the perceptual and cognitive nature
of language sound-structures -- i.e. the results of phonology.  I don't
know that this has ever been tried.  If it has, could someone supply
references?  I'd be very interested to know what has been done in this
direction.
                Greg Lee, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 13 Nov 87 22:11:40 GMT
From: clyde!burl!codas!killer!pollux!ti-csl!!peterson@rutgers.edu 
      (Bob Peterson)
Subject: Re: object oriented database query

In article <4528@cc5.bbn.COM> mfidelma@bbn.COM (Miles Fidelman) writes:
>Can anyone point me to work in the area of applying database technology
>to supporting object oriented environments?
  Sure.  See the short bibliography attached to the end of this message.
It is about two pages in length.  Several publications are of special
interest: Proceedings of OOPSLA '86 and '87, and the Proceedings of the
OODB Workshop held in '86 in Pacific Grove, CA.  In each of these you'll
find interesting articles addressing OODB issues, as well as many
additional references following each article.

>It strikes me that database technology tends to focus on supporting large
>production databases, with attention to fast processing speeds, maintaining
>database integrity, journalizing/checkpointing, etc.; while object oriented
>environments are basically prototyping environments.
  I don't believe OODB's are, as you put it, "...basically prototyping
environments."  Indeed, there are applications, such as VLSI CAD and
hypertext, that are not well-supported by conventional databases.
When implemented using an object-oriented style, these applications
use many objects with rather complex and dynamic interconnections.
Conventional data models, i.e., hierarchical, network, and relational,
don't handle the complex, dynamic interconnected objects very well.
At least that's my opinion.
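
As a toy illustration of the kind of interconnection meant here (the
classes below are hypothetical Python, not any particular OODB's
interface):

# Toy object graph of the kind a VLSI CAD tool might build
# (hypothetical classes; not any particular OODB's interface).

class Net:
    def __init__(self, name):
        self.name, self.pins = name, []      # dynamic, many-to-many links

class Cell:
    def __init__(self, name):
        self.name, self.pins = name, {}
    def connect(self, pin, net):
        self.pins[pin] = net
        net.pins.append((self, pin))

clk = Net("clk")
for i in range(3):
    Cell("ff%d" % i).connect("CK", clk)

# Traversal follows in-memory references directly; a relational mapping
# would need a join table and a query per hop.
print([cell.name for cell, pin in clk.pins])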

>Has anyone been working on making a production object oriented environment?
  Yes, we at Texas Instruments are working on just such an effort.  In
addition there are at least three companies now offering for sale
object-oriented database systems.

   Hardcopy    and       Electronic Addresses:
Bob Peterson           Compuserve: 76703,532
P.O. Box 1686          Usenet: peterson@csc.ti.com
Plano, Tx USA 75074    (214) 995-6080

(Skip the rest of this message if you aren't interested in two pages
of bibliographic references.)


                   OBJECT-ORIENTED DATABASE SYSTEMS BIBLIOGRAPHY


         [BCG*87]J.  Bannerjee,   H.T.  Chou,  J.F.  Garza,  W.  Kim,   D.
                 Woelk,  N. Ballou,  and H.J. Kim.  Data Model Issues  For
                 Object-Oriented Applications.  ACM Transactions on Office
                 Information Systems, January 1987.

         [BD81]  A.   J.   Baroody   and  D.   J.   DeWitt.   An   Object-
                 Oriented  Approach  to  Database  System  Implementation.
                 ACM  Transactions  on  Database  Systems,   6(4):576-601,
                 December 1981.

         [bFL85] Edited  by  F.  Lochovsky.   IEEE  Database  Engineering.
                 December 1985.  A quarterly bulletin of the IEEE Computer
                 Society  Technical  Committee  on  Database  Engineering,
                 Special Issue on Object-Oriented Systems.

         [But86] M. H. Butler.  An Approach to Persistent LISP Objects. In
                 Proc.  COMPCON,  pages 324-329, IEEE, San Francisco,  CA,
                 March 1986.

         [CAC*84]W.  Cockshott,  M.  Atkinson,  K.  Chisholm,  P.  Bailey,
                 and  R. Morrison.  Persistent  Object Management  System.
                 Software Practice and Experience, 14:49-71, 1984.

         [Mis84] N.   Mishkin.  Managing   Permanent  Objects.   Technical
                 Report YALEU/DCS/RR-338,  Department of Computer Science,
                 Yale University, New Haven, CT, November 1984.

         [ML87]  T.  Merrow and J. Laursen. A Pragmatic System  for Shared
                 Persistent  Objects. In N. Meyrowitz, editor,  OOPSLA '87
                 Conference  Proceedings,  pages 103-110,  ACM,  ACM,  New
                 York, NY, Oct 4-8 1987.

         [Nie85] O.  M.  Nierstrasz. Hybrid:    A Unified  Object-Oriented
                 System.  IEEE Database Engineering, 8(4):49-57,  December
                 1985.

         [OBS86] P.  O'Brien,  B.  Bullis, and  C.  Schaffert.  Persistent
                 and   Shared  Objects  in  Trellis/Owl.   In  Proceedings
                 of  the  1986 International  Workshop on  Object-Oriented
                 Database  Systems,  pages 113-123,  ACM,  Pacific  Grove,
                 CA, September 1986.






         [OOD86] Proceedings  of  the  International   Workshop  on Object
                 Oriented Database Systems, Pacific Grove,  CA,  September
                 1986.  ACM.

         [OOP86] ACM.   Conference  Proceedings  for  the  Object-Oriented
                 Programming  Systems,   Languages  and  Applications  '86
                 Conference (OOPSLA '86), Portland, OR, Sept 29-Oct 2 1986
                 Panel Discussion.

         [Pet87] R.  W.  Peterson.  Object-Oriented Database  Design.   AI
                 Expert, 2(3):27-31, March 1987.

         [SR86]  M.  Stonebraker and L. Rowe.  The Design of POSTGRES.  In
                 Proceedings  of SIGMOD,  pages 340-355,  Washington D.C.,
                 December 1986.

         [SZ86]  A.  Skarra  and S.  Zdonik.  The Management  of  Changing
                 Types   in  an   Object-Oriented   Database.  In   Norman
                 Meyrowitz,  editor,  OOPSLA  '86 Conference  Proceedings,
                 pages 483-495, ACM, ACM, Portland, OR, September 1986.

         [SZ87]  K.  Smith and S.B. Zdonik.  Intermedia:  A Case  Study of
                 the  Differences Between  Relational and  Object-Oriented
                 Database  Systems. In  N. Meyrowitz,  editor, OOPSLA  '87
                 Conference  Proceedings,  pages 452-465,  ACM,  ACM,  New
                 York, NY, Oct 4-8 1987.

         [SZR86] A.  S.  Skarra,  S.  Zdonik,  and  S.  Reiss.  An  Object
                 Server  for  an  Object  Oriented  Database  System.   In
                 International   Workshop  on  Object  Oriented   Database
                 Systems,  pages  196-205,  Pacific  Grove, CA,  September
                 1986.

         [Tho86] C.   Thompson.  Object-oriented   databases.  Texas   In-
                 struments Engineering Journal, 3(1):169-175, Jan. 1986.


         [TMT86] C.W.  Thompson,  S.  Martin,  and  S.  Thatte.  Real-Time
                 Object-Oriented  Manufacturing  Databases. In  AAAI  1986
                 Workshop on AI in Manufacturing, Aug 1986.

         [Wie86] G.  Wiederhold.  Views,   Objects,  and  Databases.  IEEE
                 Computer, ():37-44, December 1986.

   Hardcopy    and       Electronic Addresses:        Office:
Bob Peterson           Compuserve: 76703,532          NB 2nd Floor CSC Aisle C3
P.O. Box 1686          Usenet: peterson@csc.ti.com
Plano, Tx USA 75074    (214) 995-6080 (work) or (214) 596-3720 (ans. machine)

------------------------------

End of AIList Digest
********************
17-Nov-87 23:53:36-PST,15309;000000000000
Mail-From: LAWS created at 17-Nov-87 23:36:01
Date: Tue 17 Nov 1987 23:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #270 - Games, Learning, Pattern Recognition, Law
To: AIList@SRI.COM


AIList Digest           Wednesday, 18 Nov 1987    Volume 5 : Issue 270

Today's Topics:
  Queries - AI systems in Design & KAT Acronym,
  Games - Mancala/Kalah,
  Learning - Genetic Algorithms,
  Pattern Recognition - Measures of "Englishness",
  Law - Who Owns the Output of an AI?

----------------------------------------------------------------------

Date: 16 Nov 87 15:43:59 GMT
From: ece-csc!ncrcae!ncr-sd!ncrlnk!rd1632!king@mcnc.org  (James King)
Subject: Survey of AI systems in Design


I am in need of information about one topic and a subtopic.

I am compiling a list of AI systems used in the design phase of:
   - products
   - materials
   - costs
   - scheduling
   - etc.
My focus is on the first two, but any and all are welcome.  I am
looking for AI systems in CAD and in Pre-CAD design.  Typical
intelligent CAD systems I am interested in assist in:
   - Maintaining integrity between drawings - products with multiple
     drawings are structured to recognize a change in one drawing
     and pass it to other associated drawings.
   - Management systems in CAD for information, integrity, design
     experience representation, etc.
   - Encapsulation of designer experience into KB's
        - Uses of OOP, frames, etc.
        - Application of situational reasoning
        - Hardware implementations
        - etc.


The second topic deals with developing knowledge bases of designer
experience, techniques, rules in the design phase.  I am interested in the
representational techniques, elicitation techniques, etc. that have been used
to encapsulate the design experience associated with:
        - A part
        - A specific domain
        - An entire system
        - A manufacturing line
        - Etc.

I would appreciate any information on these two areas and associated
topics of Design automation and AI.

Thank you in advance

James A. King     j.a.king@dayton.ncr.com

------------------------------

Date: Tue, 17 Nov 87 08:43 N
From: MFMISTAL%HMARL5.BITNET@wiscvm.wisc.edu
Subject: Request for info in acronym KAT


We are planning to submit a grant proposal for the development of
a knowledge acquisition tool. To us it looks obvious to use "KAT"
as the acronym. However, maybe someone else uses KAT already.
If anyone has information on one or more systems named KAT,
please let me know.

Thanks in advance.

Jan L. Talmon
Dept. Medical Informatics and Statistics
University of Limburg
The Netherlands
EMAIL: MFMISTAL@HMARL5.bitnet

------------------------------

Date: 16 Nov 87 17:58:42 GMT
From: mit-caf!jtkung@media-lab.media.mit.edu  (Joseph Kung)
Subject: AI gaming : mancala


Anybody out there have any interesting gaming strategies for the
African game, mancala? I need some for an AI game that a friend of
mine is working on. Thanks.

- Joe

--

Joseph Kung
Arpa Internet : jtkung@caf.mit.edu

------------------------------

Date: 17 Nov 87 05:22:20 GMT
From: srt@locus.ucla.edu
Subject: Re: AI gaming : mancala

In article <542@mit-caf.UUCP> jtkung@mit-caf.UUCP (Joseph Kung) writes:
>Anybody out there have any interesting gaming strategies for the
>African game, mancala? I need some for an AI game that a friend of
>mine is working on. Thanks.

If 'mancala' is any variant of Kalah, you might want to look at *The
Art of Prolog* by Sterling and Shapiro, which includes a Prolog implementation
of Kalah.

    Scott R. Turner
    UCLA Computer Science     "Love, sex, work, death, and laughs"
    Domain: srt@cs.ucla.edu
    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt

------------------------------

Date: 16 Nov 87 18:59:26 GMT
From: tsai%pollux.usc.edu@oberon.usc.edu (Yu-Chen Tsai)
Reply-to: tsai%pollux.usc.edu@oberon.usc.edu (Yu-Chen Tsai)
Subject: Re: bpsim code


In article <8711151153.aa02040@Dewey.UDEL.EDU> Brady@UDEL.EDU writes:
>I am confused about the little red riding hood article in BYTE.
>The article seems to suggest that the nodes in the middle layer
> .....
and KIL's comment follows:
>  This strategy permits nodes to be
>  deleted (via zeroed weights), but not created.  For creation of nodes
>  you may have to investigate genetic learning algorithms.  -- KIL]
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
I am interested in these genetic learning algorithms used in a neural network
implementation.  Can somebody in Netland give me some references?  Please
respond by e-mail to me.  Thanks in advance!

Y. C. Tsai :-)
tsai@pollux.usc.edu for Internet, {sdcrdc,cit-cav}!uscvax!tsai for UUCP
EE-Systems,
University of Southern California, Ca. 90089-0781

------------------------------

Date: 17 Nov 87 06:16:54 GMT
From: deneb.ucdavis.edu!g523116166ea@ucdavis.ucdavis.edu 
      (0040;0000004431;0;327;142;)
Subject: Re: references for adaptive systems


Another obligatory reference is John Holland et al., INDUCTION, new this
year or last.  The first three chapters are about Holland's genetic algorithms,
which are successful algorithms for adding new rules to a formal system, based
on experience.  Not so high profile as neural nets, but more general and more
enduring, I'll wager.  Holland has been at this since the early 60's; he's
at U. Michigan.  The remainder of the book is a fascinating study of how
people generally use 'rules', in contrast to how machines use them.  This
latter material is clearly about induction 'au naturel', and nicely summarized
in a paper in the 10/30 issue of Science by some of the same authors, sans
Holland.
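
For readers who have not seen one, a genetic algorithm in its most
stripped-down form looks roughly like the Python toy below (an
illustrative sketch, not Holland's classifier-system formulation; the
bit-string "rules", target, and fitness function are invented):

# Toy genetic algorithm (illustrative only; not Holland's classifier system).
# Evolves bit strings toward a target pattern as a stand-in for "rules".
import random
random.seed(1)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]
POP, GENS, MUT = 20, 40, 0.05

def fitness(s):                       # number of bits matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def crossover(a, b):                  # single-point recombination
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(s):
    return [bit ^ 1 if random.random() < MUT else bit for bit in s]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # selection: keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]
best = max(pop, key=fitness)
print(best, fitness(best))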

Holland's PhD students do odd theses: adaptive control of a refinery; pallet-
loading scheduling; other pragmatic stuff.  Why?

Ron Goldthwaite
UCalif, Davis, Psychology & Animal Behavior

------------------------------

Date: 15 Nov 87 19:27:10 GMT
From: cunyvm!byuvax!fordjm@psuvm.bitnet
Subject: Measures of "Englishness"?


  Recently someone on the net commented on a program or method of rating
the "Englishness" of words according to the frequency of occurrence of
various letters in sequence, etc.

  I am currently involved in a project in which this approach might prove
useful, but I have lost the original posting.  Could the author please
contact me with more information about his or her project?

Thanks in advance,
John M. Ford               fordjm@byuvax.bitnet
131 Starcrest Drive
Orem, UT 84058

------------------------------

Date: 17 Nov 87 17:48:04 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!kfl@cs.rochester.edu  (Kai-Fu
      Lee)
Subject: Re: Measures of "Englishness"?

In article <32fordjm@byuvax.bitnet>, fordjm@byuvax.bitnet writes:
>
>   Recently someone on the net commented on a program or method of rating
> the "Englishness" of words according to the frequency of occurance of
> various letters in sequence, etc.
>

I don't know anything about the said post.  But you might be interested
in the following article:
        Cave and Neuwirth, Hidden Markov Models for English, Proceedings
        of the Symposium on Application of Hidden Markov Models to Text
        and Speech, Princeton, NJ, 1980.

Here's the editor's summary of the paper:

        L.P. Neuwirth discusses the application of hidden Markov analysis to
        English newspaper text (26 letters plus word space, without
        punctuation).  This work showed that the technique is capable
        of automatically discovering linguistically important categorizations
        (e.g., vowels and consonants).  Moreover, a calculation of the
        entropy of these models shows that some of them are stronger than
        the ordinary digraphic model, yet employ only half as many parameters.
        But one of the most interesting points, from a philosophical point
        of view, is the completely automatic nature of the process of
        obtaining the model: only the size of the state space, and a
        long example of English text, are given.  No a priori structure of the
        state transition matrix, or of the output probabilities is assumed.

Since hidden Markov models can be used for generation and recognition,
it is possible to train a model for English, and "score" any previously
unseen word with a probability that it was generated by the model for
English.
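
To make the last point concrete, here is a small Python sketch of the
scoring step (the two-state model and all of its probabilities are invented
for illustration; in practice they would be estimated from a large corpus,
e.g. by Baum-Welch training, as in the work cited above):

# Sketch: scoring a letter string with a tiny, hand-made hidden Markov
# model.  The parameters below are invented purely to show the
# forward-algorithm scoring step; they are not trained values.

STATES = ["V", "C"]                      # e.g. vowel-like / consonant-like
START  = {"V": 0.4, "C": 0.6}
TRANS  = {"V": {"V": 0.2, "C": 0.8},
          "C": {"V": 0.7, "C": 0.3}}

def emit(state, ch):                     # toy emission probabilities
    vowels = "aeiou"
    if state == "V":
        return 0.17 if ch in vowels else 0.005
    return 0.002 if ch in vowels else 0.045

def score(word):
    """P(word | model) via the forward algorithm."""
    alpha = {s: START[s] * emit(s, word[0]) for s in STATES}
    for ch in word[1:]:
        alpha = {s: emit(s, ch) * sum(alpha[p] * TRANS[p][s] for p in STATES)
                 for s in STATES}
    return sum(alpha.values())

for w in ["nation", "xzqjkt"]:
    print(w, score(w))                   # the English-like word scores higher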

> Thanks in advance,
> John M. Ford               fordjm@byuvax.bitnet
> 131 Starcrest Drive
> Orem, UT 84058
>

Kai-Fu Lee
Computer Science Department
Carnegie-Mellon University

------------------------------

Date: 12 Nov 87 11:12:03 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: Who owns the output of an AI?

In article <4631@spool.wisc.edu>, honavar@speedy.WISC.EDU
(A Buggy AI Program) writes:
> In article <1778@svax.cs.cornell.edu> houpt@svax.cs.cornell.edu
(Charles (Chuck) Houpt) writes:
> >   The law says that the output of an AI is owned by the user running the
> >AI, NOT the programmer who designed it.
> > ....
> > ... To me the British law seems unfair.....

It's just like the law governing real intelligence. Your
teachers created (or at least created a lot of value added in)
your intelligence, but a stroke of your pen will assign any
patents you create to your employer.  Though your teachers may
know more about your work than your employer, they have no claim
on the intellectual property you create after you leave their
campus.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 13 Nov 87 13:15:14 GMT
From: nosc!humu!uhccux!lee@sdcsvax.ucsd.edu  (Greg Lee)
Subject: Re: Who owns the output of an AI?

M. Brilliant writes:
>...
>patents you create to your employer.  Though your teachers may
>know more about your work than your employer, they have no claim

        I assume that in this analogy, the programmer
        is the "teacher", the AI program is "you" and
        the user of the program is the "employer".

------------------------------

Date: 14 Nov 87 12:27:59 GMT
From: speedy!honavar@speedy.wisc.edu  (A Buggy AI Program)
Subject: Re: Who owns the output of an AI? (actually wonders of rn)

In article <1412@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:

>In article <4631@spool.wisc.edu>, honavar@speedy.WISC.EDU (A Buggy AI Program)
^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
writes:
>> In article <1778@svax.cs.cornell.edu> houpt@svax.cs.cornell.edu
(Charles (Chuck) Houpt) writes:
>> >   The law says that the output of an AI is owned by the user running the
>> >AI, NOT the programmer who designed it.
>> > ....
>> > ... To me the British law seems unfair.....
>
>It's just like the law governing real intelligence.
> ......
>
>M. B. Brilliant                                        Marty
>AT&T-BL HO 3D-520      (201)-949-1858
>Holmdel, NJ 07733      ihnp4!houdi!marty1

It's probably about time some AI was put into the news software so that it can
make sure that the pieces of articles quoted are really from the authors
to whom the quotes are attributed.

--VGH

------------------------------

Date: 14 Nov 87 17:29:00 GMT
From: kadie@b.cs.uiuc.edu
Subject: Re: Who owns the output of an AI?


If your AI program (or any program) is really great there
are a number of ways to make more money per user from it.

One way that was already mentioned is to license it.  I remember
that some of the first compilers for microcomputers said that
you had to pay them money for any programs you sold that
were compiled with their product.

Another method is to charge for each run of your program. You do
this by setting up your own computer and having people dial in to
it. I know that this system is used by some companies that
have (non AI) programs that solve financial optimization problems.

The trouble with both these methods is that the
users don't like them as well as owning the program,
so you will not have as many customers.


Carl Kadie
Inductive Learning Group
University of Illinois at Urbana-Champaign
UUCP: {ihnp4,pur-ee,convex}!uiucdcs!kadie
CSNET: kadie@UIUC.CSNET
ARPA: kadie@M.CS.UIUC.EDU (kadie@UIUC.ARPA)

------------------------------

Date: 16 Nov 87 15:07:14 GMT
From: yale!kthomas@NYU.EDU  (Kevin Thomas)
Subject: Re: Who owns the output of an AI?

In article <1778@svax.cs.cornell.edu> houpt@svax.cs.cornell.edu (Charles
(Chuck) Houpt) writes:
>   Is this fair? Should copywrites go to the user or the programmer?
> If my AI program discovered
>a new high temperature super-conductor, shouldn't I get some profit?

The copyrights and patents should all go to the user, absent any contractual
agreements to the contrary.  This is the same debate that went on about
10-15 years ago with compilers.  Updated to the mid-80's, if I write a
program in Turbo C that Peugeot sells, should Borland be entitled to royalties?
The answer is "no, unless they say so in the sale contract, and the buyer
clearly agrees to the language in that contract".

Actually, in the case of derived products, it's worse:  If Peugeot uses a
Turbo C program to design a car, should Borland get a cut of the profits that
result from the sale of the car, in the absence of any language in the
sale contract?  I would again say "no".  Borland is free to put language
into the contract that does or does not reserve whatever rights it wants or
does not want.

/kmt

------------------------------

Date: 18 Nov 87 02:39:12 GMT
From: allegra!jac@ucbvax.Berkeley.EDU  (Jonathan Chandross)
Subject: My parents own my output.

If I write a program that generates machine code from a high-level language,
do I not own the output?  Of course I own it.  I also own the output from
a theorem prover, a planner, and similar systems, no matter how elaborate.

One of the assumptions being made in this discussion is that an AI can be
treated as a person.  Let us consider, for the moment, that it is merely
a clever piece of programming.  Then I most *certainly* do own its output
(assuming I wrote the AI) by the reason given above.  (Software piracy is a
whole other ball of wax.)

The alternative is to view the AI as a sentient entity with rights, that
is, a person.  Then we can view the AI as a company employee who developed
said work on a company machine and on company time.  Therefore the employer
owns the output, just as my employer owns my output done on company time.

The real question should be: did the AI knowingly enter into a contract with
the employer?

I wonder if the ACLU would take the case.


Jonathan A. Chandross
AT&T Bell Laboratories
Murray Hill, New Jersey
{moss, rutgers}!allegra!jac

------------------------------

End of AIList Digest
********************
22-Nov-87 23:39:24-PST,20375;000000000000
Mail-From: LAWS created at 22-Nov-87 23:20:27
Date: Sun 22 Nov 1987 23:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #271 - Genetic Learning, Statistics, Benchmarking, Msc.
To: AIList@SRI.COM


AIList Digest            Monday, 23 Nov 1987      Volume 5 : Issue 271

Today's Topics:
  Queries - Constraint Satisfaction & Systems Developed using AI Tools/Shells,
  Learning - Genetic Learning Systems & Adaptive Systems,
  Expert Systems - Statistical Expert Systems & Benchmarking,
  Games - Mancala Reference,
  Applications - Speech Understanding,
  Comments - Success of AI & Who Owns the Output of an AI?

----------------------------------------------------------------------

Date: Fri 20 Nov 87 17:27:35-CST
From: Charles Petrie <AI.PETRIE@MCC.COM>
Reply-to: Petrie@MCC.com
Subject: Constraint Satisfaction Query


Does someone know a pointer to software and algorithms that relax
constraints and reason about which to relax first?  In particular,
does anyone know of a linear programming system which does not
satisfy all constraints and which allows a partial ordering on the
satisfaction of the constraints?

------------------------------

Date: 19 Nov 87 12:00:00 GMT+109:13
From: santino@esdvax.arpa
Reply-to: <santino@esdvax.arpa>
Subject: INFO REQUESTED ON SYSTEMS DEVELOPED USING AI TOOLS/SHELLS

                   I N T E R O F F I C E   M E M O R A N D U M

                                        Date:      19-Nov-1987 12:00
                                        From:      Fred Santino
                                        Username:  SANTINO
                                        Dept:      ESD/SCPM
                                        Tel No:    x5316

TO:  _MAILER!                             ( _DDN[AILIST@SRI.COM] )


Subject: INFO REQUESTED ON SYSTEMS DEVELOPED USING AI TOOLS/SHELLS

1. We're interested in knowing of examples of "real world" expert
systems developed using commercially available expert system
tools/shells, particularly those which have applicability to our
present "CGADS" development, and any other information useful prior to
our selecting a tool. Some preliminary background on our "CGADS"
project is provided:

2. The Computer Generated Acquisition Document System (CGADS) is the USAF
Electronic Systems Division's (ESD) first-generation expert system, which
assists DOD program managers and engineers in creation of acquisition
documents such as "Statements of Work" which become part of Government
"Request For Proposals" (RFP's) for major DOD systems projects.

CGADS, presently running on a VAX 8600, is used operationally by the USAF
Electronic Systems Division, as well as by a large number of other DOD
acquisition agencies nationwide.  CGADS is also used at the Air Force
Institute of Technology to teach systems acquisition management.

CGADS, used equally by experienced and inexperienced engineers,
presents a series of yes/no questions on topics such as type of equipment,
logistics, safety, production, phase of development, and degree of
commercial off-the-shelf components.  Based on the engineer's
choices, CGADS generates the proper "boiler-plate" text and MIL-STD
references to form a draft Statement of Work.
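
The mechanism described is essentially table-driven text selection.  A
crude Python sketch follows (the questions and boiler-plate fragments
below are invented for illustration and are not CGADS internals):

# Illustrative answer-driven boiler-plate selection (invented questions
# and text fragments; not the actual CGADS rules or wording).

BOILERPLATE = {
    ("logistics", True): "The contractor shall prepare an Integrated "
                         "Logistics Support Plan per the applicable MIL-STD.",
    ("safety", True): "System safety tasks shall comply with MIL-STD-882.",
    ("off_the_shelf", False): "The contractor shall document all newly "
                              "developed (non-commercial) components.",
}

def draft_statement_of_work(answers):
    """Assemble draft SOW paragraphs from yes/no answers."""
    return [text for (topic, value), text in BOILERPLATE.items()
            if answers.get(topic) == value]

answers = {"logistics": True, "safety": True, "off_the_shelf": False}
for paragraph in draft_statement_of_work(answers):
    print("-", paragraph)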

Since the system text and rules are updated periodically by experts
who represent several dozen technical disciplines, the resulting
document meets most requirements, and needs only minimum review.  The
system also allows newly assigned engineers, having only minimum
training, to create draft acquisition documents.

Since CGADS was first developed in 1981 exclusively in Fortran 77, and
without using a database, it has become unnecessarily expensive to
keep the text updated. Also, its structure lacks the flexibility for
planned capabilities, such as producing the greatly varying system
specifications for major DOD acquisition programs.

3. We plan to use an ORACLE database to improve the text storage, and to
select a commercial expert system tool/shell to minimize development
of an inference engine, and maintenance utility. Some examples of
AI tools we may evaluate:

Knowledge Engineering Environment (KEE),  Intellicorp, Menlo Park, CA
Knowledge Engineering System (KES),  Software A&E, Arlington, VA
The Intelligent Machine Model (TIMM), Gen Research, Santa Barbara, CA
OPS5, Carnegie Mellon Univ, Pittsburgh, PA
Expert, Rutgers Univ, New Brunswick, NJ
S1 or M1, Teknowledge, Inc., Palo Alto, CA
Automated Reasoning Tool (ART), Inference Corp, Los Angeles, CA

4. We'd be interested in knowing the type of application, the amount of
programming that was required to "tailor" the commercial shell/tool
for the application, and the amount of maintenance required.

In addition to providing information on actual systems developed
using commercial tools, we'd appreciate hearing any lessons learned,
or recommendations both positive and negative that anyone is willing
to share, even "horror stories" about developments that never made it,
or products to avoid (if any).

5. Please answer on AILIST, or directly to SANTINO@ESDVAX.ARPA,
or call Autovon 478-5316, or Commercial 617-377-5316.

Thanks,

Fred Santino
Project Engineer
USAF Electronic Systems Division (ESD/SCP)
Hanscom AFB, MA 01731

------------------------------

Date: Wed, 18 Nov 87 08:42:55 est
From: John Grefenstette <gref@nrl-aic.ARPA>
Subject: Re: references for genetic learning systems

The following books give a good overview of genetic learning
systems:

Adaptation in Natural and Artificial Systems,
J. H. Holland, Univ. Michigan Press: Ann Arbor, 1975.

Induction: Processes of Inference, Learning, and Discovery,
J. H. Holland, K. J. Holyoak, R. E. Nisbett and P. A. Thagard,
MIT Press: Cambridge, 1986.

Genetic Algorithms and Simulated Annealing,
L. Davis (ed.), Pitman: London, 1987.

Genetic Algorithms and Their Applications:
Proceedings of the 2nd Intl. Conf. Genetic Algorithms,
J. J. Grefenstette (ed.), Lawrence Erlbaum Assoc: Hillsdale, 1987.

There is also a bulletin board devoted to genetic algorithms
and related topics.  To join, send a request to:
        GA-List-Request@NRL-AIC.ARPA

-- JJG

------------------------------

Date: Wed, 18 Nov 87 08:54:55 est
From: Lashon Booker <booker@nrl-aic.ARPA>
Subject: Re: references for adaptive systems

Ron Goldthwaite of UCalif, Davis asks

> Holland's PhD students do odd theses: adaptive control of a refinery;
> pallett-loading scheduling; other pragmatic stuff.  Why?

In fact, a large number of Holland's PhD students have done theses that
are not "pragmatic" at all in the way you indicate.  Here are a few examples
that come to mind:

Rosenberg, R. S. (1967) "Simulation of genetic populations with biochemical
properties", studies the evolution of populations of single-celled organisms.

Reynolds, R. G. (1979) "An adaptive computer model of the evolution of
agriculture for hunter-gatherers in the valley of Oaxaca, Mexico",
a study that explains a body of archaeological findings.

Booker, L. B. (1982) "Intelligent behavior an an adaptation to the task
environment", a computational model of cognition and learning
in simple creatures.

Perry, Z. A. (1984) "Experimental study of speciation in ecological
niche theory using genetic algorithms"

Grosso, P. B. (1985) "Computer simulation of genetic adaptation: Parallel
subcomponent interaction in a multilocus model", studies diploid
representations and explicit migration among subpopulations.


There are many other articles and tech reports of a similar nature having to do
with genetic algorithms and classifier systems. The "pragmatic stuff" seems
to be the work that is most interesting to the AI community.

Lashon Booker
booker@nrl-aic.arpa

------------------------------

Date: 19 Nov 87 22:58:22 GMT
From: eric@aragorn.cm.deakin.OZ (Eric Y.H. Tsui)
Reply-to: eric@aragorn.UUCP (Eric Y.H. Tsui)
Subject: Re: Statistical Exp. Sys. Query

In article <563694273.0.LPRESS@VENERA.ISI.EDU> LPRESS@VENERA.ISI.EDU
(Laurence I. Press) writes:
>Can anyone give me pointers to programs and/or papers on statistical
>applications of expert systems?
>
>Larry
>-------

See Artificial Intelligence and Statistics, edited by William A. Gale,
Addison-Wesley, Reading, 1986.

---------------------------------------------------------------------------
Eric Tsui               >> CSNET:eric@aragorn.oz                         <<
Division of Comp./Maths.>> UUCP: seismo!munnari!aragorn.oz!eric          <<
Deakin University       >>       decvax!mulga!aragorn.oz!eric            <<
Victoria 3217           >> ARPA: munnari!aragorn.oz!eric@seismo.arpa     <<
Australia               >>       decvax!mulga!aragorn.oz!eric@Berkeley   <<

------------------------------

Date: 19 Nov 87 23:03:27 GMT
From: eric@aragorn.cm.deakin.OZ (Eric Y.H. Tsui)
Reply-to: eric@aragorn.UUCP (Eric Y.H. Tsui)
Subject: Re: Exp. Sys. Benchmarking Query

In article <563694555.0.LPRESS@VENERA.ISI.EDU> LPRESS@VENERA.ISI.EDU
(Laurence I. Press) writes:
>Can anyone supply pointers to papers on benchmarking and performance
>evaluation for expert system shells?
>
>I have written a short program that generates stylized rule bases of
>a specified length and have used it to generate comparative test cases
>for PC Plus and M1.  I'd be happy to give anyone a copy and would like
>to learn of other efforts to compare expert system shells.
>
>Larry
>-------

On evaluation of Expert System tools, see J.F. Gilmore, K. Pulaski
and C. Howard, A Comprehensive evaluation of expert system tools,
Applications of AI III, J.F. Gilmore, Editor, Proc. SPIE 635, pp. 2-16.

(The above group has published a few papers on the evaluation of ES
and the above paper is only a recent one of many from them.)
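
For anyone wanting to run the same kind of comparison, here is a minimal
Python sketch of a stylized rule-base generator (a guess at the general
idea, not the program Larry mentions): it emits an n-rule inference chain,
and timing how long a shell takes to derive the final fact as n grows
gives a crude measure of inference speed.

# Sketch of a stylized rule-base generator for shell benchmarking
# (a guess at the general idea; not the program mentioned above).

def generate_chain_rules(n):
    """Emit n rules forming one inference chain: fact0 -> fact1 -> ... -> factn."""
    return ["RULE r%d: IF fact%d THEN fact%d" % (i, i, i + 1)
            for i in range(n)]

for rule in generate_chain_rules(5):
    print(rule)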

---------------------------------------------------------------------------
Eric Tsui               >> CSNET:eric@aragorn.oz                         <<
Division of Comp./Maths.>> UUCP: seismo!munnari!aragorn.oz!eric          <<
Deakin University       >>       decvax!mulga!aragorn.oz!eric            <<
Victoria 3217           >> ARPA: munnari!aragorn.oz!eric@seismo.arpa     <<
Australia               >>       decvax!mulga!aragorn.oz!eric@Berkeley   <<

------------------------------

Date: 18 Nov 87 14:27:22 GMT
From: uvaarpa!virginia!uvacs!dsr@umd5.umd.edu  (Dana S. Richards)
Subject: Re: AI gaming : mancala

>In article <542@mit-caf.UUCP> jtkung@mit-caf.UUCP (Joseph Kung) writes:
>>Anybody out there have any interesting gaming strategies for the
>>African game, mancala? I need some for an AI game that a friend of
>>mine is working on. Thanks.

There is a book "Mancala Games" by Laurence Russ, Reference Publ. Inc., 1984.

I have not read it, but it was reviewed in Math. Intelligencer 9 (1987) 68.

------------------------------

Date: 18 Nov 87 10:25:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: Practical effects of AI (speech


I would like to echo the sentiment in Eric Green's comment.

Let  us NOT try to define  AI in terms  of techniques.  It  is defined  by its
domain of inquiry, and that clearly includes speech recognition.  I do not for
a  moment believe   that   continuous speaker-independent speech  recognition,
if/when it is achieved,  will be considered primarily  a  work of physics.  No
matter how it is achieved, that is just not a viable statement.

                                                            - Mark Goldfain

------------------------------

Date: 16 Nov 87 17:43:50 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!kfl@cs.rochester.edu  (Kai-Fu
      Lee)
Subject: Re: Practical effects of AI (speech)

In article <244@usl-pc.UUCP>, jpdres10@usl-pc.UUCP (Green Eric Lee) writes:
> But seriously, considering that sciences such as physics and
> mathematics have been ongoing for centuries, can we REALLY say that AI
> has "traditional techniques"? . . .  "it is far from clear that it
> would be an advancement for AI" presupposes that one defines AI as
> "that science which uses certain traditional methods", which, I
> submit, is false.
>

By "traditional techniques", I was referring to the older popular
techniques in AI, such as expert systems, predicate calculus, semantic
networks, etc.  Also, I was trying to exclude neural networks,
which may be promising for speech recognition.  I have heard of
"traditionalist vs. connectionist AI", and that is why I used the
term "traditional techniques".

Kai-Fu Lee
Computer Science Dept.
Carnegie-Mellon University

P.S. - I did not say that AI is a science.

------------------------------

Date: 15 Nov 87 13:56:12 GMT
From: eitan%WISDOM.BITNET@wiscvm.wisc.edu (Eitan Shterenbaum)
Reply-to: eitan%H@wiscvm.arpa (Eitan Shterenbaum)
Subject: Re: Success of AI


In article <> honavar@speedy.wisc.edu (A Buggy AI Program) writes:
>
>In article <4357@wisdom.BITNET> eitan%H@wiscvm.arpa (Eitan Shterenbaum) writes:
>>
>>Anyway, no one can emulate a system which has no specifications.
>>if the neuro-biologists would present them then you'd have something to start
>>with.
>
>       I use the term "computer" in a sense somewhat broader than a
>       Von-Neuman machine. We can, in principle, build machines that
                                    ^^^^^^^^^^^^
                                    ^^^^^^^^^^^^

>       incorporate distributed representations, processing and control.
>       It is not clear what you mean by a "distributed environment lacking
>       boolean algebra."
>       The use of fine-grained distributed representations naturally results
>       in behavior indicative of processes using fuzzy or probabilistic logic.
>       The goal is, not necessarily to emulate the brain in all its detail:
>       We can study birds to understand the principles of aerodynamics that
>       explain the phenomenon of flying and then go on to build an aeroplane
>       that is very different from a bird but still obeys the same laws of
>       physics. As for specifications, they can be provided in different
>       forms and at different levels of detail; Part of the exercise is
>       to discover such specifications - either by studying actual existing
>       systems or by analyzing the functions needed at an abstract level to
>       determine the basic building blocks and how they are to be put
>       together.
>

a) You can't understand the laws under which a system works without
   understanding the structure of the system (I believe that our
   intelligence is the result of our brain's structure).

b) The aerodynamics example just proves my point.  Only after understanding
   *WHY* birds are built in a certain form were the researchers
   able to understand the principles.  The fact that Leonardo da Vinci
   knew more about aerodynamics than the pioneers of flight is attributable
   to the *research* he did on birds.  It seems to me that many AI
   scientists disregard 2 facts: a- They have no definition of AI.
                                 b- They disregard the fact that the best
                                    way to have more knowledge about a certain
                                    phenomenon is to observe and research it.

It seems to me that
        1) You have no definition for Intelligence.
        2) You want to have the rules of Intelligence.
        3) Thus you build systems in order to simulate Intelligence.
        4) Since you don't know what you're looking for, and since you have no
           basic rules to base the simulated intelligence on, you invent your
           own local definitions and rules for Intelligence.
        5) Then you try to match your results with your expectations of what
           the results should be.
Sometimes it works, sometimes it doesn't.
This method reminds me of "random sort", i.e., the computer has N numbers, it
randomly prints them out one by one and then checks whether
they are ordered; if not, it does the above again.  I hope you've
noticed that the probability of being correct is quite slim
(actually 1/N! ...).

>>
>>And last - Computers aren't meta-capable machines they have constraints,
>>           not every problem has an answer and not every answermakes sense,
>>           NP problems are the best example.
>>
>       Are you implying that humans are "meta-capable" - whatever that means?
>

I'm trying to imply that human beings aren't Turing equivalent ...
(not even when compared to a non-deterministic Turing machine).

Correct me if I'm wrong, but I do feel that the neuro-biology chaps are
on the right track and that the computer scientists should combine efforts
with them instead of messing around with AI.

(I'm not saying that AI isn't useful; it is.  It's just that it has had
 very little success with Intelligence and a grand success with
 Artificial artifacts ...)


                        Eitan Shterenbaum


Disclaimer - My ideas are mine and only mine !

@@@@@@@@@@@@@@@@@@@@@@@@@@@@

------------------------------

Date: 18 Nov 87 09:55:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: Who owns the output of an AI?


Just to fan  the  flames,  let me throw  in  1 totally  outlandish,  2  mildly
outlandish answers and a 4th that  is not so bad (but  I'm not sure  whether I
buy the analogy.)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1) Newsflash.   Microsoft   today filed  lawsuit   against 250,000 authors  of
   various books  and papers  for violation of  copyright.   Said a  Microsoft
   spokesman,  "Yes, these people really wrote  the manuscripts, but then they
   gave them, in very raw form, to our program which then  took it upon itself
   to  edit, layout, and  publish them.  Our  program actually   owns the copy
   rights to these  items."  When  asked how   his company managed  to file  a
   quarter of a  million legal documents  in one day, the  spokesman  said "No
   trouble, we just used  Think Technology's  'legal councillor'  program."  A
   second later, the Microsoft representative ran  from the room muttering "Oh
   no ..."
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2) If the computer program is not intelligent enough to reply to the "raw"
   data with :
                  "I don't know, nothing looks interesting here ..."
   then to phone its author and say
              "Hey Jaime, this formula makes a wonderful superconductor.
               Do you want it, or should I tell Tom? "
   then I doubt it has enough intelligence to "deserve" the credit itself.
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
3) In today's environment the  user can shut off  the machine and go  get  the
   patent  himself, claiming to  have made the  discovery without any computer
   assistance.  (Those who believe "an unenforceable  law should not be a law"
   may see some point in this.)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
4) The  user  can claim  "I   did not  ask the  machine    to find me   a good
   superconductor.  All  I  asked it was  whether this particular math problem
   had a solution.  The analogy leads us to the conclusion that we should give
   credit to the author for  a math theorem (and he  probably already has that
   credit in the literature), credit to the  program for applying  the theorem
   to solve a particular math problem (usually quite technically difficult but
   quite  uninteresting to humans) and   to the  user   for having applied the
   solution of a math problem to discovery of a new superconductor.
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
              Mark Goldfain         arpa:     goldfain@osiris.cso.uiuc.edu
                                    US Mail:  Mark Goldfain
          (just a student in the) -->         Department of Computer Science
                                              1304 West Springfield Avenue
                                              Urbana, Illinois  61801

------------------------------

End of AIList Digest
********************
22-Nov-87 23:47:52-PST,17970;000000000000
Mail-From: LAWS created at 22-Nov-87 23:26:59
Date: Sun 22 Nov 1987 23:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #272 - Expert System Survey
To: AIList@SRI.COM


AIList Digest            Monday, 23 Nov 1987      Volume 5 : Issue 272

Today's Topics:
  Expert Systems - Survey Results

----------------------------------------------------------------------

Date: 19 Nov 87 14:42:38 GMT
From: portal!cup.portal.com!Barry_A_Stevens@uunet.uu.net
Subject: expert system survey results


                      EXPERT SYSTEM SHELL SURVEY
               Copyright 1987 Applied AI Systems, Inc.

We recently sent a questionnaire to 1700 users of PC-based expert
system development shells.  One hundred seventy-nine firms responded.
The  survey  was intended as a snapshot  of the expert  system  market
during the preparation stages of a business plan. It was understood in
advance by all parties concerned that:

    IT WAS NEVER MEANT TO BE AN ACADEMICALLY CORRECT SURVEY.
        IT WAS ONLY TO PROVIDE SOME GENERAL INFORMATION.

If you can accept the above limits, what follows may be of interest to
you.  If  an  imperfect survey is an abomination,  you  can  skip  the
remainder of this file.




The purpose of the survey was to educate and inform, on a gross level,
about   the  expert  system  shell  marketplace.  Information   sought
included:

     profiles  of  the  shell  users  and  their   organizations;

     general strengths and weaknesses of expert system shells;

     the decision process followed by users when buying a shell;

     the  reasons for getting into expert systems;

     expert system software in use;

     job titles of people using expert system tools; and

     applications that have been implemented using shells.

One hundred seventy-nine questionnaires were completed and
returned.  The survey contained some questions whose answers are
confidential.  We thought that some of the results, summarized for
brevity and sanitized to maintain confidentiality, might be of
interest.


WHAT  CHARACTERISTICS  MAKE  A GOOD - AND  BAD  -  EXPERT  SYSTEM
DEVELOPMENT SHELL?

Many of the questionnaire respondents indicated that they had made
studies of multiple expert system development shells.  Thirty-two of
those respondents offered the following general comments about factors
that they viewed as strengths and weaknesses of those tools.  A
strength was defined as a reason they would buy a tool as a result of
a product evaluation, while a weakness would cause them to reject a
tool.

Strengths

General  strengths  are  described  below,  with  the  number  of
respondents mentioning each factor shown.

     The   tool   should  be  useful   in   many   microcomputer,
     minicomputer,  and mainframe environments,  under  different
     operating systems. (6)

     The  capability to access other programs and data should  be
     provided. (5)

     The   tool  should  be  capable  of  frames  and/or   object
     representation as well as representation by rules. (4)

     Math functions and numeric and text variables should be
     usable in rules. (2)

     Rules should be easy to structure. (2)

     The product should be easy to learn. (2)

     The product should be easy to use. (2)

     Both  editor  and user interface should be  in  English,  or
     natural language;

     Graphics and a good user interface should be available;

     Procedural components should be available for sequencing and
     interaction, including the ability to clear previous answers
     and ask questions again;

     The   tool  should  handle  probabilities,  including  fuzzy
     logic;

     The tool should learn by examples;

     Sophisticated WHY and HOW capability should be available  to
     explain reasoning;

     Good support and training should be available;

     Good documentation should be available.

Weaknesses

General weaknesses of expert system development shells that  were
identified by users are described below.

     Cost of a product should be appropriate for its capabilities
     and performance.  (The survey indicated that users are
     price sensitive, and price is a significant factor in
     purchasing a product.)  (12)

     Special hardware requirements are a problem. The tool should
     run   on   a  standard  PC  or  other   commonly   available
     environment. (5)

     Knowledge  base  size  limitations are  a  problem.  Several
     available  products  limit the number of rules that  can  be
     defined. (3)

     Slow  execution speed is a problem. Execution  speed  should
     be  such that a large number of rules can be executed  in  a
     reasonable time.  (2)

     Lack of flexibility in knowledge representation and use is a
     problem. (2)

THE PROCESS OF MAKING THE DECISION TO BUY

We  asked  about the process of decision making that  went  into   the
purchase   of  a PC-based expert system shell at a price of  $400.  We
wanted to know who made the decision,  and how it was made.


Who (by organizational unit) made the decision to buy?

INTERNAL
ORGANIZATION                  NUMBER  % RESPONSE

Research and Development         58     31.9%
MIS/Data Processing              36     19.8%
Independent individuals          31     17.0%
Operating (line) organizations   26     14.3%
Management/staff function        19     10.4%
Advanced Planning                10      5.5%
Other                             2      1.1%

              TOTALS            182      100%


Who (by reporting relationship) made the decision to buy?

WHO MADE
DECISION                              NUMBER  % RESPONSE

Me                                       151     88.8%
My Boss                                   17     10.0%
Different department                       1      0.6%
My subordinate                             1      0.6%
My boss's boss                             0      0.0%

            TOTALS                       170      100%


How was the decision to buy made?

DECISION PROCESS                       NUMBER  % RESPONSE

No formal decision process                99     40.9%
Product review and comparison             64     26.4%
Internal needs assessment                 26     10.7%
Cost justification                        40     16.5%
Formal review and planning process         6      2.5%
Other                                      7      2.9%

            TOTALS                       242      100%


What were the reasons for getting into expert systems?

REASON                                NUMBER  % RESPONSE

To capture knowledge                      76     16.0%
It's a training tool                      71     14.9%
Part of overall corporate strategy        60     12.6%
To improve quality of work                47      9.9%
To improve quality of product             41      8.6%
To learn expert systems                   41      8.6%
It's a competitive weapon                 34      7.2%
It's in fashion                           27      5.7%
To achieve a cost savings                 23      4.8%
It's a marketing tool                     22      4.6%
To provide an MIS/DP capability           19      4.0%
Other                                     14      2.9%

             TOTALS                      475      100%

What types of expert system software are installed?

We  found that 76 distinct products were in use from nearly  as   many
vendors.  It is interesting to note the  categories  into which  these
products fell.

Lisp Based Tools:

     total units installed:              80
     number of products installed:       15

Prolog Based Tools:

     total units installed:             120
     number of products installed:       12

Development tools/shells:

     total units installed:             223
     number of products installed:       49

JOB TITLES OF PEOPLE USING SHELLS

We   were   interested in the job titles of people   who   had   built
expert systems using shells. You may be interested as well.

    Advanced Programmer/Analyst
    Advanced R&D Project Engineer
    Advanced Technology Group
    Advisory Engineer
    AI Branch Chief
    Analyst
    Assistant Professor
    Assistant Vice President
    Associate Professor
    Audit Manager
    CEO
    Chairman of the Board
    Chair, Department of Communications
    Chemist
    Computer Scientist
    Computer Specialist
    Consultant
    Cost Analyst
    Design specialist
    Director Clinical Laboratories
    Director, AI in Business
    Director, Clinical Research
    Director, Computation Center
    Education Specialist
    Engineer
    Executive Consultant
    Financial Services Officer
    Graduate Assistant
    Group Leader
    Head, Intelligent Systems Lab.
    Instructor
    Learning Center Manager
    Lecturer
    Manager, R&D
    Manager, Analytical Chemistry
    Manager, Information Resource Management
    Manager, Operations
    Manager, Product Evaluation Office Automation
    Manager, Proposals
    Manager, Remote Sensing Lab
    Managing Vice President
    Materiel Operations Manager
    Owner
    Physician
    President
    Principal
    Professor of AI & ES
    Professor of Chemistry
    Professor of Real Estate
    Program Manager
    Programmer
    Project Coordinator
    Project Engineer
    Regional Manager
    Research Agronomist
    Research Assistant
    Research Forester
    Research Manager
    Research Scientist
    Seismic Processing Analyst
    Senior Analyst
    Senior Engineer
    Senior Research Chemist
    Senior Research Fellow
    Senior System Analyst
    Senior Tax Manager
    Senior VP & Senior Trust Officer
    Special Projects Director
    Staff Machinery Engineer
    Staff Research Engineer
    Staff Scientist
    Systems Officer
    Technical Advisor
    Technical Consultant
    Technical Journalist
    Technical Staff Member
    Technology Assessor
    Underwriting Director
    Unit Head Conventional Safety Standards
    Vice President
    Wildlife Ecologist

While there are a few titles that indicate specialization in AI,
most fall outside that category.  Expert systems are being built
by technical professionals, even those with limited computer
experience.

APPLICATIONS FOR SHELLS

We   asked   about the applications that were either   built   or   in
development using shells. In many cases, the reply indicated  that the
nature of the application was confidential and no application name was
given.   The   names   below  are those that  were   listed   in  user
responses, with little editing and no attempt to explain.

     Account business assessment
     Advertising copy development
     Advice on single family home purchase
     Advice on stock and commodities trading
     Advise nursing students on the care of patients
     Advising on choices for new technology
     Advisor on choosing soy bean varieties
     Advisor on design of new magnetic components
     Aid for financial futures traders
     Aid for isolating failing chips
     Aid in salmon stocking rates, species selection
     Alarm management system
     Analysis of simulation results in bank product planning
     Analysis of soil site characteristics
     Analysis of X-rays
     Application sizing based on similar applications
     Assist in compiling tax planning ideas
     Assist in diagnosis of computer console messages
     Assist in identification of rare antibodies
     Assist new users of DOS
     Assist service desk in troubleshooting application problems
     Assistance in search for part numbers
     Augment expertise of resource manager
     Broker syndication planner
     Call screening to interview users with application problems
     Career development
     Causal model of account marketing
     Chemical process diagnosing and troubleshooting
     Choosing a living or testamentary trust
     Choosing an executor for trusts
     Classification of data from satellites
     Classifications of software programs
     Closing and issuance assistance
     Commercial loan credit analysis
     Commercial loan documentation check list
     Computer modeling support
     Computer system configurator
     Configurator for selecting, sizing, and writing parts list
     Configuring programmable controller system
     Conservation equipment tillage selector
     Correct selection of cost codes
     Cost/benefit assistant
     Create standard loan documents based on characteristics
     Credit control system
     Crop management and irrigation simulation
     Customer assistance in selecting types of investments
     Customer service advisor for problem resolution
     Customer water quality analysis
     Data communications troubleshooting
     Decision support for correct testing by auditing
     Detailed analysis of hardware and software problems
     Detailed design for asphalt concrete pavement
     Determine correct mixture for propellant ingredients
     Determining best shipping documentation and routes
     Diagnose telecommunications difficulties
     Diagnosis of sports related injuries
     Diagnostic advisor for pulp bleaching
     DP production support system
     Epidemiology expert system
     Equipment fault diagnosis
     Equipment troubleshooting
     Estimate employee's potential retirement salary
     Estimating construction costs
     Evaluation of commodities purchases
     Evaluation of multi-family housing projects
     Evaluation of stock purchases
     Fault diagnosis for electronic hardware
     Federal contract management
     Fertilizer recommendations
     Fertilizer, climate, and soil interaction
     Finding phases present in super alloys
     Forecast snowfall accumulation
     Forecasting severe convecting weather
     Futures, stocks, and options trading
     Gas turbine troubleshooting
     Geographic information system analysis aid
     Grading of graft vs. host disease
     Hardware and software selection
     Hardware failure analysis
     Hardware sizing assistant
     Hazardous chemical ranking
     Implementation planning assistant
     Industrial training
     Interpretation of statistical quality control data
     Invention patentability expert
     Irrigation and pest control management
     Lime recommendation system
     Line diagnosis and fault detection
     Local area network selection
     Machine advisor for grinding, milling, turning
     Manufacturing resource planning aid
     Market segmentation and positioning
     Marketing advisor for process control systems
     Material selection by engineers
     Materials selection for specialized component parts
     Medical decision making
     Medical diagnosis
     MIS decision support system
     Mortgage credit analysis
     Network operations systems diagnosis
     Papaya management system
     Pavement performance diagnosis
     Pavement rehabilitation
     PC configuration
     PC Hardware and software configurator
     Perform hematological diagnosis
     Personal tax advisor
     Pest management and soil interaction analysis
     Portfolio construction
     Power plant boiler tube failure identification
     Problem diagnosis for local area networks
     Problem diagnosis for printers on a SNA network
     Product development support system
     Product performance troubleshooting for salesmen
     Product selection system
     Production scheduling
     Psychiatric interview
     Quick proposal estimator
     Radar mode design workstations
     Rating for substandard life insurance
     Real estate appraisal
     Real estate site selection
     Real time process control
     Real time troubleshooting for wastewater process control
     Recommend documentation to computer users
     Relay diagnosis
     Risk assessment of error or fraud in financial statements
     Salary planning
     Sales order analysis
     Salmon diagnosis and treatment
     Select pension types
     Select, recommend library reference materials
     Selection of non-materials in aerospace applications
     Selection of solvents for chemical compounds
     Service network assistant
     Software development risk analysis
     Software system diagnosis model
     Software vendor risk analysis
     Soil acidity analysis
     Soil characterization and utilization
     Solid waste disposal management assistant
     Space shuttle payload on-orbit analysis
     Strategic alternatives for a fragmented industry
     Strategic marketing and planning aid
     Structural damage assessment
     Student financial aid eligibility
     Submarine approach officer training
     System to identify feasible rehabilitation strategies
     System to prepare process estimates
     Tactical battle management
     Teaching mineral and rock identification
     Telephone system configurator
     Toxicity of laboratory chemicals
     Training in gas turbines
     Training new financial planners
     Troubleshooting airplane starting systems
     Underwriting assistance
     Underwriting guidance for line underwriters
     Weed identification
     When to perform a physical audit

Applied AI Systems, Inc. and Barry Stevens may be reached at PO Box
2747, Del Mar, CA, 619-755-7231.

------------------------------

End of AIList Digest
********************
24-Nov-87 23:34:15-PST,18771;000000000000
Mail-From: LAWS created at 24-Nov-87 23:31:35
Date: Tue 24 Nov 1987 23:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #273 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest           Wednesday, 25 Nov 1987    Volume 5 : Issue 273

Today's Topics:
  Seminars - Notes Toward a New Philosophy of Logic (SUNY) &
    The Soar Project (ISI) &
    Theories of Comparative Analysis (BBN) &
    Performance in Practical Problem Solving (Bell Labs),
  Conference - Workshop on Meta-Programming in Logic (England) &
    CMU Meeting on Metadeduction &
    Prolog Benchmarking Workshop &
    AI in Economics and Management

----------------------------------------------------------------------

Date: Fri, 20 Nov 87 12:09:44 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Seminar - Notes Toward a New Philosophy of Logic (SUNY)


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                        BUFFALO LOGIC COLLOQUIUM

                             COLIN McLARTY

                        Department of Philosophy
                    Case Western Reserve University

                 NOTES TOWARD A NEW PHILOSOPHY OF LOGIC

Today, logic is generally conceived as, more or  less,  describing  pure
laws of thought.  But categorial logic has given an extensive, rigorous,
formalized version of the claim that logic is simply the most abstracted
aspect  of concrete knowledge.  In particular, different subject matters
may have different logics.

Categorial logic also urges a kind of structuralism:  A  subject  matter
(represented by a category) is seen as being determined by the relations
to be considered among objects rather than by any specification  of  the
individual constitutions of the objects.

These points are illustrated by two examples.  Differential geometry  is
one  abstract  representation of the world, one subject matter, with its
own non-classical logic.  Set theory is another,  later,  subject,  with
classical logic.  I discuss the way set theory was derived from geometry
in the 19th Century.

Other philosophic applications of topos theory are based on the idea  of
a  topos  as  a  world in which truth varies over a range of viewpoints,
which might be the situations of situation semantics or times  in  tense
logic.   All  these  considerations  together argue that there is no one
logic or one fundamental structure to the world.

                      Wednesday, December 2, 1987
                               4:00 P.M.
                    Diefendorf 8, Main Street Campus

For further  information, contact John Corcoran, (716) 636-2438.

------------------------------

Date: Mon, 23 Nov 87 09:56:21 PST
From: Ana C. Dominguez <anad@vaxa.isi.edu>
Subject: Seminar - The Soar Project (ISI)

Date: Wednesday, November 25th
Time: 1:00pm - 3:00pm
Place: Information Sciences Institute/USC
       11th Floor Large Conference Room
       4676 Admiralty Way
       Marina Del Rey, CA  90292-6695




                               The Soar Project
                        Current Status and Future Plans
                                Paul Rosenbloom


The Soar project is an interdisciplinary, multi-site, research group that is
attempting to build a system capable of general intelligent behavior.  Our
long-term goal is to build a system that is capable of working on the full
range of tasks -- from highly routine to extremely difficult open-ended
problems -- and of employing the full range of problem solving, knowledge
representation, learning, and perceptual-motor capabilities required for
these tasks.  In this talk I will describe the current status of the
project, including the version of the system currently implemented (Soar
4.4) and the results that have been generated to date, and describe our
research plans for the next couple of years.

------------------------------

Date: Tue 24 Nov 87 18:22:10-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Theories of Comparative Analysis (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

                    THEORIES OF COMPARATIVE ANALYSIS

                             Daniel S. Weld
                    MIT Artificial Intelligence Lab
                        (WELD@REAGAN.AI.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Tuesday December 1


This talk analyzes two approaches to a central subproblem of automated
design, diagnosis, and intelligent tutoring systems: comparative
analysis. Comparative analysis may be considered an analog of
qualitative simulation. Where qualitative simulation takes a structural
model of a system and qualitatively describes its behavior over time,
comparative analysis is the problem of predicting how that behavior will
change if the underlying structure is perturbed and also explaining why
it will change.

For example, given Hooke's law as the model of a horizontal,
frictionless spring/block system, qualitative simulation might generate
a description of oscillation. Comparative analysis, on the other hand,
is the task of answering questions like: ``What would happen to the
period of oscillation if you increase the mass of the block?'' I have
implemented, tested, and proven theoretical results about two different
techniques for solving comparative analysis problems, differential
qualitative (DQ) analysis and exaggeration.

DQ analysis would answer the question above as follows: ``Since force is
inversely proportional to position, the force on the block will remain
the same when the mass is increased. But if the block is heavier, then
it won't accelerate as fast. And if it doesn't accelerate as fast, then
it will always be going slower and so will take longer to complete a
full period (assuming it travels the same distance).''

Exaggeration can also solve this problem, but it generates a completely
different answer: ``If the mass were infinite, then the block would
hardly move at all.  So the period would be infinite. Thus if the mass
was increased a bit, the period would increase as well.''

Both of these techniques have advantages and limitations. DQ analysis is
proven sound, but is incomplete. It can't answer every comparative
analysis problem, but all of its answers are correct.  Because
exaggeration assumes monotonicity, it is unsound; some answers could be
incorrect. Furthermore, exaggeration's use of nonstandard analysis makes
it technically involved.  However, exaggeration can solve several
problems that are too complex for DQ analysis. The trick behind its
power appears to have application to all of qualitative reasoning.
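
A minimal numerical sketch of the example above (not part of the talk;
the closed-form period T = 2*pi*sqrt(m/k) for a frictionless Hooke's-law
oscillator is standard physics, and the values below are illustrative
only) confirms the answer that both techniques give: increasing the mass
of the block lengthens the period.

    import math

    def period(mass, spring_constant):
        # Closed-form period of a frictionless spring/block system
        # obeying Hooke's law (F = -k * x):  T = 2*pi*sqrt(m/k).
        return 2.0 * math.pi * math.sqrt(mass / spring_constant)

    k = 10.0                     # spring constant, arbitrary units
    for m in (1.0, 2.0, 4.0):    # progressively heavier blocks
        print("mass", m, "-> period", round(period(m, k), 3))
    # The printed periods grow with the mass, matching the qualitative
    # prediction (though the numbers alone do not explain *why*, which
    # is what DQ analysis and exaggeration each try to do).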

------------------------------

Date: Mon, 23 Nov  23:10:39 1987
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Performance in Practical Problem Solving (Bell
         Labs)

 Date: November 20 (Friday)
 Time: 1:30 p.m. - 2:30 p.m.
 Place: AT&T Bell Labs Murray Hill 3D-473

 Speaker: Leo Hartman
          Department of Computer Science
          University of Rochester
          Rochester, New York

  Performance in practical problem solving

  Abstract

  The quantity of resources that an agent expends in solving problems in a
  given domain is determined by the representations and search control
  strategies that it employs.  The value of individual representations or
  strategies to the agent is determined by their contribution to the
  resource expenditure.  We argue here that in order to choose the component
  representations and strategies appropriate for a particular problem domain
  it is necessary to measure their contribution to the resource expenditure
  on the actual problems the agent faces.  This is as true for a system
  designer making such choices as it is for an autonomous mechanical agent.
  We present one way to measure this contribution and give an example in
  which the measure is used to improve problem solving performance.


  Sponsor: Henry Kautz

------------------------------

Date: Tue, 17 Nov 87 10:29:26 GMT
From: mcvax!ux63.bath.ac.uk!cc_is@uunet.uu.net
Subject: Conference - Workshop on Meta-Programming in Logic (England)


            WORKSHOP ON META-PROGRAMMING IN LOGIC PROGRAMMING
        A 3-day workshop on Meta-Programming in Logic Programming will
be held at the University of Bristol on June 22-24, 1988. The workshop
will be both small and informal. In particular, attendance will be
strictly limited to the first 60 people who register.
        The workshop will cover (but not be limited to) the following
topics:
        * Foundations of meta-programming
        * Design and implementation of language facilities for
          meta-programming
        * Knowledge representation for meta-programming
        * Meta-level reasoning and control
        * Applications of meta-programming
        Submitted papers will be refereed by a program committee
consisting of Harvey Abramson, Pat Hill, John Lloyd, Mike Rogers
and John Shepherdson. Authors should submit full papers of at most
12 A4 pages. Accepted papers will appear without revision in the
proceedings. The timetable for submission of papers is as follows:
        Closing date                          April 15, 1988
        Acceptance/rejection notification     May 15, 1988
        Papers should be submitted to:
        John Lloyd
        Department of Computer Science
        University of Bristol
        University Walk
        Bristol BS8 1TR
        U.K.
        (JANET: jwl@uk.ac.bristol.compsci)
        Registration forms for the workshop will be available in
January 1988.  Bristol is about 120 miles due west of London.
Heathrow Airport is about 1 3/4 hours away by a direct bus service.
There is also a local airport at Bristol. Accommodation for
registrants will be booked in nearby university halls of residence.
        All e-mail enquiries should be directed to (JANET:)
         meta88@uk.ac.bristol
--
Mr I. W. J.  Sparry     Phone:  +44 225 826826 x 5983
University of Bath      JANET:  cc_is@UK.AC.BATH.UX63
Bath BA2 7AY            UUCP:   seismo!mcvax!ukc!bath63!cc_is (bath63.UUCP)
England                 ARPA:   cc_is%ux63.bath.ac.uk@ucl-cs.arpa

------------------------------

Date: 16 Nov 1987 10:17:49-EST (Monday)
From: DANIEL.LEIVANT%THEORY.CS.CMU.EDU@forsythe.stanford.edu
Reply-to: TheoryNet List
Subject: Conference - CMU meeting on metadeduction

                     [Forwarded from TheoryNet.]

Below is the schedule of a meeting that has taken place at
Carnegie Mellon University, on

  METALANGUAGE AND TOOLS FOR MECHANIZING FORMAL DEDUCTIVE THEORIES

Please address requests for abstracts of talks
to jfm@k.gp.cs.cmu.edu (ARPAnet).

Friday, November 13

 9:00 Using a Higher-Order Logic Programming Language to Implement
    Program Transformations
      Dale Miller, University of Pennsylvania

 9:45 Building Proof Systems in an Extended Logic Programming Language
      Amy Felty, University of Pennsylvania

10:45 The Categorical Abstract Machine, State of the Art
      Pierre-Louis Curien, Ecole Normale Superieure, Paris VII

 1:15 A Very Brief Look at NuPRL
      Joseph Bates, Carnegie Mellon University

 1:45 Reasoning about Programs that Construct Proofs
      Robert Constable, Cornell University

 2:30 Theorem Proving via Partial Reflection
      Douglas Howe, Cornell University

 3:15 MetaPrl: A Framework for Knowledge Based Media
      Joseph Bates, Carnegie Mellon University

 4:00 Discussion: The Role of Formal Reasoning in Software Development

 5:00 Demos until 6:30
      NuPRL in Wean Hall 4114 by Doug Howe
      Lambda Prolog in WeH 4623 by Dale Miller, Gopalan Nadathur, and Amy Felty

Saturday, November 14

 9:00 A Framework for Defining Logics
      Robert Harper, Edinburgh University

 9:45 The Logician's Workbench in the Ergo Support System
      Frank Pfenning, Carnegie Mellon University

10:45 A Tactical Approach to Algorithm Design
      Douglas Smith, Kestrel Institute

11:30 Reusing Data Structure Designs
      Allen Goldberg, Kestrel Institute

 1:15 Paddle: Popart's Development Language
      David Wile, University of Southern California

 2:00 Mechanizing Construction and Modification of Specifications
      Martin Feather, University of Southern California

 3:00 The TPS Theorem Proving System
      Peter Andrews, Carnegie Mellon University

 3:45 ONTIC: Knowledge Representation for Mathematics
      David McAllester, Cornell University

 4:30 Demos until 6:00
      Popart and Paddle in the KBSA, Wean Hall 4114,
         by David Wile and Martin Feather
      The LF Proof Editor, Wean Hall 4623, by Robert Harper

------------------------------

Date: Fri, 20 Nov 87 15:20:20 cst
From: stevens@anl-mcs.ARPA (Rick L. Stevens)
Subject: Conference - Prolog Benchmarking Workshop


                          ANNOUNCING
                         =============
                 A PROLOG BENCHMARKING WORKSHOP


During the last SLP there was some concern that the benchmark programs
being quoted in the literature did not reflect real Prolog programming
practices.  Now is your chance to do something about it.  A workshop
on benchmarking Prolog programs will be held at The Aerospace
Corporation in Los Angeles. The main function of this workshop is to
collect and measure a large number of modern production (real
application) Prolog programs.

The workshop will last three days, and will be held sometime during
the first two weeks of February.  The exact date will be selected to
enable the most people to attend.  The workshop will be sponsored by
The Aerospace Corporation and is being held under the auspices of the
Association of Logic Programming.  Since resources for running the
benchmarks will be limited the meeting will be open only to those who
contact the organizers.

The first half of the workshop will be spent discussing the performance
issues we wish to address, porting of code, and instrumenting of
Prolog programs and implementations.  The second half will be spent
running the code and collecting and analyzing the data.

We hope to publish the results either as a widely available Technical
Report or as a special journal article in a journal such as the Journal
of Logic Programming or New Generation Computing.

Attendance at the workshop will be limited to those who either bring
an implementation of Prolog or 1,000 or more lines of "original"
Prolog source.  Programs with more than 1,000 lines will certainly be
accepted.  The thing we wish to guard against is toy programs that
don't reflect the serious use of the language.

Of course, we would like code that has been written recently and that
reflects the best of Prolog style.  But any ``real'' Prolog application
would be acceptable.  (No code with more than 3 cuts per clause. :-)
Hopefully those in attendance will represent a balance between
university and commercial applications.

The code brought should be covered by a GNU-type ``copyleft'', that
is, unlimited distribution of unmodified sources.  The object is to get
unmodified copies of programs and input data sets to as many people as
possible.  The Aerospace Corporation, a non-profit organization, will
distribute the benchmark suite.

We would like to have the environment set up in advance so as much time
as possible can be spent on performance analysis.  To do this
we will set up a mail address where code can be e-mailed in advance.
Participants can also bring a UNIX tar tape.  The computers available at
Aerospace include a Sequent, VAXes, Suns, and various types of
PCs.  We will try to have as many different implementations of Prolog
available as possible.

A limited amount of financial support from the Aerospace Corporation
will be available for University attendees.

Please let us know by December 15, 1987 if you intend to attend.
If you want to attend, please send us your

  name,
  e-mail address,
  country of citizenship,
  smail address,
  whether you will need financial support,
  the date that would be best for you, and
  what you'll bring.

Send responses to:

  prolog-workshop@anl-mcs.arpa

If you can't get ahold of us through e-mail, you can use:

Carl Kesselman                      Rick Stevens
MS M1/102                           Math and Computer Science Division
The Aerospace Corporation           Argonne National Laboratory
P.O. Box 92957                      Argonne IL 60439
Los Angeles, CA 90009-9295          (312) 972-3378
(213) 336-6691

If you have a problem with the distribution agreement, questions or
suggestions, please contact us at the above address.

Hope to see you there.

Rick Stevens                         Carl Kesselman
stevens@anl-mcs.arpa                 carl@aerospace.aero.org
Argonne National Laboratory          The Aerospace Corporation

------------------------------

Date: Fri, 20 Nov 87 16:30:21 SST
From: Joel Loo <ISSLPL%NUSVM.BITNET@wiscvm.wisc.edu>
Subject: Conference - AI in Economics and Management

                        +-----------------+
                        ! CALL FOR PAPERS !
                        +-----------------+

                     2nd International Workshop
                     on Artificial Intelligence
                     in Economics and Management

                         11-13 January,1989
                             Singapore


This workshop will address research and applications of AI in the areas
of finance, banking, insurance, economics, DSS, public and private
services, OA, law, manufacturing planning, personnel and assets admini-
stration.

The techniques to be presented should include knowledge representation,
search and inference, knowledge acquisition, intelligent interfaces,
KB validation, planning procedures and task support systems.


For details contact:

                 Desai Narasimhalu
                 Institute of Systems Science
                 National University of Singapore
                 Kent Ridge, Singapore 0511
                 Singapore

or,

             BITNET:     ISSAD@NUSVM

------------------------------

End of AIList Digest
********************
24-Nov-87 23:38:07-PST,14733;000000000000
Mail-From: LAWS created at 24-Nov-87 23:35:09
Date: Tue 24 Nov 1987 23:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #274 - Mental Models, Neural Network Conference
To: AIList@SRI.COM


AIList Digest           Wednesday, 25 Nov 1987    Volume 5 : Issue 274

Today's Topics:
  Psychology - Mental Models Summary,
  Conference - Int. Neural Network Society

----------------------------------------------------------------------

Date: 22 Nov 87 10:08:22 GMT
From: cunyvm!byuvax!fordjm@psuvm.bitnet
Subject: Mental Models Summary (long)

The following is a summary of the references
I have received from the net in response to
my request for information on mental models
from a cognitive psychology perspective.  I
appreciate the help and look forward to
commenting on these sources as I read them.

In some cases more than one person suggested the
same source.  In such cases I have only included
comments from the first person to mention each
source.

If anyone would like to comment on these
references, or has additional comments on
research in this area, please contact me.
_______

stever@EDDIE.MIT.EDU (Steve Robbins) suggests
that the literature on Neurolinguistic Programming
might be useful:

>For information on the cognitive psych slant of NLP,
>I'd recommend "NLP I" by Dilts et al., Meta Publications, 1979.
>A book I'm in the middle of is  "Meta-cation:  Prescriptions for
>Some Ailing Educational Processes" by Sid Jacobson, also available
>from Meta Publications (Cupertino, CA).  META-Cation is written in
>a very "casual" style, but it's easy to read and seems to have some
>good material.
>For information about the technology in general, the "standard"
>books are "Frogs into Princes," "Reframing," and "Using Your Brain",
>by Bandler and Grinder.  The main problem with these books is that
>they're all transcripts of training workshops.  As such, the material
>isn't organized particularly well for presentation through writing.


Stephen Smoliar <smoliar@vaxa.isi.edu> suggests the following:

>...Chapters 12 and 13 of Alvin Goldman's EPISTEMOLOGY AND
>COGNITION...
>..."Mental Muddles" by Lance Rips.  It was supposed to be published
>in the book THE REPRESENTATION OF KNOWLEDGE AND BELIEF, edited
>by Myles Brand and Robert Harnish.  I do not know if this book is out
>yet.
(I have not yet been able to locate the second book.)


Robert Virzi <rv01%gte-labs.csnet@RELAY.CS.NET> writes:

>I am interested in mental models of everyday appliances.  Things like
>VCRs and telephones, stuff like that.  In fact, I am about to start a
>series of experiments on peoples mental models of their TV/cable/VCR
>setups.       (This sounds very interesting!--JMF)

He suggests:

>1986 IEEE Conf. on Systems, Man & Cybernetics has a couple of sessions
>on Mental Models.  One paper by Gentner and Schumacher and another by
>Sebrechts & DuMont seem pretty good.
>ACM CHI'83 has one of the better papers I've seen on the topic written
>by Halasz and Moran.  They look at the effect of mental models on
>subjects' use of a Reverse Polish Notation calculator.
>Harvard U. Press has a book out by Johnson-Laird called Mental models.
>I don't have it yet but it looked promising from what I could glean from
>reviews.
(I mentioned the Johnson-Laird book in my original posting.  I have read
  it and find it to be a refreshing alternative to much of the earlier
  logic-based explanations of human reasoning.)


Rich Sutton <rich%gte-labs.csnet@RELAY.CS.NET> supplies:

>R.~Sutton \& A.~Barto, ``An adaptive network that constructs and uses
>an internal model of its environment," {\it Cognition and Brain Theory
>Quarterly}, {\sl 4}, 1981, pp.~217--246.

>R.~Sutton \& B.~Pinette, ``The learning of world models by
>connectionist networks," {\it Proceedings of the Seventh Annual
>Conf.~of the Cognitive Science Society}, 1985, pp.~54--64.


"Brad Erlwein Of. (814) 863-4356" <ET2@PSUVM> suggests:

>a good book that you might find helpful is Gardner (1985) The Mind's
>New Science.
( I have also read this book and find it enjoyable, but it is more of
  an historical overview of the field of cognitive science than a
  research review or integration.  The latter is more my interest
  at present.)


munnari!gitte%humsun.@husc6.BITNET (Gitte Lingarrd) responds:

>Rouse, W.B., and Morris, N.M. (1986). On Looking Into the Black Box:
>Prospects and Limits in the Search for Mental Models, Psychological
>Bulletin, 100, (3), 349-363.
>
>Lindgaard, G. (1987). Who Needs What Information About Computer Systems:
>Some Notes on Mental Models, Metaphors and Expertise, Customer Services
>and Systems Branch Paper No. 126, Telecom Australia Research Laboratories,
>Clayton, Australia.
>
>Copies of the latter may be obtained from me if wanted.


Bob Weissman <decwrl!acornrc!bob@ucbvax.Berkeley.EDU> writes:

>Suggest you pick up a copy of ``The Psychology of Human-Computer Interaction''
>by Card, Moran, and Newell.  Aside from being a wonderful book (probably the
>definitive work in its field), it has an extensive bibliography.
>Published by Lawrence Erlbaum Associates, Inc., Hillsdale, NJ., 1983.
>ISBN 0-89859-243-7


lambert@cod.nosc.mil (David Lambert) responds:

>Personnel and Training Research Programs
>Office of Naval Research (Code 1142 PT)  (Dr. Susan Chipman, (202) 696-4318)
>Arlington, VA  22217-5000
>has been funding work in mental models.  One recent report funded by them,
>which contains references and a distribution list, is:
>
>Jeremy Roschelle and James G. Greeno,  Mental Models in Expert Physics
>Reasoning; University of California, Berkeley, CA  94720;  Report No. GK-2,
>July 1987.


Jane Malin <malin%nasa-jsc.csnet@RELAY.CS.NET> comments:

>Dedre Gentner gave an outstanding invited survey at AAAI-87 on
>mental models and analogy.  Hopefully some written version will be
>available soon.


Thad.Polk@centro.soar.cs.cmu.edu (Thad Polk) responds:

>I'm currently doing research in the area of mental models (of the
>Johnson-Laird variety).  Specifically, I'm trying to revise and implement
>his theory of syllogisms within Soar (Laird, Newell, & Rosenbloom, AI
>Journal Sept. 1987).

He recommends the following references:

>A paper by Johnson-Laird & Bruno Bara that appears in Cognition, 16
>(1984) 1-61.

>Revlin, R. & Mayer, R., Human Reasoning, V.H. Winston & Sons,
>Washington D.C., 1978.

>Falmagne, R. (ed.), Reasoning: Representation and Process, Lawrence
>Erlbaum Associates, Hillsdale N.J., 1975.

>A paper by Robert Inder in "Artificial Intelligence and its Applications"

>by A.G. Cohn and J.R. Thomas, John Wiley & Sons, 1986.


meulen@sunybcs.BITNET (Alice ter Meulen) suggests:

>E. Traugott, A. ter Meulen, C. Ferguson and J. Reilly, (eds.)
>On Conditionals
>Cambridge University Press, Cambridge (Engl.) 1986.
which contains a chapter by Johnson-Laird entitled
'Conditionals and mental models'


GA3182@SIUCVMB (John Dinsmore) comments:

>There seem to be two currents of activity in research in mental models:
>  1. work on the contents of the models, i.e., what knowledge they contain.
>     This includes work in naive physics and is the main thrust of the
>     Gentner and Stevens book.
>  2. work on general mechanisms of knowledge representation and inference.
>     This is  the thrust of Johnson-Laird's work.
>I'm not sure where your interests lie, but I can offer two references con-
>cerning the second current:
>
>   John Dinsmore. 1987. Mental Spaces from a Functional Perspective.
>      Cognitive Science 11: 1-21.
>   Gille Fauconnier. 1985. Mental Spaces. MIT/Bradford.
_________

Once again, thanks to all.  I will communicate more to the net
on this topic as it seems appropriate.

John M. Ford                    fordjm@byuvax.bitnet

(*Not* the "John M. Ford" that writes science fiction.)

------------------------------

Date: Fri, 20 Nov 87 12:28:33 est
From: mike@bucasb.bu.edu (Michael Cohen)
Subject: Conference - Int. Neural Network Society

November 16, 1987

-----CALL FOR PAPERS-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

The International Neural Network Society (INNS) invites all
those interested in the exciting and rapidly expanding field of
neural networks to attend its 1988 Annual Meeting. The meeting
includes plenary lectures, symposia, contributed oral and poster
presentations, tutorials, commercial and publishing exhibits, a
placement service for employers and educational institutions,
government agency presentations, and social events.

---INNS OFFICERS AND GOVERNING BOARD---

Stephen Grossberg, President; Demetri Psaltis, Vice-President;
Harold Szu, Secretary/Treasurer.

Shun-ichi Amari, James Anderson, Gail Carpenter, Walter Freeman, Kunihiko
Fukushima, Lee Giles, Teuvo Kohonen, Christoph von der Malsburg, Carver Mead,
David Rumelhart, Terrence Sejnowski, George Sperling, Bernard Widrow.

---MEETING ORGANIZERS---

General Meeting Chairman: Bernard Widrow
Technical Program Co-Chairmen: Dana Anderson and James Anderson
Organization Chairman: Gail Carpenter
Tutorial Program Co-Chairmen: Walter Freeman and Harold Szu
Conference Coordinator: Maureen Caudill

---SPEAKERS---

Plenary:
Stephen Grossberg
Carver Mead
Terrence Sejnowski
Nobuo Suga
Bernard Widrow

Cognitive and Neural Systems:
James Anderson
Walter Freeman
Christoph von der Malsburg
David Rumelhart
Allen Selverston

Vision and Pattern Recognition:
Gail Carpenter
Max Cynader
John Daugman
Kunihiko Fukushima
Teuvo Kohonen
Ennio Mingolla
Eric Schwartz
George Sperling
Steven Zucker

Combinatorial Optimization and Content Addressable Memory:
Daniel Amit
Stuart Geman
Geoffrey Hinton
Bart Kosko

Applications and Implementations:
Dana Anderson
Michael Buffa
Lee Giles
Robert Hecht-Nielsen
Demetri Psaltis
Thomas Ryan
Bernard Soffer
Harold Szu
Wilfrid Veldkamp

Motor Control and Robotics:
Jacob Barhen
Daniel Bullock
James Houk
Scott Kelso
Lance Optican


---ABSTRACTS---

Submit abstracts for oral and poster presentation on biological and
technological models of:

--Vision and image processing
--Local circuit neurobiology
--Speech and language
--Analysis of network dynamics
--Sensory-motor control and robotics
--Combinatorial optimization
--Pattern recognition
--Electronic implementation (VLSI)
--Associative learning
--Optical implementation
--Self-organization
--Neurocomputers
--Cognitive information processing
--Applications

Abstracts must be typed on the INNS abstract form in camera-ready format.
Request abstracts from: INNS Conference, 16776 Bernardo Center Drive,
Suite 110B, San Diego, CA 92128 USA. INNS members will be directly sent
an abstract form.

----------ABSTRACT DEADLINE: MARCH 31, 1988----------

Acceptance notifications will be mailed in June, 1988. Accepted abstracts
will be published as a supplement to the INNS journal, Neural Networks,
and mailed to meeting registrants and Neural Networks subscribers in
August, 1988.


---PROGRAM COMMITTEE---

Joshua Alspector      Teuvo Kohonen
Shun-ichi Amari       Bart Kosko
Dana Anderson         Daniel Levine
James Anderson        Richard Lyon
Jacob Barhen          Ennio Mingolla
Michael Buffa         Paul Mueller
Daniel Bullock        Lance Optican
Terry Caelli          David Parker
Gail Carpenter        Demetri Psaltis
Michael Cohen         Adam Reeves
Max Cynader           Thomas Ryan
John Daugman          Jay Sage
David van Essen       Eric Schwartz
Federico Faggin       Allen Selverston
Nabil Farhat          George Sperling
Walter Freeman        David Stork
Kunihiko Fukushima    Harold Szu
Lee Giles             David Tank
Stephen Grossberg     Wilfrid Veldkamp
Morris Hirsch         Bernard Widrow
Scott Kelso


---PARTICIPATING SOCIETIES---

American Mathematical Society; Cognitive Science Society; Optical Society
of America; Society for Industrial and Applied Mathematics; Society of
Photo-Optical Instrumentation Engineers; and others pending.


---TUTORIALS---

Tutorials will consist of eight one-hour introductory lectures by distinguished
scientists. The lectures will help prepare the audience for the more advanced
presentations at the meeting. The tutorial topics include:

1. Vision and image processing
2. Pattern recognition, associative learning, and self-organization
3. Cognitive psychology for information processing
4. Local circuit neurobiology
5. Adaptive filters
6. Nonlinear dynamics for brain theory (competition, cooperation, equilibria,
   oscillations, and chaos)
7. Applications and combinatorial optimization
8. Implementations (electronic, VLSI, and optical neurocomputers)

Tutorials will be held on Tuesday, September 6, 1988, from 8AM to 6PM. The
general conference will begin with a reception at 6PM, followed by the
conference opening and a plenary lecture.


---REGISTRATION AND HOTEL---

Fill out attached forms.

Registration fees partially pay for abstract handling, the books of abstracts,
two evening receptions, coffee breaks, mailings, and administrative expenses.


---TRAVEL---

Call UNIGLOBE (800) 521-5144 or (617) 235-7500 to get discounts of up to 65%
off coach fares.


---COMMERCIAL AND GOVERNMENT FUNCTIONS---

Conference programs have been designed for commercial vendors, government
agencies and research laboratories, publishers, and educational institutions.
These include a large exhibit area (the Boston Park Plaza Castle); a placement
service for employment interviews; catered hospitality suites; and special
presentations. A professional exposition service contractor will carry out
exhibit arrangements. Organizations wishing to be put on a mailing list for
participants in these programs should fill out the enclosed form.


---STUDENTS AND VOLUNTEERS---

Students are welcome to join INNS and to participate in its meeting. See
attached forms for reduced registration, tutorial, and membership fees.
Financial support is anticipated for students and meeting volunteers. To
apply, attach a letter of request and a brief description of interests to
the conference registration form.

  [Contact the author if you need the various registration and
  membership forms.  -- KIL]

------------------------------

End of AIList Digest
********************
 1-Dec-87 01:37:51-PST,17306;000000000000
Mail-From: LAWS created at 30-Nov-87 22:38:53
Date: Mon 30 Nov 1987 22:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #275 - Pattern Recognition, VLSI Design, Philosophy, Law
To: AIList@SRI.COM


AIList Digest            Tuesday, 1 Dec 1987      Volume 5 : Issue 275

Today's Topics:
  Queries - STRIPS and its Derivatives & VM/CMS Software &
    ES Tools for the Mac,
  Binding - Cugini Mailer Problem,
  Pattern Recognition - Recognizing Humpback Fins,
  Application - NCR VLSI Design Expert System,
  Philosophy - Research Methodology,
  Law - Software Ownership

----------------------------------------------------------------------

Date: 24 Nov 87 21:45:09 GMT
From: steve@hubcap.clemson.edu ("Steve" Stevenson)
Subject: STRIPS and its derivatives

I am interested in finding out the current status of the
STRIPS model (Fikes and Nilsson) and its successors.  Any
help would be appreciated.  Any compiler/interpreters?

--
Steve (really "D. E.") Stevenson           steve@hubcap.clemson.edu
Department of Computer Science,            (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906

------------------------------

Date: Fri, 27 Nov 87 14:28:10 EST
From: Jim Buchanan <ACAD8005%RYERSON.BITNET@wiscvm.wisc.edu>
Subject: Looking for software

I would appreciate any information or leads on the following software:

1) LISP for an IBM VM/CMS system
   I have copies of XLISP(version 1.4) and MTS lisp and know about IBM's
   LISP/VM but I am looking for the latest XLISP (that will run on VM/CMS)
   or other Public domain or inexpensive Lisps

2) Smalltalk for IBM VM/CMS
   Again Public Domain or cheap would be best.

Thanks again for any information

Jim Buchanan
Supervisor, Academic Computing Services
Ryerson Polytechnical Institute
Toronto, Ontario
Canada

------------------------------

Date: 30 Nov 87 14:16:19 EST
From: Mary.Lou.Maher@CIVE.RI.CMU.EDU
Subject: ES tools for Mac

I have to give a tutorial and workshop on Expert Systems at an engineering
conference and would like to use the Mac since it has relatively little
start up time. I am interested in simple rule based tools and object
oriented tools that run on a Mac. Simplicity  is more important
than sophistication. Can anyone help? Mary Lou Maher maher@cive.ri.cmu.edu

------------------------------

Date: 30 Nov 87 06:58:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: mailer problem


My mailer hasn't been able to receive any mail for the past 2-3 weeks.
If anyone has tried to mail me something, apologies, and please try again.

John Cugini  <Cugini@icst-ecf.arpa>

------------------------------

Date: 23 Nov 87 02:57:54 GMT
From: nosc!humu!uhccux!cs313s19@sdcsvax.ucsd.edu  (Mike Morton)
Subject: pattern recognition software (recognizing humpback fins!)
         wanted

A friend does research work spotting humpbacks by recognizing their
dorsal fins.  The researchers finish each day by comparing the day's
photos with 300-400 photos of known whales to recognize individuals.
They're looking for a way to do this with a computer database.

They could code the data and enter them as numbers: size and shape of
fins, etc.  Then the database just needs to search for close matches.
This could be done with a simple Basic program or spreadsheet macro; any
suggestions for a turnkey system which does this?

Better, but presumably harder to find or implement, would be a graphics
recognition system, scanning images or allowing them to be traced by
hand and entered.  I doubt there's anything like this available off-the-
shelf, but would be interested to hear about it if there is.

Solutions for the Mac are especially of interest, but any micro is OK.
Please reply by email.  Thanks in advance.

 -- Mike Morton // P.O. Box 11378, Honolulu, HI  96878, (808) 456-8455 HST
      INTERNET: cs313s19@uhccux.uhcc.hawaii.edu
      UUCP:     {ihnp4,uunet,dcdwest,ucbvax}!sdcsvax!nosc!uhccux!cs313s19
      BITNET:   cs313s19%uhccux.uhcc.hawaii.edu@rutgers.edu

------------------------------

Date: 25 Nov 87 16:19:38 GMT
From: uh2@psuvm.bitnet  (Lee Sailer)
Subject: Re: pattern recognition software (recognizing humpback
         fins!) wanted

I can think of some pretty good ways to do this, but not with
database software, unless the matching problem is really simple.

The current masters of *sequence matching* are the molecular biologists,
who spend a lot of time matching LONG sequences of RNA, DNA, etc.

One approach

Can the fins be described with a simple sequence of tokens or symbols, like
<big gap> <small notch> <small gap> <big notch> <tip> ?  If so, then you've
got the DWIM (do what I mean) or spelling correction problem.  Given a
sequence of symbols, find the set of legal sequences that are close.
This turns out to be a graph search.
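
A minimal sketch of this first approach (in Python; the token names,
whale IDs, and data are invented for illustration): compute an edit
distance over the token sequences and keep the closest known fins.

    def edit_distance(a, b):
        # Dynamic-programming edit distance over token sequences;
        # insertions, deletions, and substitutions each cost 1.
        d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            d[i][0] = i
        for j in range(len(b) + 1):
            d[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(a)][len(b)]

    # Hypothetical catalog of known fins, each a sequence of tokens.
    catalog = {
        "whale-17": ["big gap", "small notch", "small gap", "big notch", "tip"],
        "whale-42": ["small gap", "big notch", "tip"],
    }
    new_fin = ["big gap", "small notch", "big notch", "tip"]
    best = min(catalog, key=lambda name: edit_distance(catalog[name], new_fin))
    print(best)    # the closest known individual under this crude measure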

Another approach

Are accurate measurements needed to distinguish nearly identical fins?
If so, then a fin must be described something like this:

  gap of 15.2mm
  notch width 5mm depth 3mm
  gap of 45 mm
  notch width 3mm depth 5mm
  tip
  etc etc etc

If you think of a 'gap' as a notch with width 0, and the tip as a notch
of width and depth 0, then each feature is characterized by a triple
of real numbers.  Using the <start>, <stop>, and <tip> as
landmarks, it ought to be possible to think up some way to convert
each fin to a point in N-space, and then to compute the distance
between a new fin and the 300-400 fins already in the database.
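
A sketch of this second approach in the same vein (the fin names and
measurements are made up, and it assumes every fin is described by the
same number of landmark-aligned features): treat each fin as a list of
(gap, width, depth) triples and rank the catalog by Euclidean distance.

    import math

    def fin_distance(f1, f2):
        # Euclidean distance between two fins, each a list of
        # (gap_mm, notch_width_mm, notch_depth_mm) triples aligned by
        # the <start>/<tip>/<stop> landmarks.
        return math.sqrt(sum((a - b) ** 2
                             for t1, t2 in zip(f1, f2)
                             for a, b in zip(t1, t2)))

    # Hypothetical catalog of known fins.
    catalog = {
        "whale-03": [(15.2, 5.0, 3.0), (45.0, 3.0, 5.0), (0.0, 0.0, 0.0)],
        "whale-11": [(20.0, 4.0, 2.0), (40.0, 6.0, 4.0), (0.0, 0.0, 0.0)],
    }
    new_fin = [(15.0, 5.0, 3.5), (44.0, 3.0, 5.0), (0.0, 0.0, 0.0)]
    ranked = sorted(catalog,
                    key=lambda name: fin_distance(catalog[name], new_fin))
    print(ranked)  # nearest known fins first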

------------------------------

Date: 25 Nov 87 23:21:55 GMT
From: portal!cup.portal.com!David_Bat_Masterson@uunet.uu.net
Subject: Re: pattern recognition software (recognizing humpback
         fins!) w

This request sounds vaguely familiar.  I thought I had seen a show about
a few students from a college doing a study of humpback whales.  They
also were having trouble keeping track of which whales were which (maybe
it was killer whales).  The way they went about handling it was to
classify the dorsal fin shape by things like size, shape, bites, extra
spots, barnacles, etc. (their fingerprint).  I forget if they used a
database system to keep track of this or just a file card approach.  If
you use a DB, this information could be entered into a relational database
for scanning purposes (Dbase perhaps).  This would not provide an automatic
mechanism for processing the photographs, but it's a start.  An additional idea
would be to implement an expert system as a front end to this process.  The
expert system could be trained to ask the right questions about a photograph
to get a good classification.  On top of this could be added a laser scanner
(for about $3K) that would bring the photo into the database; there may be
database systems that would allow you to store the image of the whale right in
the database (I know the Amiga databases can).  Think about it, you can build
up from a basic capability, but don't try to do the whole thing at once.
                                David_Bat_Masterson@cup.portal.com

------------------------------

Date: 27 Nov 87 03:48:33 GMT
From: portal!cup.portal.com!Bob_Robert_Brody@uunet.uu.net
Subject: Re: pattern recognition software (recognizing humpback
         fins!) w

There is an organization I belong to, the Moclips Cetological Society,
which is non-profit and centered around whales, whale sightings,
and cataloging.  Maybe they could be of help regarding the use of databases
to maintain the catalogs.  You can call (206) 378-4710.

The Whale Museum
P.O. Box 945
Friday Harbor, Washington 98250

Moclips Cetological Society is a non-profit research and educational
corporation.

Bob Brody
Los Angeles

------------------------------

Date: Sat 28 Nov 87 12:09:26-CST
From: Charles Petrie <AI.PETRIE@MCC.COM>
Reply-to: Petrie@MCC.com
Subject: Re: INFO REQUESTED ON SYSTEMS DEVELOPED USING AI TOOLS/SHELLS

Robin Steele of NCR has built a commercial expert system of some note:

   . It represents and reasons about real circuit designs consisting of
     between 10 and 20K gates.

   . Customers pay $4,000+ to come into NCR's shop and use the system.

Reference: "An Expert System Application in Semicustom VLSI Design",
Robin L. Steele, _Proc. 24th ACM/IEEE Design Automation Conference_,
Miami Beach, 1987.

------------------------------

Date: 23 Nov 87 22:33:55 GMT
From: honavar@speedy.wisc.edu (A Buggy AI Program)
Reply-to: honavar@speedy.wisc.edu (A Buggy AI Program)
Subject: Research methodology in AI (was Re: Success of AI)


In article <4739@wisdom.BITNET> eitan%H@wiscvm.arpa (Eitan Shterenbaum) writes:
>
>a) You can't understand the laws under which a system works without
>   understanding the structure of the system ( I believe that our
>   intelligence is the result of our brain's structure )

        Not entirely true.  We can often gain insights into what structures
        are needed to produce a certain observed behavior simply by observing
        the system's behavior.  This would then enable us to hypothesize
        about the structures that actually produce such behavior.  We would
        then test the hypotheses through experimental validation.  Just as
        one can have several different computers that are functionally
        equivalent, it is reasonable to expect that there are several
        possible architectures (the human brain being one of them) that are
        capable of intelligence.
>
>It seems to me that
>        1) You have no definition for Intelligence.
>        2) You want to have the rules of Intelligence.
>        3) Thus you build systems in order to simulate Intelligence.
>        4) Since you don't know what you're looking for and since you have no
>           basic rules to simulate the intelligence on, you invent your
>           own local definition and rules for Intelligence.
>        5) Then you try to match your results with your expectations of what
>           the results should be.

        This is an oversimplified view of the research methodology in AI
        and cognitive science.
        It is true that we don't have a good definition of intelligence.
        For the purposes of AI, it is sufficient to say that we want to
        build systems that exhibit the kinds of behavior that are believed
        to require intelligence if performed by humans (I forget the author
        who first suggested this definition of AI).  This is an operational
        definition, or at least a basis for an operational definition, of
        artificial intelligence.  Given this, there are several alternative
        approaches one could adopt in building intelligent systems -
        including that of simulating a system that most of us agree is
        capable of intelligence, the human brain (plus the sensory
        mechanisms).  The search for architectures for intelligence is by
        no means an unconstrained, blind search.  The hypotheses can be
        constrained by utilizing data gathered from experimental research
        in psychology, neuroscience, and related areas, as well as by
        theoretical analysis of the complexity of the tasks involved, and
        so on.

>
>Correct me if I'm wrong, but I do feel that the neurobiologist chaps are
>on the right track and that the computer scientists should combine efforts
>with them instead of messing around with AI.
>
        I agree that AI researchers can benefit from the research findings
        of neuroscience.  It is also true that computational theories
        advanced in AI can provide insights to neuroscientists as well.  In
        fact, there is evidence of this interaction in the work of David
        Marr, Shimon Ullman, and others.  Cognitive psychology is another
        field which is at least as relevant as neuroscience to work in AI.

------------------------------

Date: 22 Nov 87 21:01:00 GMT
From: mnetor!utzoo!dciem!nrcaer!cognos!roberts@uunet.uu.net  (Robert
      Stanley)
Subject: Re: My parents own my output.

In article <7880@allegra.UUCP> jac@allegra.UUCP (Jonathan Chandross) writes:

>If I write a program that generates machine code from a high level language
>do I not own the output?  Of course I own it.  I also own the output from
>a theorem prover, a planner, and similar systems, no matter how elaborate.

You do indeed, unless you perform (or fail to perform) some act or acts which,
in the eyes of the law, strip you either of your status as owner or of your
right to compensation for its use.  Giving a copy to a friend without an
explicit (read: a witnessed contract) injunction against passing it on, using
it other than for private purposes, etc., is just as much a reduction of your
legal rights as selling it under a contract of sale/lease.  There is still
some considerable controversy as to the status of software license agreements
under a variety of legal systems, which is why no consensus has been reached
on the subject of how best to protect your software against theft.

Failing to take positive legal steps to protect your rights of ownership of a
piece of software is tantamount to surrendering those rights once you have
made, or allowed to be made, even one copy of the (suite of) programs.  This
may not be fair, but it is what appears to have been established by precedent
in all the major industrialized nations where cases involving software rights
have been tried.  At present, in the US and to a large degree in Canada, the
only really successful legal defences have been for ROM software, notably the
Apple Macintosh, which is why there are as yet *no* Macintosh clones in the
marketplace.  It is rumoured (comment, anyone?) that this is one of the reasons
for IBM's approach to the design of the PS/2, with critical components of the
system architecture in ROM.

For those with a speculative approach to the future, it will be interesting
to see whether history repeats itself.  In the 1970s, IBM was taken to court
by a number of PCMs (Plug-Compatible Manufacturers) and eventually lost a
ruling, being forced to disclose the details of their internal architecture
to a degree sufficient to allow other manufacturers to design compatible
equipment.  At the time IBM was viewed as holding a monopolistic position,
which is not currently the case for any one personal computer manufacturer
nor, as yet, for any specific piece of software.

>The alternative is to view the AI as a sentient entity with rights, that
>is, a person.  Then we can view the AI as a company employee who developed
>said work on a company machine and on company time.  Therefore the employer
>owns the output, just as my employer owns my output done on company time.

Whether your employer owns your output is exactly and only a matter of legal
contract.  Either you have signed a legally binding contract of employment with
your employer or your (and your employer's) rights are protected by clauses in
one or more current labour relations bills.  Precise terms of the latter will,
of course, vary from country to country.  It is possible that some aspects of
an explicit contract of employment may be challengeable in court as being overly
restrictive; there have been several US and Canadian precedents within the last
year.

I, for instance, have a contract of employment into which I insisted that
several waivers be written, simply because the wording of the standard
contract gave my employer the right to everything I did anywhere at any time
(24 hours a day, 365.25 days per year) while I was still their employee.  I
doubt that the original contract would actually have withstood a challenge in
court, but that would have taken money and time; much, much better to avoid
the situation completely.

>The real question should be: Did the AI knowingly enter into a contract with
>the employer?

This will only be an issue if an AI can first be demonstrated to be a legal
individual within the eyes of the court.  Remember, there are plenty of humans
who do not have this status, but for whom some other legal individual is deemed
to have legal responsibility: the legally insane and the under-aged, to name
but two.

>I wonder if the ACLU would take the case.

Not until there is seen to be some benefit to be gained from protecting the
rights of an AI.  Let's face it: more working human beings are likely to
oppose the establishment of such precedents right now than to support them.
How soon do you see this attitude changing?  Especially if white-collar workers
start being displaced by intelligent management systems!

Robert_S
--
R.A. Stanley             Cognos Incorporated     S-mail: P.O. Box 9707
Voice: (613) 738-1440 (Research: there are 2!)           3755 Riverside Drive
  FAX: (613) 738-0002    Compuserve: 76174,3024          Ottawa, Ontario
 uucp: decvax!utzoo!dciem!nrcaer!cognos!roberts          CANADA  K1G 3Z4

------------------------------

End of AIList Digest
********************
 4-Dec-87 00:00:40-PST,26379;000000000000
Mail-From: LAWS created at  3-Dec-87 23:49:10
Date: Thu  3 Dec 1987 23:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #276 - Planning Bibliography
To: AIList@SRI.COM


AIList Digest             Friday, 4 Dec 1987      Volume 5 : Issue 276

Today's Topics:
  Bibliography - Planning

----------------------------------------------------------------------

Date: Wed, 2 Dec 87 12:55:42 PST
From: Richard Shu <rshu@ads.arpa>
Subject: Planning Bibliography


Ken,

A while back, Dickson Lukose posted a request for references on
planning.  I delayed posting this bibliography because it is based
on one compiled by a co-worker who was on vacation.  He has since
given his assent for its distribution.

Disclaimer:  The field of planning is very large.  This bibliography
             is by no means complete.  It is simply a compendium of
             sources encountered by a few people in the Planning
             Division at ADS.

             Additions and corrections are welcome.  Please mail
             them to me and I will aggregate them and repost to
             the net.

Richard Shu

------------------- *** Bibliography follows *** ---------------------------

@InProceedings{Agre87,
key "Agre87",
author "Agre, P.E. and Chapman, D.",
title "Pengi: An Implementation of a Theory of Activity",
booktitle "Proceedings of AAAI-87, Seattle, Wa.",
Organization "AAAI",
month "July",
pages "268-272",
year "1987"}

@Book(albus,
key "albus",
author "ALBUS, J.",
title "Brains, Behavior and Robotics",
publisher "Byte Books",
address "Chichester, England",
pages "chapter 5",
year "1985" )

@InProceedings{alterman85,
key "alterman85",
author "ALTERMAN, R.",
title "Adaptive planning: refitting old plans to new situations",
booktitle "Proceedings 7th Cognitive Science Society",
year "1985"}

@InProceedings{alterman86,
key "alterman86",
author "ALTERMAN, R.",
title "An adaptive planner",
booktitle "Proceedings AAAI",
year "1986",
pages "65ff" }

@InBook(amarel,
key "amarel68",
author "AMAREL, S.",
title "On representations of problems of reasoning about actions",
editor "MICHIE, D.",
booktitle "Machine Intelligence 3",
publisher "Ellis Horwood",
address "Chichester, England",
pages "131-171",
year "1968" )

@TechReport{appelt82a,
key "appelt82a",
author "APPELT, D.E.",
title "Planning natural language utterances to satisfy multiple goals",
type "Tech Note",
institution "SRI International, Menlo Park, California",
number "259",
year "1982"}

@Book(bratman,
key "bratman",
author "BRATMAN, M.",
title "Intentions, Plans and Practical Reason",
publisher "Harvard University Press",
year "forthcoming" )

@TechReport(functions,
key "bundy77",
author "BUNDY, A.",
title "Exploiting the properties of functions to control search",
type "Research Report",
number "45",
institution "Department of AI, University of Edinburgh",
year "1977" )

@TechReport{carbonell80a,
key "carbonell80a",
author "CARBONELL, J.G.",
title "The POLITICS project: subjective reasoning in a multi-actor planning
        domain",
journal "Carnegie-Mellon Computer Science Research Review",
year "1980"}

@Article{carbonell81,
key "carbonell81",
author "CARBONELL, J.G.",
title "Counterplanning:  a strategy-based model of adversary planning in
        real-world situations",
journal "Artificial Intelligence",
volume "16",
pages "295-329",
year "1981"}

@TechReport(chapman85,
key "chapman85",
author "CHAPMAN, D.",
title "Planning for conjunctive goals.",
type "Memo",
number "AI-802",
institution "AI Lab, MIT",
year "1985" )

@InProceedings{coles75,
key "coles75",
author "Coles, L.S., Robb, A.M., Sinclair, P.L., Smith, M.H. AND Sobek, R.R",
title "Decision Analysis for an Experimental Robot with Unreliable Sensors",
booktitle "Proceedings of 4th IJCAI 1975",
organization "IJCAI",
pages "749-754",
year "1975"}

@InProceedings(corkill79,
key "corkill79",
author "Corkill, D.",
title "Hierarchical Planning in a Distributed Environment",
booktitle "Proceedings of the 6th IJCAI",
year "1979",
pages "168-175" )

@TechReport(daniel77,
key "daniel77",
author "Daniel, L.",
title "Planning: Modifying Non-linear Plans.",
type "Working Paper",
number "24",
institution "Department of AI, University of Edinburgh",
year "1977" )

@TechReport(doyle80,
key "doyle80",
author "DOYLE, J.",
title "A Model for Deliberation, Action and Introspection",
type "Technical Report",
number "419",
institution "MIT",
year "1980" )

@InProceedings{Drummond85,
key "drummond85",
author "DRUMMOND, M.",
title "Refining and Extending the Procedural Net",
booktitle "Proceedings of the 9th IJCAI 1985",
organization "IJCAI",
pages "528-531",
month "August",
year "1985"}

@InProceedings(and-or-plans,
key "demello86",
author "DeMELLO, L.H. AND SANDERSON, A.C.",
title "And/Or graph representation of assembly plans",
organization "AAAI",
booktitle "Proceedings of AAAI-86",
year "1986",
pages "1113ff" )

@Article{soft-goals,
key "descotte85",
author "DESCOTTE, Y. AND LATOMBE, J.-C.",
title "Making compromises among antagonist constraints in a planner",
journal "Artificial Intelligence",
volume "27",
pages "183-217",
year "1985"}



@InProceedings(verification,
key "doyle86",
author "DOYLE, R.J., ATKINSON, D.J. AND DOSHI, R.S.",
title "Generating perception requests and expectations to verify the
       execution of plans",
organization "AAAI",
booktitle "Proceedings of AAAI",
year "1986",
pages "81ff" )

@Book(gps,
key "ernst69",
author "ERNST, G. AND NEWELL, A.",
title "GPS: a Case Study in Generality and Problem Solving",
publisher "ACM Monograph Series, Academic Press, New York",
year "1969" )

@Article{fahlman74,
key "fahlman74",
author "FAHLMAN, S.",
title "A Planning System for Robot Construction Tasks",
journal "Artificial Intelligence",
volume "5",
pages "1-49",
year "1974"}

@InProceedings(faletti82,
key "faletti82",
author "FALETTI, J.",
title "PANDORA: a program for doing common-sense planning in complex
        situations",
organization "AAAI",
booktitle "Proceedings of AAAI-82",
year "1982")

@TechReport{feldman75,
key "feldman75",
author "FELDMAN, J.A. AND SPROULL, R.F.",
title "Decision Theory and Artificial Intelligence II: The Hungry Monkey",
institution "University of Rochester, Department of Computer Science",
year "1975"}

@Article(fikes71,
key "fikes71",
author "FIKES, R.E. AND NILSSON, N.J.",
title "STRIPS: A new approach to the application of theorem proving to
               problem solving",
journal "Artificial Intelligence",
volume "2",
year "1971",
pages "189-208" )

@Article(fikes72,
key "fikes72",
author "FIKES, R.E., HART, P.E. AND NILSSON, N.J.",
title "Learning and executing generalized robot plans",
journal "Artificial Intelligence",
volume "3",
year "1972",
pages "251-288" )

@TechReport{finger86,
key "finger86",
author "FINGER, J.J.",
title "Exploiting constraints in deductive design synthesis",
type "Ph.D. thesis",
institution "Stanford University",
note "to appear",
year "1986"}

@Article{spex,
key "friedland85",
author "FRIEDLAND, P.E. AND IWASAKI, Y.",
title "The concept and implementation of skeletal plans",
journal "Journal of Automated Reasoning",
volume "1",
pages "161-208",
year "1985"}

@TechReport(mrs1,
key "genesereth81",
author "GENESERETH, M.R. AND SMITH, D.E.",
title "Metalevel architecture",
type "Memo",
number "HPP-81-6",
institution "Stanford University",
year "1981" )

@TechReport(mrs2,
key "russell85",
author "RUSSELL, S.",
title "The compleat guide to MRS",
type "Report",
number "KSL-85-12",
institution "Stanford University",
year "1985" )

@InProceedings{georgeff83,
key "georgeff83",
author "GEORGEFF, M.P.",
title "Communication and interaction in multi-agent planning",
booktitle "Proceedings of AAAI-83",
pages "125-129",
month "August",
year "1983"}

@InProceedings(pes,
key "georgeff83",
author "GEORGEFF, M. AND BONOLLO, U.",
title "Procedural expert systems",
booktitle "Proceedings of the 8th IJCAI",
year "1983",
pages "151ff" )

@InProceedings(georgeff84,
key "georgeff84",
author "GEORGEFF, M.",
title "Procedural expert systems",
booktitle "Proceedings of AAAI-84",
year "1984",
pages "121-125" )

@InProceedings(prs-logic,
key "georgeff85",
author "GEORGEFF, M., LANSKY, A. AND BESSIERE, P.",
title "A procedural logic",
booktitle "Proceedings of the 9th IJCAI",
year "1985" )

@TechReport(prs-flakey,
key "georgeff86",
author "GEORGEFF, M., LANSKY, A. AND SCHOPPERS, M.",
title "Reasoning and planning in dynamic domains: an experiment with a mobile
       robot",
type "Technical Note",
number "380",
institution "AI Center, SRI International",
year "1986" )



@TechReport(recovery,
key "gini85",
author "GINI, M., DOSHI, R., GARBER, S., GLUCH, M., SMITH, R. AND
        ZUALKERNAIN, I.",
title "Symbolic reasoning as a basis for automatic error recovery in robots",
type "Tech Rept",
number "85-24",
institution "University of Minnesota",
year "1985" )

@InProceedings{pwplanning,
key "ginsberg86a",
author "GINSBERG, M.L.",
title "Possible Worlds Planning",
booktitle "Proceedings of the Workshop on Planning and Reasoning About Action",
pages "291-317",
month "July",
year "1986"}

@InProceedings(counterfactuals,
key "ginsberg85",
author "GINSBERG, M.",
title "Counterfactuals",
booktitle "Proceedings 9th IJCAI",
year "1985",
pages "80-86" )

@Article{green81,
key "green81",
author "GREEN, C.C.",
title "Application of theorem proving to problem solving",
journal "Readings in Artificial Intelligence",
year "1981"}

@InProceedings{hammond83,
key "hammond83",
author "HAMMOND, K.J.",
title "Planning and Goal Interaction:
       The use of past solutions in present situations",
booktitle "Proceedings of AAAI-83",
pages "148-151",
month "August",
year "1983"}

@InProceedings(ddb1,
key "hayes75",
author "HAYES, P.",
title "A representation for robot plans",
booktitle "Proceedings of the 4th IJCAI 1975",
year "1975",
pages "181ff")

@InProceedings(hayes79,
key "hayes79",
author "Hayes-Roth, Barbara, Hayes-Roth, Frederick, Rosenschein, Stan
        and Cammarata, Stephanie",
title "Modeling Planning as an Incremental, Opportunistic Process",
booktitle "Proceedings of 6th IJCAI",
year "1979",
pages "375-383")

@Article(Hendrix73,
key "Hendrix73",
author "Hendrix, G.",
title "Modeling Simultaneous Actions and Continuous Processes",
journal "Artificial Intelligence",
volume "4",
year "1973",
pages "145-180")

@InProceedings(hewitt71,
key "hewitt71",
author "HEWITT, C.",
title "Procedural Embedding of Knowledge in PLANNER",
booktitle "Proceedings 2nd IJCAI",
year "1971",
pages "167-182" )

@TechReport(hewitt72,
key "hewitt72",
author "HEWITT, C.",
title "Description and Theoretical Analysis (Using Schemata) of PLANNER:
       A Language for Proving Theorems and Manipulating Models in a Robot",
type "Technical Report",
number "258",
institution "MIT",
month "April",
year "1972" )


@InProceedings(lansky85a,
key "lansky85a",
author "LANSKY, A.",
title "Behavioral Planning for Multi-Agent Domains",
booktitle "Proceedings of 1985 Workshop on Distributed Artificial
    Intelligence",
year "1985")

@TechReport(lansky85b,
key "lansky85b",
author "LANSKY, A.",
title "Behavioral Planning for Multi-Agent Domains",
type "Technical Note",
number 360,
institution "AI Center, SRI International",
year "1985" )

@TechReport(lansky87a,
key "lansky87a",
author "Lansky, A.",
title "A Representation of Parallel Activity Based on Events, Structure,
    and Causality",
number 401,
institution "AI Center, SRI International",
year "1987" )

@InBook(lansky87a1,
key "lansky87a1",
author "Lansky, A.",
title "A Representation of Parallel Activity Based on Events, Structure,
    and Causality",
booktitle "Reasoning About Actions and Plans, Proceedings of the 1986
    Workshop at Timberline, Oregon",
publisher "Morgan Kaufman",
pages "123-160",
year 1987
)

@comment("also submitted to the Computational Intelligence Journal
    Special Issue on Planning")
@TechReport(lansky87b,
key "lansky87b",
author "Lansky, A.",
title "Localized Event-based Reasoning for Multiagent Domains",
number 423,
institution "AI Center, SRI International",
year "1987" )

@InProceedings{lansky87c,
key "Lansky87c",
author "Lansky, A. and Fogelsong, D.",
title "Localized Representation and Planning Methods for Parallel Domains",
booktitle "Proceedings of AAAI-87, Seattle, Wa.",
Organization "AAAI",
month "July",
pages "",
year "1987"}


@InProceedings(alv,
key "linden86",
author "LINDEN, T.A., MARSH, J.P. AND DOVE, D.L.",
title "Architecture and early experience with planning for the ALV",
booktitle "Conference on Robotics and Automation",
organization "IEEE",
year "1986" )


@InProceedings(waldinger86,
key "manna86",
author "MANNA, Z. AND WALDINGER, R.",
title "Unsolved problems in the blocks world",
booktitle "Proceedings Workshop on Planning and Reasoning about Action",
year "1986",
organization "AAAI" )

@TechReport(real-time,
key "marsh86",
author "MARSH, J.P. AND GREENWOOD, J.R.",
title "Real-time AI: software architecture issues",
type "White Paper",
institution "Planning Division, Advanced Decision Systems",
year "1986" )


@TechReport(elmer78,
key "mccalla78",
author "McCALLA, G., SCHNEIDER, P., COHEN, R. AND LEVESQUE, H.",
title "Investigations into planning and executing in an independent and
       continuously changing microworld",
type "AI Memo",
number "78-2",
institution "Department of Computer Science, University of Toronto",
address "Toronto, Ontario, CANADA M5S 1A7",
year "1978" )

@InProceedings(elmer79,
key "mccalla79",
author "McCALLA, G. AND SCHNEIDER, P.",
title "The execution of plans in an independent dynamic microworld",
booktitle "Proceedings of 6th IJCAI",
year "1979",
pages "553ff" )

@InProceedings(elmer82a,
key "mccalla82a",
author "McCALLA, G. AND SCHNEIDER, P.",
title "Planning in a dynamic microworld",
booktitle "Proceedings CSCSI Conf",
year "1982",
pages "248ff" )

@Article(elmer82b,
key "mccalla82b",
author "McCALLA, G., REID, L. AND SCHNEIDER, P.F.",
title "Plan creation, plan execution and knowledge acquisition in a
       dynamic microworld",
journal "Int'l J of Man-Machine Studies",
volume "16",
year "1982",
pages "89ff" )

@InBook(philosophy,
key "mccarthy69",
author "McCARTHY, J. AND HAYES, P.J.",
title "Some philosophical problems from the standpoint of artificial
        intelligence",
editor "MICHIE, D.",
booktitle "Machine Intelligence 4",
publisher "Ellis Horwood",
address "Chichester, England",
pages "463ff",
year "1969" )

@Article{circumscription,
key "mccarthy80",
author "McCARTHY, J.",
title "Circumscription: a form of non-monotonic reasoning",
volume "13",
pages "27-39",
journal "Artificial Intelligence",
year "1980"}

@Article{nasl,
key "mcdermott78",
author "McDERMOTT, D.",
title "Planning and acting",
journal "Cognitive Science",
volume "2",
pages "78ff",
year "1978"}

@Article{mcdermott82,
key "mcdermott82",
author "McDERMOTT, D.",
title "A temporal logic for reasoning about processes and plans",
journal "Cognitive Science",
volume "6",
pages "101-155",
year "1982"}

@PhDThesis(miller85,
key "miller85",
author "MILLER, D.P.",
title "Planning by Search Through Simulations",
institution "Yale University",
year "1985" )

@Article{attending1,
key "miller83",
author "MILLER, P.L.",
title "ATTENDING: critiquing a physician's management plan",
journal "IEEE Trans PAMI",
volume "5",
pages "449ff",
year "1983"}

@TechReport(shakey,
key "nilsson84",
author "NILSSON, N.J.",
title "Shakey the robot",
type "Tech Note",
number "323",
institution "AI Center, SRI International",
year "1984" )

@TechReport(tritables,
key "nilsson85",
author "NILSSON, N.J.",
title "Triangle tables:  a proposal for a robot programming language",
type "Tech Note",
number "347",
institution "AI Center, SRI International",
year "1985" )

@TechReport(pednault85,
key "pednault85",
author "PEDNAULT, E.",
title "Preliminary Report on a Theory of Plan Synthesis",
type "Technical Note",
number "358",
institution "AI Center, SRI International",
month "September",
year "1985" )

@Article{pitrat,
key "pitrat77",
author "PITRAT, J.",
title "A chess combination program which uses plans",
volume "8",
pages "275-321",
journal "Artificial Intelligence",
year "1977"}

@InBook(id3,
key "quinlan",
author "QUINLAN, J.R.",
title "Inductive inference as a tool for the construction of efficient
        classification programs",
editor "MICHALSKI, R., CARBONELL, J. AND MITCHELL, T.",
booktitle "Machine Learning: an Artificial Intelligence Approach",
publisher "Tioga",
address "Palo Alto, CA",
year "1983" )

@InProceedings(r1-soar,
key "rosenbloom84",
author "ROSENBLOOM, P.S. et al",
title "R1-SOAR: an experiment in knowledge-intensive programming in a
        problem-solving architecture",
booktitle "Proceedings IEEE Workshop on Principles of KBSs (Denver)",
year "1984",
pages "65-71" )

@InProceedings(rosenschein81,
key "rosenschein81",
author "ROSENSCHEIN, S.J.",
title "Plan synthesis: A Logical Perspective",
booktitle "Proceedings 7th IJCAI",
year "1981",
pages "331-337" )

@InProceedings(rosenschein82,
key "rosenschein82",
author "ROSENSCHEIN, J.S.",
title "Synchronization of Multi-Agent plans",
organization "AAAI",
booktitle "Proceedings of AAAI-82",
year "1982",
pages "115-119")

@Article(rex1,
key "rosenschein85a",
author "ROSENSCHEIN, S.J.",
title "Formal theories of knowledge in AI and robotics",
journal "New Generation Computing",
volume "3",
year "1985",
pages "345-357" )

@InProceedings(rex2,
key "rosenschein85b",
author "ROSENSCHEIN, S.J. AND KAELBLING, L.P.",
title "A formal approach to the design of intelligent embedded systems",
booktitle "Proceedings Conf on Theoretical Aspects of Reasoning",
year "1985" )

@InProceedings(rex3,
key "kaelbling86",
author "KAELBLING, L.",
title "An architecture for intelligent reactive systems",
booktitle "Proceedings Workshop on Planning and Reasoning about Action",
year "1986",
organization "AAAI" )

@Article{sacerdoti74,
key "sacerdoti74",
author "SACERDOTI, E.D.",
title "Planning in a hierarchy of abstraction spaces",
journal "Artificial Intelligence",
volume "5",
pages "115-135",
year "1974"}

@Book(sacerdoti77,
key "sacerdoti77",
author "SACERDOTI, E.D.",
title "A Structure for Plans and Behavior",
publisher "Elsevier North-Holland",
address "New York",
year "1977" )

@InProceedings(sacerdoti79,
key "sacerdoti79",
author "SACERDOTI, E.D.",
title "Problem Solving Tactics",
booktitle "Proceedings of the 6th IJCAI",
year "1979",
pages "1077-1085" )

@InProceedings(concurrency,
key "sandewall86a",
author "SANDEWALL, E. AND RONNQUIST, R.",
title "A representation of action structures",
year "1986",
pages "89ff",
organization "AAAI",
booktitle "Proceedings of AAAI-86" )


@InProceedings(schoppers87,
key "schoppers87",
author "SCHOPPERS, M.J.",
title "Universal plans for unpredictable environments",
booktitle "Proceedings 10th IJCAI",
year "1987",
pages "to appear" )

@InProceedings(lawaly,
key "siklossy73",
author "SIKLOSSY, L. AND DREUSSI, J.",
title "An efficient robot planner which generates its own procedures",
booktitle "Proceedings 3rd IJCAI",
year "1973",
pages "423ff" )

@Article{smith80,
key "smith80",
author "SMITH, R.",
title "The contract net protocol: high-level communication and control
        in a distributed problem solver",
journal "IEEE Trans Computers",
volume "29",
year "1980"}

@InProceedings(side-effects,
key "sridharan77",
author "SRIDHARAN, N.S. AND HAWRUSIK, F.",
title "Representation of actions that have side effects",
booktitle "Proceedings 5th IJCAI",
year "1977",
pages "265ff" )

@Article{ddb2,
key "stallman78",
author "STALLMAN, R.M. AND SUSSMAN, G.J.",
title "Forward reasoning and dependency-directed backtracking in a system
        for computer-aided circuit analysis",
journal "Artificial Intelligence",
volume "9",
pages "135ff",
year "1978"}

@TechReport(steele-thesis,
key "steele80",
author "STEELE, G.L.",
title "The definition and implementation of a computer programming language
       based on constraints",
type "Memo",
number "595",
institution "AI Lab, MIT",
year "1980" )

@Article(steele-aij,
key "sussman80",
author "SUSSMAN, G.J. AND STEELE, G.L.",
title "CONSTRAINTS: a language for expressing almost-hierarchical
       descriptions",
journal "Artificial Intelligence",
volume "14",
year "1980" )

@Article{molgen1,
key "stefik81a",
author "STEFIK, M.J.",
title "Planning with constraints (MOLGEN: Part 1)",
journal "Artificial Intelligence",
volume "16",
pages "141-169",
year "1981"}

@Article{molgen2,
key "stefik81b",
author "STEFIK, M.J.",
title "Planning and meta-planning (MOLGEN: Part 2)",
journal "Artificial Intelligence",
volume "16",
pages "141-169",
year "1981"}

@InProceedings(stuart85,
key "stuart85",
author "STUART, C.J.",
title "An implementation of a multi-agent plan synchronizer using a temporal
        logic theorem prover",
booktitle "Proceedings 9th IJCAI",
year "1985",
pages "1031ff" )

@TechReport(hacker,
key "sussman73",
author "SUSSMAN, G.J.",
title "HACKER: a computational model of skill acquisition",
type "Memo",
number "297",
institution "AI Lab, MIT",
year "1973" )

@TechReport(tate74,
key "tate74",
author "Tate, A.",
title "INTERPLAN: A plan generation system
       which can deal with interactions between goals",
type "Research Memorandum",
number "MIP-R-109",
institution "Machine Intelligence Research Unit, University of Edinburgh",
year "1974" )

@PhDThesis(tate75,
key "tate75",
author "TATE, A.",
title "Using Goal Structure to Direct Search in a Problem Solver",
institution "Department of AI, University of Edinburgh",
year "1975" )

@TechReport(tate76,
key "tate76",
author "Tate, A.",
title "Project Planning Using a Hierarchic Non-Linear Planner",
type "Research Report",
number 245,
institution "Department of AI, University of Edinburgh",
year "1976" )


@InProceedings(tate77,
key "tate77",
author "TATE, A.",
title "Generating Project Networks",
booktitle "Proceedings 5th IJCAI",
year "1977",
pages "888-893" )

@InProceedings(tate84,
key "tate84",
author "TATE, A.",
title "Planning and Condition Monitoring in a FMS",
booktitle "Proceedings of the International Conference on
           Flexible Manufacturing Systems",
year "1984")


@InProceedings(diversions,
key "vanbaalen84",
author "VanBAALEN, J.",
title "Exception handling in a robot planning system",
booktitle "Workshop on Principles of Knowledge-Based Systems",
year "1984",
pages "1ff",
organization "IEEE" )

@InBook(waldinger77,
key "waldinger77",
author "WALDINGER, R.",
title "Achieving several goals simultaneously",
editor "MICHIE, D.",
booktitle "Machine Intelligence 8",
publisher "Ellis Horwood",
address "Chichester, England",
pages "94-136",
year "1977" )

@TechReport(warren74,
key "warren74",
author "Warren, D.",
title "WARPLAN: A System For Generating Plans",
type "Memo",
number 76,
institution "Department of Computational Logic, University of Edinburgh",
month "June",
year "1974" )

@InProceedings(ward82,
key "ward82",
author "WARD, B. and McCALLA, G.",
title "Error Detection and Recovery in a Dynamic Planning Environment",
organization "AAAI",
booktitle "Proceedings of AAAI",
year "1982",
pages "172-175")

@Article{wilensky81,
key "wilensky81",
author "WILENSKY, R.",
title "Meta-planning: representing and using knowledge about planning in
        problem solving and natural language understanding",
journal "Cognitive Science",
volume "5",
year "1981"}

@Book{wilensky83,
key "wilensky83",
author "WILENSKY, R.",
title "Planning and Understanding: A Computational Approach to
    Human Reasoning",
publisher "Addison-Wesley Publishing Company, Reading, Massachusetts",
year "1983"}

@Article(paradise1,
key "wilkins82",
author "WILKINS, D.E.",
title "Using knowledge to control tree searching",
journal "Artificial Intelligence",
volume "18",
year "1982")

@InProceedings(wilkins83,
key "wilkins83",
author "Wilkins, D.E.",
title "Representation in a Domain-Independent Planner",
booktitle "Proceedings of the 8th IJCAI",
year "1983")


@Article(paradise2,
key "wilkins80",
author "WILKINS, D.E.",
title "Using patterns and plans in chess",
journal "Artificial Intelligence",
volume "14",
year "1980")

@Article(sipe,
key "wilkins84",
author "WILKINS, D.E.",
title "Domain-independent planning: representation and plan generation",
journal "Artificial Intelligence",
volume "22",
year "1984",
pages "269ff" )

@Article(sipe-exec,
key "wilkins85",
author "WILKINS, D.E.",
title "Recovering from execution errors in SIPE",
journal "Computational Intelligence",
volume "1",
year "1985",
pages "33ff" )

@Article(sipe-flakey,
key "wilkins86",
author "WILKINS, D.E.",
title "High-level planning in a mobile robot domain",
journal "J Man-Machine Systems",
year "to appear" )

------------------------------

End of AIList Digest
********************
 4-Dec-87 00:04:05-PST,23945;000000000000
Mail-From: LAWS created at  3-Dec-87 23:55:24
Date: Thu  3 Dec 1987 23:52-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #277 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest             Friday, 4 Dec 1987      Volume 5 : Issue 277

Today's Topics:
  Seminars - Dynamical Connectionism (MIT) &
    Ideonomy (MIT) &
    Rapid Prototyping via Executable Specifications (SMU) &
    On the Threshold of Knowledge (MIT) &
    Belief and Knowledge with Self-Reference and Time (SUNY) &
    Knowledge-Based Software Activity Management (AT&T) &
    Reasoning Under Uncertainty (BBN),
  Conferences - Intelligent Tutoring Systems &
    CHI'88 Workshop on Analytical Models

----------------------------------------------------------------------

Date: Monday, 9 November 1987  12:20-EST
From: Elizabeth Willey <ELIZABETH%OZ.AI.MIT.EDU at XX.LCS.MIT.EDU>
Subject: Seminar - Dynamical Connectionism (MIT)

From: Peter de Jong <DEJONG%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Cognitive Science Calendar [Extract - Ed]

                 [Forwarded from the IRList digest.]


                       DYNAMICAL CONNECTIONISM

                           Elie Bienenstock
                       Universite de Paris-Sud

                        Wednesday, 11 November
                            E25-406, 12:00

In connectionist models, computation is usually carried out in a space
of activity levels, the connectivity state being frozen.  In contrast,
dynamical connectionist models manipulate connectivity states.  For
instance, they can solve various graph matching problems.  They also
have the typical associative memory and error-correcting properties of
usual connectionist models.  Applications include invariant pattern
recognition; dynamical connectionist models are able to generalize
over transformation groups rather than just Hamming distance.  It is
proposed that these principles underlie much of brain function; fast-
modifying synapses and high-resolution temporal correlations may
embody the dynamical links used in this new connectionist approach.
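
(A toy illustration of the "transformation groups rather than just Hamming
distance" point only, and not of the speaker's model: Python code comparing
binary patterns by their minimum Hamming distance over a transformation
group, here just the group of cyclic shifts, instead of by raw Hamming
distance.  The patterns are invented.)

  def hamming(p, q):
      return sum(a != b for a, b in zip(p, q))

  def shift_invariant_distance(p, q):
      """Minimum Hamming distance between p and every cyclic shift of q."""
      return min(hamming(p, q[k:] + q[:k]) for k in range(len(q)))

  p = [1, 1, 0, 0, 1, 0, 0, 0]
  q = [0, 1, 0, 0, 0, 1, 1, 0]    # a cyclic shift of p by three positions
  print(hamming(p, q), shift_invariant_distance(p, q))    # prints: 4 0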

------------------------------

Date: Monday, 9 November 1987  12:20-EST
From: Elizabeth Willey <ELIZABETH%OZ.AI.MIT.EDU at XX.LCS.MIT.EDU>
Subject: Seminar - Ideonomy (MIT)

Friday,  13 November  12:00pm  E25-401

Ideonomy: Founding a 'Science of ideas'

In a book published in 1601, Francis Bacon urged that modern science
should have the equivalent of an 'ideonomic' character, as well as
being based on experimentation and induction.  My talk concerns a
five-year effort to lay foundations for a science of ideas which I
call Ideonomy.

Whereas the field of Artificial Intelligence is primarily aimed at the
automation of mind, cognitive science at the modeling of human
intelligence and thought, and logic at the formalization of reasoning,
ideonomy is preoccupied with the discovery, classification, and
systematization of universal ideas, with aiding and abetting man's use
of ideas, and with automating the generation of ideas.  The ideonomist
holds that inattention to the latter things has hobbled the
development, and limited the success of the other fields; and that
properly all four subjects should be developed simultaneously and in
close coordination, being mutually necessary and synergistic.

At present ideonomy is divided into  some 320 subdivisions, a few of
which are: the study of ignorance, the study of analogies, the study
of form, the study of causes, the study of questions, the study of
answers, the study of processes, and the study of cognitive and
heuristic principles.  In each of these cases it seeks to identify:
the types (of these things), higher and lower taxa, examples,
interrelationships, causes, effects, reasons for studying, needed
materials and methods, fundamental concepts, abstract and practical
relations to other ideonomic divisions, and the like.

We can also characterize ideonomy in another way, such as:

the study of how elementary ideas can be combined, permuted, and
transformed as exhaustive groups of ideas;

A new language designed to facilitate thought and creativity;

An attempt to exploit the qualitative laws of the universe.

------------------------------

Date: Sun, 29 Nov 1987 20:53 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Seminar - Rapid Prototyping via Executable Specifications
         (SMU)

December 2, 1987, 1:30 PM Science Information Center, Southern Methodist
University

Express:  Rapid Prototyping and Product Development via Integrated,
Knowledge-Based, Executable Specifications

                             ABSTRACT

Express includes integrated, knowledge-based, executable specifications
and related tools to support the software development life
cycle, both rapid prototyping and full-scale engineering development.
We are building a prototype of Express at the Lockheed Software
Technology Center.

Express uses and extends powerful technologies (knowledge-based systems,
domain languages, etc.) in ways relevant to aerospace products,
across the software development lifecycle.  Express builds on Cordell
Green's Refine technology from Reasoning Systems and extends it in
ways useful for aerospace software development.

Express provides knowledge-base support for
- programming knowledge and
- domain knowledge.
Express will provide executable languages, which are
- brief, in comparison to conventional high-level languages, and
- easy to comprehend.

Express makes a knowledge-based technology usable
- by systems engineers and applications specialists
- who are not experts in knowledge-based systems and
- who may use the system infrequently.

We employ human-factors analysis and the following approaches:
- Object-oriented user's model
- Direct manipulation:  The user in control
- Bit-mapped graphical displays
- Point-and-select capabilities.

                            BIOGRAPHY

John W. McInroy joined the Lockheed Software Technology Center in
Austin, Texas, in November, 1986.  He performs research in human
interface for Express, a prototype of a knowledge-based software
development environment.  He published work-in-progress at the Fall
Joint Computer Conference in October, 1987, with Phillip J. Topping,
W. M. Lively, and Sallie V. Sheppard.  In 1986, McInroy performed
research in human interface for the Proto software development
environment at International Software Systems, Inc. (ISSI), in
Austin, Texas.

From 1978-1986, McInroy worked at IBM in Austin, Texas.  He patented
eleven inventions and published nineteen others.  He developed
fundamental user interface concepts for the Common User Access
portion of IBM's Systems Application Architecture (SAA).  Earlier,
he specified parts of the user interface for Reportpack on the IBM
Displaywriter.

McInroy received an M.S. and a Ph.D. in Computer Science from the
University of North Carolina.  In both graduate education and
subsequent career, he has pursued interests in human interface
and in software engineering.

McInroy can be contacted at the following address:

John W. McInroy
Lockheed Software Technology Center
Org. 96-01/Bldg. 30E
2100 E. St. Elmo Rd.                  512/448-9715
Austin, Texas 78744                   CSNET:  McInroy@Lockheed.com

------------------------------

Date: Monday, 9 November 1987  12:20-EST
From: Elizabeth Willey <ELIZABETH%OZ.AI.MIT.EDU at XX.LCS.MIT.EDU>
Subject: Seminar - On the Threshold of Knowledge (MIT)

                           NE43, 8TH FLOOR
                         THUR, 11/12, 4:00PM

                    ON THE THRESHOLD OF KNOWLEDGE
                       The Case for Inelegance

                         Dr. Douglas B. Lenat
                       Principal Scientist, MCC


In this talk, I would like to present a surprisingly compact, powerful,
elegant set of reasoning methods that form a set of first principles
which explain creativity, humor, and common sense reasoning -- a sort of
"Maxwell's Equations" of Thought.  I'd like very much to present them,
but, sadly, I don't believe they exist.  So, instead, I'll tell you what
I've been working on down in Texas for the last three years.

Intelligent behavior, especially in unexpected situations, requires
being able to fall back on general knowledge, and being able to
analogize to specific but far-flung knowledge.  As Marvin Minsky said,
"the more we know, the more we can learn".

Unfortunately, the flip side of that comes into play every time we build
and run a program that doesn't know too much to begin with, especially
for tasks like semantic disambiguation of sentences, or open-ended
learning by analogy.  So-called expert systems finesse this by
restricting their tasks so much that they can perform relatively narrow
symbol manipulations which nevertheless are interpreted meaningfully
(and, I admit, usefully) by human users.  But such systems are
hopelessly brittle:  they do not cope well with novelty, nor do they
communicate well with each other.

OK, so the mattress in the road to AI is Lack of Knowledge, and the
anti-mattress is Knowledge.  But how much does a program need to know,
to begin with?  The annoying, inelegant, but apparently true answer is:
a non-trivial fraction of consensus reality -- the few million things
that we all know, and that we assume everyone else knows.  If I liken
the Stock Market to a roller-coaster, and you don't know what I mean, I
might liken it to a seesaw, or to a steel spring.  If you still don't
know what I mean, I probably won't want to deal with you anymore.

It will take about two person-centuries to build up that KB, assuming
that we don't get stuck too badly on representation thorns along the
way.  CYC -- my 1984-1994 project at MCC -- is an attempt to build that
KB.  We've gotten pretty far along already, and I figured it's time I
shared our progress, and our problems, with "the lab."  Some of the
interesting issues are: how we decide what knowledge to encode, and how
we encode it; how we represent substances, parts, time, space, belief,
and counterfactuals; how CYC can access, compute, inherit, deduce, or
guess answers; how it computes and maintains plausibility (a sibling of
truth maintenance); and how we're going to squeeze two person-centuries
into the coming seven years, without having the knowledge enterers'
semantics "diverge".

------------------------------

Date: 1 Dec 87 19:57:14 GMT
From: sunybcs!rapaport@ames.arpa  (William J. Rapaport)
Subject: Seminar - Belief and Knowledge with Self-Reference and Time
         (SUNY)


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                  GRADUATE GROUP IN COGNITIVE SCIENCE

                                PRESENTS

                             NICHOLAS ASHER

                        Department of Philosophy
                                  and
                      Center for Cognitive Science

                     University of Texas at Austin

   REASONING ABOUT BELIEF AND KNOWLEDGE WITH SELF-REFERENCE AND TIME

This talk will consider some aspects of a framework for investigating
the logic of attitudes whose objects involve an unlimited capacity for
self-reference.  The framework, worked out in collaboration with Hans
Kamp, is the daughter of two well-known parents--possible worlds
semantics for the attitudes and the revisionist, semi-inductive theory
of truth developed by Herzberger and Gupta.  Nevertheless, the
offspring, from our point of view, was not an entirely happy one.  We
had argued in earlier papers that orthodox possible worlds semantics
could never give an acceptable semantics for the attitudes.  Yet the
connection between our use of possible worlds semantics and the sort
of representational theories of the attitudes that we favor remained
unclear.  This talk will attempt to provide a better connection between
the framework and representational theories of attitudes by developing
a notion of reasoning about knowledge and belief suggested by the model
theory.  This notion of reasoning has a temporal or dynamic aspect that
I exploit by introducing temporal as well as attitudinal predicates.

                      Thursday, December 17, 1987
                               4:00 P.M.
                       Baldy 684, Amherst Campus

                            Co-sponsored by:

Graduate Studies and Research Initiative in Cognitive and Linguistic Sciences
                        Buffalo Logic Colloquium

There will be an informal discussion at a time and place to be
announced.  Call Bill Rapaport (Dept. of Computer Science, 636-3193 or
3180) or Gail Bruder (Dept. of Psychology, 636-3676) for further
information.

------------------------------

Date: Wed, 2 Dec  11:49:20 1987
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Knowledge-Based Software Activity Management (AT&T)


Title:     Knowledge Based Software Activity Management:
           An Approach to Planning, Tracking and Repairing
           Software Projects

Speaker:   Mark S. Fox
           Associate Professor of Computer Science and Robotics
           Carnegie-Mellon University

Date:      Thursday, December 17, 1987

Time:      9:00 AM to 11:00 AM Central Time
           (10:00 AM to Noon Eastern Time)

Place:     AT&T Bell Laboratories - Indian Hill Main Auditorium

Video & audio simulcast to:  AT&T Bell Labs Holmdel Room 1N-612 (Capacity: 85)
                             AT&T Bell Labs Murray Hill Auditorium
                             AT&T Bell Labs Whippany Auditorium

This talk will be video-taped.

Sponsor:   William Opdyke (ihlpf!opdyke)
  Holmdel:      Wendy A. Waugh - homxc!wendy
  Murray Hill:  Deborah L. McGuinness - allegra!dlm
  Whippany:     David Lewy - whuts!lewy

----------
                         Talk Abstract

The management of activities is a central part of many tasks
such as project management, software engineering and factory
scheduling.  Successful activity management leads to better
utilization of resources over shorter periods of time.  Over
the past eight years we have been conducting research into
the process of activity management, including:

  1. activity representation
  2. planning and scheduling of activities
  3. chronicling and reactive repair of activities
  4. display and explanation of activities
  5. distributed activity management

This presentation will briefly review the projects underway
in the Intelligent Systems Laboratory, describe the research
in each of the above areas, and demonstrate its application to
software engineering and project management.

----------

                         Speaker Bio.

Dr. Fox received his BSc in Computer Science from the
University of Toronto in 1975 and his PhD in Computer Science
from Carnegie-Mellon University in 1983.  In 1979 he joined
the Robotics Institute of Carnegie-Mellon University as a
Research Scientist. In 1980 he started and was appointed
Director of the Intelligent Systems Laboratory.  He
co-founded Carnegie Group in 1984.  Carnegie-Mellon University
appointed him Associate Professor of Computer Science and
Robotics in 1987.  His research interests include knowledge
representation, constraint directed reasoning and applications
of artificial intelligence to engineering and manufacturing
problems.

------------------------------

Date: Tue 1 Dec 87 16:11:42-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Reasoning Under Uncertainty (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

                      REASONING UNDER UNCERTAINTY

                              Andee Rubin
                     Education Department, BBN Labs
                            RUBIN@G.BBN.COM

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Tuesday December 8


Statistical reasoning is an important prerequisite for both ordinary and
scientific thinking.  Yet statistical reasoning is seldom taught to
pre-college students, and when it is, the emphasis is often on formulaic
manipulation, rather than on the concepts that are the foundation of
reasoning about statistical matters.

To address these concerns, we have developed, with funding from the
National Science Foundation, a computer-enhanced curriculum in
statistical reasoning called Reasoning Under Uncertainty that
incorporates the ELASTIC (TM) software system.  The course is designed to
help high school students develop statistical reasoning abilities by
using real world activities with which they have practical experience.

The ELASTIC (TM) software, implemented on a Macintosh computer, is a tool
for recording, representing, and manipulating statistical information.
It has standard capabilities such as the ability to represent different
types of variables and create appropriate graphs, including confidence
intervals.  Its most experimental features are three interactive
programs: Stretchy Histograms, Sampler, and Shifty Lines, each of which
allows students to interact directly with statistical graphics in order
to achieve a deeper understanding of the underlying statistical
concepts.

The curriculum and software were field-tested in Belmont and Cambridge
High Schools in the spring of 1987.  The talk will describe and
demonstrate the pedagogical principles underlying the course and
software, some results of the field test, and our plans for future
development.

------------------------------

Date: 26 Nov 87 02:58:31 GMT
From: mind!bjr@princeton.edu  (Brian J. Reiser)
Subject: Conference - Intelligent Tutoring Systems


                    Updated Call for Papers

                  INTERNATIONAL CONFERENCE ON
                  INTELLIGENT TUTORING SYSTEMS

                        1-3  JUNE  1988
                        MONTREAL, CANADA

Conference Objectives: ITS 88 will be a forum for presenting new
results in research, development, and applications of intelligent
tutoring systems.  The aims of the conference are to bring together
specialists in the field of Artificial Intelligence and Education, to
share state of the art information among the attendees and to outline
future developments of ITS and their applications.

Topics of interest: The ITS 88 Conference will accept scientific and
technical papers on all areas of ITS development, but will primarily
focus on the following areas:

  Learning environments
  Methodologies and architectures for educational systems
  AI programming environments for educational use
  Student modelling and cognitive diagnosis
  Curriculum and knowledge representation
  Evaluation of tutoring systems
  Theoretical foundations of ITS
  Knowledge acquisition in ITS
  Design issues in building ITS
  Practical uses of ITS
  Empirical aspects of ITS

Program Committee Chairs are Prof. Gregor Bochmann of the University
of Montreal and Dr. Marlene Jones of the Alberta Research Council.

Program Committee: Ehud Bar-On, Dick Bierman, Jeffrey Bonar, Lorne
Bouchard, Jacqueline Bourdeau, Bernard Causse, Andy diSessa, Philippe
Duchastel, Gerhard Fischer, Jim Greer, Wayne Harvey, Lewis Johnson,
Heinz Mandl, Stuart Macmillan, Gordon McCalla, Vittorio Midoro, Riichiro
Mizoguchi, Andre Ouellet, Maryse Quere, Brian Reiser, Lauren Resnick,
John Self, Derek Sleeman, Elliot Soloway, Hans Spada, Georges Stamon,
Harold Stolovitch, Akira Takeuchi, Martial Vivet, Karl Wender, Beverly
Woolf, Massoud Yazdani.

Authors are requested to submit 5 copies (in English or French) of a
double-spaced manuscript of up to 5000 words by 15 December 1987 to:

  Prof. Gregor Bochmann
  Department d'informatique et de recherche operationnelle
  Universite de Montreal
  C.P. 6128, Succ "A"
  Montreal CANADA
  H3C 3J7

Authors will be notified of acceptance by February 29, 1988. Camera-ready
copies will be due April 10, 1988.

------------------------------

Date: Mon, 30 Nov 87 11:29:34 pst
From: Keith Butler <keith@BOEING.COM>
Subject: Conference - CHI'88 Workshop on Analytical Models

                        CALL FOR PARTICIPATION

                CHI'88 Workshop on Analytical Models:
        Predicting the Complexity of Human-Computer Interaction

In current practice, designs for human-computer interaction (HCI) can only
be evaluated empirically, after a prototype has been built in some form.
The empirical cycle is lengthy and expensive, and it makes it difficult for
HCI designers to contribute timely revisions.

A more effective approach may be possible based on cognitive modeling and
perception research, currently underway at a number of sites.  Cognitive
complexity models based on knowledge representation techniques, and computer-
based perceptual evaluations may provide tools to analyze HCI designs.  These
tools would allow early evaluation of designs and design options before
actual implementation.  The payoff of this approach could be great, but
substantial work remains before effective commercial application can be proven.

The Workshop on Analytical Models is scheduled as part of the CHI'88 Conference
in Washington, D.C.  The one-day workshop will be held on Sunday, May 15, 1988.
The objective is to determine the current state of computational models for
perceptual and cognitive complexity, and then examine how such models might be
used as part of the HCI design process in industry and government.  The goal of
the workshop is to provide guidance for further research, to stimulate thinking
about development, to facilitate the exchange of research findings, and to
encourage higher levels of activity.

Attendance at the workshop will be by invitation, limited to about twenty
people.  People from two distinct backgrounds are sought: researchers who can
survey or critique a body of relevant work, and appliers of new technology to
HCI problems.  The program committee, consisting of Keith Butler, Boeing
Advanced Technology Center, John Bennett, IBM Almaden Research Center, Peter
Polson, University of Colorado, and Tom Tullis, McDonnell Douglas Astronautics
Co., will invite researchers working on models that are relevant to HCI design
and representatives from industry and government who are concerned with HCI
and experienced with technology transfer.  All attendees will participate in
roles such as speakers, discussants, panelists, or moderators.

Persons wishing to participate are requested to submit four copies of a
position paper. Researchers should provide a 2,000-word survey of work based
on their research.  Representatives from industry and government should provide
a 1,000-word description of their organization's interest in HCI and their
experience with technology transfer.

Please send hard copies only, to arrive by January 25, 1988, to:

        Keith Butler                            For information:
        Boeing Advanced Technology Center
        PO Box 24346,  M/S 7L-64                keith@boeing.com
        Seattle, WA  98124                      (206) 865-3389

Invitations will be mailed by February 23, 1988.  Participants will also be
sent copies of selected papers along with a final agenda for the workshop.

------------------------------

End of AIList Digest
********************
 4-Dec-87 00:17:09-PST,15186;000000000000
Mail-From: LAWS created at  4-Dec-87 00:11:16
Date: Fri  4 Dec 1987 00:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #278 - Queries, Daedalus, Neural Network Reports
To: AIList@SRI.COM


AIList Digest             Friday, 4 Dec 1987      Volume 5 : Issue 278

Today's Topics:
  Queries - Expert Systems for Restoring Load-Flow &
    Applied Neural-Network Experiences &
    Training Sets for Rule Induction & Medical CAI References &
    Portable OPS-5 and CLIPS 4.0 & CogSci Call for Papers &
    RACTER & DCG Parser & Recording Mouse Input,
  Journal Issue - AI in DAEDALUS,
  Reports - Neural Network Reports

----------------------------------------------------------------------

Date: Tue, 24 Nov 87 18:50:01 GMT
From: A385%EMDUCM11.BITNET@CUNYVM.CUNY.EDU
Subject: Expert systems for restoring load-flow and bibliography

Date: 24 November 1987, 18:31:46 GMT
From: Javier Lopez Torres       Tf: (91) 7113887     A385     at EMDUCM11

  Hello AI community from Spain!
The AI department of this university (Complutense de Madrid), together with a
large Spanish electric company, is planning to develop an expert system to
restore high-tension networks when the load flow becomes overloaded.
  We are also interested in the dynamic determination of speed and voltage
governors at power stations, in order to perform dynamic simulation studies
(stability and calculation analysis).
  At the moment we are using the PSS/E package, running on a VAX/750, to
calculate the load-flow distribution, and we are having a lot of trouble
connecting it to our MODCOMP computer.  So, please:

   (1)  Has anyone done the above studies, or does anyone know someone who has?
   (2)  Is anyone aware of any survey publications related to the
        above-mentioned areas?
   (3)  Does anyone have a bibliography related to the above subject areas?

We are most interested in communicating with researchers currently involved
in this kind of expert system.
   Thank you very much in advance for any help or suggestions; we are really
quite lost.
           Sincerely:

              Javier Lopez Torres
      Universidad Complutense de Madrid
          A385%EMDUCM11.BITNET

------------------------------

Date: 1 Dec 87 06:26:36 GMT
From: portal!cup.portal.com!Barry_A_Stevens@uunet.uu.net
Subject: request for information, offer to share info on neural nets


              REQUEST FOR INFORMATION ON NEURAL NETWORKS
                           Barry A Stevens
                          Applied AI Systems
-
I  am  conducting  a survey to identify the  "useful"  neural  network
paradigms.  There  are  many  available,  but  few  have   established
themselves as robust and trainable in the commercial environment.
-
I seek either: pointers to information sources, or information itself.
With enough response, I will summarize and post to the net. The  three
types of information sought are:
-
***The  usefulness of the network paradigms listed below when  applied
   to real problems with real data;
-
***The tests that a set of training data must meet to be useable  with
   each of the paradigms;
-
***The classes of problems for which each paradigm is useful.
-
-
Comments on stability, robustness, ease of construction and test,  and
results  obtained  from the application would be useful  and  welcome.
Pointers to sources that contain such information are equally welcome.
-
I  already  have access to numerous technical papers that  talk  about
such things as "spatiotemporal uses" as a class of applications.  What
is  of more interest is "The Spatiotemporal Paradigm was  successfully
used  to identify specific waveforms and patterns in foreign  currency
trading data... etc.". Or this: "a backpropagation network was used to
implement a consumer loan approval system, with performance  exceeding
both  that  of human loan officers making the loans and  a  rule-based
expert  system designed for the same purpose. The network was  trained
in three weeks; the expert system took two man-years to build."
-
These network paradigms are of specific interest:
-
     Back Propagation
     Back Propagation - shared weights
     Counter Propagation
     Adaptive Resonance 1 and 2
     Binary Associative Memory
     Spatiotemporal Network
     Neocognitron
     Hopfield Network
     Kohonen Feature Map
     Boltzmann Machine
     Group Method of Data Handling
     Barron Associates: polynomial synthesis
-
If there are others that you feel are also of interest, please feel
free to comment on them as well. Also, I realize that some of these are
not  neural network paradigms per se, but they have been used  in  the
same situations and are therefore of interest.
-
I can be reached by email or at this address and phone:
-
Barry A Stevens
Applied AI Systems, Inc.
PO Box 2747
Del Mar, CA 92014
619-755-7231

------------------------------

Date: 2 Dec 87 02:22:18 GMT
From: stuart%warhol@ads.arpa (Stuart Crawford)
Reply-to: stuart@ads.arpa ()
Subject: Training Sets Needed for Rule Induction System


I'd like to start a collection of training sets for use with a rule induction
system.  The basic requirements are that a training set be composed of a
collection of observations, each of which consists of a *known* class
assignment, and a vector of observed features.  The features may be integer,
real or nominal (categorical) valued.

Ideally, I am looking for training sets which are drawn from a medical domain,
and have from 50-500 observations.  Real data is preferred, but simulated data
is ok too.  However, if the data is simulated, please supply the relevant
information needed to re-generate the data (program used, random number
generator used, random number seeds used, etc.).
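
As a rough illustration of the format described above, a training set of
this kind might be represented in Lisp as a list of observations, each
pairing a known class assignment with a feature vector.  This is only a
sketch; the record layout, class names, and feature values below are
invented for illustration, not a required format.

    ;;; Hypothetical 3-feature training set (illustrative only).
    ;;; Features may be integer, real, or nominal (categorical) valued.
    (defparameter *training-set*
      '((healthy (37.0  72 nonsmoker))
        (sick    (39.2 110 smoker))
        (healthy (36.8  65 smoker))
        (sick    (38.5  95 nonsmoker))))

    ;; Accessors for a single observation:
    (defun observation-class    (obs) (first obs))
    (defun observation-features (obs) (second obs))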

If you have a training set, please contact stuart@ads.arpa.

        Stuart Crawford
        Advanced Decision Systems
        201 San Antonio Circle, Suite 286
        Mountain View, CA 94040
        (415) 941-3912 x325

Stuart

------------------------------

Date: 30 Nov 87 19:13:02 GMT
From: cunyvm!byuvax!cockaynes@psuvm.bitnet
Subject: Medical CAI References?

I am conducting a literature search of research studies demonstrating
the effectiveness of computer-assisted instruction, especially computer
simulations, in medical education.  Does anyone know of recent or
on-going research?

Please e-mail responses to me and I will summarize to the net.


Contact Susan Cockayne at
CockayneS@byuvax.bitnet

------------------------------

Date: 1 Dec 87 16:38:50 GMT
From: ihnp4!homxb!whuts!mtune!codas!ufcsv!beach.cis.ufl.edu!mfi@ucbvax
      .Berkeley.EDU  (Mark Interrante)
Subject: Portable OPS-5? and CLIPS 4.0?

In a recent paper I saw references to Portable OPS5 and CLIPS 4.0.

It is my understanding that these are public domain.  Does anyone have
copies that could be e-mailed?

------------------------------

Date: 3 Dec 87 00:56:31 GMT
From: A.GP.CS.CMU.EDU!spiro@PT.CS.CMU.EDU  (Spiro Michaylov)
Subject: CogSci call for papers wanted


Does anybody have a soft copy of the call for papers for the next CogSci
conference? If so, could you please e-mail it to me directly?
Otherwise pointers to a hard copy would be appreciated.

Thanks in advance.

Spiro Michaylov.
CMU-CS.
spiro@a.gp.cs.cmu.edu

------------------------------

Date: Thu, 03 Dec 87 20:04:39 EST
From: Michael Nosal <ST502042%BROWNVM.BITNET@WISCVM.WISC.EDU>
Subject: Request for RACTER

Howdy!
I am interested in locating the (in)famous 'AI' program RACTER.  Unfortunately
I don't remember too much about it, except that I first heard about it in an
article in Scientific American, and that a book called "The Policeman's Beard
is Half Constructed", containing bits of prose it created, was published a few
years ago.  I am interested in finding any version of the program (source code
would be fantastic).  If there is a group or company that owns the rights to it
or is selling a commercial version, I would love to know their address.  While
I'm on the subject, if anyone knows of other 'Eliza-like' AI programs out
there, please let me know.
        Thanks in advance,
        Michael Nosal (please respond to this account if possible)

------------------------------

Date: Thu, 03 Dec 87 20:27:46 EST
From: ganguly@ATHENA.MIT.EDU
Subject: DCG

Hi!
        Does someone have a Definite Clause Grammar parser written in
Edinburgh PROLOG that I may use as a user interface?
Thanks in advance,


Jaideep Ganguly

------------------------------

Date: Thu 3 Dec 87 11:42:56-CST
From: CS.MARTINICH@R20.UTEXAS.EDU
Subject: recording mouse input

Does anyone know of a program that "records" mouse input on a
SUN workstation?  I need a program that "records" mouse input so that
it can later be "played back" as input to another program.
I would appreciate any information on such a program.
  --Leslie Martinich
    cs.martinich@r20.utexas.edu

------------------------------

Date: Mon, 23 Nov 87 15:54:46 EST
From: amcad!alyson@husc6.harvard.edu
Reply-to: alyson%amcad.uucp@husc6.harvard.edu
Subject: AI DAEDALUS

To: Robert Engelmore Editor-in-chief AI Magazine Menlo Park, CA.
Re: New issue of DAEDALUS on AI

Parl Gerald (BCS) has suggested that I be in touch with you concerning
our Winter 1988 issue of DAEDALUS - journal of the American Academy of
Arts and Sciences - which deals exclusively with "Artificial Intelligence."
Both he and Mike Hamilton (AAAI) have suggested that it might be useful
to get news of this forthcoming issue onto the ARPANET AI Bulletin Board.

Authors in the forthcoming issue include: Papert, Dreyfus H & S, Sokolowski,
McCorduck, Cowan & Sharp, Jacob Schwartz, Reeke & Edelman, Hillis, Waltz,
Hurlbert & Poggio, Sherry Turkle, Putnam, Dennett and McCarthy.  Subjects
include, among others, the following: Natural and AI, Neural Nets and AI,
Real Brains and AI, Making Machines See, AI and Psychoanalysis,
Philosophers Encounter AI, and Mathematical Logic and AI.

Copies from the printer available by mid-December.

Best wishes,
Guild Nichols
DAEDALUS

------------------------------

Date: Wed, 2 Dec 87 12:26:33 EST
From: takefuji%uniks.ece.scarolina.edu@RELAY.CS.NET
Subject: Neural Network Reports


A Conductance programmable "neural" chip based on a Hopfield model employs
deterministically/stochastically controlled switched resistors

Yutaka Akiyama*, Yoshiyasu Takefuji**, Yong B. Cho**, Yoshiaki Ajioka*,
and Hideo Aiso*

* Keio University
Department of Electrical Engineering
3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223
JAPAN

** University of South Carolina
Department of Electrical and Computer Engineering
Columbia, SC 29208
(803)-777-5099

Abstract
        Artificial neural net models have been studied for many years.
        There has been a recent resurgence in the field of artificial neural
        nets, sparked by Hopfield. Hopfield models are suitable for VLSI
        implementation because of their simple architecture and components,
        such as OP Amps and resistors. However, VLSI techniques for
        implementing the neural models have difficulty dynamically changing
        the values of the conductances Gij that represent the problem
        constraints.
        In this paper, VLSI neural network architectures based on a Hopfield
        model with deterministically/stochastically controlled variable
        conductances are presented. The stochastic model subsumes the
        functions of both the Hopfield model and the Boltzmann machine in
        terms of neural behavior. We are currently implementing two CMOS VLSI
        neural chips based on the proposed methods.
_______________________________________________________________________________


Multinomial Conjunctoid Statistical Learning Machines

Yoshiyasu Takefuji, Robert Jannarone, Yong B. Cho, and Tatung Chen

University of South Carolina
Department of ECE
Columbia, SC 29208
(803)777-5099

ABSTRACT
        Multinomial Conjunctoids are supervised statistical modules that learn
        the relationships among binary events. The multinomial conjunctoid
        algorithm precludes the following problems that occur in existing
        feedforward multi-layered neural networks: (a) existing networks often
        cannot determine underlying neural architectures, for example how many
        hidden layers should be used, how many neurons in each hidden layer are
        required, and what interconnections between neurons should be made; (b)
        existing networks cannot avoid convergence to suboptimal solutions
        during the learning process; (c) existing networks require many
        iterations to converge, if at all, to stable states; and (d) existing
        networks may not be sufficiently general to reflect all learning
        situations.
        By contrast, multinomial conjunctoids are based on a well-developed
        statistical decision theory framework, which guarantees that learning
        algorithms will converge to optimal learning states as the number of
        learning trials increases, and that convergence during each trial will
        be very fast.

_________________________________________________________________________

Conjunctoids: Statistical Learning Modules for Binary Events

Robert Jannarone, Kai Yu, and Y. Takefuji

University of South Carolina
Department of ECE
Columbia, SC 29208
(803)777-7930

ABSTRACT

A general family of fast and efficient PDP learning modules for binary events
is introduced. The family (a) subsumes probabilistic as well as functional
event associations; (b) subsumes all levels of input/output associations; (c)
yields truly parallel learning processes; (d) provides for optimal parameter
estimation; (e) points toward a workable description of optimal model
performance; (f) provides for retaining and incorporating previously learned
information; and (g) yields procedures that are simple and fast enough to
be serious candidates for reflecting both neural functioning and real time
machine learning. Examples as well as operational details are provided.
_________________________________________________________________________


If you need full copies of these papers, please state which papers you are
requesting, via e-mail, phone, or U.S. mail.

For Multinomial and VLSI neural chips papers:

Dr. Y. Takefuji
University of South Carolina
Department of Electrical and Computer Engineering
Columbia, SC 29208

(803)777-5099
(803)777-4195

takefuji@uniks.ece.scarolina.edu

For Conjunctoids papers:

Dr. Robert Jannarone
University of South Carolina
Department of Electrical and Computer Engineering
Columbia, SC 29208

(803) 777-7930

jann@uniks.ece.scarolina.edu

Thank you...

------------------------------

End of AIList Digest
********************
 6-Dec-87 22:18:32-PST,16497;000000000000
Mail-From: LAWS created at  6-Dec-87 22:06:02
Date: Sun  6 Dec 1987 22:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #279 - Prolog Source Library, Seminar, Conference
To: AIList@SRI.COM


AIList Digest             Monday, 7 Dec 1987      Volume 5 : Issue 279

Today's Topics:
  Announcement - Prolog Source Library,
  Seminar - Composing and Decomposing Universal Plans (SRI),
  Conference - AI Workshop in Singapore 1989

----------------------------------------------------------------------

Date: 3-DEC-1987 22:48:59 GMT
From: POPX@VAX.OXFORD.AC.UK
Subject: Prolog Source Library

From: Jocelyn Paine,
      St. Peter's College,
      New Inn Hall Street,
      Oxford OX1 2DL.
      Janet Address: POPX @ OX.VAX


                          PROLOG SOURCE LIBRARY


I teach  AI to  undergraduates, as  a one-term  practical course  in the
Experimental Psychology degree. For the  course, I use Poplog Prolog, on
a VAX  under VMS. During the  course, I talk about  topics like scripts,
mathematical creativity, planning, natural language analysis, and expert
systems; I  exemplify them by  mentioning well-known programs  like GPS,
Sam, and AM.

I  would like  my students  to be  able to  run these  programs, and  to
investigate their mechanism and limitations. For students to incorporate
into their  own programs, I'd also  like to provide a  library of Prolog
tools such  as chart  parsers, inference  engines, search  routines, and
planners. Unfortunately,  published descriptions of the  famous programs
give much  less information than is  necessary to re-implement  them. As
for tools like  planners and inference engines: the  literature is often
more helpful, but I still have to do  a lot of work which must have been
done before, even if it's merely typing in code from excellent textbooks
like "The Art of Prolog".


I'm sure other Prolog programmers have this problem too.


I have therefore  set up a LIBRARY  OF PROLOG SOURCE CODE,  which I will
distribute  over the  British  academic network  (Janet)  and nets  like
Bitnet  connected  to Janet,  to  anybody  who  wants  it. I  will  take
contributions from  anyone who wants to  provide them, subject to  a few
conditions mentioned below.  I proposed this in AIList  Bulletin V5 267:
here are the details of how the library works. If you want to contribute
entries, or to request them, please read on...


How to send contributions.

  Please send Prolog  source  for the  library, to  user  POPX at  Janet
  address  OX.VAX  (the  Vax-Cluster   at  Oxford  University  Computing
  Service). If a file occupies more than about 1 megabyte, please send a
  short message  about it first,  but don't  send the large  file itself
  until  I reply  with  a message  requesting it.  This  will avoid  the
  problems  we sometimes  have where  large files  are rejected  because
  there isn't enough space for them.

  I accept  source on the understanding  that it will be  distributed to
  anyone who asks for  it. I intend that the contents  of the library be
  treated  in the  same way  as (for  example) proofs  published in  the
  mathematical literature, and algorithms  published in computer science
  textbooks -  as publicly available  ideas which anyone  can experiment
  with, criticise, and improve.

  I will try to put an entry into the library within one working week of
  its arrival.

Catalogue of entries.

  I will keep a catalogue of  contributions available to anyone who asks
  for it.

  The catalogue will  contain for each entry: the  name and geographical
  address of the entry's  contributor (to prevent contributors receiving
  unwanted  electronic  mail,  I  won't include  their  electronic  mail
  addresses unless  I'm asked to  do so);  a description of  the entry's
  purpose; and  an approximate  size in kilobytes  (to help  those whose
  mail systems can't receive large files easily).

  I  will  also include  my  evaluations  of its  ease  of  use, of  its
  portability and  standardness (by the standards  of Edinburgh Prolog);
  and my evaluation of any documentation included.

Quality of entries.

  Any contribution may be useful to  someone out there, so I'll start by
  accepting anything. I'm not just  looking for elegant code, or logical
  respectability.  However, it  would  be  nice if  entries  were to  be
  adequately documented, to come with examples  of their use, and to run
  under  Edinburgh Prolog  as described  in "Programming  in Prolog"  by
  Clocksin and Mellish. If you can therefore, I'd like you to follow the
  suggestions below.

    The main predicate  or predicates in each entry  should be specified
    so that someone who knows nothing about how they work can call them.
    This means specifying: the type and mode of each argument, including
    details of  what must be  instantiated on  call, and what  will have
    become instantiated  on return; under what  conditions the predicate
    fails, and  whether it's resatisfiable; any  side-effects, including
    transput  and clauses  asserted  or retracted;  whether any  initial
    conditions    are   required,    including   assertions,    operator
    declarations,  and  ancilliary  predicates.  In  some  cases,  other
    information,  like  the  syntax  of   a  language  compiled  by  the
    predicate, may be useful.

    A set  of example calls would  be useful, showing the  inputs given,
    and the outputs expected. Use  your discretion: if you contribute an
    expert system shell  for example, I'd like a  sample rulebase, and a
    description  of  how  to  call  the  shell  from  Prolog,  and  some
    indication  of what  questions  I can  ask the  shell,  but I  don't
    require that the  shell's dialogue be reproduced down  to every last
    carriage return and indentation.

    For programmers who want to  look inside an entry, adequate comments
    should be  given in the  source code,  together perhaps with  a more
    general description of  how the entry works,  including any relevant
    theory.

    In the documentation, references to  the literature should be given,
    if this is helpful.

    Entries should be  runnable using only the  predicates and operators
    described in "Programming in Prolog" (if  they are not, I may not be
    able to test them!). I don't object to add-on modules being included
    which are only runnable under certain implementations - for example,
    an add-on with  which a planner can display its  thoughts in windows
    on  a high-resolution  terminal -  but they  will be  less generally
    useful.

    As mentioned earlier, I will  evaluate entries for documentation and
    standardness, putting  my results  into the  catalogue. If  I can, I
    will also  test them, and  record how easy I  found them to  use, by
    following the instructions given.

  I emphasise that I will accept all entries; the comments above suggest
  how to improve the quality of entries, if you have the time.

Requesting entries.

  I can't  afford to copy  lots of discs, tapes,  papers, etc, so  I can
  only deal with requests to send files along the network. Also, I can't
  afford to send along networks that I have to pay to use from Janet.

  You may  request the catalogue,  or a particular  entry in it.  I will
  also  try  to satisfy  requests  like  "please  send all  the  natural
  language parsers which you have" -  whether I can cope with these will
  depend on the size of the library.

  I will  try to answer each  request within seven working  days. If you
  get no reply within fourteen working  days, then please send a message
  by  paper  mail  to  my  address. Give  full  details  of  where  your
  electronic mail  message was  sent from,  the time,  etc. If  a message
  fails to  arrive, this may help  the Computing Service  staff discover
  why.


Although I  know Lisp,  I haven't  used it  enough to  do much  with it,
though I'm  willing just to  receive and pass on  Lisp code, and  to try
running it under VAX Lisp or Poplog version 12 Lisp.

------------------------------

Date: Thu, 3 Dec 87 16:03:52 PST
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Composing and Decomposing Universal Plans (SRI)

               COMPOSING AND DECOMPOSING UNIVERSAL PLANS

                           Marcel Schoppers
                Advanced Decision Systems (MARCEL@ADS.ARPA)

                   11:00 AM, MONDAY, December 7
              SRI International, Building E, Room EJ228

``Universal plans'' are representations for robot behavior; they are
unique in being both highly reactive and automatically synthesized. As
a consequence of this plan representation, subplans have conditional
effects, and hence there are conditional goal conflicts. When block
promotion (= subplan concatenation) cannot remove an interaction, I
resort not to individual promotion (= subplan interleaving) but to
confinement (falsifying preconditions of the interaction).  With
individual promotion out of the way, planning is a fundamentally
different problem: plan structure directly reflects goal structure,
plans can be conveniently composed from subplans, and each goal
conflict needs to be resolved only once during the lifetime of the
problem domain. Conflict analysis is computationally expensive,
however, and interactions may be more easily observed at execution
time than predicted at planning time.

All conflict elimination decisions can be cached as annotated
operators. Hence it is possible to throw away a universal plan, later
reconstructing it from its component operators without doing any
planning. Indeed, an algorithm resembling backchaining mindlessly
reassembles just enough of a universal plan to select an action that
is helpful in the current world state. Since the selected action is
both a situated response and part of a plan, recent rhetoric about
situated action as *opposed* to planning is defeated.


VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: Thu, 03 Dec 87 10:46:56 SST
From: Joel Loo <ISSLPL%NUSVM.BITNET@wiscvm.wisc.edu>
Subject: Conference - AI Workshop in Singapore 1989

Thanks to those who expressed interest in the call for papers
I posted recently. Due to the overwhelming number of queries, it might
be helpful to post a detailed version here for your convenience.

               2nd International IFIP/IFAC/IFORS
                        Workshop on
                  ARTIFICIAL INTELLIGENCE
                IN ECONOMICS AND MANAGEMENT

                         SINGAPORE
                    January 11-13, 1989

         Organized by the Institute of Systems Science
               National University of Singapore

                    +-----------------+
                    ! CALL FOR PAPERS !
                    +-----------------+

The Second International Conference on AI in Economics and
Management will be held in Singapore during the 2nd week of
January 1989. The workshop will address issues relevant to the
use of AI Technology in Economic and Management communities.
Topics for the workshop will cover both technology and
applications.

Professor Herbert Simon, Nobel Laureate will be the Keynote
Speaker.

This workshop will address research and applications of
artificial intelligence techniques and tools, in the areas of:
finance, accounting, marketing, banking, insurance, economics,
human-resource management, asset administration, decision
support systems, public and private services, office automation,
law, and manufacturing planning.

The techniques to be presented should be explicitly relevant to
the above application areas, and include: knowledge
representation, search and inference, knowledge acquisition,
intelligent interfaces, knowledge base validation, natural
language analysis, planning procedures, and task support systems.

The tools to be presented should also be specific in design or in
use to the application areas discussed at the workshop, and may
cover: application specific expert systems, front-ends to
decision support systems, interfaces to database systems,
interfaces between symbolic and procedural processors, and
object-oriented environments.

The workshop will have contributed papers and case sessions.
There will be separate tutorials on the use of AI technology on
January 9 & 10.

                ** Paper Submission Procedure **

Authors should submit 700-word extended abstracts, typed with
double spacing, in two copies, before July 1, 1988, to:

               Mrs Vicky TOH
               Institute of Systems Science
               National University of Singapore
               Kent Ridge
               Singapore 0511

Each abstract should include the full addresses of all authors, and
references in numerical order. Authors of accepted submissions
will be notified by September 1, 1988. Papers not received in
full by this date will not be included for presentation. All
papers must be in English.

              ** Software Submission Procedure **

Authors not wishing to submit a paper, but ready to demonstrate
an artificial intelligence software program, are encouraged to do
so. The submission procedure is the same as for papers. The host
computers, operating systems, utilities and all interfaces must
be specified exactly, as well as the architecture and principles
underlying the program. Authors will have to be responsible for
all logistics, including supply of computers etc.

All authors of accepted papers or of accepted software demos are
expected to present their work in person. Failure to do so will
result in the corresponding paper not appearing in the workshop
proceedings.

                        ** Exhibit **

Companies interested in exhibiting publications, equipment, or
software falling within the scope of the workshop should contact
the organizing committee.

 ---------------------------------
       Important Dates

 Tutorials      : 9 & 10 Jan 1989
 Workshop       : 11-13 Jan 1989

 For submission
 of extended
 abstract       : 1 Jul 1988

 Notification of
 Acceptance     : 1 Sep 1988

 Camera Ready
 Papers Due     : 1 Nov 1988
 ---------------------------------

Language    : Throughout the workshop, English will be the
              official language. Translation facilities will NOT
              be available.
Proceedings : Proceedings will be published after the workshop,
              with Y.H. Pao, L.F.Pau, J.Motiwalla and H.H.Teh as
              editors. Copyrights for accepted papers are thus
              transferred to the publishers.
Registration: US$200 for Tutorials
Fees          US$200 for Workshop
              US$300 for the complete Workshop & Tutorials.
              (fees cover refreshments, lunches and conference
              documentation)
Hotels      : The price range for 5-star hotels in Singapore is
              US$50-US$75
Travel      : Arrangements will be made for special excursion air
              fares.

(Requests for information should also be directed to Mrs Vicky
Toh at the above address (Telex: ISSNUS RS 39988, Fax: 7782571,
BITNET: ISSVCT@NUSVM))

*** Conference Committee ***
Chairman : Juzar MOTIWALLA, Institute of Systems Science,
           National University of Singapore
Program Committee Chairmen:
           Yoh-Han PAO, Case Institute of Technology, US
           L.F. PAU, Technical University, Denmark
           Hoon-Heng TEH, Institute of Systems Science, Singapore
Organizing Committee Chairman:
           Desai NARASIMHALU, Institute of Systems Science, Singapore

*** International Program Committee *** (tentative)
  Jan Alkins, AION, US
  Jason Catlett, Univ. of Sydney, AUS
  C.H. Hu, Academy of Sciences, PRC
  Jae Kyu Lee, KAIST, KOREA
  Peng Si Ow, CMU, US
  Suzanne Pinson, Univ. of Paris, FRANCE
  Edison Tse, Stanford Univ., US
  Andrew Whinston, Purdue Univ., US

------------------------------

End of AIList Digest
********************
 6-Dec-87 22:25:34-PST,14356;000000000000
Mail-From: LAWS created at  6-Dec-87 22:20:10
Date: Sun  6 Dec 1987 22:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #280 - Robot Kits, Mac ES Tools, Scientific Method
To: AIList@SRI.COM


AIList Digest             Monday, 7 Dec 1987      Volume 5 : Issue 280

Today's Topics:
  Queries - Semantic Network Software & CHAT80 &
    Portable OPS-5 and Pseudo-Scheme & Expert System Liability,
  Education - Robotic "kits" for Kids,
  AI Tools - Expert System Tools for the Mac,
  Philosophy - Neural Nets are Science

----------------------------------------------------------------------

Date: 4 December 1987, 21:24:47 LCL
From: KANNAN@SUVM
Reply-to: AIList@Stripe.SRI.Com
Subject: Semantic Network Software

I would like to have information on any software package (shells or
specific AI languages) that supports semantic networks. I am at present
using semantic networks as a documentation tool and would like to
represent them using a shell.
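
As a minimal illustration of the kind of data structure involved (not a
description of any particular shell; the node and link names below are
invented), a semantic network can be represented directly in Lisp with
nodes as symbols and labelled links stored on their property lists:

    ;;; Bare-bones semantic-network sketch (illustrative only).
    (defun add-link (from link-type to)
      "Record a directed link of type LINK-TYPE from FROM to TO."
      (pushnew to (get from link-type)))

    (defun links (from link-type)
      "Return all nodes directly linked from FROM via LINK-TYPE."
      (get from link-type))

    ;; Example fragment:
    ;;   (add-link 'canary 'isa 'bird)
    ;;   (add-link 'bird   'isa 'animal)
    ;;   (add-link 'bird   'has-part 'wings)
    ;;   (links 'bird 'isa)  =>  (ANIMAL)
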
Thanks
Reply to KANNAN at SUVM
                                                      Ramu Kannan

------------------------------

Date: Sat, 05 Dec 87 20:10:21 EST
From: ganguly@ATHENA.MIT.EDU
Subject: chat80

Hi !
        I am interested in using the program CHAT80 developed
by Fernando Pereira. I would very much appreciate it if
someone could send me a copy of this program. I understand
that this program is documented in the following technical report:

Logic for Natural Language Analysis -
SRI Technical Note 275 - Fernando Pereira

------------------------------

Date: 6 Dec 87 17:54:59 GMT
From: USENET Master <uunet!gould!ufcsv!news@RUTGERS.EDU>
Reply-to: mfi@beach.cis.ufl.edu (Mark Interrante)
Subject: Portable OPS-5 and Pseudo-Scheme

I am looking for two systems I saw referenced recently: Portable OPS-5 written
in CL and Pseudo-Scheme written in CL. If anyone has these or has pointers to
these, I would appreciate hearing about it.


Mark Interrante                                               CIS Department
                                                       University of Florida
Internet:  mfi@beach.cis.ufl.edu                      Gainesville, FL  32611

------------------------------

Date: 4 Dec 87 05:05:00 GMT
From: portal!cup.portal.com!Barry_A_Stevens@uunet.uu.net
Subject: Can you sue an expert system?

I am interested in the legal aspects of using expert systems.

Consider, and please comment on, this scenario.

                     * * * * * * * * * * *

A well-respected, well-established expert systems(ES) company constructs
an expert financial advisory system. The firm employs the top ES
applications specialists in the country. The system is constructed with
help from the top domain experts in the financial services industry. It
is exhaustively tested, including verification of rules, verification of
reasoning, and further analyses to establish the system's overall value.
All results are excellent, and the system is offered for sale.

Joe Smith is looking for a financial advisory system. He reads the sales
literature, which lists names of experts whose advice was used when
building the system. It lists the credentials of the people in the
company who were the implementors. It lists names of satisfied users,
and quotes comments that praise the product. Joe wavers, weakens, and
buys the product.

"The product IS good,", Joe explains. "I got it up and running in less
than an hour!" Joe spends the remainder of that evening entering his own
personal financial data, answering questions asked by the ES, and
anticipating the results.

By now, you know the outcome. On the Friday morning before Black Monday,
the expert system tells Joe to "sell everything he has and go into the
stock market." ESs can usually explain their actions, and Joe asks for
an explanation. The ES replies "because ... it's only been going UP for
the past five years and there are NO PROBLEMS IN SIGHT."

Joe loses big on Monday. Since he lives in California (where there is
one lawyer for every four households, or so it seems, and a motion
asking that a lawsuit be declared frivolous is itself declared
frivolous) he is going to sue someone. But who?

     The company that implemented the system?

     The domain experts that built their advice into the system?

     The knowledge engineers who turned expertise into a system?

     The distributor who sold an obviously defective product?

Will a warranty protect the parties involved? Probably not. If real
damages are involved, people will file lawsuits anyway.

Can the domain experts hide behind the company? Probably not. The
company will specifically want to use their names and reputations as the
source of credibility for the product. The user's reaction could be,
"There's the so-and-so who told me to go into the stock market."

Can the knowledge engineers be sued for faulty construction of a system?
Why not, when people who build anything else badly can be sued?

How about the distributor -- after all, he ultimately took money from
the customer and gave him the product.

                     * * * * * * * * * * *

I would be very interested in any of your thoughts on this subject. I'd
be happy to summarize the responses to the net.

Barry A. Stevens
Applied AI Systems, Inc.
PO Box 2747
Del Mar, CA 92014
619-755-7231

------------------------------

Date: 4 Dec 87 19:47:27 GMT
From: pitstop!sundc!potomac!garybc@sun.com  (Gary Berg-Cross)
Subject: Robotic "kits" for kids


        Does anybody have experience with robotic kits appropriate for
kids 9-14?  I'm thinking of robot arms up to more complete systems that
might be assembled over a period of weeks and serve to introduce one
or two youngsters to the engineering issues before they enjoy the
fruits of their work.  Do any worthwhile products exist out there and
are there ones that might be in the price range of start-up computer
system costs?
        Experiences and references would be appreciated.

--

Gary Berg-Cross. Ph.D. (garybc@Potomac.ADS.COM)
Advanced Decision Systems

------------------------------

Date: 5 Dec 87 07:29:38 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Robotic "kits" for kids


      Edmund Scientific, of Barrington, NJ, offers a number of robot
devices in kit form.  Prices are in the $30-50 range.

      Fischertechnik, the magnificent German construction set, now offers
a line of electrical, pneumatic, and electronic components intended for
the building of robots and other servomechanisms.  For the very
bright, self-directed child.  Obtain the catalog at better toy stores.
$50 and up, far up.

                                        John Nagle

------------------------------

Date: 6 Dec 87 17:39:24 GMT
From: gleicher@cs.duke.edu  (Michael Gleicher)
Subject: Re: Robotic "kits" for kids

When I was about that age I had a lot of Fischertechnik stuff. It was neat
because you could build things that really worked, with electric motors and
gear drives and stuff.
A lot of the stuff I had was strange gear boxes, strain gauges,
differentials, or other things an 11-year-old kid would understand. My dad (a
mechanical engineer) liked these toys as much as I did.

A few years back at a computer show (I think it was the Trenton Computer Fair)
I saw some rather impressive demonstrations of robots built with the stuff.
The small electric motors were easy to interface with computers.
Unfortunately, these constructions were built out of a LOT of parts (and these
things are EXPENSIVE!!! they were expensive 10 years ago, I'd hate to see what
they cost now) and were very complex (they were designed and built by
engineers, not by kids).

I don't think if you buy your kids a whole bunch of Fischertechnik stuff they
will be building robots. But they will be building other things, and probably
having as much fun with it. It is my personal philosophy (I am NOT a
psychologist) that things like this help develop not only an interest in
mechanical things, but also develop skills like mathematical ability, logical
reasoning, design, planning and the like. Once these things are developed,
you're ready to build robots.

One last comment: Fischertechnik pieces are EXPENSIVE (or at least were). There
might be cheaper alternatives (whatever happened to old-fashioned Erector
sets, with the metal pieces and miniature bolts?  These might be even better
for building mini-robots).

Mike

Michael Lee Gleicher                    (-: If it looks like I'm wandering
        Duke University                 (-:    around like I'm lost . . .
E-Mail: gleicher@cs.duke.edu)(or uucp   (-:
Or P.O.B. 5899 D.S., Durham, NC 27706   (-:   It's because I am!

------------------------------

Date: 6 Dec 87 18:17:11 GMT
From: Robert Stanley <roberts%cognos%math.waterloo.edu@RELAY.CS.NET>
Reply-to: Robert Stanley
          <roberts%cognos%math.waterloo.edu@RELAY.CS.NET>
Subject: Re: ES tools for Mac


To the moderator:

My apologies for sending this to your group, but I am unable to persuade my
mailer that cive.ri.cmu.edu is a viable address on this unsupported Sunday
afternoon.  It then struck me that perhaps this information might be of
interest to the group after all; I'll leave you to make that decision.  When
support arrives on Monday, I'll get this mailed directly to Mary.Lou.Maher.

In article <8712010829.AA13510@ucbvax.Berkeley.EDU>
           Mary.Lou.Maher@CIVE.RI.CMU.EDU writes:
>I have to give a tutorial and workshop on Expert Systems at an engineering
>conference and would like to use the Mac since it has relatively little
>start up time. I am interested in simple rule based tools and object
>oriented tools that run on a Mac. Simplicity  is more important
>than sophistication. Can anyone help? Mary Lou Maher maher@cive.ri.cmu.edu

There are a number of possibilities, depending on how much you wish to achieve,
how big a Macintosh you have available, and how much you want to spend.  You
might also benefit from repeating your posting in comp.sys.mac, which is a very
lively group featuring some knowledgeable players.

With respect to Object-Oriented programming:

  * Probably the most interesting (and cheapest) approach is to use HyperCard,
    which comes free with all new Macs, and costs $49 (US) otherwise.  This
    has a true object-oriented language named HyperTalk very well integrated
    into its environment.  Drawback: needs minimum 128K ROMs, 1 Megabyte RAM,
    and is difficult to put to work without a hard disk.  The language is
    somewhat muddled, but quite powerful and *very* easy to use.

    Consult your local Apple dealer.

  * Other object-oriented possibilities include SmallTalk, available cheaply
    from APDA, and *much* more expensively from Parc Place Systems (I am not
    sure that they have brought their Mac product to market yet); the language
    NEON (a sort of cross between SmallTalk and FORTH) from Kriya Systems; and
    MacScheme, if you want to step right down to the nitty-gritty level.

    Consult a month's worth of the Mac news-stand publications.

With respect to shells and rule-based programming:

  * The hands-down winner in this field is NEXPERT Object from Neuron Data, but
    it is expensive, and runs best in large environments.  This is a real tool,
    aimed at implementing real solutions to real problems, but I suspect that
    it needs quite some practice to master.  On a Mac II with colour it runs
    rings around the VAX GPX II version.

    Neuron Data: 444 High Street, Palo Alto, CA 94301     (415) 321-4488

  * At the other end of the scale, there is a mickey-mouse implementation of
    OPS/5 for the Mac, but it only allows around 50 rules!  I am sorry, but
    I have no reference to hand.

To the best of my knowledge, there has been little or no attempt on the part of
any of the innumerable shell-builders in the IBM-PC world to port their
products to the Mac.  This has left the Mac world pretty much devoid of simple
tools in this class.

Further possibilities:

  * LPA Associates have an acceptable implementation of Micro-Prolog for the
    Mac, which would give you access to tools such as APES (Augmented Prolog
    for Expert Systems).

  * Advanced AI Systems produce AAIS-Prolog, which appears to be currently the
    best Prolog implementation for the Mac.  By no means perfect, but
    definitely practical.

I hope these suggestions will go part way towards solving your problem.  If you
need more detailed references, e-mail me or telephone (we are on EST).

Robert_S
--
R.A. Stanley             Cognos Incorporated     S-mail: P.O. Box 9707
Voice: (613) 738-1440 (Research: there are 2!)           3755 Riverside Drive
  FAX: (613) 738-0002    Compuserve: 76174,3024          Ottawa, Ontario
 uucp: decvax!utzoo!dciem!nrcaer!cognos!roberts          CANADA  K1G 3Z4

------------------------------

Date: 5 Dec 87 07:45:46 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Neural nets are science


     I've been implementing Rumelhart's learning technique, and
observing how fast it learns, what factors affect the learning rate,
and how my results compare with his.  Suddenly it struck me - I'm
repeating someone else's experiment, and comparing my data with his.
It's rare in this field to be able to repeat the experiments of
another and actually compare numerical results.  In this area, we
can do it.  We can conduct repeatable experiments and objectively
validate the work of others.  This is real science.  Instead of
arguing, we converge on accepted, repeatable results.   The
scientific method works here.
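
     (For concreteness, the weight update at the heart of Rumelhart's
learning technique is the generalized delta rule.  The fragment below is
only a sketch of that update for a single sigmoid output unit, not the
implementation discussed above; a full back-propagation network also
propagates the error terms back through its hidden layers, and the names
here are invented.)

    ;;; Sketch of the generalized delta rule for one sigmoid unit.
    ;;; Illustrative only; no bias term, for brevity.
    (defun sigmoid (x)
      (/ 1.0 (+ 1.0 (exp (- x)))))

    (defun train-unit (weights inputs target learning-rate)
      "Return WEIGHTS after one gradient step on one training example."
      (let* ((net   (reduce #'+ (mapcar #'* weights inputs)))
             (out   (sigmoid net))
             ;; error term: (target - out) * out * (1 - out)
             (delta (* (- target out) out (- 1.0 out))))
        (mapcar #'(lambda (w x) (+ w (* learning-rate delta x)))
                weights inputs)))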

     It's interesting that in the area of AI where things seem most complex,
chaotic, and noisy, one can do good experimental science.  This field
may move forward rapidly.

                                        John Nagle

------------------------------

End of AIList Digest
********************
 9-Dec-87 23:31:45-PST,11141;000000000000
Mail-From: LAWS created at  9-Dec-87 23:19:29
Date: Wed  9 Dec 1987 23:17-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #281 - Common Lisp Portability, Chess
To: AIList@SRI.COM


AIList Digest           Thursday, 10 Dec 1987     Volume 5 : Issue 281

Today's Topics:
  AI Tools - Common Lisp Portability,
  Games - Computer Chess Rankings

----------------------------------------------------------------------

Date: 8 Dec 87 17:52:26 GMT
From: orstcs!ruffwork@rutgers.edu  (Ritchey Ruff)
Subject: Common Lisp lacks portability (105 lines)

Would you use a language that can arbitrarily ignore some of your
code ???  Especially if different implementations ignored different
statements in the same code ???  Even if it didn't TELL you what
it was ignoring when ???

I have a bone to pick with Steele about something he left out of
the Common Lisp definition.  The above is *EXACTLY* what Common
Lisp *DOES* !!! In the sections about strong typing, "Common Lisp:
The Language" says the compiler or interpreter can ignore many
declarations.  It should also require that there be a standard way to
find out WHAT the compiler/
interpreter is ignoring (or using).  Something like a compiler flag
(":declares-ignored t/nil") or a global flag (*IGNORED-WARNINGS*) to
force common lisp to show what it is ignoring.

Why, you ask???  First, principle (I kind of like that ;-):
when you put in strong typing statements (like "(the integer foo)")
do you REALLY want them ignored in different ways, at different times,
by different Common Lisps - and not even know which is ignoring
what when ???

Second, I've just spent weeks tracking bugs caused by
compilers/interpreters ignoring different parts of my declarations.  Simply
because an interpreter/compiler can IGNORE strong typing (like
"(the integer foo)"), optimizer statements (like safety=3),
and declarations (like "(declare (integer foo))") I found that
code that ran ok on one version of Common Lisp would not even
compile under another, run but go into a break on another, and
run to completion but give wrong results on another !!!!

For example - lots of people use Bill Schelter's excellent SLOOP
looping macro package (thanks for all that work you put into an
excellent package, Bill!).  It's great, but because it tries to
optimize (by default it expands with declarations that give
type info on looping vars, etc.) it turns out to be non-portable.
Here is a totally non-portable piece of code -

        (DEFUN TST (N M)
               (SLOOP FOR I FROM N TO M COLLECT I))

This is quite simple, right?  When it expands N, M, and I get
declared of type integer, and the iteration var gets checked by
the "THE" statement each time it's incremented to see that it
remains of type integer.  Below are results from several different
Common Lisps (all this was done with safety=3) ---

        ----------------------------------------
        FranzExtendedCommonLisp> (tst 1 5)
                (1 2 3 4 5)

        FranzExtendedCommonLisp> (tst 1.0 5.0)

        Continuable Error: Object 2.0 is not of type FIXNUM.
        If continued with :continue, Prompt for a new object.
        [1c] <cl> ^D

        FranzExtendedCommonLisp> (compile 'tst)
        TST
        FranzExtendedCommonLisp> (tst 1.0 5.0)
                (1.0 2.0 3.0 4.0 5.0)

        FranzExtendedCommonLisp> (tst 1 5.0)
                (1 2 3 4 5)
        ----------------------------------------
        KyotoCommonLisp> (tst 1 5)
                (1 2 3 4 5)

        KyotoCommonLisp> (tst 1.0 5.0)

                Error: 2.0 is not of type FIXNUM.
                Error signaled by THE.

                Broken at THE.  Type :H for Help.
        KyotoCommonLisp>> :q
        KyotoCommonLisp> (compile 'tst)
                End Pass1.
                End Pass2.

                TST
        KyotoCommonLisp> (tst 1.0 5.0)
                (0)
        KyotoCommonLisp> (tst 1 5.0)
                NIL
        ----------------------------------------
        AllegroCommonLisp> (tst 1 5)
                (1 2 3 4 5)
        AllegroCommonLisp> (tst 1.0 5.0)
                (1.0 2.0 3.0 4.0 5.0)
        AllegroCommonLisp> (compile 'tst)
                TST
        AllegroCommonLisp> (tst 1.0 5.0)
                (1.0 2.0 3.0 4.0 5.0)
        AllegroCommonLisp> (tst 1 5.0)
                (1 2 3 4 5)
        ----------------------------------------

So we have 3 different "Common Lisps" (and the quotes are intentional)
that give radically different results for the SAME code !!!  EVEN the
interpreter (Help me, Spock ;-) !!!  If the compiler and interpreter
gave warnings when they ignored code, the bugs that this type of behavior
can cause would be much easier to track down.
When you have your code debugged and are looking for raw speed,
a global flag could be set to stop displaying warnings of this type.

MORAL OF THE STORY --- IF YOU WANT TRULY PORTABLE COMMON LISP CODE
        THAT WORKS THE SAME INTERPRETED AS COMPILED, *DO* *NOT* PUT
        STRONG TYPING OR OPTIMIZER STATEMENTS ANYWHERE IN YOUR CODE !!!
        IF *ANYTHING* *CAN* IGNORE A STATEMENT, *NEVER* USE THAT STATEMENT !!!
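
For what it's worth, here is a declaration-free rewrite of TST (a sketch
only, using nothing but plain DO); since it contains no type declarations
and no optimizer hints, there is nothing left for an implementation to
silently ignore, so interpreted and compiled behavior should agree:

        (DEFUN TST (N M)
          (DO ((I N (+ I 1))
               (RESULT '() (CONS I RESULT)))
              ((> I M) (NREVERSE RESULT))))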

I've gone on too long, but I think I've made my point.
Thanks for the bandwidth,

--Ritchey Ruff                          ruffwork@cs.orst.edu -or-
 "I haven't lost my mind,               ruffwork%oregon-state@relay-cs-net -or-
  its' backed up on tape somewhere..."  { hp-pcd | tektronix }!orstcs!ruffwork

------------------------------

Date: Fri, 04 Dec 87 10:33:58 PST
From: Stuart Cracraft <cracraft@venera.isi.edu>
Subject: computers vs. humans

Ken,

This might be of interest to the AI readership. I'll leave the
decision up to you...

 ***  C.R.A. Rates Commercial Chess Machines at American Open ***
            by Stuart Cracraft (copyright (C) 1987, 1988)

At the American Open, held during the Thanksgiving holidays, three chess
machines were certified. Certification involved having each machine play
48 rated games against strong human opposition. The result is a rating for
the machine.

The three manufacturers who submitted machines for certification are as
follows.
  Fidelity submitted a machine that is still somewhat of a mystery.
  [Editorial comment: C.R.A. policy should be amended to require full
  disclosure by the manufacturer. --Stuart] Fidelity representatives refused
  to reveal information about the micro-chip(s) inside the machine,
  memory-size, and search-speed.  (Rumor has it that this was a 16mhz 68020
  with a minimum of 128K memory for transposition table. Rumor also has it
  that this would be prohibitively expensive to market.)

  Mephisto came with the much-acclaimed Mephisto "Dallas" program in the
  commercial Mephisto Mondial unit (available exactly as it was in the
  certification, from U.S.C.F. for about $400), the winner of the 1986
  world-micro championship (when running on a 28mhz 68020 which is available
  from Mephisto commercially only at 14mhz). At certification time, it was
  running at 12 mhz on a 68000.

  Novag came with the "Super-Expert", a follow-on to the Novag Expert.
  Super-Expert ran at 6mhz and contained a 6502 processor.

Due to variations and fluctuations in the ratings of the machine's
opponents and the actual certification rating process itself (a
complicated procedure), no final rating-per-machine was calculated, though
estimates are available. Please remember that these are estimates only
and that the actual, final, certified rating will be available shortly.
Please also note that unless the machine is commercially available exactly
as it existed at certification time, the certified rating is not available
for advertisment purposes nor can the manufacturer place the C.R.A.
rating seal on any other machine.

So, with that disclaimer aside, here are the results of the tournament,
and at the very end are the estimated ratings for each manufacturer's
entry. Results consist of six games per round, organized in tabular
format. A 0 means a loss for the machine, .5 means a draw, and 1 means
a win for the machine. The ratings are of the human opponent
the machine played.

Round 1      2       3       4       5      6        7       8
-------------------------------------------------------------------
Fidelity (16mhz 68020? with 128K+ memory for transposition? by the Spracklens)
    2300-0  2185-0  2139-.5 2067-1  2256-0  2144-.5 2116-0  2274-1
    2283-0  2204-0  2175-0  1778-1  2244-0  2115-1  2105-1  2103-0
    2209-.5 2244-0  2260-1  2226-1  2351-0  2119-1  2161-1  2434-1
    2129-1  1969-.5 2163-0  2183-0  2067-1  2073-1  2055-0  2002-1
    1966-1  2175-0  2168-0  2122-.5 2134-0  2191-1  2181-1  2106-.5
    1944-1  2106-1  1963-1  1954-1  1970-1  1987-1  1890-1  1866-1

Mephisto (12mhz 68000 with "Dallas" program by Richard Lang)
    2286-0  2189-0  2137-0  2242-1  2243-.5 2250-0  2183-0  1871-1
    2267-0  2179-.5 2000-1  2069-1  2140-.5 2227-0  2123-.5 2058-.5
    2145-1  2216-0  2145-1  1929-1  2358-0  2074-1  2171-0  2172-.5
    2139-0  2174-1  2109-1  2167-.5 2175-0  1966-1  2127-0  2006-1
    2298-0  2119-1  1953-.5 2156-1  2117-.5 1958-1  2145-1  2053-1
    1924-0  1875-1  2182-0  1962-1  1947-.5 2109-1  2216-0  2030-1

Novag (6 mhz 6502 with "Super-Expert" program by David Kittinger)
    2294-.5 2262-.5 2261-.5 2320-.5 2250-0  2213-1  2217-0  2235-0
    2274-0  2209-0  1958-0  1966-.5 2115-0  2000-0  2145-1  1992-1
    2264-1  2389-0  2257-0  2219-0  2249-0  2068-1  2206-.5 2233-1
    2144-0  2122-1  2114-0  2074-0  2053-1  2160-1  2092-1  2000-1
    2137-0  2106-0  2156-1  2069-0  2050-.5 2089-1  2010-1  2167-0
    1854-1  1950-1  1922-1  1941-.5 1989-0  1952-1  1814-1  2157-1

Estimated ratings:
   Fidelity Experimental (not currently commercially available):
      USCF 2190-2200
   Mephisto Mondial 68000 XL (just becoming available commercially):
      USCF 2150-2160
   Novag Super-Expert (just becoming available commercially):
      An estimated rating for this machine is complicated by
      the fact that the first 30-games of the certification
      were played with a selective-search feature, and the last
      18-games were played with the feature disabled (done with C.R.A.
      permission.)  The C.R.A. extended an invitation to Novag to use the
      latter 18 games as the first 18 games of a new certification
      (requiring 30 more games be played).
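
For readers who want to sanity-check these figures against the tables
above, a common back-of-the-envelope method is an Elo-style performance
rating: the average opponent rating plus a correction derived from the
percentage score. The Common Lisp sketch below illustrates only that
arithmetic; it is not the C.R.A.'s certification procedure, and the
function name is invented for this note.

  ;; A minimal sketch, assuming results are given as
  ;; (opponent-rating . score) pairs with score 0, 1/2, or 1.
  (defun performance-rating (results)
    (let* ((n   (length results))
           (avg (/ (reduce #'+ results :key #'car) n))  ; mean opponent rating
           (raw (/ (reduce #'+ results :key #'cdr) n))  ; fraction of points scored
           (pct (min (max raw 1/100) 99/100)))          ; clamp away from 0 and 1
      ;; Invert the Elo expectancy formula E = 1/(1 + 10^(-d/400)).
      (+ avg (* -400 (log (- (/ 1 pct) 1) 10)))))

  ;; Example: an even score against 2200-average opposition comes out at 2200:
  ;;   (performance-rating '((2300 . 1) (2200 . 1/2) (2100 . 0)))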

The overall consensus is that a commercial master will first become
available in a year or less. Certainly, the prestige associated with
being the manufacturer of such a product, especially if it is
attractively priced, would be immense. There is clearly a race to be the
first manufacturer to get there.

        Stuart

------------------------------

End of AIList Digest
********************
 9-Dec-87 23:33:54-PST,13589;000000000000
Mail-From: LAWS created at  9-Dec-87 23:25:51
Date: Wed  9 Dec 1987 23:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #282 - Semantic Nets, Mac Lisp and Prolog, Science, Law
To: AIList@SRI.COM


AIList Digest           Thursday, 10 Dec 1987     Volume 5 : Issue 282

Today's Topics:
  Queries - OPS83 Execution Profiling & Expert System References &
    Epistemic Logic Examples & Planning Papers &
    UNISYS Master Apprentice Program,
  AI Tools - Semantic Nets & Mac Lisp and Prolog,
  Philosophy - Neural Nets as Science,
  Law - Can You Sue an Expert System?

----------------------------------------------------------------------

Date: 7 Dec 87 19:48:52 GMT
From: "David C. Bond" <dcbond%watvlsi.waterloo.edu@RELAY.CS.NET>
Subject: OPS83 execution profiling


At the University of Waterloo, a computer architecture called CUPID has
been developed to  rapidly perform the  match phase of OPS5.  CUPID is
a multiprocessor which executes a distributed RETE algorithm and
returns match information to the host machine.

I am investigating the changes required to allow CUPID to evaluate
OPS83 programs.  The main difference between these two languages is
OPS83's use of simple procedures in the left hand sides of rules.  The
processors currently used in CUPID are simple and were designed to
quickly compare fixed fields in a pair of tokens.  "Left hand
procedures" can perform  numerical calculations and comparisons of
arbitrary data structures.  These operations require a more
sophisticated processor than those currently used in CUPID.  Two
possibilities exist: make the processors more complex so they can
perform these operations, or off-load these operations to a subhost
(e.g. a 680x0 processor).  The latter alternative is the simpler of the
two, but I don't know what the impact on performance will be.

What I would like to find out is:

1. generally how many of these procedures are in an OPS83 program;

2. what their general execution characteristics are (i.e., execution
   time);

3. how many times they are called (note: I mean how many times they
   are evaluated, *NOT* how many times rules containing procedures in
   their left hand sides fire); and

4. how other researchers who have proposed multiprocessors for
   evaluating the RETE algorithm handle "left hand procedures".

Any data on these four items would be very much appreciated.
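
For concreteness, a minimal sketch (in Common Lisp, with invented names)
of the kind of instrumentation that would gather items 1-3: wrap each
left-hand procedure so that its call count and cumulative evaluation
time are recorded. OPS83 itself compiles to native code, so any real
measurement would presumably have to live in the generated code or the
OPS83 runtime; nothing below is part of OPS83 or CUPID.

  (defparameter *call-counts* (make-hash-table :test #'equal))
  (defparameter *total-time*  (make-hash-table :test #'equal))

  (defun profiled (name procedure)
    ;; Return a closure that behaves like PROCEDURE but records, under NAME,
    ;; how many times it was evaluated and how much run time it consumed.
    (lambda (&rest args)
      (let ((start (get-internal-run-time)))
        (unwind-protect (apply procedure args)
          (incf (gethash name *call-counts* 0))
          (incf (gethash name *total-time* 0)
                (/ (- (get-internal-run-time) start)
                   internal-time-units-per-second))))))

  ;; Usage: replace each left-hand procedure F with (profiled 'f #'f) before
  ;; a run, then report *call-counts* and *total-time* afterwards.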

Thanks in advance,

------------------------------

Date: Mon, 07 Dec 87  16:45 EST
From: WURST%UCONNVM.BITNET@WISCVM.WISC.EDU
Subject: Expert System references...

          I am a graduate student in Computer Science, and I am planning
     to do an independent study project next semester in Expert Systems.
     My project, as it stands now, will be to build a simple expert system
     for use in a microbiology lab.  I plan to write the system twice,
     once in LISP, and once in PROLOG, and then compare the relative
     merits of each language for expert systems.
          Can anyone suggest some references to get me started?  This
     will be my first expert system, and I am interested in literature
     on how to go about building one.  I would like to see information
     on designing expert systems in general, how to go about getting
     the information from the domain expert, and any information on
     building expert systems in LISP and PROLOG in particular.  Any
     help you can give me would be greatly appreciated.

----------
Karl R. Wurst
Computer Science and Engineering
University of Connecticut

BITNET:  WURST@UCONNVM

'Things fall apart.  It's scientific'  - David Byrne

------------------------------

Date: Wed, 9 Dec 87 13:58:21 PST
From: mcvax!casun.kaist.ac.kr!skhan@uunet.UU.NET (Sangki Han)
Subject: Epistemic Logic Examples

Hi!  My colleague and I have designed and implemented a theorem prover
for epistemic logic based on Konolige's deduction model.  We would like
to collect meaningful or well-known examples with which to test our
prover.  Examples that concern both the knowledge and belief of multiple
agents would be especially welcome, since we want to handle that kind of
situation.
Thanks in advance.


Sangki Han

------------------------------

Date: Wed, 9 Dec 87 08:54:16 PST
From: marcel%meridian@ADS.ARPA (Marcel Schoppers)
Subject: two rare papers wanted


I have been looking for the following two papers for several years, and have
been unable to get copies. I can't wait any longer -- my thesis needs them.
If you have one or both of them, *please* send me a message. So as to avoid
duplicate labor I'll let you know if someone else is already helping me out.
The articles are

        Warren, DHD.  "Generating conditional plans and programs"  Proc
        AISB Summer Conference, Edinburgh (1976), 344ff.

        Sacerdoti, ED  "Plan generation and execution for robotics"  Rhode
        Island Wshop on Robotics Research (Apr 1980).

marcel@ADS.ARPA

------------------------------

Date: 8 Dec 87 14:22 -0600
From: Imants Krumins <krumins%asd.arc.cdn%ubc.csnet@RELAY.CS.NET>
Subject: UNISYS Master Apprentice Program

I have been asked to develop a proposal for development of an expert
system under the UNISYS Master Apprentice Program (MAP).

For those unfamiliar with MAP, it is basically a program in which UNISYS
provides training and expert consulting with the goal of introducing the
client corporation to expert systems through the development of a
prototype system to "solve" an appropriate practical problem faced by
the client.  The trainee will presumably have gained sufficient
expertise during MAP to complete the development of the prototype into
a production system.

My background in this field consists primarily of reading this
newsgroup and a very limited amount of the literature, as well as some
low-level fooling around with LISP programming.  I would appreciate
hearing from anyone in the group with direct or indirect experience with
MAP, or with expert systems technology at UNISYS in general.  Is the MAP
a good way to get involved in expert systems development?  Are the MAP
products of any practical use?  What background reading would be useful
as preparation?  Any info regarding the quality of the MAP (personnel,
hardware, software, etc.) would be very useful.

I will summarize to the net if there is sufficient interest.

Imants Krumins                      (krumins@asd.arc.cdn)
Resource Technologies Department
Alberta Research Council
PO Box 8330, Postal Station F
Edmonton, Alberta
Canada T6H 5X2
403/450-5263

------------------------------

Date: Mon, 7 Dec 87 09:03:46 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: kannan's inquiry re sem nets

I couldn't contact Kannan by email (daemon problems), so here is a
reply about semantic nets:


The SNePS semantic network processing system might be what you want.
See:

Shapiro, Stuart C. (1979), "The SNePS Semantic Network Processing System,"
in N. V. Findler (ed.), Associative Networks (New York: Academic Press,
1979): 179-203.

and

Shapiro, Stuart C., & Rapaport, William J. (1987), "SNePS Considered as a
Fully Intensional Propositional Semantic Network," in G. McCalla &
N. Cercone (eds.), The Knowledge Frontier: Essays in the Representation of
Knowledge (New York: Springer-Verlag): 262-315; earlier version preprinted
as Technical Report No. 85-15 (Buffalo: SUNY Buffalo Dept. of Computer
Science, 1985); shorter version appeared in Proc. 5th Nat'l. Conf. on
Artificial Intelligence (AAAI-86; Philadelphia) (Los Altos, CA: Morgan
Kaufmann), Vol. 1, pp. 278-83.

------------------------------

Date: Mon 7 Dec 87 09:17:32-PST
From: George S. Cole <GCOLE@Sushi.Stanford.EDU>
Subject: Re: AIList V5 #280 - Robot Kits, Mac ES Tools, Scientific
         Method

Re: Expert System Shells for the Mac: Tools to Build the Tool

    The paucity of shells for the Macintosh is puzzling. There are currently
three language environments on the market which can be used to build such a
shell: (1) AAIS Prolog; (2) ExperTelligence's ExperCommonLisp; and
(3) Allegro Common LISP from Coral Software.
        AAIS Prolog is the least expensive of the three -- but contains the
least support for moving beyond the language. The price is below $200 (as
part of a class purchase, we were able to buy it for $70 a copy). Tying new
resources into the system will require some Mac-hacking.
        ExperCommonLisp comes in two varieties: plain (~$200) and chocolate
(~$800). It is an extension to LISP that allows object-oriented programming,
but lacks type-casting features. The debugger works on the compiled code
rather than the interpreted code, which can be puzzling. The expensive version
is supposed to produce stand-alone applications (but I have only used the
language).
        Allegro Common LISP falls into the mid-range (~$490). It is also an
extension to Common LISP that allows object-oriented programming, contains
the full type-casting power, and is a better implementation by far. However,
it demands 2 megabytes (5 for us cautious types) and does not yet have the
"stand-alone application" power, though this is promised for the future.

                George S. Cole, Esq.  GCole@sushi.stanford.edu
                793 Nash Av.
                Menlo Park, CA  94025 (415) 322-7760

------------------------------

Date: Mon, 7 Dec 87 09:08:55 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Re: Neural Nets are science

I'd like to congratulate John Nagle on his sense of humor.  Without
arguing about his premise, I'd like to point out that by his argument,
every time I make a phone call I am doing science by comparing my results
with Alexander Graham Bell's.  Building something and exploring how it
works is not even close to scientific methodology.  Experimentation
requires little things like hypotheses and analytic methods.  I hope
Mr. Nagle can succeed at developing a scientific approach to neural nets,
but comparing results???  Not even close.

------------------------------

Date: 7 Dec 87 16:54:08 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: Can you sue an expert system?

In article <1788@cup.portal.com> Barry_A_Stevens@cup.portal.com writes:
>
>Consider, and please comment on, this scenario.
>
>                     * * * * * * * * * * *
>
>A well-respected, well-established expert systems(ES) company constructs
>an expert financial advisory system. The firm employs the top ES
>applications specialists in the country. The system is constructed with
>help from the top domain experts in the financial services industry. It
>is exhaustively tested, including verification of rules, verification of
>reasoning, and further analyses to establish the system's overall value.
>All results are excellent, and the system is offered for sale.
>
Anyone who is willing to accept these premises at face value may be more
interested in investing in the bridge I have between Manhattan and Brooklyn
than in expert systems.  The sort of "ideal" product envisaged here is
certainly beyond the grasp of current development technology and may remain
so for quite some time.  The most important omission from this scenario is
any mention of a disclaimer attached to the product.  I have encountered a
variety of advertisements for human financial consultants, and, as a rule,
there is always some disclaimer about risk present.  The idea that there
would be a machine-based product which would be risk-free borders on the
ludicrous.  If a customer were hooked by such a claim, most likely the only
place he would be able to complain would be to the Better Business Bureau.
>
>By now, you know the outcome. On the Friday morning before Black Monday,
>the expert system tells Joe to "sell everything he has and go into the
>stock market." ESs can usually explain their actions, and Joe asks for
>an explanation. The ES replies "because ... it's only been going UP for
>the past five years and there are NO PROBLEMS IN SIGHT."
>
Would Joe have accepted such an explanation from a human advisor?  If so,
he has gotten what he deserved.  (I happened to be discussing an analogous
case with my lawyer-neighbor.  Our scenario involved medical systems and
malpractice, but the theme is basically the same.)

This raises another question:  Assuming Joe is no dummy (and that he can
afford good human advice), why would he be interested in a machine advisor?
I would argue that the area in which machines tend to have the edge over
humans is that of quantitative risk assessment.  Thus, the machine is more
likely to synthesize and justify concrete quantitative predictive models
than is a human expert, whose skills are fundamentally qualitative.  The
best Joe could hope for, then, would be such a model.  INTERPRETING the
model would remain his responsibility (although that interpretation may be
linked to the machine's justification of the model itself).

I would conclude that this scenario is far too simplistic for the real world.
I suggest that Mr. Stevens debug it a bit.  Then we might be able to have a
more realistic debate on the matter.

------------------------------

End of AIList Digest
********************