Mail-From: LAWS created at  2-Sep-84 21:44:48
Date: Sun  2 Sep 1984 21:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #113
To: AIList@SRI-AI


AIList Digest             Monday, 3 Sep 1984      Volume 2 : Issue 113

Today's Topics:
  Humor - Eliza Passes Turing Test (again),
  AI Tools - Taxonomy Assistant & User Aids,
  Psychology - User Modeling,
  Speech Recognition - Separating Syllables,
  Conferences - Functional Languages
----------------------------------------------------------------------

Date: 29 Aug 84 18:22:02-PDT (Wed)
From: decvax!minow @ Ucb-Vax.arpa
Subject: Eliza Passes Turing Test (again)
Article-I.D.: decvax.59

Readers of net.ai might enjoy this extract from "Computing Across
America,"  Chapter 11: A High-tech Oasis in the Texas Sun, written by
Stephen K. Roberts, published originally in (and Copyright 1984 by)
Online Today, September 1984 (a CompuServe Publication).

                        The Phantom SysOp

  (Austin SYSOP speaking)

        "Personally, I get a little tired of answering CHAT
        requests.  That's why I brought up Eliza."

        "You mean..."

        He twinkled with wry humor.  "You got it.  It's the
        perfect Turing test.  I have a second computer hooked
        up to my board system.  When someone issues a CHAT
        request, it says 'Hello?  Can I help you?' I changed
        all the messages so it emulates a sysop instead of a
        psychiatrist.  Some people never do catch on."

        I groaned.  "That's cruel!"

(Transcribed by my colleague, John Wasser.)

Martin Minow
decvax!minow

------------------------------

Date: 31 Aug 1984 11:31:36 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: reply to McGuire about taxonomy assistant


(Reply to Wayne McGuire's comments on the need for a taxonomy assistant: )

I agree with the notion that representing and using conceptual relations
effectively  is one of the central problems of AI research.  You say

     "It seems to me that in the knowledgebase management systems which
     I hope we will see developed in the near future will be embedded rich
     resources for evoking and representing taxonomies. Semantic nets
     provide an ideal scheme with which to do just that."


How do we know that semantic nets are so good?  Isn't this a complex
unsolved problem, for which the effectiveness of semantic nets is still
an open issue?

I suspect that semantic nets are useful for these problems just as
binary notation is useful.  The representational power is there, but
success depends not so much on the distinctive properties of nets as on
the techniques that create and use the nets.  I agree that they look
promising.  (Promises, promises.)

You suggest that a taxonomy assistant might work by operating on the
vocabulary of the domain, relating items.  That sounds like another
promising idea that might lead to a very powerful set of
generalizations if it were tried.

In the case that prompted all this, there is no recognized domain or
literature.  I have an experimental program which includes a specialized
internal interface language having several hundred predefined operators.
Doing a taxonomy is one way to increase my understanding of the
interface language.  So I would like to have a taxonomy assistant that
did not have to presume a domain.
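
[To make the wish concrete, here is a minimal editorial sketch in C --
not an existing program; every prompt, limit, and data structure in it
is illustrative only -- of the core loop such a domain-free assistant
might start from.  It asks the user to relate each new term to the
terms already entered and prints the resulting is-a tree. -- Ed.]

#include <stdio.h>
#include <string.h>

#define MAXTERMS 100

static char name[MAXTERMS][32];   /* the vocabulary                */
static int  parent[MAXTERMS];     /* is-a link; -1 means top level */
static int  nterms = 0;

/* Print a term and, indented beneath it, everything placed under it. */
static void print_tree(int root, int depth)
{
    int i;
    for (i = 0; i < depth; i++) printf("  ");
    printf("%s\n", name[root]);
    for (i = 0; i < nterms; i++)
        if (parent[i] == root) print_tree(i, depth + 1);
}

int main()
{
    char term[32], answer[8];
    int i;

    /* Elicit the taxonomy: each new term is tried under every
       earlier term until the user accepts a placement. */
    while (nterms < MAXTERMS) {
        printf("term (or '.' to finish): ");
        if (scanf("%31s", term) != 1 || strcmp(term, ".") == 0) break;
        strcpy(name[nterms], term);
        parent[nterms] = -1;
        for (i = 0; i < nterms; i++) {
            printf("is %s a kind of %s? (y/n) ", term, name[i]);
            if (scanf("%7s", answer) == 1 && answer[0] == 'y') {
                parent[nterms] = i;
                break;
            }
        }
        nterms++;
    }

    /* Display the result, one tree per top-level term. */
    for (i = 0; i < nterms; i++)
        if (parent[i] < 0) print_tree(i, 0);
    return 0;
}

[A real assistant would propose relations itself rather than ask about
every pair; the point is only that nothing in the loop presumes a
particular domain. -- Ed.]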

Bill Mann

------------------------------

Date: 31 Aug 1984 12:20:08 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: aids to the mind

I've gotten several interesting replies to my inquiry about finding a
"taxonomy assistant"  that could help me in thinking about the
organization of a collection of items.  It raises a larger issue:

        What intellectual operations are worth developing programmed
aids for?

Nobody came up with a pointer to an existing program for the taxonomy
task (except for something named PEGASUS, on the related topic of
vocabulary construction; I need to check it out.)  But still, there
might be other sorts of programmed assistants out there.

Here is a wish list for programmed assistants that could potentially be
important for my lifestyle:

RESOURCE ALLOCATION ASSISTANT:  Given a supply or a flow of resources,
help allocate them to particular uses.  Budgeting, personal time
allocation and machine scheduling are special cases.

TIME ALLOCATION ASSISTANT:  (a specialization, very important to me)
Help work through allocation of my time so that feasible things get
done, infeasible things don't get started, the things that get done are
the important ones,  things tend to get done on time,  allocations get
revised appropriately in the face of change, and the allocation provides
suitable flexibility and availability to other people.

I have in mind here much more than just the scratchpad-and-alarm-clock
kind of time allocation resource.  Those are fine as far as they go, but
they don't go nearly deep enough.  I want something that will ask me the
pertinent questions when they are timely.

EXPOSITORY WRITING ASSISTANT:  In this case, my research on text
generation has gone far enough to assure me that such a program is
feasible.  I have a systematic manual technique that works pretty well
for me and could be developed into an interactive aid.  It would be
very different from the sentence-critic sort of programs that are now
emerging.

NEGOTIATION ASSISTANT:  There is a viewpoint and a collection of skills
that are very helpful in bargaining to an agreement.  A program could
raise a lot of the right questions.

                              ***

That is just a starter list.  What other sorts of assistants can we
identify or conceive of?

Other ideas can probably be developed from the problem-solving
literature, e.g. Polya, Wickelgren and Lenat.

This sort of thing could go far beyond the capabilities of autonomous AI
programs.  Often there are well known heuristics that are helpful to
people but too indefinite for programs to apply; an assistant could
suggest them.  Proverbs are one sort.

In sum: what do we want, and what do we have?

Bill Mann

------------------------------

Date: 29 Aug 84 14:21:56-PDT (Wed)
From: hplabs!hpda!fortune!amd!decwrl!dec-rhea!dec-bartok!shubin @
      Ucb-Vax.arpa
Subject: Replies to query for citations on user modeling
Article-I.D.: decwrl.3473

I posted a request for some papers on modeling users and/or user behavior,
and promised to post the results of my query.  (The original posting was on
or about 18 July 84).  Here is a summary of the results; a line of hyphens
separates one person's response from another.  I haven't had to check all
of them, and I may wind up with more references, which may be posted
later.  Any more suggestions are welcome.  Thanks to all.

------
Elaine Rich, "Users are Individuals: Individualizing User Models"
        Int.J.Man-Machine Studies 18(3), March, 1983.
Zog project at CMU
Elliot Soloway at Yale -- modeling novice programmer behavior
"The Psychology of Human-Computer Interaction" by Card, Moran, and Newell
Current work at Penn by Tim Finin, Ethel Schuster, and Martha Pollack
             at UT at Austin by Elaine Rich
Work on on-line assistance:
        Wizard program by Jeff Shrager and Tim Finin (AAAI 82)
        Integral help by Fenchel and Estrin
        Lisp tutor - John Anderson at CMU
------
Regarding users' models of computer systems:
a.      Shneiderman, B. and Mayer, R. "Syntactic/Semantic Interactions
        in Programmer Behaviour: A Model and Experimental Results"
        Int. J. of Computer and Information Sciences, Vol 8, No. 3, 1979
b.      Carroll, J.M., and Thomas, J.C. "Metaphor and the Cognitive
        Representation of Computing Systems" IEEE Trans. on Systems,
        Man, and Cybernetics, Vol SMC - 12, No. 2, March/April 1982.
c.      Anything from the CHI'83 conference -- Human Factors in
        Computing Systems sponsored by ACM.
About Modelling the User:
a.      Card, Newell and Moran, a book whose title escapes me
        offhand [The Psychology of Human-Computer Interaction,
        cited above] -- it has a chapter entitled "The Human
        Information Processor".
b.      Rich, E. "Users are Individuals: Individualizing User Models"
        Int. J. Man-Machine Studies 18, 1983
--------
Peter Polson (U. Colorado) and David Kieras (U. Arizona) have a paper in this
year's Cognitive Science Conference on a program that tests user interfaces
by testing high-level descriptions of user behavior against expected system
behavior.
--------
There was a lot of work done at Xerox PARC in the late
70's on task times and such.   They were interested
in work relating to I/O device design (mice, etc.), as
well as general models.  Some very good task timing
models came out of that work, I believe.
-------
Take a look at the work of Elaine Rich at Texas (formerly CMU).
-------
Chapter 6, The Psychology of Human-Computer Interaction, S.K. Card,
  T.P. Moran, A. Newell
-------
...Some of the results of this are published in the 1983 AAAI Proceedings
in the paper "Learning Operator Semantics by Analogy" by S. Douglas
and T. Moran.

"A Quasi-Natural Language Interface to UNIX"; S. Douglas; Proceedings of
the USA-Japan Conference on Human-Computer Interaction; Hawaii; 18-20 Aug
84; Elsevier.

------------------------------

Date: 31 Aug 84 13:05:13-PDT (Fri)
From: ihnp4!houxm!mhuxl!ulysses!burl!clyde!watmath!utzoo!dciem!mmt @
      Ucb-Vax.arpa
Subject: Re: Hearsay II question in AIList Digest   V2 #110
Article-I.D.: dciem.1098


    It turns out that even to separate the syllables in continuous speech
    you need to have some understanding of what the speaker is talking
    about! You can discover this for yourself by trying to hear the sounds
    of the words when someone is speaking a foreign language. You can't
    even repeat them correctly as nonsense syllables.

I used to believe this myth myself, but my various short visits to Europe
(mostly 1-3 week trips) have convinced me otherwise.  There
is no point trying to repeat syllables as nonsense, partly because the
sounds are not in your phonetic vocabulary.  More to the point, syllable
separation definitely preceded understanding.  I HAD to learn to separate
syllables of German long before I could understand anything (I still
understand only a tiny fraction, but now I can parse most sentences
into kernel and bound morphemes because I now know most of the common
bound ones).  My understanding of written German is a little better,
and when I do understand a German sentence, it is because I can transcribe
it into a visual representation with some blanks.

(Incidentally, I also do some research in speech recognition, so I am
well aware of the syllable segmentation problem.  There do exist
segmentation algorithms that correctly segment over 95% of the syllables
in connected speech without any attempt to identify phonemes, let
alone words or the "meaning" of speech.  Mermelstein, now in Montreal,
and Mangold in Ulm, Germany, are names that come to mind.)
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: Wed 29 Aug 84 10:53:53-EDT
From: Joseph E. Stoy <JES@MIT-XX.ARPA>
Subject: Call For Papers

CALL FOR PAPERS

          FUNCTIONAL PROGRAMMING LANGUAGES AND COMPUTER ARCHITECTURE
                          A Conference Sponsored by
           The International Federation for Information Processing
                        Technical Committees 2 and 10

                                Nancy, France
                           16 to 19 September, 1985


This conference has been planned as a successor to the highly successful
conference on the same topics held at Wentworth, New Hampshire, in October
1981.  Papers are solicited on any aspect of functional or logic programming
and on computer architectures to support the efficient execution of such
programs.

Nancy, in the eastern part of France, was the city of the Dukes of Lorraine; it
is known for its "Place Stanislas" and its "Palais Ducal".  "Art Nouveau"
started there at the beginning of this century.  There are beautiful buildings
and museums and, of course, good restaurants.

Authors should submit five copies of a 3000 to 6000-word paper (counting a full
page figure as 300 words), and ten additional copies of a 300-word abstract of
the paper to the Chairman of the Programme Committee by 31 January 1985.  The
paper should be typed double spaced, and the names and affiliations of the
authors should be included on both the paper and the abstract.

Papers will be reviewed by the Programme Committee with the assistance of
outside referees; authors will be notified of acceptance or rejection by 30
April 1985.  Camera-ready copy of accepted papers will be required by 30 June
1985 for publication in the Conference Proceedings.

Programme Committee:
        Makoto Amamiya (NTT, Japan)
        David Aspinall (UMIST, UK)
        Manfred Broy (Passau University, W Germany)
        Jack Dennis (MIT, USA)
        Jean-Pierre Jouannaud (CRIN, France)
        Manfred Paul (TUM, W Germany)
        Joseph Stoy (Oxford University, UK)
        John Williams (IBM, USA)

Address for Submission of Papers:
        J.E. Stoy, Balliol College, Oxford OX1 3BJ, England.

Paper Deadline:  31 January 1985.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To receive a copy of the advance programme, return the following information to
J.E. Stoy, Balliol College, Oxford OX1 3BJ, England
or by electronic mail to JESTOY@UCL-CS.ARPA

I plan to submit a paper: [ ]
        Subject:
Name:
Organisation:
Address:

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  5-Sep-84 09:35:42
Date: Wed  5 Sep 1984 09:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #114
To: AIList@SRI-AI


AIList Digest           Wednesday, 5 Sep 1984     Volume 2 : Issue 114

Today's Topics:
  LISP - LISP for the Eclipse 250 with RDOS,
  Expert Systems - AGE Contact? & Programmed Assistants,
  Speech Understanding - Word Recognition,
  Philosophy - Now and Then,
  Seminars - Bay Area Computer Science,
  Conference - IJCAI-85 Call for Papers
----------------------------------------------------------------------

Date: 4 Sep 1984 9:00-EDT
From: cross@wpafb-afita
Subject: LISP for the Eclipse 250 with RDOS

I recently joined a group here doing low level pattern recognition work
applied to speech recognition and image processing. We have an Eclipse
250 running the RDOS operating system. We also have C (unix version 7
compatable). Does anyone out there know of a dialect of LISP that can be
used with this system? Any suggestions? Please respond to the address
listed below. Thanks in advance.

Steve Cross
cross@wpafb-afita

------------------------------

Date: 4 Sep 84 10:29 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: AGE:  Who to contact?

I'm interested in looking into AGE, which is described as a "Stanford
product."  Does anyone have the name and phone number of the person to
contact for things such as manuals, user guides, etc.?  Thanks in
advance.

--Ken <Feuerman.pasa@Xerox.ARPA>.

------------------------------

Date: Wed, 5 Sep 84 15:59 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: programmed assistants

In response to Bill Mann's list of desirable mechanised assistants,
one of our graduate students urgently wants to know: if he drops
everything else and writes a thesis-writing assistant, will he get
a PhD for it?
Tony Hasemer.

------------------------------

Date: 4 Sep 84 09:56 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Understanding speech vs. hearing words

The subject has come up about whether one need understand the semantics
of an utterance before one can recognize words, or even syllables.
While it seems a bit of research has been cited for both sides, I
thought it would be interesting to offer an experience of mine for
evidence:

I was travelling in Italy, and it was that time of the evening again,
time to find our daily ration of gelato (Italian ice cream)!  Our search
brought us into a bar of sorts, with Paul Simon's (I think it was Paul
Simon) recording of "Slip Sliding Away" playing in the background.  The
bartender was singing along, only it didn't quite come out right.  What
he was singing was more like "Sleep Sliding Ayway" (all of the vowels
being rather exaggerated).  I regret that I had no way of knowing whether
he had seen the words written down before (which could account for some
of his mispronunciations), but it was pretty clear that he had no idea
of the meaning of what he was singing.


--Ken.

[It seems to me that the same sort of anecdote could be told of any
child; they frequently store and repeat phrases that are to them
merely nonsense (e.g., the alphabet, especially LMNOP).  More to the
point, a good first step in learning any new oral language is to listen
to it, sans understanding, long enough to begin to identify syllables.
This greatly simplifies later word drills since the student can then
grasp the phonetic distinctions that the teacher considers important
(and obvious).  The implication for speech understanding is that it is
indeed possible to identify syllables without understanding, but only
after some training and the development of fairly sophisticated
discriminant capabilities.  -- KIL]

------------------------------

Date: Fri, 31 Aug 84 15:53 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Now and Then

(Tony Hasemer challenges Norm Andrews' faith about cause and effect)

You say: "logical proof involves implication relationships between
discrete statements...causality assumes implication relationships
between discrete events".

Don't think me just another rude Brit, but:-

     > in what sense is a statement not an event
         A statement (as opposed to a record of a statement,
         which is a real-world object) takes place in the
         real world and therefore is an event in the real
         world.

     > what do you mean by "implication"
         This is the nub of all questions about cause and
         effect, and of course the word subsumes the very
         process it tries to describe.  One can say "cause
         and effect", or "implication", or "logically
         necessary", and mean ALMOST the same thing in each
         case.  They all refer to that same intangible feeling
         of certainty that a certain argument is valid or that
         event B was self-evidently caused by event A.

     > what do you mean by "relationship"
         Again, this is a word which presumes the existence
         of the very link we're trying to identify.


   May I suggest the following-

   The deductive logical syllogism (the prototype for all
infallible arguments) is of the form

     All swans are white.
     This is a swan.
     Therefore it is white.

Notice that the conclusion (3rd sentence) is guaranteed true only if the two
premises (sentences 1 and 2) are true.  And if you can make any
descriptive statement beginning "All..." then you must be talking
about a closed system.
   Mathematics, for example, is a set of logical statements about
the closed domain of numbers.  It is common, but on reflection rather
strange, to talk about "three oranges" when each orange is unique and
quite different from the rest.  It is clear that we impose number
systems on the real world, and logical statements about the square
root of the number 3 don't tell us whether or not there is a real
thing called the square root of three oranges.
   I'm saying that closed systems do not map onto the real world.
Mathematics doesn't, and nor does deductive logic (you could never
demonstrate, in practice, the truth of any statement about ALL of a
class of naturally-occurring objects).
   On the contrary, the only logic which will in any sense "prove"
statements about the real world (such as that the sun will rise tomorrow)
is INDUCTIVE logic.  Inductive logic and the principle of cause and
effect are virtually synonymous.  Inductive logic is fuzzy (deductive
logic is two-valued), and bootstraps itself into the position of
saying: "this must be true because it would be (inductively) absurd to
suppose the contrary".
   There is no real problem, no contradiction, between the principle
of cause and effect and deductive logic.  There is merely a category
mistake.  The persuasive power of deduction is very appealing, but
to try to justify an inductive argument (e.g. causality) by the
criteria of deductive arguments is like trying to describe the colour
red in a language which has no word for it.  We just have to accept that
in dealing with the real world the elegant and convenient certainties
of the deductive system do not apply.  The best logic we have is
inductive: if I kick object A and it then screams, I assume that it
screamed BECAUSE I kicked it.

   If repeated kicking of object A always produces the concomitant
screams, I have two choices: either to accept the notion of causality,
or to envisage the real world as being composed of a vast series of
arbitrary possibilities, like billions of tossed pennies which only
by pure chance have so far happened always to come down heads.  Personally,
I much prefer a fuzzy, uncertain logic to a chaos in which there is no
logic at all!  Belief in causality, like belief in God, is an act of
faith: you can't hope to PROVE it.  But whichever one chooses, it doesn't
really matter: stomachs still churn and cats still fight in the dark.
The very best solution to the problem of causality is to stop worrying
about it.

     Tony.

------------------------------

Date: 04 Sep 84  1424 PDT
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Seminars - Abstracts for BATS

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The next Bay Area Theory Seminar (aka BATS) will be at Stanford, this Friday,
7 September.

The talks (and lunch) will take place in Room 200-305. This is a room on the
third floor of History Corner, the NE corner of the Stanford Campus Quadrangle.

The schedule:

10:00am         U. Vazirani (Berkeley):
                "2-Processor Scheduling in Random NC"

11:00am         R. Anderson (Stanford):
                "A P-complete Problem and Approximations to It"

noon:           Lunch

1:00pm          E. Lawler (Berkeley):
                "The Traveling Salesman Problem Made Easy"

2:00pm          A. Schoenhage (Tuebingen, IBM San Jose):
                "Efficient Diophantine Approximation"


*****************************************************************************

ABSTRACTS:

10:00am:        U. Vazirani:

                    "2-Processor Scheduling in Random NC"

(joint work with D. Kozen and V. Vazirani)

The Two-Processor Scheduling Problem is a classical problem in Computational
Combinatorics, and several efficient algorithms have been designed for it.
However, these algorithms are inherently sequential in nature. We give a
randomizing poly-log time parallel algorithm (run on a polynomial number of
processors). Interestingly enough, our algorithm for this purely
combinatoric-looking problem draws on some powerful algebraic methods.  The
Two-processor Scheduling problem can be stated as follows:

Given a set S of unit time jobs, and a partial order specifying precedence
constraints among them, find an optimal schedule for the jobs on two identical
processors.


11:00am:        R. Anderson (Stanford):

            "A P-complete Problem and Approximations to It"

The P-complete problem that we will consider is the High Degree
Subgraph Problem.  This problem is: given a graph G=(V,E) and an integer k,
find the maximum induced subgraph of G that has all nodes of degree at least
k.  After showing that this problem is P-complete, we will discuss two
approaches to finding approximate solutions to it in NC.  We will give a
variant of the problem that is also P-complete that can be approximated to
within a factor of c in NC, for any c < 1/2, but cannot be approximated by a
factor of better than 1/2 unless P=NC.  We will also give an algorithm that
finds a subgraph with moderately high minimum degree.  This algorithm exhibits
an interesting relationship between its performance and the time it takes.
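
[Editorial note: sequentially, the exact problem is easy -- repeatedly
delete any node of degree less than k, and the survivors form the
unique maximum such subgraph; the difficulty the talk addresses is
doing this fast in parallel (in NC).  The following C sketch of the
standard sequential peeling loop, on a made-up graph, is added for
readers new to the problem and is not from the talk. -- Ed.]

#include <stdio.h>

#define NV 6                    /* illustrative graph size */

/* Adjacency matrix of a made-up undirected graph: nodes 0-3 form a
   clique; node 4 hangs off 3; node 5 hangs off 4. */
static int adj[NV][NV] = {
    {0,1,1,1,0,0},
    {1,0,1,1,0,0},
    {1,1,0,1,0,0},
    {1,1,1,0,1,0},
    {0,0,0,1,0,1},
    {0,0,0,0,1,0}
};

int main()
{
    int k = 3;                  /* required minimum degree */
    int alive[NV], deg, changed, i, j;

    for (i = 0; i < NV; i++) alive[i] = 1;

    /* Peel: repeatedly discard any node whose degree among the
       survivors is below k.  The order of removals cannot matter,
       so the result is the unique maximum such subgraph. */
    do {
        changed = 0;
        for (i = 0; i < NV; i++) {
            if (!alive[i]) continue;
            deg = 0;
            for (j = 0; j < NV; j++)
                if (alive[j] && adj[i][j]) deg++;
            if (deg < k) { alive[i] = 0; changed = 1; }
        }
    } while (changed);

    printf("Nodes of the min-degree-%d subgraph:", k);
    for (i = 0; i < NV; i++)
        if (alive[i]) printf(" %d", i);
    printf("\n");                /* prints nodes 0 1 2 3 here */
    return 0;
}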



 1:00pm:        E. Lawler (Berkeley):

                 "The Traveling Salesman Problem Made Easy"

    Despite the general pessimism resulting from both theory and
practice, the TSP is not necessarily a hard problem--there are many
interesting and useful special cases that can be solved efficiently.
For example, there is an efficient procedure for finding an optimal
solution for the bottleneck TSP in the case that the distance matrix
is "graded." This result will be used to show how to solve a problem
of great practical importance to paperhangers: how to cut sheets from
a long roll of paper so as to minimize intersheet wastage.

    Material for this talk is drawn from a chapter, by P. Gilmore,
E.L. Lawler, and D.B. Shmoys, of a forthcoming book, The Traveling
Salesman Problem, edited by Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan,
and D.B. Shmoys to be published by J. Wiley in mid-1985.


 2:00pm:        A. Schoenhage (Tuebingen, IBM San Jose):

                    "Efficient Diophantine Approximation"

Abstract: Given (a_1,...,a_n) in R^d (with d < n) and epsilon > 0, how to find
a nontrivial x = (x_1,...,x_n) in Z^n of minimal Euclidean norm nu such that
|x_1 a_1 + ... + x_n a_n| < epsilon holds. A weak version of this classical
task (where epsilon and nu may be multiplied by 2^(cn) ) can be solved in time

                O(n^2 (d*n/(n-d) * log(1/epsilon))^(2+o(1))).

The main tool is an improved basis reduction algorithm for integer lattices.

------------------------------

Date: Tue 4 Sep 84 09:27:09-PDT
From: AAAI-OFFICE <AAAI@SRI-AI.ARPA>
Subject: IJCAI-85 Call for Papers


                                IJCAI-85
                             CALL FOR PAPERS

The IJCAI conferences are the main forum for the presentation of Artificial
Intelligence research to an international audience.  The goal of the IJCAI-85
is to promote scientific interchange, within and between all subfields of AI,
among researchers from all over the world.  The conference is sponsored by the
International Joint Conferences on Artificial Intelligence (IJCAI), Inc., and
co-sponsored by the American Association for Artificial Intelligence (AAAI).
IJCAI-85 will be held at the University of California, Los Angeles from
August 18 through August 24, 1985.

        * Tutorials: August 18-19; Technical Sessions: August 20-24

TOPICS OF INTEREST

Authors are invited to submit papers of substantial, original, and previously
unreported research in any aspect of AI, including:

* AI architectures and languages
* AI and education (including intelligent CAI)
* Automated reasoning (including theorem proving, automatic programming,
  planning, search, problem solving, commonsense, and qualitative reasoning)
* Cognitive modelling
* Expert systems
* Knowledge representation
* Learning and knowledge acquisition
* Logic programming
* Natural language (including speech)
* Perception (including visual, auditory, tactile)
* Philosophical foundations
* Robotics
* Social, economic and legal implications


REQUIREMENTS FOR SUBMISSION

Authors should submit 4 complete copies of their paper.  (Hard copy only, no
electronic submissions.)

        * LONG PAPERS: 5500 words maximum, up to 7 proceedings pages
        * SHORT PAPERS: 2200 words maximum, up to 3 proceedings pages

Each paper will be stringently reviewed by experts in the topic area specified.
Acceptance will be based on originality and significance of the reported
research, as well as the quality of its presentation.  Applications clearly
demonstrating the power of established techniques, as well as thoughtful
critiques of previously published material will be considered, provided that
they point the way to new research and are substantive scientific contributions
in their own right.

Short papers are a forum for the presentation of succinct, crisp results.
They are not a safety net for long paper rejections.

In order to ensure appropriate refereeing, authors are requested to
specify in which of the above topic areas the paper belongs, as well
as a set of no more than 5 keywords for further classification within
that topic area.  Because of time constraints, papers requiring major
revisions cannot be accepted.

DETAILS FOR SUBMISSION

The following information must be included with each paper:

        * Author's name, address, telephone number and net address
          (if applicable);
        * Topic area (plus a set of no more than 5 keywords for
          further classification within the topic area.);
        * An abstract of 100-200 words;
        * Paper length (in words).

The time table is as follows:

        * Submission deadline: 7 January 1985 (papers received after
          January 7th will be returned unopened)
        * Notification of Acceptance: 16 March 1985
        * Camera Ready copy due: 16 April 1985

Contact Points

Submissions should be sent to the Program Chair:

        Aravind Joshi
        Dept of Computer and Information Science
        University of Pennsylvania
        Philadelphia, PA 19104 USA

General inquiries should be directed to the General Chair:

        Alan Mackworth
        Dept of Computer Science
        University of British Columbia
        Vancouver, BC, Canada V6T 1W5

Inquiries about program demonstrations (including videotape system
demonstrations) and other local arrangements should be sent to
the Local Arrangements Chair:

        Steve Crocker
        The Aerospace Corporation
        P.O. Box 92957
        Los Angeles, CA 90009 USA

Inquiries about tutorials, exhibits, and registration should be
sent to the AAAI Office:

        Claudia Mazzetti
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025 USA

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  7-Sep-84 10:45:11
Date: Fri  7 Sep 1984 10:27-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #115
To: AIList@SRI-AI


AIList Digest             Friday, 7 Sep 1984      Volume 2 : Issue 115

Today's Topics:
  LISP - QLAMBDA & Common Lisp,
  Expert Systems - AGE Contact & Expository Writing Assistant,
  Books - Lib of CS and the Handbook of AI,
  AI Tools - Statistical Workstations and Time-Series Lisp,
  Binding - Jim Slagle,
  Speech Recognition - Semantics,
  Philosophy - Induction vs. Deduction & Causality,
  Seminars - A Calculus of Usual Values & Week on Logic and AI
----------------------------------------------------------------------

Date: Thu, 6 Sep 84 8:54:58 EDT
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@BRL-VOC.ARPA>
Subject: QLAMBDA


Does anyone have any information on a new LISP called QLAMBDA ??
It is a "parallel processor" language being developed by McCarthy at
Stanford and is supposed to run on the HEP (Heterogeneous Element
Processor).  Since we have one of the original HEPs, we are interested
in any information regarding QLAMBDA.  Thanks.

                                        dsw, fferd
                                        Fred S. Brundick
                                        USABRL, APG, MD.
                                        <fsbrn@brl-voc>

------------------------------

Date: 13 Aug 84 8:21:00-PDT (Mon)
From: pur-ee!uiucdcsb!nowicki @ Ucb-Vax.arpa
Subject: Re: Common Lisp - (nf)
Article-I.D.: uiucdcsb.5500009

I am also interested in such info. We have Sun-2's running 4.2 and I am
interested in obtaining Common Lisp for them.

-Tony Nowicki
{decvax|inuxc}!pur-ee!uiucdcs!nowicki

------------------------------

Date: Wed 5 Sep 84 17:28:19-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: AGE

Call Juanita Mullen at (415)497-0474 for a good time in obtaining
Stanford programs such as AGE.  It'll cost you about $500.
CJP

------------------------------

Date: 5 Sep 1984 14:08:13 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: Clarification on the non-existence of the Expository Writing Assistant

I've gotten several inquiries asking for the Expository Writing Assistant
Program that I wished for in a previous message.  Unfortunately, it
doesn't exist.  I'm convinced from studying text generation that we have
ENOUGH TECHNICAL INFORMATION about the structure of text, the functions
of various parts and how parts are arranged that such a program could be
written.  My own writing practice, which now in effect simulates such a
program, indicates that the program's suggestions could be very helpful.

An introduction to the text structures I have in mind was presented at
the 1984 ACL/Coling conference at Stanford in July.  The paper was
entitled "Discourse Structures for Text Generation."

Right now I have no plans to create the assistant.

Sorry, folks.
Bill Mann

------------------------------

Date: 4 Sep 84 16:36:13-PDT (Tue)
From: ihnp4!houxm!vax135!cornell!uw-beaver!ssc-vax!adcock @ Ucb-Vax.arpa
Subject: Re: Lib of CS intro offer: Handbook of AI Vols 1-3 for $5

Please note that the Handbook of AI is a REFERENCE book. It is not
meant to be read from cover to cover.

Also, this is the only book on AI that the Lib of CS sells.

[I disagree with the first point.  The Handbook is also an excellent
tutorial, although it does lack illustrations.  I enjoyed reading it
cover to cover (although I admit to not having finished all three
volumes yet).  The second point is largely true, although they have
offered The Brains of Men and Machines, Machine Perception, LISPcraft,
and a few other related books.  -- KIL]

------------------------------

Date: Fri 7 Sep 84 10:15:02-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Statistical Workstations and Time-Series Lisp (Tisp)

Anyone interested in statistical workstations should look up the
August IEEE Computer Graphics and Applications article
"A Graphical Interface to an Economist's Workstation" by Thomas
Williams of Wagner, Stott and Company, 20 Broad Street, New York,
NY 10005.  He describes a prototype for time-series analysis that
was quickly assembled from standard Interlisp-D functions on the
Xerox 1108.  Apparently the economists of the International
Monetary Fund took to it immediately, and Williams sees no problems
in extending its capabilities to better support them.  His company
is also working on a workstation for professional securities traders.

                                        -- Ken Laws

------------------------------

Date: 5 Sep 1984 13:42-EDT
From: Russ Smith <smith@NRL-AIC>
Subject: Binding - Jim Slagle

As of September 10, 1984 Dr. Slagle will have a new address:

        Professor James R. Slagle
        University of Minnesota
        136 Lind Hall
        207 Church Street, S.E.
        Minneapolis, MN  55455

        (612) 373-7513
        (612) 373-0132

        slagle%umn-cs.csnet@csnet-relay.arpa (possibly...)

------------------------------

Date: 5 Sep 84 10:00:24-PDT (Wed)
From: ihnp4!fortune!polard @ Ucb-Vax.arpa
Subject: Re: Understanding speech versus hearing words
Article-I.D.: fortune.4138

<fowniymz for dh6 layn iyt6r>   [Phonemes for the line-eater. -- KIL]

        Which hip was burned?
        Which ship was burned?
        Which chip was burned?
and     Which Chip was spurned?

all sound the same when spoken at the speed of conversational speech.  This
is evidence that in order to recognize words in continuous speech
you (and presumably a speech-recognition apparatus) need to understand
what the speaker is talking about.
        There seem to be two reasons why understanding is necessary
for word recognition in continuous speech:
        1. The existence of homonyms.  This is why "It's a good read."
sounds the same as: "It's a good reed," and why the two sentences
could not be distinguished without a knowledge of the context.
        2. Sandhi, or sound changes at word boundaries.  The sounds at the
end of a word tend to blend into the sounds at the beginning of the next
word in conversation, making words sound as if they ran into each other
and making words sound different from the way they would sound when said
in isolation.
        The resulting ambiguities are usually resolved by context.
        Speech rarely occurs without some sort of context, and even then
the first thing that usually happens is to establish a context for what
is to follow.
        To paraphrase Edsger Dijkstra: "Asking whether computers will
understand speech is like asking whether submarines swim."

--
Henry Polard (You bring the flames - I'll bring the marshmallows.)
{ihnp4,cbosgd,amd}!fortune!polard

------------------------------

Date: Wed 5 Sep 84 10:54:11-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

Tony Hasemer's comments on causality and its relationship to inductive
versus deductive logic are very well-taken.  It's time for people in
AI to realize that deduction is quite limited as a mode of reasoning.
Compared to induction, the mathematical foundations of deduction are
well-understood, and deductive systems are relatively easy to
implement on computers.  This no doubt explains its popularity in AI.
The problem arises when one tries to extend the deductive paradigm
from toy problems to real problems, and must confront exceptions,
borderline cases, and, in general, the boggling complexity of the
state space.

While deduction proceeds from the general (axioms) to the specific
(propositions), induction proceeds from the specific to the general.
This seems to be a more natural view of human intelligence.  By
observing events, one recognizes correlations, and infers causality
and other relationships.  To be sure, the inferences may be wrong, but
that's tough.  People make mistakes.  In fact, one of the weaknesses
of deduction is that it does not permit one to draw conclusions that
may be in error (assuming the axioms are correct), but that represent
the best conclusions under the circumstances.

Visual illusions provide good examples.  Have you ever wondered why
you see a Necker Cube as a cube (one of the two reversals), and not as
one of the other infinite number of possibilities?  Perhaps we learn of
cubes through experience (an inductive explanation), but the effect
also occurs with totally unfamiliar figures.  A more general inductive
explanation holds that we see the simplest possible figure (the
Gestalt principle of Pragnanz).  A cube, which has right angles and
equal-length sides, is simpler than any of the other possibilities.
The concept of "simple" can be made precise: one description is
simpler than another if it can be encoded more economically.  This is
sometimes called the principle of Occam's Razor or the principle of
Minimum Entropy.
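
[As a toy editorial illustration of "encoded more economically" -- not
Barnard's example, and the two-units-per-run cost model is arbitrary --
compare describing a regular and an irregular pattern literally versus
by run lengths.  The regular one is "simpler" precisely because the
shorter description suffices. -- Ed.]

#include <stdio.h>
#include <string.h>

/* Cost of a literal description: one unit per symbol. */
static int literal_cost(const char *s)
{
    return (int) strlen(s);
}

/* Cost of a run-length description: a (count, symbol) pair per run. */
static int rle_cost(const char *s)
{
    int cost = 0;
    while (*s != '\0') {
        const char *p = s;
        while (*p == *s) p++;   /* skip to the end of this run */
        cost += 2;
        s = p;
    }
    return cost;
}

int main()
{
    const char *regular   = "aaaaaaaabbbbbbbb";   /* a "simple" pattern  */
    const char *irregular = "abbabaabbaababba";   /* a "complex" pattern */

    printf("%s: literal %d, run-length %d\n",
           regular, literal_cost(regular), rle_cost(regular));
    printf("%s: literal %d, run-length %d\n",
           irregular, literal_cost(irregular), rle_cost(irregular));
    return 0;
}

[The run-length code merely stands in for whatever description
language the visual system is assumed to use. -- Ed.]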

        Steve Barnard

------------------------------

Date: 6 Sep 84 07:39 PDT
From: Woody.pasa@XEROX.ARPA
Subject: Causality

Food for thought:
  All the arguments for and against cause and effect and the workings of
Causality have been based around the notion that the cause 'A' and its
effect 'B' are time-related:  we assume that for A to affect B, A must
come before B in our perception of time.
  But does this have to be the case?  Mathematics (inductive and
deductive logic) is a system of time-independent identities; arguing
that Causality must be a time-dependent phenomenon on the basis of
time-independent arguments is at best wishful thinking.
  What's wrong with event A affecting event B in event A's past?  You
can't go back and shoot your own mother before you were born because you
exist, and obviously you failed.  If we assume the universe is
consistent [and not random chaos], then we must assume inconsistencies
(such as shooting your own mother) will not arise.  This does not,
however, place time constraints on cause and effect.

    - Bill Woody

Woody.Pasa@XEROX.Arpa   [Until 7 September 1984]
** No net address **    [After 7 September 1984]

------------------------------

Date: Fri, 7 Sep 84 00:35:19 pdt
From: syming%B.CC@Berkeley
Subject: Seminar - A Calculus of Usual Values

  From: chertok@ucbkim (Paula Chertok)
  Subject: Berkeley Cognitive Science Seminar--Sept. 11

                  COGNITIVE SCIENCE PROGRAM

                         Fall 1984

           Cognitive Science Seminar -- IDS 237A


SPEAKER:        L.A. Zadeh
                Computer Science Division, UC Berkeley

TITLE:          Typicality, Prototypicality, Usuality,
                Dispositionality, and Common Sense

          TIME:           Tuesday, September 11, 11 - 12:30pm
          PLACE:          240 Bechtel Engineering Center
          DISCUSSION:     12:30 - 2 in 200 Building T-4


The grouping of the concepts listed in  the  title  of  this
talk is intended to suggest that there is a close connection
between them.  I will describe a general approach  centering
on  the  concept of dispositionality which makes it possible
to formulate fairly precise definitions  of  typicality  and
prototypicality,  and  relate  these concepts to commonsense
reasoning.  These  definitions  are  not  in  the  classical
spirit and are based on the premise that typicality and pro-
totypicality are graded concepts, in the  sense  that  every
object is typical or prototypical to a degree.  In addition,
I will outline what might be  called  a  calculus  of  usual
values.

------------------------------

Date: Thu, 6 Sep 84 16:45:49 edt
From: minker@maryland (Jack Minker)
Subject: WEEK ON LOGIC AND AI


                               WEEK of
            LOGIC and its ROLE in ARTIFICIAL INTELLIGENCE
                                  at
                      THE UNIVERSITY OF MARYLAND
                         OCTOBER 22-26, 1984

The Mathematics and Computer Science Departments at the University
of Maryland at College Park are jointly sponsoring a Special Year in
Mathematical Logic and Theoretical Computer Science.  The week of
October 22-26 will be devoted to Logic and its role in Artificial
Intelligence.  There will be five distinguished lectures as follows:

Monday, October 22: Ray REITER

        "Logic for specification: Databases
        conceptual models, and knowledge representation
        languages"

Tuesday, October 23: John McCARTHY

        "The mathematics of circumscription"

Wednesday, October 24: Maarten VAN EMDEN

        "Strict and lax interpretations of rules in logic programming"

Thursday, October 25: Jon BARWISE

        "Constraint logic"

Friday, October 26: Lawrence HENSCHEN

        "Compiling constraint checking programs in deductive databases"


All lectures will be given at:
        Time: 10:00 AM - 11:30 AM

Location: Mathematics Building, Room Y3206

The lectures are open to the public.  If you plan to attend kindly
notify us so that we can make appropriate plans for space.
Limited funds are available to support junior faculty and graduate
students for the entire week or part of the week.  To obtain funds,
please submit an application listing your affiliation and send either
a net message or a letter to:

Jack Minker
Department of Computer Science
University of Maryland
College Park, MD 20742
(301) 454-6119
minker@maryland

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 10-Sep-84 09:44:06
Date: Mon 10 Sep 1984 09:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #116
To: AIList@SRI-AI


AIList Digest            Monday, 10 Sep 1984      Volume 2 : Issue 116

Today's Topics:
  AI Tools - FRL in Franz,
  Robotics - Symbolic Programming Query,
  Psychology - Memory Tests,
  Knowledge Representation - OPS5 Problem,
  LISP - Delayed Reply About muLISP,
  Speech Recognition - Syllables,
  Philosophy - Correction,
  Expert Systems - Personal Assistants,
  Seminar - Semantic Modulation
----------------------------------------------------------------------

Date: 5 Sep 84 10:08:15-PDT (Wed)
From: decvax!mcnc!duke!ucf-cs!bethel @ Ucb-Vax.arpa
Subject: Help : need a Full implementation of FRL.
Article-I.D.: ucf-cs.1468


Does anyone have a full implementation of Minsky's FRL, running
under Unix 4.2 and Franz Lisp?  If so, would you please respond and
let me know where you are. I would like to get the sources if
they are available and not protected by company/university policy.

Thanks in advance,

Robert C. Bethel

 ...decvax!ucf-cs!bethel or ...duke!ucf-cs!bethel
        bethel.ucf-cs@Rand-Relay

------------------------------

Date: 3 Sep 84 12:35:53-PDT (Mon)
From: hplabs!hao!denelcor!csu-cs!walicki @ Ucb-Vax.arpa
Subject: prolog/lisp/robotics - query
Article-I.D.: csu-cs.2619

I am looking for information on applications of symbolic computing
(lisp, prolog) in the area of robotics.  I do not have any specifics
in mind; I am interested in any (even fuzzy) intersections of the
abovementioned domains.
Please respond by mail, and I will post a summary in net.ai.

Jack Walicki
Colorado State U.
Computer Science Dept.
(Fort Collins, CO 80523)
{hplabs,hao}!csu-cs!walicki

------------------------------

Date: 9 Sep 84 18:11:40 PDT (Sunday)
From: wedekind.es@XEROX.ARPA
Subject: Memory tests

Someone I know is looking for a battery of well-documented,
self-administered memory tests.  Does anyone know of an accessible
source?

thank you,
                Jerry

------------------------------

Date: Saturday,  8-Sep-84 18:35:50-BST
From: O'KEEFE  HPS (on ERCC DEC-10) <okeefe.r.a.@EDXA>
Subject: OPS5 problem


An MSc student came to me with a problem.  He had a pile of OPS5 rules
and was a bit unhappy about the means he had adopted to stop them
looping.  Each rule looked rather like
    (p pain77
        (task ^name cert)                     ; the certainty task is active
        (injury ^name injury6 ^cert <C>)      ; bind the current certainty
        (symptom ^name symptom9 ^present yes) ; the triggering symptom
       -(done pain77)                         ; and the rule hasn't fired yet
    -->
        (make done pain77)                    ; mark the rule as fired
        (modify 2 ^cert (compute ....))       ; revise the certainty
    )
There were dozens of them.  The conflict resolution rule of never
firing the same rule on the same data more than once didn't help, as
modify is equivalent to a delete and a make.  What he actually wanted
can be expressed quite neatly in Prolog:

        % Rank injuries best first.  setof/3 sorts ascending, which
        % is why weight/2 delivers a negated certainty.
        candidates(BestToWorst) :-
                setof(W/Injury, weight(Injury, W), BestToWorst).

        weight(Injury, MinusCertainty) :-
                prior_certainty(Injury, Prior),
                findall(P, pro(Injury, P), Ps),
                product(Ps, 1.0, P),            % combined pro evidence
                findall(C, con(Injury, C), Cs),
                product(Cs, 1.0, C),            % combined con evidence
                MinusCertainty is -(1 - P + P*C*Prior).

        pro(Injury, Wt) :-
                evidence_for(Injury, Symptom, Wt),
                present(Symptom).

        con(Injury, Wt) :-
                evidence_against(Injury, Symptom, Wt),
                present(Symptom).

        % product(Ws, Acc, P):  P is Acc times the product of the Ws.
        product([], A, A).
        product([W|Ws], A, R) :-
                B is A*W,
                product(Ws, B, R).

We managed to produce something intermediate between these two; it
used evidence-for and evidence-against tables in working memory, and
had just two hacky rules instead of the scores originally present.
I did spot a way of stopping the loop without using negation, and
that is to make the "certainty" held in the (injury ^name ^cert)
WM elements a gensym whose value is the desired number, then as far
as OPS5 is concerned the working memory hasn't been changed.  Of
course that makes patterns that use the number harder to write, and
seems rather hacky itself.

To come to the point, I have two questions about OPS5.
1) Is there a clean way of coding this in OPS5?  Or should I have
   told him to use EXPERT?
2) As I mentioned, we did manage to do considerably better than his
   first attempt.  But the thing that bothered me was that it hadn't
   occurred to him to use the WM for tables.  The course he's in
   uses the Teknowledge(??) "OPS5 Tutorial" (the one with the Wine
   Advisor) and students seem to copy the Wine Advisor more or less
   blindly.  Is there any generally available GOOD course material on
   OPS5, and if so who do we write to?  Are there any moderate-size
   examples available?

------------------------------

Date: 10 May 84 11:33:00-PDT (Thu)
From: hplabs!hp-pcd!hp-dcd!hpfcls!hpbbla!coulter @ Ucb-Vax.arpa
Subject: Delayed Reply About muLISP
Article-I.D.: hpbbla.4900001

It may not be what you are looking for, but there are several LISP
implementations that run on CP/M.  I bought muLISP which is
distributed by MICROSOFT.  It cost $200.  Because of its larger
address space, you should be able to get a more capable LISP for the
IBM/PC, but it will cost more.  The muLISP is fairly complete, although
the only data type is integer (it can represent numbers up to 10**255).
The DOCTOR (a.k.a. ELIZA) program is supplied with it and it runs.

------------------------------

Date: Fri, 7 Sep 84 17:44 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: understanding speech, syllables, words, etc.

Which hip was burned?
Which ship was burned?
Which chip was burned?
Which Chip was spurned?

First of all, I disagree that all 4 sound 'the same' in conversational
speech, esp. the last.  The final [z] in 'was' gets devoiced because of
the voiceless cluster that follows in 'spurned'.  However, of course I do
agree that often/usually context is necessary to DISAMBIGUATE, tho' not
necessarily to understand in the first place.  Since I am already writing
this I might as well give my originally suppressed comments on the first
person's statement that syllable identification requires understanding.
I definitely do not agree with that claim.  Others have mentioned learning
a foreign language by first tuning the ear to the phonetics of the target
language including that target's syllable types, and this is a point well
taken.  The notion of syllable is certainly different in different
languages, but apparently can be learned without understanding.
The point is even clearer in one's native language.  We have all heard
Jabberwockish type speech and can clearly recognize the syllables and
phonetic elements as 'English', yet we do so without any understanding.

All this assumes that we know just what a syllable is, which we don't,
but that's another argument and is not really suitable for ailist.
-Kurt Godden <godden.gmr@csnet-relay>

------------------------------

Date: 7 Sep 84 9:13:41-PDT (Fri)
From: ihnp4!houxm!vax135!ariel!norm @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: ariel.751

>
> From:  TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
>
> (Tony Hasemer challenges Norm Andrews' faith about cause and effect)
>
> You say: "logical proof involves implication relationships between
> discrete statements...causality assumes implication relationships
> between discrete events".
>
Hold on here!  I, Norm Andrews, didn't say that!  You are quoting someone
going by the name "Baba ROM DOS" who was attempting to disprove my
statement that "The concept of proof depends upon the concepts of cause
and effect, among other things."  Please don't assign other peoples'
statements to me!

I haven't time now to reply to any other part of your posting...

Norm Andrews

------------------------------

Date: 6 Sep 84 7:14:39-PDT (Thu)
From: decvax!genrad!teddy!mjn @ Ucb-Vax.arpa
Subject: Personal Assistants
Article-I.D.: teddy.391

FINANCIAL ASSISTANT

        I think this would be a good one to add to the list of personal
assistants which would be valuable to have.  It could be a great
aid to budgeting and guiding investments.  It should go beyond
simple bookkeeping and offer advice (when it can).  If conficts
arise in where to spend money, it should be capable of asking
questions to determine what you consider to be more important.

        Additional functionality might include analysis of spending
patterns.  Where does my money go?  Such a question could be
answered by this assistant.  It might include gentle reminders if
you are overspending, not meeting a payment schedule, or forgetting
something.

------------------------------

Date: 7 Sep 1984 15:04-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Semantic Modulation

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

 David McAllester will give the next BBN AI Seminar at 10:30 AM on
Wednesday September 12. The talk is in the 3rd floor large conference room
at 10 Moulton St. Title and abstract follow.

    Semantic Modulation: A New General Purpose Inference Technique

                       David McAllester

             Massachusetts Institute of Technology

        Semantic modulation is a general purpose inference technique
based on the "modulation" of the interpretations of parameters which
appear free in an assertional data base.  A semantic modulation system
includes a finite and fixed set Delta of formulae.  By varying the
interpretation of the free parameters in Delta it is possible to use
the finite and FIXED data base Delta to perform a large set of
inferences which involve reasoning about quantification.  Semantic
modulation is a way of reasoning with quantifiers that does not
involve unification or the standard techniques of universal
instantiation.  Semantic modulation replaces these notions with the
notion of a "binding premise".  A binding premise is a propositional
assumption which constrains the interpretation of one or several free
parameters.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 12-Sep-84 10:08:20
Date: Wed 12 Sep 1984 10:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #117
To: AIList@SRI-AI


AIList Digest           Wednesday, 12 Sep 1984    Volume 2 : Issue 117

Today's Topics:
  AI Tools - Expert-Ease,
  Expert Systems - Lenat Bibliography,
  Pattern Recognition - Maximal Submatrix Sums,
  Cognition - The Second Self & Dreams,
  Seminars - Computational Theory of Higher Brain Function &
    Distributed Knowledge
----------------------------------------------------------------------

Date: 10 Sep 1984 13:16:20-EDT
From: sde@Mitre-Bedford
Subject: expert-ease

I got a flyer from Expert Systems Inc. offering something called Expert Ease
which is supposed to facilitate producing expert systems. They want $125 for
a demo version, so I thought to inquire if anyone out there can comment on
the thing, especially since the full program is $2000. I'm not eager to buy
a lemon, but if it is a worthwhile product, it might be justifiable as an
experiment.
Thanx in advance,
   David   sde@mitre-bedford

------------------------------

Date: Tue, 11 Sep 84 19:16 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Lenat

   Please, can anyone suggest good references, articles, etc. concerning
Lenat's heuristic inferencing machine?  I'd be very grateful.

Tony.

[I can suggest the following:

D.B. Lenat, "BEINGS: Knowledge as Interacting Experts,"
Proc. 4th Int. Jnt. Conf. on Artificial Intelligence,
Tbilisi, Georgia, USSR, pp. 126-133, 1975.

D.B. Lenat, AM: An Artificial Intelligence Approach to Discovery
in Mathematics as Heuristic Search, Ph.D. Dissertation,
Computer Science Department Report STAN-CS-76-570,
Heuristic Programming Project Report HPP-76-8,
Artificial Intelligence Laboratory Report SAIL AIM-286,
Stanford University, Stanford, California, 1976.

D.B. Lenat, "Automated Theory Formation in Mathematics,"
5th Int. Jnt. Conf. on Artificial Intelligence, Cambridge, pp. 833-42, 1977.

D.B. Lenat and G. Harris, "Designing a Rule System That Searches for
Scientific Discoveries," in D.A. Waterman and F. Hayes-Roth (eds.),
Pattern-Directed Inference Systems, Academic Press, 1978.

D.B. Lenat, "The Ubiquity of Discovery," National Computer Conference,
pp. 241-256, 1978.

D.B. Lenat, "On Automated Scientific Theory Formation: A Case Study Using
the AM Program," in J. Hayes, D. Michie, and L.I. Mikulich (eds.),
Machine Intelligence 9, Halstead Press (a div. of John Wiley & Sons),
New York, pp. 251-283, 1979.

D.B. Lenat, W.R. Sutherland, and J. Gibbons, "Heuristic Search for
New Microcircuit Structures: An Application of Artificial Intelligence,"
The AI Magazine, Vol. 3, No. 3, pp. 17-33, Summer 1982.

D.B. Lenat, "The Nature of Heuristics," The AI Journal, Vol. 9, No. 2,
Fall 1982.

D.B. Lenat, "Learning by Discovery: Three Case Studies in Natural and
Artificial Learning Systems," in Michalski, Mitchell, and Carbonell (eds.),
Machine Learning, Tioga Press, 1982.

D. B. Lenat, Theory Formation by Heuristic Search,
Report HPP-82-25, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.  To appear in The AI Journal, March 1983.

D. B. Lenat, "EURISKO: A Program that Learns New Heuristics and Domain
Concepts," Journal of Artificial Intelligence, March 1983.  Also available
as Report HPP-82-26, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.

                                        -- KIL]

------------------------------

Date: Wed 12 Sep 84 01:50:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition and Computational Complexity

I have a solution to Jon Bentley's Problem 7 in this month's
CACM Programming Pearls column (September 1984, pp. 865-871).
The problem is to find the maximal response for any rectangular
subwindow in an array of maximum-likelihood detector outputs.
The following algorithm is O(N^3) for an NxN array.  It requires
working storage of just over half the original array size.

/*
**  maxwdwsum
**
**    Compute the maximum rectangular-window sum in a matrix.
**    Return 0.0 if all array elements are negative.
**
**  COMMENTS
**
**    This algorithm scans the matrix, considering for each
**    element all of the rectangular subwindows with that
**    element as the lower-right corner.  The current best
**    window will either be interior to the previously
**    processed rows or will end on the current row.  The
**    latter possibility is checked by considering the data
**    on the current row added into the best window of each width
**    for each lower-right corner element on the previous row.
**
**    The memory array for tracking maximal window sums could
**    be reduced to a triangular data structure.  An additional
**    triple of values could be carried along with globalmax
**    to record the location and width of the maximal window;
**    saving or recovering the height of the window would be
**    a little more difficult.
**
**  HISTORY
**
**    11-Sep-84  Laws at SRI-AI
**    Wrote initial version.
*/


#include <stdio.h>

/* Sample problem. (Answer is 6.0.) */
#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
    { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

/* Macro to return the maximum of two expressions. */
#define MAX(exp1,exp2)  (((exp1) > (exp2)) ? (exp1) : (exp2))


main()
{

  float globalmax;                      /* Global maximum */
  float M[NCOLS][NCOLS];                /* Max window-sum memory,   */
                                        /* (triangular, 1st >= 2nd) */
  int maxrow;                           /* Upper row index */
  int mincol,maxcol;                    /* Column indices */
  float newrowsum;                      /* Sum for new window row */
  float newwdwsum;                      /* Previous best plus new window row */
  float newwdwmax;                      /* New best for this width */
  int nowrow,nowcol;                    /* Loop indices */


  /* Initialize the maxima registers. */
  globalmax = 0.0;
  for (nowrow = 0; nowrow < NCOLS; nowrow++)
    for (nowcol = 0; nowcol <= nowrow; nowcol++)
      M[nowrow][nowcol] = -1.0E20;

  /* Process each lower-right window corner. */
  for (maxrow = 0; maxrow < NROWS; maxrow++)
    for (maxcol = 0; maxcol < NCOLS; maxcol++) {

      /* Increase window width back toward leftmost column. */
      newrowsum = 0.0;
      for (mincol = maxcol; mincol >= 0; mincol--) {

        /* Cumulate the window-row sum. */
        newrowsum += X[maxrow][mincol];

        /* Compute the sum of the old window and new row. */
        newwdwsum = M[maxcol][mincol]+newrowsum;

        /* Update the maximum window sum for this width. */
        newwdwmax = MAX(newrowsum,newwdwsum);
        M[maxcol][mincol] = newwdwmax;

        /* Update the global maximum. */
        globalmax = MAX(globalmax,newwdwmax);
      }
  }

  /* Print the solution, or 0.0 for a negative array. */
  printf("Maximum window sum:  %g\n",globalmax);
}

------------------------------

Date: Sat 8 Sep 84 11:14:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: The Second Self

The Second Self by Sherry Turkle is an interesting study of the
relationship between computers and people.  In contrast to most
studies I've seen, this is not a collection of sensationalism from the
newspapers combined with the wilder statements from various
professionals.  Rather, it is (as far as I know) the first thorough
and scientific study of the influence of computers on human thinking
(there's even a boring appendix on methodology, for those who are into
details).

The book starts out with analyses of young children's attitudes towards
intelligent games (Merlin, Speak & Spell, and others).  Apparently, the children
playing with these games spend a great deal of time discussing whether these
games are actually alive or not, whether they know how to cheat, and so forth.
The games manifest themselves as "psychological machines" rather than the
ordinary physical machines familiar to most children.  As such, they prompt
children to think in terms of mental behavior rather than physical behavior,
which is said to be an important stage in early mental development (dunno myself
if psychologists hold this view generally).

The theme of computers as "psychological machines" is carried throughout the
book.  Older children and adolescents exhibit more of a desire to master the
machine rather than just to interact with it, but interviews with them reveal
that they, too, are aware of the computer as something fundamentally different
from an automobile, in the way that it causes them to think.  Computer
hobbyists of both the first (ca 1978) and later generations are interviewed,
and one of them characterizes the computer as "a tool to think with".

Perhaps the section of most interest to AIList readers is the one in which
Turkle interviews a number of workers in AI.  Although the material has an
MIT slant (since that's where she did her research), and there's an excess
of quotes from Pam McCorduck's Machines Who Think, this is the first time
I've seen a psychological analysis of motives and attitudes behind the
research.  Most interesting was a discussion of "egoless thought" - although
most psychologists (and some philosophers) believe that the existence
of self-consciousness and an ego is a prerequisite to thought and
understanding, there are many workers in AI who do not share this view.
The resolution of this question will have profound effects on many of
the current views in psychology.  Along the same lines, Minsky gave a
list of concepts common in computer science which have no analogies in
psychology (such as the notions of "garbage collection" and "pure procedure").

I recommend this book as an interesting viewpoint on computer science in
general and AI in particular.  The experimental results alone are worth
reading it for.  Hopefully we'll see more studies along these lines in the
future.

                                                               stan shebs

------------------------------

Date: Wed 12 Sep 84 09:58:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Dreams

A letter by Donald A. Windsor in the new CACM (September, p. 859) suggests
that the purpose of dreams is to test our cognitive models of the people
around us by simulating their behavior and monitoring for bizarre
patterns.  He claims that the "dream people" are AI programs that
we construct subconsciously.

                                        -- Ken Laws

------------------------------

Date: 09/11/84 13:56:44
From: STORY
Subject: Seminar - Computational Theory of Higher Brain Function

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


TITLE:    ``A Computational Theory of Higher Brain Function''

SPEAKER:  Leslie M. Goldschlager, Visiting Computer Scientist, Stanford
          University

DATE:     Friday, September 14, 1984
TIME:     Refreshments, 3:45pm
          Lecture, 4:00pm
PLACE:    NE43-512a

     A new model of parallel computation is proposed.  The fundamental
item of data in the model is called a "concept", and concepts may be
stored  on  a  two-dimensional   data  structure  called  a   "memory
surface".  The  nature  of the  storage  mechanism and  the  mode  of
communication which is required between storage locations renders  the
model suitable for implementation in VLSI.  An implementation is  also
possible  with  neurons  arranged  in a two-dimensional sheet.  It  is
argued that the model is particularly worth studying, as it
captures   some  of  the  computational  characteristics of the brain.

     The memory surface consists of a vast number of processors  which
are called "columns" and  which operate asynchronously in  parallel.
Each processor stores a small amount of information and can be thought
of as a simple finite-state  transducer.  Each processor is  connected
only to those processors within a small radius, or neighbourhood.   As
is usually found with parallel computation, the most important  aspect
of the model is the method of communication between the processors.
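
[Editorial sketch -- not from the talk.  A toy C caricature of a
"memory surface": a grid of simple cells, each updated only from its
immediate neighbourhood.  The grid size, states, and spreading rule
are invented for illustration; the talk's columns are asynchronous
finite-state transducers, whereas this sketch updates synchronously.]

#include <stdio.h>

#define SIZE 8                  /* grid dimension (invented) */

int surface[SIZE][SIZE];        /* current cell states (0 or 1) */
int update[SIZE][SIZE];         /* next cell states */

int main()
{
  int r, c, dr, dc, step;

  surface[3][3] = 1;            /* seed one active "column" */

  /* Each step, a cell becomes active if any cell within its
  ** radius-1 neighbourhood is active: purely local communication. */
  for (step = 0; step < 3; step++) {
    for (r = 0; r < SIZE; r++)
      for (c = 0; c < SIZE; c++) {
        update[r][c] = surface[r][c];
        for (dr = -1; dr <= 1; dr++)
          for (dc = -1; dc <= 1; dc++)
            if (r+dr >= 0 && r+dr < SIZE && c+dc >= 0 && c+dc < SIZE
                && surface[r+dr][c+dc])
              update[r][c] = 1;
      }
    for (r = 0; r < SIZE; r++)  /* commit the new states */
      for (c = 0; c < SIZE; c++)
        surface[r][c] = update[r][c];
  }

  /* Print the final activity map. */
  for (r = 0; r < SIZE; r++) {
    for (c = 0; c < SIZE; c++)
      printf("%c", surface[r][c] ? '*' : '.');
    printf("\n");
  }
  return 0;
}

[After three steps the seed's activity has spread to a 7x7 patch:
association here is nothing more than repeated local exchange.]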

     It is  shown in  the  talk  how the  function of  the  individual
processors and the communication  between them supports the  formation
and storage of associations between concepts.  Thus the memory surface
is in effect an associative  memory.  This type of associative  memory
reveals a number of interesting computational features, including  the
ability to store and retrieve sequences of concepts and the ability to
form abstractions from simpler concepts.

     Certain capabilities taken from the realm of human activities are
shown to  be explainable  within the  model of  computation  presented
here.  These include creativity, self, consciousness and free will.  A
theory of sleep is also presented which is consistent with the  model.
In general it is  argued that the  computational model is  appropriate
for describing  and  explaining the  higher  functions of  the  brain.
These are  believed to  occur in  a  region of  the brain  called  the
cortex, and the known anatomy of  the cortex appears to be  consistent
with the memory surface model discussed in this talk.

HOST:   Professor Gary Miller

------------------------------

Date: Mon, 10 Sep 84 17:38:55 PDT
From: Shel Finkelstein <SHEL%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - Distributed Knowledge

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

  [...]

  Thurs., Sept. 13 Computer Science Seminar
  3:00 P.M.   KNOWLEDGE AND COMMON KNOWLEDGE IN A DISTRIBUTED
  Front Aud.  ENVIRONMENT
            By examining some puzzles and paradoxes, we argue
            that the right way to understand distributed
            protocols is by considering how messages change the
            state of a system.  We present a hierarchy of
            knowledge states that a system may be in, and discuss
            how communication can move the system's state of
            knowledge up the hierarchy.  Of special interest is
            the notion of common knowledge.  Common knowledge is
            an essential state of knowledge for reaching
            agreements and coordinating action.  We show that in
            practical distributed systems, common knowledge is
            not attainable.  We introduce various relaxations of
            common knowledge that are attainable in many cases of
            interest.  We describe in what sense these notions
            are appropriate, and discuss their relationship to
            each other.  We conclude with a discussion of the
            role of knowledge in a distributed system.
            J. Halpern, IBM San Jose Research Lab
            Host:  R. Fagin
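
  [Editorial note -- not part of the abstract.  In the now-standard
  notation, with K_i p for "process i knows p", the hierarchy is

      \[ E p \;=\; \bigwedge_i K_i p, \qquad
         E^{k+1} p \;=\; E(E^k p), \qquad
         C p \;=\; \bigwedge_{k \geq 1} E^k p \]

  i.e. "everyone knows p", "everyone knows that everyone knows p",
  and so on; common knowledge C p is the limit of the whole hierarchy,
  which is why no finite exchange of unreliable messages can attain it.]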


  Please note change in directions due to completion of new Monterey
  Road (82) exit replacing the Ford Road exit from 101.  [...]

------------------------------

End of AIList Digest
********************
13-Sep-84 22:19:58-PDT,14090;000000000001
Mail-From: LAWS created at 13-Sep-84 22:17:46
Date: Thu 13 Sep 1984 21:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #118
To: AIList@SRI-AI


AIList Digest            Friday, 14 Sep 1984      Volume 2 : Issue 118

Today's Topics:
  AI Tools - MACSYMA Copyright,
  Philosophy - The Nature of Proof,
  Robotics - Brian Reid's Robot Cook,
  Humor - Self-Reference & Seminar on Types in Lunches,
  Journals - Sigart Issue on Applications of AI in Engineering
----------------------------------------------------------------------

Date: 10 September 1984 15:29-EDT
From: Paula A. Vancini <PAULA @ MIT-MC>
Subject: MACSYMA Notice

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

  TO:  ALL MACSYMA USERS
  FROM:  MIT Patent and Copyright Office
  DATE:  August 31, 1984
  SUBJECT:  Recent Notices by Paradigm Associates Regarding MACSYMA Software

Please be advised that the version of MACSYMA designated by Paradigm
Associates in recent messages over this network as "DOE MACSYMA" is a
version of MACSYMA copyrighted to MIT.  "DOE MACSYMA" is an improper
designation.  MIT has delivered a copy of the MIT MACSYMA software to
DOE, pursuant to MIT's contractual obligations to DOE.

Also be advised that Symbolics, Inc. is the only commercial company
authorized by MIT to perform maintenance services on, or to make
enhancements to, the MIT copyrighted versions of MACSYMA.

MIT hereby disclaims any association with Paradigm Associates and has
not granted Paradigm licensing rights to commercially make use of its
copyrighted versions of the MACSYMA or NIL software.


Queries to Hynes%MIT-XX@MIT-MC

------------------------------

Date: 10 Sep 84 14:33:25-PDT (Mon)
From: decvax!genrad!teddy!rmc @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: teddy.403

        I am not sure I agree that an inductive proof proves any more
or less than a deductive proof.  The basis of induction is to claim
1)  I have applied a predicate to some specific cases within a large
    set (class) of cases.
2)  I detect a pattern in the result of the predicate over those cases
3)  I predict that the results of the predicate will continue following
    the pattern for the rest of the cases in the set.
I state the proof pattern this way to include inductive arguments about
natural world phenomena as well as mathematical induction.

The proof is valid if the accepted community of experts agrees that the
proof is valid (see for example various Wittgenstein and Putnam essays
on the foundations of mathematics and logic).  The experts could be
wrong for a variety of reasons.  Natural law could change.  The
argument may be so complicated that everyone gets lost and misses a
mistake (this has even happened before!).  The class of cases may be
poorly chosen.  And so on.

        The disagreement seems to be centered around a question of
whether this community of experts accepts causality as part of the
model.  If it is, then we can use causality as an axiom in our proof
systems.  But it still boils down to what the experts accept.

                                        R Mark Chilenskas
                                        decvax!genrad!teddy!rmc

------------------------------

Date: 11 Sep 84 9:27:15-PDT (Tue)
From: ihnp4!houxm!mhuxl!ulysses!allegra!princeton!eosp1!robison @
      Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: eosp1.1106

Mark Chilenskas's discussion of inductive proof is not correct for
mathematics, and greatly understates the strength of
mathematical inductive proofs.  These work as follows:

Given a hypothesis;

- Prove that it is true for at least one case.
- Then prove that IF IT IS TRUE FOR A GENERIC CASE,
  IT MUST BE TRUE FOR THE NEXT GENERIC CASE.

For example, in a hypothesis about an expression with regard
to all natural numbers, we might show that it is true if "n=1".
We then show that IF it is true for "n", it is true for "n+1".

By induction we have shown that the hypothesis is absolutely true
for every natural number.  Since true: n=1 => true for n=2,
                                 true: n=2 => true for n=3, etc.
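
[Editorial example.  The standard first illustration of this pattern
is the sum formula:

    \[ \text{Base } (n=1):\quad 1 = \tfrac{1 \cdot 2}{2}. \]
    \[ \text{Step: if } \textstyle\sum_{i=1}^{n} i = \tfrac{n(n+1)}{2}
       \text{ then } \textstyle\sum_{i=1}^{n+1} i
       = \tfrac{n(n+1)}{2} + (n+1) = \tfrac{(n+1)(n+2)}{2}. \]

Together the two parts establish the formula for every natural number n.]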

It is the responsibility of the prover to prove that induction
through all generic cases is proper; when it is not, additional
specific cases must be proved, or induction may not apply at all.

Such an inductive proof is absolutely true for the logical system it
is defined in, and just as correct as any deductive proof.
When our perception of the natural laws change, etc., the proof
remains true, but its usefulness may become nil if we perceive
that no system in the real world could possibly correspond to the proof.

In non-mathematical systems, it is possible that both deductive
and inductive proofs will be seriously flawed, and I doubt one
can try to prefer "approximate proofs" of one type over the other.
If a system is not well-enough defined to permit accurate logical
reasoning, then the chances are that an ingenious person can
prove anything (see net.flame and net.religion for examples, also
the congressional record).

        - Toby Robison (not Robinson!)
        allegra!eosp1!robison
        or: decvax!ittvax!eosp1!robison
        or (emergency): princeton!eosp1!robison

------------------------------

Date: Thu 13 Sep 84 09:14:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Inductive Proof - The Heap Problem

As an example of improper induction, consider the heap problem.
A "heap" of one speck (e.g., of flour) is definitely a small heap.
If you add one speck to a small heap, you still have a small heap.
Therefore all heaps are small heaps.

                                        -- Ken Laws

------------------------------

Date: Fri 7 Sep 84 09:40:42-CDT
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: Robot chef bites off too much

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

Our West Coast correspondent has returned with (among other things) an
article from the San Jose Mercury News entitled "Robot cooks if it finds
the beef" about Brian Reid's(TM){1} attempts to program a robot to cook beef
Wellington.   [...]

Aaron
{1}  Brian Reid is a trademark of Scribe Inc., Ouagadougou, Burkina Faso.


[I have copied the following excerpts from the June 10 article. -- KIL]

                   Robot cooks if it finds the beef
                            by Kathy Holub

  Some professors will do anything for a theoretical exercise.  Brian
K. Reid, a food-loving assistant professor of electrical engineering
at Stanford University, recently tried to simulate teaching a mindless
robot how to cook beef Wellington, using Julia Child's 16-page recipe.
He failed.  Try telling a robot what "spread seasoning evenly" means.
"You have to specify the number of grams (of seasoning) per square
centimeter," he said, with a wry smile.

  It took him 13 hours and 60 pages of computer instructions just to
teach the would-be automaton how to slice and season a slab of beef
and put it safely in the oven.  Julia Child takes only three pages to
explain these simple steps.  "Where I bogged down -- where I gave it
all up and decided to go to bed -- was when I had to tell the robot
how to wrap the beef in pastry," he said.

  But Reid, an excellent cook with a doctorate in computer science,
was thrilled with the experiment, which involved only the computer
program and not an actual robot.  "It was exactly what I wanted," he
said.  "It showed that a cookbook does not tell the whole story, that
there is a lot of information missing from the recipe" that human
cooks provide without knowing it.  The Wellington exercise, he
believes, will help him reach his real goal: to teach a computer how
to make integrated circuits with a computer "recipe" that doesn't
depend on human judgement, memory or common sense.

[...]

  He picked the recipe for his experiment because it's the longest
one in the book, involving 26 ingredients.  Beef Wellington is a long
piece of tenderloin that is baked twice, the second time in a light
pastry crust that should turn golden brown.  Forget telling the robot
what "golden brown" means.

  "Every time I turned around I discovered massive numbers of
things I was contributing without even thinking about it."
For example, "Julia Child has, 'you slice the beef and season
each piece separately'" before cooking, he said.  "The meat must
be cold or it won't hold its shape, but Julia doesn't tell you
that.  She assumes you know."

  For purposes of simplicity, Reid let the robot skip the slicing of
mushrooms and onions and sauteing them in butter "until done."
"Cooking until done requires a great deal of knowledge.  A robot
doesn't know that fire [in the pan] isn't part of the process.  It
would happily burn the pan."

  But just telling the robot how to slice the meat, season it,
reassemble it with skewers and put it in the oven was tricky enough --
like teaching a 3-year-old to fix a car.  "You can't just say, 'Cut
into slices,'" Reid said.  "You have to say, 'Move knife one centimeter
to the east, cut.'  And that assumes a sub-program telling the robot
what 'cut' means."  You can't tell a robot to slice 'across.'  "Across
what?" said Reid.  "You can't tell a robot to eyeball something.  You
have to tell it to define the center of gravity of the beef, find the
major axis of the beef and cut perpendicular to it."  You also have to
tell the robot how to find the beef, that is, distinguish it from the
other ingredients, and when to stop slicing.  These are standard
problems in robotics.
 
  Other problems are not so standard.  Reid forgot to specify that the
skewers should be removed before the pastry shell is added.  Julia may
be forgiven for leaving this step out, but the robot trainer has
tougher work.

------------------------------

Date: 9 September 1984 04:04-EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Humor in A.I.?

I saw the following button at a science fiction convention:

    Q.  Why did Douglas Hofstadter cross the road?

    A.  To make this riddle possible.

-- Steve

------------------------------

Date: 11 Sep 1984  14:52 EDT (Tue)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: Humor - Seminar on Types in Lunches

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

              GENERALIZED TYPES IN GRADUATE STUDENT LUNCHES

FIRST MEETING:  Friday, Sept. 14, 1984, 12:00 noon
PLACE:          MIT AI Lab Playroom, 545 Tech. Sq., Cambridge, MA, USA
ORGANIZER:      Walter Hamscher, (walter@oz)

An eating seminar about generalized cold cuts and spread-recognition;
gluttonism, leftovers, and indigestion; related notions appearing
in current and proposed lunches, such as volunteers, menus, and
The Roosevelt Paradox ("There is no such thing as a free lunch")
will be discussed.  The slant will be toward identifying the
underlying digestional problems raised by the desired menu features.
For the first five minutes (during the visit of Prof. Gustav Fleischbrot,
Univ. of Essen) we will present and discuss the papers below starting
with the first two and concluding with the final two:

1. Burger, Chip N., ``The Nutritional Value of Pixels'',
PROC. INT'L. CONF. 5TH GENERATION INGESTION SYSTEMS, Tokyo, to
appear.  Manuscript from Dept. of Computer Science, Univ. of Sandwich, 1984.

2. Burger, Chip N. and Gelly Muffin, ``A Kernel language for abstract
feta cheese and noodles'', SEMANTICS OF FETA CHEESE: PROCEEDINGS, (eds.)
Cream, MacFried and Potstick, Springer-Verlag, Lect. Notes in Comp. Sci.
173, 1-50, 1984.

3. MacDonald, Ronald, ``Noodles for standard ML'', ACM SYMP. ON LINGUICA
AND LINGUINI, 1984.

4. Munchem, J. C., ``Lamb, D-Calories, Noodles, and Ripe Fruit'',
Ph.D. Thesis, MIT, Dept. of EECS, September, 1984.

Meeting time for the first five minutes is Fri. 12:00-12:05, and
Friday 12:00-12:05 thereafter.  Aerobics course credit can be arranged.

------------------------------

Date: Wednesday, 5 September 1984 23:28:30 EDT
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Special Sigart Issue on Applications of AI in Engineering


                       SPECIAL ISSUE ON APPLICATIONS OF
                               AI IN ENGINEERING

The  April  1985 issue of the SIGART newsletter (tentative schedule) will focus
on the applications of AI in engineering. The  purpose  of  this  issue  is  to
provide  an overview of research being conducted in this area around the world.
The following topics are suggested:

   - Knowledge-based expert systems
   - Intelligent computer tutors
   - Representation of engineering problems
   - Natural language and graphical interfaces
   - Interfacing engineering databases with expert systems

The above topics are by no means exhaustive; other related topics are welcome.

Individuals or groups conducting research in this area who would like to
share  their  ideas  are invited to send two copies of 3 to 4 page summaries of
their work,  preferably  ongoing  research,  before  December  1,  1984.    The
summaries  should  include  a  title,  the  names of people associated with the
research, affiliations, and bibliographical references.  Since the primary  aim
of  this  special  issue  is  to provide information about ongoing and proposed
research, please be as brief  as  possible  and  avoid  lengthy  implementation
details.    Submissions  should  be  sent  to D. Sriram (or R. Joobbani) at the
following address or through Arpanet to Sriram@CMU-RI-CIVE.

      D. Sriram
      Design Research Center
      Carnegie-Mellon University
      Pittsburgh, PA 15213
      Tel. No. (412)578-3603

------------------------------

End of AIList Digest
********************
16-Sep-84 15:58:32-PDT,15122;000000000000
Mail-From: LAWS created at 16-Sep-84 15:56:42
Date: Sun 16 Sep 1984 15:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #119
To: AIList@SRI-AI


AIList Digest            Sunday, 16 Sep 1984      Volume 2 : Issue 119

Today's Topics:
  LISP - VAX Lisps & CP/M Lisp,
  Philosophy - Syllogism Correction,
  Scientific Method - Induction vs. Deduction,
  Course - Logic Programming,
  Conference - Database Systems
----------------------------------------------------------------------

Date: Sun, 16 Sep 84 14:28 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Lisp on the VAX

  We have a VAX 11/750 with four Mb of memory, running NIL.  We also have
four Lisp hackers of several years' standing who are likely to write
quite substantial programs.  We have to decide whether to buy some extra
memory, or to spend the money on Golden Common Lisp, which someone told
us is much more efficient than NIL.

  Can anyone please advise us? Thank you.

   Tony.

------------------------------

Date: 11 Sep 84 17:36:37-PDT (Tue)
From: hplabs!sdcrdcf!sdcsvax!stan @ Ucb-Vax.arpa
Subject: Lisp under CP/M
Article-I.D.: sdcsvax.52

I recently purchased a copy of ProCode's Waltz Lisp for the Z80 and CP/M
and found it to be a very good imitation of Franz Lisp.

I downloaded some rather substantial programs I'd written over the
past two years and within 20 minutes had them up and running on my
Kaypro.  Surprisingly, there was little speed degradation unless there
was a major amount of computation involved.

All that was required (for my programs) was a few support routines to
implement defun, terpri, etc.

The manual is very complete and well written.  (For example, it had examples
of how to write defun in it.)

Cost was just under $100.00, and well worth it.

Now, if only my Kaypro could handle background processes like the VAX...

    Stan Tomlinson

------------------------------

Date: 11 Sep 84 11:06:09-PDT (Tue)
From: hplabs!hao!seismo!rochester!rocksvax!rocksanne!sunybcs!gloria!colonel
      @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: gloria.535

>>           All swans are white.
>>           This is a swan.
>>           Therefore it is white.
>>
>>      Notice that the conclusion (3rd sentence) is only true iff the two
>>      premises (sentences 2 and 3) are true.

A minor correction:  "iff" does not belong here.  The premises do not follow
from the conclusion.
--
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 14 Sep 84 09:01 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Re:  Inductive Proof - The Heap Problem

At the risk of getting involved.....

One thing bothers me about the inductive proof that all heaps are small.
I will claim that that is NOT an inductive proof after all.  The second
requirement for a (mathematical) proof by induction states that one must
show that P(n) implies P(n+1).  I see nothing in the fact that one
"speck" is small that NECESSARILY implies that two "specks" constitutes
a small heap.  One seems to conclude the fact that a two-speck heap is
small from some sort of outside judgment of size.  Thus, Small(1 Speck)
does NOT imply Small(2 Specks), something else implies that.

Lest we get into an argument about the fact that large for one could be
small for another, I'll bring up another mathematical point:  The
Archimedean Principle.  It basically says that given any number (size,
number of specks, what have you), one can ALWAYS find a natural number
that is greater.  Applying that to the heap problem, given anyone's
threshold of what constitutes a large heap and what constitutes a small
heap, one can ALWAYS make a large heap out of a small heap by adding one
speck at a time.  I'll further note that one need not make that
transition between small and large heaps a discrete number; as long as
you can put a number on some sense of a large heap (regardless of
whether that is the smallest large heap), you can always exceed it.  For
example, I will arbitrarily say that 10**47 specks in a heap makes it
large.  I don't have to say that 10**47 - 1 is small.  Yet we will still
be able to create a large heap (eventually).

Now, anyone interested in speculating about what happens if someone's
size function is not constant, but varies with time, mood, money in the
bank, etc.?

As further proof of my Archimedean Principle, we will note that I have
just in fact turned a small heap/argument (Ken Laws' four line Heap
Problem) into a large one (this message).

--Ken <Feuerman.pasa@Xerox.arpa>

------------------------------

Date: Fri 14 Sep 84 14:30:14-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

The discussion of induction vs. deduction has taken a curious turn.
Normally, when we speak of induction, we don't mean *mathematical
induction*, which is a formally adequate proof technique.  We mean
instead the inductive mode of reasoning, which is quite different.
Inductive reasoning can never be equated to deductive reasoning
because it begins with totally different premises.  Inductive
reasoning involves two principles:

(1) The principle of insufficient reason, which holds that in the
absence of other information, the expectation over an ensemble of
possibilities is uniform (heads and tails are equally probable).

(2) The principle of Occam's razor, which holds that given a variety of
theories about some data, the one that is "simplest" is preferred.
(We prefer the Copernican model of the solar system to the Ptolemaic
one, even though they both account for the astronomical data.)

The relationship of time, causality, and induction has been
investigated by the Nobel Laureate, Ilya Prigogine.  The laws of
classical physics, with one exception, are neutral with respect to the
direction of time.  The exception is the Second Law of Thermodynamics,
which states that the entropy of a closed system must increase, or
equivalently, that a closed system will tend toward more and more
disordered states.  For a long time, physicists tried to prove the
Second Law in terms of Newtonian principles, but with no success.
Eventually, Boltzmann and Gibbs explained the Second Law
satisfactorily by using inductive principles to show that the
probability of a system entering a disordered, high-entropy state is
far higher than the converse.  Prigogine proposes that random,
microscopic events cause macroscopic events to unfold in a
fundamentally unpredictable way.  He extends thermodynamics to open
systems, and particularly to "dissipative systems" that, through
entropy exchange, evolve toward or maintain orderly, low-entropy
states.

Inductive reasoning is also closely connected with information theory.
Recall that Shannon uses entropy as the measure of information.
Brillouin, Carnap, and Jaynes have shown that these two meanings of
entropy (information in a message and disorder of a physical system)
are equivalent.
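
[Editorial note.  Shannon's entropy of a source emitting symbol i with
probability p_i is

    \[ H \;=\; -\sum_i p_i \log_2 p_i , \]

which is the Boltzmann-Gibbs entropy \( S = -k \sum_i p_i \ln p_i \)
up to a constant factor -- the formal basis of the equivalence
mentioned above.]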

Steve Barnard

------------------------------

Date: Wed 12 Sep 84 21:16:28-EDT
From: Michael J. Beckerle <BECKERLE@MIT-XX.ARPA>
Subject: Course Offering - Logic Programming

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


              TECHNOLOGY OF LOGIC PROGRAMMING

                          CS  270
                         Fall  1984

              Professor Henryk Jan Komorowski
                     Harvard University
                 Aiken Computation Lab. 105
                          495-5973

Meeting:  Mondays, Wednesdays - 12:30 to 2 PM, Pierce Hall 209

This year the course will focus on presenting basic concepts
of  logic programming by deriving them from logic.  We shall
study definite clause programs:

    - What they specify (the least Herbrand model; a small example
      follows this list).
    - How they are used: a logical view  of  the  notion  of
      query.
    - What computations of logic programs are:  the  resolu-
      tion principle, SLD-refutability, completeness and nega-
      tion by failure.
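
[Editorial example -- not from the announcement.  A small definite
clause program and its least Herbrand model:

    \[ \begin{array}{l}
       edge(a,b). \qquad edge(b,c). \\
       path(X,Y) \leftarrow edge(X,Y). \\
       path(X,Z) \leftarrow edge(X,Y) \wedge path(Y,Z).
       \end{array} \]

Its least Herbrand model is the set of ground atoms
\( \{\, edge(a,b),\; edge(b,c),\; path(a,b),\; path(b,c),\; path(a,c) \,\} \):
exactly the facts every model of the program must contain, and hence
what the program "specifies".]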

This general background will serve as a basis to introduce a
logic programming language Prolog and will be accompanied by
a number of assignments to master specification programming.
It  will  be followed by some implementation issues like in-
terpreting, compiling, debugging and other programmer's sup-
port  tools.   We shall then critically investigate a number
of applications of Prolog to  software  specification,  com-
piler writing, expert system programming, embedded languages
implementation, database  programming,  program  transforma-
tions, etc., and study the language's power and limitations.
The course will end with a  comparison  of  definite  clause
programming to other formalisms, e.g. attribute grammars,
functional programming, rule-based programming.  Time per-
mitting, parallelism, complexity, and other topics of interest
will be studied.

REQUIREMENTS A background in propositional logic, some fami-
liarity  with  predicate  calculus and general background in
computer science (reasonable acquaintance with parsing, com-
piling, databases, programming in recursive languages, etc.)
is expected.

WORKLOAD
    - one problem set on logic.
    - Two sets of Prolog assignments.
    - Mid-term mid-size Prolog single person project.
    - A substantial amount of papers to  read:  core  papers
      and  elected one-topic papers (the latter to be reviewed
      in sections).
    - Final research paper  on  individually  elected  topic
      (with instructor's consent).

LITERATURE, REQUIRED

PROGRAMMING IN PROLOG, by Clocksin and Mellish.
RESEARCH PAPERS distributed in class.


LITERATURE, OPTIONAL

LOGIC FOR PROBLEM SOLVING, by Kowalski
MICRO-PROLOG: LOGIC PROGRAMMING, by Clark and McCabe
LOGIC AND DATABASES, edited by Gallaire and Minker
IMPLEMENTATIONS OF PROLOG, edited by Campbell


                       TENTATIVE PLAN
                        25 meetings

- Introduction: declarative and imperative programming,  the
goals of Vth Generation Project.

- Informal notions of: model, truth, provability.  The  syn-
tax of predicate calculus, proof systems for predicate cal-
culus: completeness, soundness, models.

- Transformation to clausal form,  resolution  and  its com-
pleteness.

- Definite clause programs:

        * operational semantics
        * proof-theoretic semantics
        * fixed point semantics

- Introduction to programming in Prolog.
- Data structures.
- Negation by failure and cut.
- Specification programming methodology.
- Advanced Prolog programming.
- Algorithmic debugging.
- Parsing and compiling in Prolog.
- Abstract data type specification in Prolog.
- Logic  programming  and  attribute  grammars,  data  flow
  analysis.
- Interpretation and compilation of logic programs

- Artificial intelligence applications:

        * metalevel programming
        * expert systems programming
        * Natural language processing

- Alternatives to Prolog;  breadth-first  search,  coroutines,
  LOGLISP, AND- and OR-parallelism.
- Concurrent Prolog.
- Relations between LP and functional programming.
- LP and term rewriting.
- Program transformation and derivation.
- Object oriented programming.
- Some complexity issues.
- LP and databases.
- Architecture for LP.

------------------------------

Date: Wed, 12 Sep 84 10:40:23 pdt
From: Jeff Ullman <ullman@diablo>
Subject: Conference - Database Systems

                      CALL FOR PAPERS

        FOURTH ANNUAL ACM SIGACT/SIGMOD SYMPOSIUM ON
               PRINCIPLES OF DATABASE SYSTEMS

             Portland, Oregon March 25-27, 1985


The conference will  cover  new  developments  in  both  the
theoretical  and  practical  aspects  of  database  systems.
Papers  are  solicited  that  describe  original  and  novel
research into the theory, design, or implementation of data-
base systems.

     Some suggested but not  exclusive  topics  of  interest
are:  application of AI techniques to database systems, con-
currency control, database and database scheme design,  data
models,  data  structures  for physical database implementa-
tion,  dependency  theory,  distributed  database   systems,
logic-based  query languages and other applications of logic
to database systems, office automation  theory,  performance
evaluation  of database systems, query language optimization
and implementation, and security of database systems.

     You are invited  to  submit  9  copies  of  a  detailed
abstract (not a complete paper) to the program chairman:

                  Jeffrey D. Ullman
                  Dept. of Computer Science
                  Stanford University
                  Stanford, CA 94305

Submissions will be evaluated on the basis of  significance,
originality,  and  overall quality.  Each abstract should 1)
contain enough information  for  the  program  committee  to
identify  the  main contribution of the work; 2) explain the
importance of the work, its novelty, and  its  relevance  to
the  theory  and/or  practice  of  database  management;  3)
include comparisons with and references to relevant  litera-
ture.   Abstracts  should be no longer than 10 typed double-
spaced pages (12,000 bytes of source text).  Deviations from
these  guidelines may affect the program committee's evalua-
tion of the paper.

                     Program Committee

             Jim Gray            Richard Hull
             Frank Manola        Stott Parker
             Avi Silberschatz    Jeff Ullman
             Moshe Vardi         Peter Weinberger
             Harry Wong

The deadline for submission  of  abstracts  is  October  12,
1984.   Authors  will be notified of acceptance or rejection
by December 7, 1984.  The accepted papers, typed on  special
forms or typeset camera-ready in the reduced-size model page
format, will be due at the  above  address  by  January  11,
1985.   All  authors  of accepted papers will be expected to
sign copyright release forms.  Proceedings will  be  distri-
buted at the conference and will be available for subsequent
purchase through ACM.  The proceedings  of  this  conference
will  not  be  widely disseminated.  As such, publication of
papers in this record will not, of itself, inhibit  republi-
cation in ACM's refereed publications.


        General Chairman:      Local Arrangements Chairman:
        Seymour Ginsburg       David Maier
        Dept. of CS            Dept. of CS
        USC                    Oregon Graduate Center
        Los Angeles, CA 90007  19600 N. W. Walker Rd.
                               Beaverton, OR 97006

------------------------------

End of AIList Digest
********************
19-Sep-84 09:35:47-PDT,13591;000000000000
Mail-From: LAWS created at 19-Sep-84 09:34:17
Date: Wed 19 Sep 1984 09:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #120
To: AIList@SRI-AI


AIList Digest           Wednesday, 19 Sep 1984    Volume 2 : Issue 120

Today's Topics:
  AI Tools - Micro Production Systems,
  Professional Societies - AI SIG in San Diego,
  Books - Publisher Info for The Second Self,
  Scientific Method - Swans & Induction,
  AI and Society - CPSR,
  Robotics - Kitchen Robots,
  Pattern Recognition - Maximum Window Sum,
  Course - Decision Systems,
  Games - Computer Chess Championship
----------------------------------------------------------------------

Date: 18 September 1984 1053-EDT
From: Peter Pirolli at CMU-CS-A
Subject: micro production systems

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

A friend of mine is looking for a production system language (however simple)
that runs on an Apple (preferably) or any other micro.  He basically wants to
use the system to give some hands-on experience to fellow faculty members
at a small university where main-frame resources are too scarce to run a
full-blown production system.  Any pointers to micro-based systems would be
greatly appreciated.

Send mail to pirolli@cmpsya or pirolli@cmua.
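
[Editorial sketch -- not a pointer to any product.  For readers who
have not seen one, a production system in miniature is small enough to
fit on a micro: a working memory of facts plus condition-action rules,
run in a recognize-act cycle until quiescence.  The facts and rules
below are invented for illustration.]

#include <stdio.h>
#include <string.h>

#define MAXFACTS 32

char *facts[MAXFACTS] = { "has-feathers", "lays-eggs" };
int nfacts = 2;

struct rule {                   /* two conditions, one assertion */
    char *cond1, *cond2, *action;
} rules[] = {
    { "has-feathers", "lays-eggs", "is-bird" },
    { "is-bird",      "flies",     "can-migrate" },
};
#define NRULES (int)(sizeof(rules)/sizeof(rules[0]))

int known(char *f)              /* is fact f in working memory? */
{
    int i;
    for (i = 0; i < nfacts; i++)
        if (strcmp(facts[i], f) == 0) return 1;
    return 0;
}

int main()
{
    int i, fired = 1;

    while (fired) {             /* recognize-act cycle */
        fired = 0;
        for (i = 0; i < NRULES; i++)
            if (known(rules[i].cond1) && known(rules[i].cond2)
                && !known(rules[i].action) && nfacts < MAXFACTS) {
                facts[nfacts++] = rules[i].action;
                printf("rule %d fires: assert %s\n", i, rules[i].action);
                fired = 1;
            }
    }
    return 0;
}

[On the sample facts this fires rule 0 once, asserting "is-bird", and
then halts, since "flies" is never asserted.]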

------------------------------

Date: 17 Sep 84 07:16 PDT
From: Tom Perrine <tom@LOGICON.ARPA>
Subject: AI SIG in San Diego

I have an off-net friend who is interested in starting (or finding) a
Special Interest Group for AI in San Diego.  It would appear that if
ACM or IEEE knows about such a group, "they ain't talking." Is there
anyone else in S.D.  who would be interested in such a group?  Please
reply to me, not the Digest, of course.

Please include name, address and a daytime phone.

Thanks,
Tom Perrine
Logicon - OSD
San Diego, CA

------------------------------

Date: Mon 17 Sep 84 12:41:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Publisher Info for The Second Self

I neglected to provide the details...

The publisher is Simon & Schuster, ISBN is 0-671-46848-0, and
LC number is QA76.T85 1984 (or something like that).  The book is available
in quite a few bookstores, including the big chains, so try there first.

                                                        stan shebs

------------------------------

Date: 17 Sep 1984 08:47:02-EDT
From: sde@Mitre-Bedford
Subject: Swans:

At least during the latter part of March, 1980, the statement,
"all swans are white," was false; those familiar with Heinlein's
"fair witness" concept will recognize the phrasing; I say it
having witnessed black or near-black swans in Perth during the
aforementioned time.
Granting that the facts have little to do with the principle of
the argument, I thought folks might nonetheless be amused.
   David   sde@mitre-bedford

------------------------------

Date: 12 Sep 84 9:11:34-PDT (Wed)
From: hplabs!tektronix!bennety @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: tektroni.3588

Toby Robison's comment on Mark Chilenskas's discussion of inductive
proof was quite apt -- however, we should note that induction is
limited to statements on a countably infinite set.  That is, induction
can only work with integers.

-bsy
 tektronix!bennety

------------------------------

Date: Mon, 17 Sep 84 11:01:17 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Uhrig's Stream of Consciousness in V2 #112


With regard to Werner's concern about unethical or immoral applications of
AI: Computer Professionals for Social Responsibility (CPSR) is very concerned
with this issue as am I.

Please give me feedback on this.  Perhaps the surest death-knell for the
outrageous-dangerous stuff ("Intelligent Computers" that would make life-or-
death decisions for the human race) is to require that they pass rigorous
tests.  If it is required that they actually work the way they are supposed to
many of the systems will die a natural (and deserved) death.  A comprehensive
and rigorous top-down (parallel with the top-down design) testing program may
be the answer.

  --Charlie

------------------------------

Date: 17 Sep 84 09:20:20 PDT (Monday)
From: Hoffman.es@XEROX.ARPA
Subject: AI in the kitchen, continued

The article in V2,#118, "Robot cooks if it finds the beef", reminded me
of the following:

"....
John McCarthy, one of the founders of the field of artificial
intelligence, is fond of talking of the day when we'll have 'kitchen
robots' to do chores for us, such as fixing a lovely shrimp creole.
Such a robot would, in his view, be exploitable like a slave because it
would not be conscious in the slightest.  To me, this is
incomprehensible.  Anything that could get along in the unpredictable
kitchen world would be as worthy of being considered conscious as would
a robot that could survive for a week in the Rockies.  To me, both
worlds are incredibly subtle and potentially surprise-filled.  Yet I
suspect that McCarthy thinks of a kitchen as ... some sort of simple and
'closed' world, in contrast to 'open-ended' worlds, such as the Rockies.
This is just another example, in my opinion, of vastly under-estimating
the complexity of a world we take for granted, and thus under-estimating
the complexity of the beings that could get along in such a world.
Ultimately, the only way to be convinced of these kinds of things is to
try to write a computer program to get along in a kitchen...."

Excerpted from a letter by DOUG HOFSTADTER in 'Visible Language',
V17,#4, Autumn 1983.  (In 1983, that periodical carried, in successive
issues, an extensive piece by Knuth on his Meta-Font, a lengthy review
by Hofstadter, and letters from both of them and from others.)

--Rodney Hoffman

------------------------------

Date: 14 Sep 1984 16:36-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Maximum window sum, in AIList V2 #117

Ken,

    Bentley's problem 7 asks for the complexity of the maximum
subarray sum problem.  I would advise you to call your algorithm a
solution to the maximum subarray sum problem, rather than a solution to
problem 7.  You have given an upper bound for the complexity, but
barring an equal lower bound problem 7 is still unsolved.  I know of
no lower bound larger than the size of the input.

    In case you're interested, here's another maximum subarray sum
algorithm with the same time complexity, using less working storage.
See the comments for a description of its working.  Enjoy.

Dan


[The following is simpler, more efficient, and uses less auxiliary
storage than the version I gave (although it does require buffering
the full input array).  I can't think of any improvement.  -- KIL]

/*
**  maxsbasum
**
**    Compute the maximum subarray sum in an array. In case all
**    array elements are negative, the maximum sum is 0.0
**    for an empty subarray.
**
**  COMMENTS
**
**    Every subarray of an array is a full-height subarray of a
**    full-width subarray of the array.
**
**    This routine examines each of the O(NROWS^2) full-width
**    subarrays of the array.  A vector containing the sum of each
**    column in the full-width subarray is maintained.  The maximum
**    full-height subarray sum of the full-width subarray corresponds
**    to the maximum subvector sum of the vector of column sums,
**    found in O(NCOLS) time using Kadane's algorithm.
**
**    Running time is O(NROWS^2 NCOLS).  Working storage for this
**    program is dominated by the O(NCOLS) vector of column sums.
**
**  HISTORY
**
**    16-Sep-84  Laws at SRI-AI
**    Merged innermost two loops into one.
**
**    14-Sep-84  Hoey at NRL-AIC
**    Cobbled this version together.
**    Comm. ACM, September 1984; Jon Bentley
**    published maximum subvector code (Pascal).
**    Algorithm attributed to Jay Kadane, 1977.
**
**    11-Sep-84  Laws at SRI-AI
**    Wrote another program solving the same problem.  Parts of
**    his program, from AIList V2 #117, appear in this program.
*/


#include <stdio.h>

/* Sample problem. (Answer is 6.0.) */
#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
    { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

/* Macro to return the maximum of two expressions. */
#define MAX(exp1,exp2)  (((exp1) > (exp2)) ? (exp1) : (exp2))

main()
{
  float MaxSoFar;               /* Global maximum */
  float ColSum[NCOLS];          /* Column sums of full-width subarray */
  float MaxEndingHere;          /* For Kadane's algorithm */
  int lowrow,highrow;           /* Bounds of full-width subarray */
  int thiscol;                  /* Column index */

  /* Loop over first row of full-width subarray. */
  MaxSoFar = 0.0;
  for (lowrow = 0; lowrow < NROWS; lowrow++) {

    /* Initialize column sums. */
    for (thiscol = 0; thiscol < NCOLS; thiscol++)
      ColSum[thiscol] = 0.0;

    /* Loop over last row of full-width subarray. */
    for (highrow = lowrow; highrow < NROWS; highrow++) {

      /* Update column sum, find maximum subvector sum of ColSum. */
      MaxEndingHere = 0.0;
      for (thiscol = 0; thiscol < NCOLS; thiscol++) {
        ColSum[thiscol] += X[highrow][thiscol];
        MaxEndingHere = MAX(0.0, MaxEndingHere + ColSum[thiscol]);
        MaxSoFar = MAX(MaxSoFar, MaxEndingHere);
      }
    }
  }

  /* Print the solution. */
  printf("Maximum subarray sum:  %g\n",MaxSoFar);

}

------------------------------

Date: Tue 18 Sep 84 15:09:58-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Course - Decision Systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                         Course Announcement

            DECISION ANALYSIS AND ARTIFICIAL INTELLIGENCE


                   Engineering Economic Systems 234
                               3 units
                     Instructor: Samuel Holtzman

                 Monday and Wednesday 2:00 to 3:15 pm
                        Building 260, room 264

This course investigates the relationship between decision analysis
and artificial intelligence in building expert systems for decision
making in complex domains.  Major topic areas include fundamentals of
artificial intelligence (production systems, search, logic
programming) and design of intelligent decision systems based on
decision analysis (use of formal methods in decision making,
representation and solution of decision problems, reasoning under
uncertainty).  The course will also cover programming in Lisp for
students not familiar with the language.  Course requirements include
a substantial project based on the concepts developed in the course.

Prerequisites:  EES 231 (Decision Analysis) or equivalent
                and familiarity with computer programming.

For further information contact:

                Samuel Holtzman
                497-0486, Terman 301
                HOLTZMAN@SUMEX

------------------------------

Date: Mon Sep 17 17:15:08 1984
From: mclure@sri-prism
Subject: Games - Computer Chess Championship

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

    The ACM annual North American Computer Chess Championship is a
watering-hole for computer chess researchers, devotees, and ordinary
chess players interested in what new improvements have been made in
computer chess during the past year.

        Come see Ken Thompson and Belle seek out chess
        truth, chess justice, and the American Way!

        Watch David Levy wince as his chess program
        discovers innovations in chess theory unknown even
        to Grandmasters!

        Marvel at Bob Hyatt's Cray Blitz program as it
        slices through the opposition at many MIPS!

        See the tiny Spracklen program otherwise marketed as
        Prestige and Elite by Fidelity tally up points
        against the "big boys!"

        Gawk as ivory tower researchers such as Tony
        Marsland of University of Alberta try to turn
        obscure and obfuscating computer chess theory into
        tangible points against opposition!

        Watch in amazement as David Slate's NUCHESS program,
        a descendant of the famous Northwestern University
        Chess 4.5 program, tries to become the most
        "human-like" of chess programs!

        And strangest of all, see a chess tournament where the
        noise level is immaterial to the quality of play!

The following information is from AChen at Xerox...

        1) dates - 7-9 Oct, 1984
        2) where - Continental Parlors at San Francisco Hilton
        3) times - Sun 1300 and 1900, 7 Oct, 1984
                   Mon 1900, 8 Oct, 1984
                   Tue 1900, 9 Oct, 1984
        4) who   - Tournament director will be Mike Valvo
                   four round Swiss-style includes Cray BLITZ,
                   BELLE and NUCHESS.

        for more information:
                Professor M. Newborn
                School of Computer Science, McGill University
                805 Sherbrooke Street West, Montreal
                Quebec, Canada H3A 2K6

note: this info can be found in July, 1984 issue of ACM Communications,
page A21.

        Stuart

------------------------------

End of AIList Digest
********************
19-Sep-84 21:57:28-PDT,12747;000000000000
Mail-From: LAWS created at 19-Sep-84 21:55:00
Date: Wed 19 Sep 1984 21:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #121
To: AIList@SRI-AI


AIList Digest           Thursday, 20 Sep 1984     Volume 2 : Issue 121

Today's Topics:
  Machine Translation - Aymara as Intermediate Language,
  Logic - Induction & Deduction,
  Linguistics - Pittsburghese,
  Expert Systems & Humor - Excuse Generation
----------------------------------------------------------------------

Date: 11 Sep 84 17:07:00-PDT (Tue)
From: pur-ee!uiucdcs!uokvax!emjej @ Ucb-Vax.arpa
Subject: Aymara as intermediate language?
Article-I.D.: uokvax.900011

Yesterday in a local paper a news item appeared (probably AP or UPI)
telling about a fellow in South America (Ecuador? Peru, perhaps?),
named Ivan Guzman de Rojas, who seems to be having respectable
success using a S. American Indian language, that of the Aymara
Indians, as an intermediate language for machine translation of
natural languages. The article seemed to indicate that Aymara is
something of a pre-Columbian Loglan, near as I could tell. Any
references to the literature concerning this would be greatly
appreciated. (Send mail, I'll summarize to the net after a seemly
interval.)

                                        James Jones

                                uucp: ...!ctvax!uokvax!emjej
                                or    ...!{ctvax,mtxinu}!ea!jejones

------------------------------

Date: Wed, 19 Sep 84 14:10:13 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: Nitpicking...

  "... induction is limited to statements on a countably infinite set."

Well, that depends how you define induction.  If you define it in
the right way, all you need is a well-ordered set.  Cardinality
doesn't enter into it.

Concerning the argument "All A are B, x is an A, therefore x is a B".
It is not true that the conclusion is true only if the two assumptions
are true.  It is not even true that the argument is valid only if the
assumptions are true.  What is true is that we are guaranteed that
the conclusion is true only if the assumptions are true.

Thanks for your indulgence.
                                                        -smL

------------------------------

Date: 17 September 1984 1419-EDT
From: Lee Brownston at CMU-CS-A
Subject: Pittsburghese figured out

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The way Pittsburghers talk is a sure source of amusement for newcomers
to this area.  Most attention is devoted to diction, especially to the
idioms.  Although the latter are no more nor less illogical than any
other idioms, they are easily identified and likely to be unfamiliar.
Over the past couple of years, I've been trying to figure out the system
of phonology.  I'm still working on the suprasegmentals, but I have
some preliminary results on vowels and consonants that may be of some
interest.

As far as I can tell, the only consonantal departure from Standard
American English is that the final 'g' is omitted from the present
progressive to the extent that the terminal sound is the alveolar nasal
rather than the palatal nasal continuant.  This pronunciation is of
course hardly unique to Pittsburgh.

The vowels are much more interesting.  The 'ow' sound is pronounced 'ah',
as in 'dahntahn'.  Confusion between, say, "down" and "don" is avoided
since the 'ah' sound has already vacated: it is pronounced 'aw', as in
'Bawb signed the cawntract'.  Similarly, 'aw' has gone to the greener
pastures of 'or', as in 'needs worshed'.  It appears that the chain ends
here.  As its discoverer, I shall call this phonological game of musical
chairs "the great Pittsburgh vowel movement."

------------------------------

Date: Mon 17 Sep 84 23:38:41-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: Humor - Excuse Generation


TOWARDS THE AUTOMATIC GENERATION OF EXCUSES
by David Throop

The Great and Pressing Need for Excuses.
  There is a huge need in industry for excuses.  A recent  marketing survey
shows that the biggest need in air transport is not for a service to get it
there overnight, but for one that takes the blame for it being three weeks
late.  Every time there is a dockworkers' strike anywhere in the world, titans
of commerce who are completely unaffected get on the phone.  They then explain
to all their customers that every order that is overdue is sitting on that
dock, and that they couldn't help it.  Then they grin.  Because they've got a
good excuse.  And even the smallest industrial project needs a raft of
excuses by the time it finishes.
  Computers have already helped with this need.  Many problems that used to be
blamed on the postal service, on the railroads and on telegraph operators are
now routinely blamed on computers.  "Your check is in the mail" has now been
supplemented by "Our computer has been down, and we'll send you your money as
soon as the repairman fixes it."  Whenever a bridge collapses, specialized
teams of system analysts are called in, in order to quickly blame the whole
mess on a computer.
  But computers can do more than this.  Computers have a part to play in the
generation of excuses: actually coming up with the lies and evasions that keep
our economy running.

The Structure of Excuses
  There is a great size range in excuses.  Many small excuses can be generated
without any AI or other advanced techniques.  And there will always be some
really big FUBARS that will need humans to come up with appropriate excuses.
But in between there is the somewhat stereotyped snafu that can be framed in
some structure and has different excuse elements as slots.  These are the
half-assed excuses, the most fruitful field for knowledge engineering.

Where It Came From
  It has been noted repeatedly in work on computer vision that a subject often
does not have all of the necessary information to justify an observation, but
that he makes it anyway and supplies some "excuse" to explain why some
features are missing.  The classic illustration of this problem is in
envisioning a chair: the subject may only be able to see three of the legs
but assumes a 4-legged chair.  Indeed, Dr. Minsky presented such a chair at the
AAAI in August.
  We interviewed the chair itself after the lecture, and asked it why it had
come with only three legs.  The resulting string of excuses was impressive,
and more robust than one might expect from a broken piece of furniture.
  These included:
      "I'm not registered with the local chairs' union, so they'd only let me
        up on stage if I took off one of my legs.
      "Accounting cut my travel allowance by 18%, so I had to leave my leg
        back in California.
      "This is just a demo chair that we put together for the conference.  We
        have a programming team on the West coast that will have implemented
        another leg by October.
      "My secretary talked to somebody on the program committee who assured
        her that I wouldn't have to bring my own legs, and that there would be
        plenty of legs here in Austin.  Then I got here and found they were
        overbooked.
      "I felt that three legs was adequate to demonstrate the soundness of the
        general leg concept, and actually implementing a fourth leg would have
        been superfluous."

  This underlined a central observation: making excuses is critical to
perception, and is central to intelligence.  I mean, think about it.  Sounding
intelligent involves making gross generalizations & venting primitive
prejudices and then making plausible excuses for them when they collide with
reality.  Any imaginable robot that understands the consequences of its actions
will want to weasel out of them.

     The 3-legged chair problem yielded a high number of palatable excuses.
This toy problem shows the feasibility of generating large numbers of
industrial-strength excuses.  This goal would free humans from having to
justify their actions, leaving them more time to spend on screwing things
up.  That, after all, seems to be what they are best at.

How It Works
  A user makes a request via SNIVEL (Stop-Nagging,-I'm-Verifying-an-Excuse
Language), a user-friendly system that nods, clucks sympathetically, encourages
the user to vent his hostility & frustration, and has a large supply of
sympathetic stock answers for lame excuses:
  "You poor dear, I know you were trying as hard as you could.
  "Well, you can't be blamed for trusting them.
  "I can certainly see how you couldn't get your regular work done after an
     emotional shock like that."

  The program then begins to formulate an excuse appropriate to the problem.
Many problems can be recognized trivially and have stock excuses.  These can
be stored in a hash table and supplied without any search at all (see the
sketch after these examples):
  "The dog vomited on it, so I threw it out.
  "It's in the mail.
  "I thought you LIKED it when I did that.
  "Six debates would probably bore the public.
  "I have a headache tonight.
  "I trusted in the advice of my accountant/lawyer/broker/good-time mama."

  If the problem is more complex, SNIVEL enters into a dialog with the user.
Even if he wants to take responsibility for his share of the problem, SNIVEL
prompts the user, getting him to blame other people and explain why it wasn't
REALLY his fault.  A report may be late getting to a client, for instance;
SNIVEL may ask what last-minute changes the client had requested, and what
kinds of problems the user had with a typing pool.  SNIVEL shares records
with the personnel file, so that it can quickly provide a list of co-workers'
absences that probably slowed the whole process down.  It has a parsing
algorithm that takes the original work order and comes up with hundreds of
different parses for each sentence, demonstrating that the original order was
ambiguous and caused a lot of wasted effort.
  One of the central discoveries of AI has been that problems that look easy
are often very hard.  Proving this rigorously is a powerful tool: it provides
the excuse that almost any interesting problem is too hard to solve.  So of
course we're late with the report.

Theoretical Issues
  Not all the work here has focused on immediate payoffs.  We
have studied several theoretical issues involved with excuses.  We've found
that all problems can be partitioned into:
   1) Already Solved Problems for which excuses are not needed.
   2) Unsolved Problems
   3) Somebody Else's Problem
  We concentrate on (2).  We've shown that this class is further divisible.
Of particular interest is the class of unsolved problems for which the set of
palatable excuses is infinite.  These problems never need to actually be
solved.  We can generate research proposals, programs and funds requests
indefinitely without ever having to produce any results.  We just compute the
next excuse in the series and go on.
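
  For instance, such a series might be computed in Prolog (a toy sketch
only; excuse/2 and next_excuse/2 are invented names):

excuse(1, 'We are still waiting for the hardware.').
excuse(2, 'The hardware arrived, but the software is late.').
excuse(N, 'The remaining difficulties are purely technical.') :-
        N >= 3.

% the series never runs out: every review cycle N has a successor excuse
next_excuse(N, Excuse) :-
        M is N + 1,
        excuse(M, Excuse).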

Remaining problems
  It is easiest to generate excuses when the person receiving the excuse is
either a complete moron or really couldn't care less about the whole project.
Fortunately, this is often the case and can be the default assumption.  But it
is often useful to model the receiver of the excuse.  We can then calculate
just how big a whopper he's likely to swallow.
  It is, of course, not necessary that the receiver believe the excuse, just
that he accepts it.  The system is not yet able to model why anyone
would accept the excuse "Honestly, we're just friends, there's nothing
between us at all."  But our research shows that most people accept this
excuse, and almost no one believes it.

  The system still has problems understanding different points of view.  For
instance, it cannot differentiate why

  "My neighbors were up drinking and fighting and doing drugs and screaming
all night, so I didn't get any sleep at all,"

 is a reasonable excuse for being late to work, but

  "I was up drinking and fighting and doing drugs and screaming all night, so
I didn't get any sleep at all," is not.

  Finally, the machine is handicapped by its looks.  No matter how
brilliantly it calculates a good excuse, it can't sweep back a head of
chestnut hair, fix a winning smile on its face, and say with heartfelt warmth,
"Oh, thank you SO much for understanding..."  And that is so much of the soul
of a truly good excuse.

------------------------------

End of AIList Digest
********************
20-Sep-84 23:20:22-PDT,12447;000000000000
Mail-From: LAWS created at 20-Sep-84 23:16:28
Date: Thu 20 Sep 1984 23:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #122
To: AIList@SRI-AI


AIList Digest            Friday, 21 Sep 1984      Volume 2 : Issue 122

Today's Topics:
  AI Tools - Production Systems on Micros,
  Logic - Deduction & Induction,
  Project - Traffic Information System,
  Seminar - Common Sense Thinking,
  Seminar Series - Theories of Information & NCARAI Series
----------------------------------------------------------------------

Date: Thu 20 Sep 84 11:13:30-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Production Systems on Apple

The only thing I have seen for Apple is micro-PROLOG + APES (Augmented PROLOG
for Expert Systems), marketed in the U.S. by Programming Logic Systems, Inc.,
31 Crescent Drive, Milford, CT 06460 (203-877-7988).  I have no experience with
the system, but the brochures I have seen and the price make it attractive.
Micro-PROLOG runs on Apple II with a Z80 card and full l.c. keyboard, on the
IBM PC and PC Jr., and on various configurations of Osborne, Kaypro II,
HP 150, TRS 2000, Xerox 820, among others.  CP/M 80 systems require at least
48K RAM, while PC/MS DOS needs 128K.  APES reportedly runs on any system which
supports micro-PROLOG, but the order form lists only PC/MS DOS and CP/M 86
versions (for Apricot, Sirius and IBM PC compatible).  APES requires a minimum
memory configuration of 128K.  In today's inflated market, the license fees of
$295 each or $495 for both are not too outrageous.  Clark and McCabe's book is
included.

The only other systems I've heard about are Expert-Ease and M.1 for the IBM PC
and TI's Personal Consultant for their Professional Computer.  These go for
$2000, $12,500 and $3000 each.  The literature and reviews of Expert-Ease make
it look like a joke (a friendly interface to a table), but neither media has
been able to give an example of the system's inductive capabilities.  Expert-
Ease appears to be able to form rules from examples, but the people writing the
brochures and reviews don't seem to be able to understand or convey this.
saw M.1 and the Personal Consultant demoed at AAAI.  Both are Emycin clones,
minus a lot of the frills (and thus, perhaps, minus the bugs).  The Personal
Consultant seemed more impressive.  It is supposedly written in IQLISP, but
does not appear to transport to non-TI computers running IQLISP.  All of these
products seem way overpriced, as university research has made them fairly
simple engineering projects.  In the case of the Personal Consultant, none of
the academics who did the research seem connected with the project.  I imagine
that Teknowledge (M.1) has some of Emycin's designers on staff, and know that
Michie is involved with Expert-Ease.

Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

Date: 15 Sep 84 18:16:54-PDT (Sat)
From: decvax!mcnc!akgua!psuvax1!simon @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: psuvax1.1140

   ....induction (in mathematics) can deal only with integers.

(approximate quote). So what else do you expect a formal system to deal with?
The only reasonable answer would be "small finite sets (that are equivalent to
subsets of integers)."  Sure, there are non-denumerable sets that are interesting
- but only to sufficiently abstract mathematicians. I do not see useful computer
systems worrying about large cardinals, determinacy or the continuum.
janos simon

------------------------------

Date: 20 Sep 84 17:30-PDT
From: mclure @ Sri-Unix.arpa
Subject: deduction vs. induction

The recent claim in AILIST that

        'deduction proceeds from the general (axioms) to
         the specific (propositions), induction proceeds from
         the specific to the general.'

is not correct.

A lucid definition and comparison of both can be found in:

    LOGIC AND CONTEMPORARY RHETORIC by Kahane

        Stuart

------------------------------

Date: Wed, 19 Sep 84 23:01:24 BST
From: "Dr. A. Sloman" <XASV02%svpa@ucl-cs.arpa>
Subject: Project - Traffic Information System

                       [Edited by Laws@SRI-AI.]


     An Intelligent Collator and Condenser of Traffic Information

The Cognitive Studies Programme, at Sussex University, UK, now has an
AI/Natural Language project to build a traffic information system.
The project is concerned with a system which processes and integrates
reports from the police about traffic accidents. It must also make decisions
about which motorists are to be informed about these accidents, by means
of broadcasts over an (eventually) nationwide cellular radio network.
A significant part of the project will involve investigating to what
extent unrestricted natural language input can be handled, and how the obvious
problems of unexpected and ungrammatical input can be overcome. It will also
be necessary to encode rules about intelligent broadcasting strategies for
traffic information.  A dedicated workstation (probably SUN-2/120)
will be provided for the project, as well as access to network
facilities and other computing facilities at Sussex University (mostly
VAX-based).

For information about the project, and/or about the large and growing AI
group at Sussex University, please contact Chris Mellish, Arts Building E,
University of Sussex, BRIGHTON BN1 9QN, England. Phone (0273)606755 -
if Chris is not in ask for Alison Mudd.
(Contact via netmail is not convenient at present.)

Aaron Sloman

------------------------------

Date: Wed, 19 Sep 84 15:49:20 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Common Sense Thinking

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, September 25, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

   SPEAKER:        John McCarthy, Computer Science  Department,
                   Stanford University

   TITLE:          What is common sense thinking?

   ABSTRACT:       Common sense  thinking  includes  a  certain
                   collection  of knowledge and certain reason-
                   ing  ability.   Expert  knowledge  including
                   scientific knowledge fits into the framework
                   provided  by  common  sense.   Common  sense
                   knowledge  includes  facts  about the conse-
                   quences  of  actions  in  the  physical  and
                   psychological  worlds,  facts about the pro-
                   perties of space, time, causality and physi-
                   cal  and  social objects.  Common sense rea-
                   soning includes both logical  deductive  and
                   various  kinds  of  non-monotonic reasoning.
                   Much common sense knowledge is  not  readily
                   expressible  in  words, and much that can be
                   usually isn't.

                   The lecture will attempt  to  survey  common
                   sense  knowledge and common sense reasoning.
                   It will be oriented  toward  expressing  the
                   knowledge in languages of mathematical logic
                   and expressing the  reasoning  as  deduction
                   plus formal non-monotonic reasoning.

------------------------------

Date: Wed 19 Sep 84 19:55:18-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar Series - Theories of Information

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

    PROJECT ACTIVITIES FOR PROJECT F-1:  THEORIES OF INFORMATION

The notions of information and of informational content are central to much
of the work done at CSLI and are emerging as central notions in philosophy,
computer science, and other disciplines.  Thus we need mathematically
precise and philosophically cogent accounts of information and the forms
it takes. The F-1 project will hold a series of meetings on various CSLI
researchers' approach to the notion of information.  The emphasis will be
on gaining a detailed understanding of the theories that are being developed
and discussing issues in ways that will be helpful in making further
progress.  Those interested should attend the meetings regularly to help
develop a working group with a shared body of knowledge.  For this reason,
we will not make it a practice to announce individual meetings, which will
occur approximately bi-weekly, Tuesdays at 3:15, in the Ventura Seminar
Room.  The first meeting will be on October 2, when Jon Barwise will speak
for a bit about the nature and prospects for a theory of information,
followed by Fernando Pereira and/or Stan Rosenschein who will talk about
the current state of situated automata theory.

                                                        ---John Perry

------------------------------

Date: 20 Sep 84 15:26:51 EDT
From: Dennis Perzanowski <dennisp@NRL-AIC.ARPA>
Subject: Seminar Series - Fall AI Seminar Schedule at NCARAI

                       U.S. Navy Center for Applied Research
                             in Artificial Intelligence
                       Naval Research Laboratory - Code 7510
                             Washington, DC   20375-5000


                                FALL SEMINAR SERIES


        Monday,
        24 Sept. 1984   Professor Hanan Samet
                        Computer Science Department
                        University of Maryland
                        College Park, MD
                                "Overview of Quadtree Research"

        Monday,
        15 Oct. 1984    Professor Stefan Feyock
                        Computer Science Department
                        College of William and Mary
                        Williamsburg, VA
                                "Syntax Programming"

        Monday,
        22 Oct. 1984    Professor Andrew P. Sage
                        Computer Science Department
                        George Mason University
                        Fairfax, VA
                                "Alternative Representations
                                 of Imprecise Knowledge"

         Monday,
         5 Nov. 1984    Professor Edwina Rissland
                        Department of Computer and Information Sciences
                        University of Massachusetts
                        Amherst, MA
                                "Example-based Argumentation and Explanation"

        Monday,
        19 Nov. 1984    Mr. Kurt Schmucker
                        National Security Agency
                        Office of Computer Science Research
                        Ft. Meade, MD
                                "Fuzzy Risk Analysis: Theory and Implication"



   The above schedule is a partial listing of seminars to be offered this
   year.  When future dates and speakers are confirmed, another mailing
   will be sent to you.

   Our meetings are usually held on the first and third Monday mornings
   of each month at 10:00 a.m. in the Conference Room of the Navy Center
   for Applied Research in Artificial Intelligence (Bldg. 256) located on
   Bolling Air Force Base, off I-295, in the South East quadrant of
   Washington, DC.  A map can be mailed for your convenience.  Please
   note that not all seminars are held on the first and third Mondays this
   fall due to conflicting holidays.

   Coffee will be available starting at 9:45 a.m. for a nominal fee.

   IF YOU ARE INTERESTED IN ATTENDING A SEMINAR, PLEASE CONTACT US BEFORE
   NOON ON THE FRIDAY PRIOR TO THE SEMINAR SO THAT A VISITOR'S PASS WILL
   BE AVAILABLE FOR YOU ON THE DAY OF THE SEMINAR.  NON-U.S. CITIZENS
   MUST CONTACT US AT LEAST TWO WEEKS PRIOR TO A SCHEDULED SEMINAR.
   If you would like to speak, be added to our mailing list, or would
   like more information, contact Dennis Perzanowski.  [...]

   ARPANET: DENNISP@NRL-AIC or (202) 767-2686.


------------------------------

End of AIList Digest
********************
23-Sep-84 11:42:57-PDT,15654;000000000000
Mail-From: LAWS created at 23-Sep-84 11:38:35
Date: Sun 23 Sep 1984 10:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #123
To: AIList@SRI-AI


AIList Digest            Sunday, 23 Sep 1984      Volume 2 : Issue 123

Today's Topics:
  AI Tools - OPS5,
  Expert Systems - Computer Program Usage Consultant,
  Literature - Introductory Books & IEEE Computer Articles,
  LISP - VMS LISPS,
  Logic - Induction and Deduction & Causality,
  Humor - Slimy Logic Seminar,
  Seminar - Analysis of Knowledge,
  Course & Conference - Stanford Logic Meeting
----------------------------------------------------------------------

Date: 21 Sep 84 13:24:47 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Info needed on OPS5


Any information on compilers/interpreters for the OPS5 language on VAXen
will be appreciated. I'm particularly interested in relatively short
reviews and/or introductions to the language; a tutorial would be nice.
If any of this stuff is available online I'd like to FTP it.
        Thanx in advance.
                Biesel@rutgers.arpa

------------------------------

Date: 17 Sep 84 15:28:05-PDT (Mon)
From: hplabs!tektronix!uw-beaver!ssc-vax!alcmist @ Ucb-Vax.arpa
Subject: Computer Program Usage Consultants?
Article-I.D.: ssc-vax.99

I am working on an expert system to advise users setting up
runs of a complex aerodynamics program.  The project is sort of like
SACON, only we're trying to do more.

Does anyone know of work in progress that I should know about?  I
am interested in any work being done on

        1. Helping users set up appropriate inputs for a
        sophisticated analytical or simulation program,
        2. Diagnosing problems with the output of such a program,
        or
        3. Interpreting large volumes of numerical output in
        a knowledgeable fashion.

I am looking for current work that people are willing to talk about.
Pointers to literature will be appreciated, even though our library
is doing a literature search.

Please reply by mail!  I will send a summary of responses to anybody
who wants one.

Fred Wamsley
Boeing Computer Services AI Center
UUCP:     {decvax,ihnp4,sdcsvax,tektronix}!uw-beaver!ssc-vax!alcmist
ARPA:     ssc-vax!alcmist@uw-beaver.ARPA

------------------------------

Date: 15 Sep 84 10:38:00-PDT (Sat)
From: pur-ee!uiucdcs!convex!graham @ Ucb-Vax.arpa
Subject: "introductory" book on AI??
Article-I.D.: convex.45200003

I would like to learn more about the AI field.  I am almost "illiterate" now.
I have a PhD in CS from Illinois and 26 years experience in system software
such as compilers, assemblers, link-editors, loaders, etc...  Can anyone cite
a good book  or books for the AI field which
        is comprehensive
        is tutorial, in the sense that it includes the motivation behind
                the avenues in AI that it describes, and
        includes a good bibliography to other works in the field?

[Previous AIList discussion on this subject seems to have found Winston's
new "Artificial Intelligence" and Elaine Rich's "Artificial Intelligence"
to be good textbooks.  The three-volume Handbook of AI is also excellent.
Older texts by Nils Nilsson and by Bertram Raphael ("The Thinking Computer")
still have much to offer.  Other recent books cover LISP, PROLOG, and AI
programming techniques, as well as expert systems and AI as a business.
-- KIL]

------------------------------

Date: Fri 21 Sep 84 10:08:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Knowledge Engineering Article

The September issue of IEEE Computer is devoted to AI systems, with
emphasis on the man-machine interface.  It's well worth reading.

Frederick Hayes-Roth's article seems to be an excellent introduction
to knowledge engineering.  (The title is "The Knowledge-Based Expert
System: A Tutorial," but it is not really an expert-systems overview.)
The article by Elaine Rich on natural-language interfaces is also
excellent.  There are other articles on smart databases, tutoring
systems, job-shop control, and decision support systems.

There is also an article on a declarative parameter-specification
system for Schlumberger's Crystal system.  I found the article hard
to follow, and I have strong doubts about the desirability of building
a domain-independent parameter parser, then using procedural attachment
in the parameter declarations to hack in runtime dependencies and
domain-specific intelligent behavior.  Even if this is to be done,
the base program should have the option of requesting parameters only
as (and if) they are needed, and should be able to create or alter the
declarative structures dynamically at the time the parameters are
requested.  Given such a system, the declarative structures are simply
a convenient way of passing control options to the user-query
subroutine.  Most of the procedural knowledge belongs in the procedural
code, not in declarative structures in a separate knowledge base.
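
A minimal sketch of that on-demand scheme in Prolog (param_value/2,
ask_user/2, and cached/2 are invented names, not anything from the
article):

:- dynamic(cached/2).           % cache declaration; syntax varies by Prolog

% the base program requests a parameter only as (and if) it is needed;
% the first request queries the user, later requests hit the cache
param_value(Name, Value) :-
        cached(Name, Value), !.
param_value(Name, Value) :-
        ask_user(Name, Value),
        assert(cached(Name, Value)).

ask_user(Name, Value) :-
        write('Value for '), write(Name), write('? '),
        read(Value).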

                                        -- Ken Laws

------------------------------

Date: Sat, 22 Sep 84 14:48:59 EDT
From: Gregory Parkinson <Parkinson@YALE.ARPA>
Subject: VMS LISPS

We run Yale's T on VMS and like it a lot.  According to our benchmarks
it runs (on the average) a little faster than DEC's Common Lisp.  The
T compiler gets rid of tail recursion which speeds things up a bit, and
is about 40 times faster when dealing with labels.  Subjectively, working
with CL after working with T feels like driving a 76 Caddie Eldorado (power
windows, seats, brakes, steering, etc.) after getting used to a Honda CRX.
They both get you where you're going, but there's something about the
Honda that makes you feel like you're really driving......

                                          Greg Parkinson
                                          Cognitive Systems, inc.

------------------------------

Date: 21 Sep 84 3:42:03-EDT (Fri)
From: hplabs!hao!seismo!mcvax!vu44!tjalk!dick @ Ucb-Vax.arpa
Subject: Proof by induction, fun & entertainment
Article-I.D.: tjalk.338

Claim: All elements of an array A[1..n] are equal to its first element.
Proof by induction:
        Starting case: n = 1.
                Proof:
                        Obvious, since A[1] = A[1].
        Induction step:
                If the Claim is true for n = N, it is true for n = N + 1.
                Proof:
                        All elements of A[1..N] are equal (premise), and since
                        A[2..N+1] is an array of length N, all its elements
                        are equal also.  A[N] is in both (sub-)arrays, so
                                A[1] = A[N] and
                                A[N] = A[N+1]   ->
                                        A[1] = A[N+1]
                        which makes all of A[1..N+1] equal.
                End of proof of induction step
        The starting case and the induction step together prove the Claim.
End of proof by induction

                Courtesy of             Dick Grune
                                        Vrije Universiteit
                                        Amsterdam
                                        the Netherlands



[ *** Spoiler ***     The flaw, of course, is in the statement that
"A[N] is in both (sub-)arrays"; for N = 1 the subarrays A[1..1] and
A[2..2] share no element, so the step from n=1 to n=2 fails.  (I point
this out to avoid a flood of mail supplying the answer.)  -- KIL]

------------------------------

Date: Fri, 21 Sep 84 08:36 CDT
From: Boebert@HI-MULTICS.ARPA
Subject: More on induction and deduction

More on induction and deduction, along with much other interesting and
entertaining discussion, can be found in

Proofs and Refutations
by Imre Lakatos
Cambridge

------------------------------

Date: Fri 21 Sep 84 10:32:39-PDT
From: BARNARD@SRI-AI.ARPA
Subject: induction vs. deduction

In reply to the claim that my statement

        'deduction proceeds from the general (axioms) to
         the specific (propositions), induction proceeds from
         the specific to the general.'

is not correct (according to Kahane, LOGIC AND CONTEMPORARY RHETORIC),
see Aristotle, BASIC WORKS OF ARISTOTLE, ed. by R. McKeon, Random
House, 1941.

------------------------------

Date: 18 Sep 84 5:54:04-PDT (Tue)
From: hplabs!hao!seismo!umcp-cs!chris @ Ucb-Vax.arpa
Subject: Re: Causality
Article-I.D.: umcp-cs.16

(Apply :-) to entire reply)

>  What's wrong with event A affecting event B in event A's past?  You
>can't go back and shoot your own mother before you were born because you
>exist, and obviously you failed.  If we assume the universe is
>consistant [and not random chaos], then we must assume inconsistancies
>(such as shooting your own mother) will not arise.  It does not,
>however, place time constrictions on cause and effect.

Who says you can't even do that?  Perhaps your existence is actually
just a probability function.  If P(existence) becomes small enough
you'll just disappear.  Maybe that explains all those mysterious
disappearances (``He just walked around the horses a moment ago...'')

In-Real-Life: Chris Torek, Univ of MD Comp Sci (301) 454-7690
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!chris
CSNet:  chris@umcp-cs           ARPA:   chris@maryland

------------------------------

Date: 17 Sep 84 18:21:16-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Now and Then
Article-I.D.: wdl1.424

     Having spent some years working on automatic theorem proving and
program verification, I am occasionally distressed to see the ways in which
the AI community uses (and abuses) formal logic.  Always bear in mind that
for a deductive system to generate only true statements, the axioms of the
system must not imply a contradiction; in other words, it must be impossible
to deduce TRUE = FALSE.  In a system with a contradiction, any statement,
however meaningless, can be generated by deductive means.
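     The derivation takes two standard steps, in LaTeX notation (an
illustration added for reference):
        \[ P \vdash P \lor Q \qquad (\lor\mbox{-introduction}) \]
        \[ P \lor Q,\ \neg P \vdash Q \qquad (\mbox{disjunctive syllogism}) \]
Thus once TRUE = FALSE is deducible, every Q follows.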
     It is difficult to ensure the soundness of one's axioms.  See Boyer
and Moore's ``A Computational Logic'' for a description of a logic for which
soundness can be demonstrated and a program which generates inductive proofs
based on that logic.  The Boyer and Moore approach works only for mathematical
objects constructed in a specific and rigorous manner.  It is not applicable
to ``real world reasoning.''
     There are schemes such as nonmonotonic reasoning which attempt to deal
with contradictions.  These are not logical systems but heuristic systems.
Some risk of incorrect results is accepted in exchange for the ability to
``reason'' with non-rigorous data.  A clear distinction should be made between
mathematical deduction in rigorous spaces and heuristic problem solving by
semi-logical means.

                                John Nagle

------------------------------

Date: 20 Sep 1984  10:44 EDT (Thu)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: Humor & Seminar - Slimy Logic

     [Forwarded from the MIT bboard by SASW@MIT-MC.]


       The Computer Aided Conceptual Art Laboratory
                            and
           Laboratory for Graduate Student Lunch
                          presents

                         SLIMY LOGIC
                              or
       INDENUMERABLY MANY TRUTH-VALUED LOGIC WITHOUT HAIR

                         by Lofty Zofty


The indenumerably many-valued logics which result from the first stage
of slime-ification are so to speak "non-standard" logics; but slimy logic,
the result of the second stage of slime-ification, is a very radical
departure indeed from classical logics, and thereby sidesteps many
fruitless preoccupations of logicians such as completeness, consistency,
axiomatization, and proof.  In this talk I attempt to counter Slimy Logic's
low and ever-declining popularity by presenting a "qualitative" view
of slimy logic in which such definitions as
                        2
        very true = true
and                                  -3/2
        not very pretty false = false

by the qualitative (i.e. so even people who don't carry
around two calculators can understand them) definitions:

        very true = true
and
        not very pretty false = ugly false

I will then use this "qualitative" slimy logic to very nearly prove
very much that Jon Doyle is probably not very right about nearly
extremely many things.

HOSTS: Robert Granville and Isaac Kohane
Refreshments will be served
Moved to the Third Floor Theory Group Playroom

------------------------------

Date: 20 September 1984 13:30-EDT
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Seminar - Analysis of Knowledge

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

DATE:     Wednesday, September 26, 1984
TIME:     Refreshments, 3:45pm
          Lecture, 4:00pm
PLACE:    NE43-453
TITLE:    ``A MODEL-THEORETIC ANALYSIS OF KNOWLEDGE''
SPEAKER:  Dr. Joseph Y. Halpern, IBM, San Jose

Understanding knowledge is a fundamental issue in many disciplines.  In
computer science, knowledge arises not only in the obvious contexts (such as
knowledge-based systems), but also in distributed systems (where the goal is to
have each processor know something, as in Byzantine agreement).  A general
semantic model of knowledge is introduced, to allow reasoning about statements
such as "He knows that I know whether or not she knows whether or not it is
raining."  This approach more naturally models a state of knowledge than
previous proposals (including Kripke structures).  Using this notion of model,
a model theory for knowledge is developed.  This theory enables one to
interpret such notions as a "finite amount of information" and "common
knowledge" in different contexts.  This is joint work with Ron Fagin and Moshe
Vardi.

HOST:    Professor Silvio Micali

------------------------------

Date: Mon 17 Sep 84 09:01:21-PDT
From: Jon Barwise <BARWISE@SU-CSLI.ARPA>
Subject: Course & Conference - Stanford Logic Meeting

           Logic, Language and Computation Meeting

The Association for Symbolic Logic (ASL) and the Center for the  Study
of Language  and Information  (CSLI) are  planning a  two-week  summer
school and  meeting, July  8-20, 1985,  at Stanford  University.   The
first week (July  8-13) will  consist of  a CSLI  Summer School,  with
courses on various topics, including PROLOG, LISP, Complexity  Theory,
Denotational Semantics,  Generalized Quantifiers,  Intensional  Logic,
and Situation Semantics.  The second week (July 15-20) will be an  ASL
meeting  with  invited  lectures  (in  Logic,  Natural  Language,  and
Computation), symposia (on "Logic in Artificial Intelligence",  "Types
in the  Study  of  Computer  and  Natural  Languages",  and  "Possible
Worlds"), and  sessions  for  contributed  papers.   Those  interested
should contact Ingrid Deiwiks, CSLI, Ventura Hall, Stanford, CA  94305
(ph 415-497-3084) before November 1, with an indication as to  whether
they would like to make a reservation for a single or shared room  and
board in  a residence  hall, and  for  what period  of time.   A  more
detailed program will be available in November.  The program committee
consists of Jon  Barwise, Solomon Feferman,  David Israel and  William
Marsh.

------------------------------

End of AIList Digest
********************
23-Sep-84 22:06:48-PDT,14148;000000000001
Mail-From: LAWS created at 23-Sep-84 22:04:18
Date: Sun 23 Sep 1984 21:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #124
To: AIList@SRI-AI


AIList Digest            Monday, 24 Sep 1984      Volume 2 : Issue 124

Today's Topics:
  Algorithms - Demonstration Idea Wanted,
  Machine Translation - SIGART Special Issue,
  Natural Language - A Generalized Phrase Structured Grammar in Prolog,
  Expert Systems & Logic Programming - Kastner's Preference Rules in Prolog
----------------------------------------------------------------------

Date: 23 Sep 84 19:30:30 PDT (Sun)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Demonstration Idea wanted

A non-network friend of mine needs to demonstrate to a class the importance
of detailed specifications.  He has been trying to find a task which is easy
to do but hard to describe, so that half of the class can write descriptions
which the other half will follow literally and thereby fail to accomplish
the described task.  Anyone have any ideas other than tying shoelaces or
cooking beef wellington?  (Many people don't wear laced shoes and the
facilities available aren't up to cooking :-)).  Thanks!

-Mike
<mab@aids-unix>

------------------------------

Date: Thu, 20 Sep 84 20:41 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: SIGART Special Section on Machine Translation


                 ACM SIGART SPECIAL SECTION
         ON MACHINE TRANSLATION AND RELATED TOPICS

     A special section on MT and related work is planned for
an early 1985 issue of the SIGART Newsletter.

     The purpose of the section is:

     1.  To update the knowledge of the new paradigms in  MT
         in the AI community

     2.  To help MT workers to learn about  developments  in
         AI that can be useful for them in their projects

     3.  To  provide   the   MT   community   with   updated
         information  about  current,  recent and nascent MT
         projects

     4.  To  help  identify  major  topics,   results   and,
         especially, directions for future research.

Contributions are solicited from MT workers, as well as  all
workers   in  AI,  theoretical,  computational  and  applied
linguistics and other related fields  who  feel  that  their
work  has  a bearing on MT (machine-aided human translation;
automatic dictionary  management;   parsing  and  generating
natural  language;  knowledge representation for specialized
domains;  discourse analysis;  sublanguages  and  subworlds,
etc., etc.)

     A detailed questionnaire to help  you  in  preparing  a
response is available from the guest editor,

Sergei Nirenburg
Department of Computer Science
Colgate University
Hamilton NY 13346 USA
(315) 824-1000 ext. 586
nirenburg@umass

     If  you  know  of  people  interested   in   MT-related
activities  who are not on a net, please let them know about
this call.

     The deadline for submissions is DECEMBER 1, 1984.
     Electronic submissions are welcome

------------------------------

Date: Thursday, 13-Sep-84 18:49:25-BST
From: O'Keefe HPS (on ERCC DEC-10)
Subject: Availability of a GPSG system in Prolog

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

This message is composed of extracts from the ProGram manual.

ProGram is a suite of Prolog programs that are intended to permit
the design, evaluation, and debugging of computer realizations of
phrase structure grammars for large fragments of natural languages.
The grammar representation language employed is that known as GPSG
(Generalized Phrase Structure Grammar).  A GPSG grammar, as far as
ProGram is concerned, has up to nine components as follows:

        1. Specification of feature syntax.
        2. Immediate dominance rules (ID rules).
        3. Metarules which operate on the ID rules.
        4. Linear precedence rules (LP rules).
        5. Feature coefficient default values.
        6. Feature co-occurrence restrictions.
        7. Feature aliasing data.
        8. Root admissibility conditions.
        9. A lexicon.

All the major conventions described in the GPSG literature are
implemented, including the Head Feature Convention, the Foot
Feature Principle (and hence slash categories &c), the Control
Agreement Principle, the Conjunct Realisation Principle, lexical
subcategorisation  and rule instantiation incorporating the notion
of privilege.

All the major parts of the grammar interpreter code are written
in standard Prolog (Clocksin&Mellish).  Installation of the
system should be fairly simple on any machine of moderate size
which supports Prolog.
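
For flavour, here is a toy encoding of ID and LP rules in Prolog -- an
invented sketch, not ProGram's actual representation (id_rule/2,
lp_rule/2, and the helper predicates are made-up names):

id_rule(s, [np, vp]).           % ID rule: S dominates NP and VP, unordered
lp_rule(np, vp).                % LP rule: NP precedes VP when both appear

% an admissible expansion is any ordering of the daughters that
% violates no LP rule
admissible(Mother, Ordered) :-
        id_rule(Mother, Daughters),
        perm(Daughters, Ordered),
        respects_lp(Ordered).

respects_lp([]).
respects_lp([X|Rest]) :-
        \+ (member(Y, Rest), lp_rule(Y, X)),
        respects_lp(Rest).

perm([], []).
perm(List, [X|Perm]) :-
        sel(X, List, Rest),
        perm(Rest, Perm).

sel(X, [X|Tail], Tail).
sel(X, [Head|Tail], [Head|Rest]) :-
        sel(X, Tail, Rest).

member(X, [X|_]).
member(X, [_|Tail]) :-
        member(X, Tail).

Here admissible(s, [np, vp]) succeeds while admissible(s, [vp, np])
fails: the ID/LP separation in miniature.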

                             AVAILABILITY

1.  The manual is "University of Sussex Cognitive Science Research
    Paper 35" (CSRP 035) and can be ordered from Judith Dennison,
    Cognitive Studies Programme, Arts E, University of Sussex,
    Falmer, Brighton BN1 9QN, for 7.50 pounds including postage.
2.  ProGram is part of the standard Sussex POPLOG system and is
    included, without extra charge, in all academic issues and
    updates of the POPLOG system.  POPLOG is available to UK
    academic users for the sum of 500 pounds (special arrangements
    apply to holders of SERC AI grants who have a VAX running UNIX).
    Existing UK academic POPLOG users can obtain a free update of
    the POPLOG system which will include ProGram.  POPLOG runs on
    VAXes under VMS and UNIX, and on Bleasdale BDC 680as under UNIX.
    [RAOK: The Bleasdale is a 68000, POPLOG is on SUNs too by now.]
    Non-educational customers (UK & overseas) who want ProGram with
    POPLOG should order it through System Designers Ltd, Systems
    House, 1 Pembroke Broadway, Camberley, Surrey GU15 3XH.  This
    company makes POPLOG available to educational institutions in
    the USA for 995 dollars.
3.  Academic users of other Prolog systems can obtain a magnetic tape
    in UNIX "tar" format of the Prolog code of the ProGram system
    free, together with a copy of "The Program Manual", provided they
    pay for the tape, postage, packaging, and handling costs (35 pounds).
    Copies can be ordered from Alison Mudd, Cognitive Studies
    Programme, Arts E, University of Sussex, Falmer, Brighton BN1 9QN
    A cheque for 35 pounds made payable to "The University of Sussex"
    should be enclosed with the order.



I have no connection with POPLOG, ProGram, or (save a recent visit
when I picked up the ProGram manual and saw PopLog running on its
home ground) with the University of Sussex.

Just to make sure you realise what ProGram is and isn't, it IS
meant to be a convenient toolkit for *developing* a GPSG grammar,
it is NOT meant to be the world's most efficient parser.  The manual
warns you that "in general, automatic exhaustive parsing with more
than a few rules tends to be slow".  You shouldn't need to know
any Prolog in order to use ProGram.

------------------------------

Date: Friday, 14-Sep-84 21:20:02-BST
From: O'Keefe HPS (on ERCC DEC-10)
Subject: Interpreting Kastner's Preference Rules in Prolog

[Forwarded from the Prolog Digest by Laws@SRI-AI.  This is a declarative
specification of an expert-system interpreter. -- KIL]


I've always been quite impressed by the "EXPERT" stuff being
done at Rutgers, and when I read Kastner's thesis

        Kastner, J.K.
        @i"Strategies for Expert Consultation in Therapy Planning."
        Technical Report CMB-TR-135, Department of Computer Science,
        Rutgers University, October 1983.  (PhD thesis)

I decided to write an interpreter for his rules in Prolog as an
exercise.  The first version just came up with the answer, that's the
stuff that's commented out below.  The second version left behind
information for "explanations":

    chosen(Answer, Reason, WouldHaveBeenPreferred)

Answer was the answer, Reason was the text the rule writer
gave to explain his default ordering of the treatments, and
WouldHaveBeenPreferred are the treatments we'd have preferred
in this ordering if they hadn't been contraindicated

    despite(Answer, Contraindications)

means that Answer was contraindicated by each of the problems
listed, but it was still picked because the preferred choices
had worse problems.

    rejected(Treatment, Contraindications)

means that Treatment was rejected because it had the problems
listed.  Every treatment will be rejected or chosen.  Note: in
these two facts the Contraindications are those which were
checked and found to be applicable, less severe ones may not
have been checked.  (This is a feature, the whole point of the
code in fact.)

You'll have to read Kastner's thesis to see how these rules are used,
but if you're interested in Expert Systems you'll want to read it.

Why have I sent this to the [Prolog] Digest?  Two reasons.  (1) someone
may have a use for it, and if I send it to the library it'll sink without
trace.  (2) I'm quite pleased with the "no-explanations" version, but
the "explanations" version is a bit of a mess, and if anyone can find
a cleaner way of doing it I'd be very pleased to see it.  I guess I
still don't know how best to do data base hacking.

A point which may be interesting: I originally had worst/6 binding its
second argument to 'none' where there were no new contraindications.
The mess which resulted (though it worked) reminded me of a lesson I
thought I'd learned before: it is dangerous to have an answer saying
there are no answers, because that looks like an answer.  All the
problems I had with this code came from thinking procedurally.

:-  op(900, fx, 'Is ').
:-  op(899, xf, ' true').
:-  compile([
        'util:ask.pl',          % for yesno/1
        'util:projec.pl',       % for project/3
        'prefer.pl'             % which follows
    ]).

%   File   : PREFER.PL
%   Author : R.A.O'Keefe
%   Updated: 14 September 1984
%   Purpose: Interpret Kastner's "preference rules" in Prolog

:- public
        go/0,
        og/0.

:- mode
        prefer(-, +, +, +),
        pass(+, +, +, -),
        pass(+, +, +, +, +, +, -),
        worst(+, -, +, +, -, -),
        chose(+, +, +),
        forget(+, +),
        compare_lengths(+, +, -),
        evaluate(+).


prefer(Treatment, Rationale, Contraindications, Columns) :-
        pass(Columns, [], Contraindications, Treatment),
        append(Pref1, [Treatment=_|_], Columns), !,
        project(Pref1, 1, Preferred),
        assert(chosen(Treatment, Rationale, Preferred)).


pass([Tr=Tests|U], Cu, Vu, T) :-
        worst(Tests, Rest, Cu, Vu, Cb, Vb), !,
        pass(U, [Tr=Rest], Cu, Vu, Cb, Vb, T).
pass([T=_|U], C, _, T) :-
        chose(T, U, C).


pass([], [T=_], _, _, C, _, T) :- !,
        chose(T, [], C).
pass([], B, _, _, Cb, Vb, T) :-
        reverse(B, R),
        pass(R, Cb, Vb, T).
pass([Tr=Tests|U], B, Cu, Vu, Cb, Vb, T) :-
        worst(Tests, Rest, Cu, Vu, Ct, Vt),
        compare_lengths(Vt, Vb, R),
        (   R = (<), C1 = Ct, V1 = Vt, B1 = [Tr=Rest], forget(B, Cb)
        ;   R = (=), C1 = Cb, V1 = Vb, B1 = [Tr=Rest|B]
        ;   R = (>), C1 = Cb, V1 = Vb, B1 = B, assert(rejected(Tr,Ct))
        ),  !,          % moved down from worst/6 for "efficiency"
        pass(U, B1, Cu, Vu, C1, V1, T).
pass([T=_|_], B, _, _, C, _, T) :-
        chose(T, B, C).


worst([Test|Tests], Tests, C, [X|V], [X|C], V) :-
        evaluate(Test), !.
worst([_|Tests], Rest, Cu, [_|Vu], Ct, Vt) :-
        worst(Tests, Rest, Cu, Vu, Ct, Vt).


evaluate(fail) :- !, fail.              % 'fail' marks an always-false test
evaluate(Query) :-
        known(Query, Value), !,         % answer already cached from earlier
        Value = yes.
evaluate(Query) :-
        yesno('Is ' Query ' true'),     % ask the user (yesno/1 from util:ask.pl)
        !,
        assert(known(Query, yes)).      % cache the affirmative answer
evaluate(Query) :-
        assert(known(Query, no)),       % cache the negative answer, then fail
        fail.


chose(Treatment, Rejected, Contraindications) :-
        assert(despite(Treatment, Contraindications)),
        forget(Rejected, Contraindications).


forget([], _).
forget([Treatment=_|Rejected], Contraindications) :-
        assert(rejected(Treatment, Contraindications)),
        forget(Rejected, Contraindications).


compare_lengths([], [], =).
compare_lengths([],  _, <).
compare_lengths( _, [], >).
compare_lengths([_|List1], [_|List2], R) :-
        compare_lengths(List1, List2, R).


/*----------------------------
%  Version that doesn't store explanation information:

prefer(Treatment, Rationale, Contraindications, Columns) :-
        pass(Columns, 0, [], Treatment).


pass([], _, [T=_], T) :- !.
pass([], _, B, T) :-
        reverse(B, R),
        pass(R, 0, [], T).
pass([Tr=Col|U], I, B, T) :-
        worst(Col, 1, W, Reduced),
        !,
        (   W > I, pass(U, W, [Tr=Reduced], T)
        ;   W < I, pass(U, I, B, T)
        ;   W = I, pass(U, I, [Tr=Reduced|B], T)
        ).
pass([T=_|_], _, _, T).         % no (more) contraindications


worst([], _, none, []).
worst([Condition|Rest], Depth, Depth, Rest) :-
        evaluate(Condition), !.
worst([_|Col], D, W, Residue) :-
        E is D+1,
        worst(Col, E, W, Residue).

---------------------------------------------------------------*/


antiviral(Which) :-
        evaluate(full_therapeutic_antiviral_dose_recommended),
        prefer(Which, efficacy,

         [pregnancy, resistance, severe_algy, mild_algy ], [
   ftft =[fail,      rtft,       at3,         at1       ],
   fvira=[fail,      rvira,      av3,         av1       ],
   fidu =[preg,      ridu,       ai3,         ai1       ]  ]).


go :-
        antiviral(X),
        write(X), nl,
        pp(chosen), pp(despite), pp(rejected).

og :-
        abolish(chosen, 3),
        abolish(despite, 2),
        abolish(known, 2),
        abolish(rejected, 2).

------------------------------

End of AIList Digest
********************
26-Sep-84 00:05:39-PDT,13064;000000000001
Mail-From: LAWS created at 26-Sep-84 00:03:53
Date: Tue 25 Sep 1984 23:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #125
To: AIList@SRI-AI


AIList Digest           Wednesday, 26 Sep 1984    Volume 2 : Issue 125

Today's Topics:
  Expert Systems - Foster Care Placements,
  LISP - Franz Lisp Help,
  Inductive Proof - The Heap Problem,
  Machine Translation - Natural Languages as Interlinguas,
  Seminars - Semantic Modulation & SOAR Intelligent System
  Administrivia - Current Distribution List
----------------------------------------------------------------------

Date: Sun, 23 Sep 84 22:34 EST
From: Ed Fox <fox%vpi.csnet@csnet-relay.arpa>
Subject: Expert System for Foster Care Placements

One of my students has begun a project as described below.
We are wondering if there are any similar projects that people
would be willing to let us know about.
Many thanks, Ed Fox.

 This expert system will provide assistance to social workers charged with
 finding suitable substitute care placements for children who cannot continue
 to live with their families.  The system's rules will be based on
 expert input from social workers and an analysis of a social service agency's
 case records to determine the constellation of child, natural family, and
 substitute caregivers' characteristics and environmental factors which have
 been associated with successful placements in the past.  Users will be asked
 for descriptive information about the child for whom a placement is being
 sought and his/her family.  System output will provide the social worker with
 a description(s) of substitute care settings which can be expected to suit the
 needs of the particular child and contribute to a successful placement.

------------------------------

Date: 25 Sep 1984 07:59:09-EDT
From: kushnier@NADC
Subject: Help- Franz Lisp


Help!
Does anyone have a good practical guide to Franz LISP running under UNIX
on a VAX ?
Is there a way to list the LISP environment when running the interpreter or
do you have to go in and out using the Unix editors?
Can you save the LISP environment to an editor file while you are in LISP?

P.S. I have the Franz LISP manual, but I haven't translated it to English yet.

P.P.S. I haven't even figured out what language it's written in.......

                                     Ron Kushnier
                                     kushnier@nadc.arpa

[I'm not sure what's possible under Berkeley Unix (if that's what you
have) since I'm using a VAX EUNICE system.  Our people have rigged the
EMACS editor so that it can be called from Franz, provided that you load
and then suspend EMACS before starting up Franz.  Interpreted functions
can thus be edited and newly edited functions can be run; special editor
macros facilitate this.  4.1BSD Unix lacks the interprocess mechanisms
needed to support this (LEDIT), although EMACS process windows running
Franz are possible; 4.2BSD may be more flexible.

To examine your environment while in Franz, use the pp (pretty-print)
command.  You can certainly save an environment; check out the
dumplisp and savelisp commands.  For a readable Franz tutorial get
Wilensky's new LISPcraft book.  -- KIL]

------------------------------

Date: 19 Sep 84 14:42:49-PDT (Wed)
From: ihnp4!houxm!mhuxj!mhuxn!mhuxl!ulysses!allegra!princeton!eosp1!robison
      @ Ucb-Vax.arpa
Subject: Re: Inductive proof -- the heap problem
Article-I.D.: eosp1.1131


BUT! Human beings continually reason inductively on tiny amounts
of info, often two or even one case!  We have some way of monitoring
our results and taking back some of the inductions that were wrong.
AI has to get the hang of this some day...

--- Toby Robison

------------------------------

Date: Mon, 24 Sep 84 22:28 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: natural languages as interlinguas for MT


Re: using a natural language as an interlingua in a machine translation
system

A natural language and an MT interlingua have different purposes and are
designed differently.  An interlingua should be ambiguity-free and should
facilitate automatic reasoning about the knowledge encoded in it.  A natural
language is designed to be used by truly intelligent speakers and hearers, so
that a lot of polysemy, homonymy, anaphoric phenomena, even outright errors
can be put up with -- because the understander is so sophisticated.  Brevity
is at a premium in natural language communication, not clarity.

The most recent attempt to use a language designed for humans as an MT
interlingua is the Dutch researcher A. Witkam's DLT machine translation
project.  He plans to use Binary-Coded Esperanto (BCE) as the
interlingua in a planned multilingual MT system.

An analysis of the approach shows that in reality the system involves two
complete (transfer-based) translation modules: 1) Source language to BCE; and
2) BCE to Target language.

Of the many possible points of criticism, let me mention just one: this
approach (in effect, double transfer) has nothing to do with AI methods.
If transfer is used, it is not clear why an interlingua should be involved at
all.

For some more discussion see Tucker and Nirenburg, "Machine Translation: A
Contemporary View", in the 1984 issue of the Annual Review of Information
Science and Technology.

At the same time, it would be nice to see a technical discussion of the
system by Guzman de Rojas -- is any such thing available?

Sergei

------------------------------

Date: Mon, 24 Sep 1984  15:30 EDT
From: WELD%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Semantic Modulation

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

        The AI revolving seminar this week is by David McAllester:

        SEMANTIC MODULATION:  A Relevance Based Inference Technique

        The Reasoning Utility Package RUP provides a set of
propositional inference mechanisms for constructing inference-based
data bases and reasoning systems.  This talk will present new
inference mechanisms which can be incorporated into the RUP
architecture.  These inference mechanisms reason about quantified
formulae using a technique based on the "modulation" of the
interpretation of free parameters.  By modulating the interpretation
of free parameters it is possible to perform a wide variety of
quantificational inferences without ever "consing" new formulae.
The semantic modulation inference mechanism relies on a notion
of relevance in propositional reasoning:  when a formula is proven
one can determine a subset of premises relevant to the proof.
The relevant subset is usually smaller than the set of premises actually
used in the proof.  Semantic modulation is also closely related to
the notions of "inheritance" and "virtual copy" used in semantic networks.


Time:           2:00PM          Wednesday Sept. 26  (THIS Wednesday)
Place:          7th Floor Playroom

------------------------------

Date: Tue 25 Sep 84 11:09:13-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - SOAR Intelligent System

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, September 28, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     John Laird,
             Xerox Corp.

ABSTRACT:    SOAR: An Architecture for General Intelligence

I will present recent progress in developing an architecture for general
intelligence, called Soar.   In Soar, all problem solving occurs as
search in a problem space and all knowledge is encoded as production
rules.  I will describe the Soar architecture and then present three
demonstrations of its generality and power.

1. Universal Subgoaling: All subgoals are created automatically by the
architecture whenever the problem solver is unable to carry out the
basic functions of problem solving (so that all subgoals in Soar are
also meta-goals).  All the power of Soar is available in the subgoals,
including creating new subgoals, making Soar a completely reflective
problem solver.

2. A Universal Weak Method: The weak methods emerge from knowledge about
a task instead of through explicit representation and selection.

3. R1-Soar: Although Soar was designed for general problem-solving, it
is also effective in the knowledge-intensive domains of expert systems.
This will be demonstrated by a partial implementation of the R1 expert
system in Soar.

Soar also has a general learning mechanism, called Chunking.  Paul
Rosenbloom will present this aspect of our work at the SIGLunch on
October 5.

------------------------------

Date: Tue 25 Sep 84 14:08:12-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Current Distribution List

SIGART has recently been publishing names of companies involved in AI,
which started me wondering just where AIList goes.  The following are
organizations that I mail to directly, as nearly as I can figure out
from the net names.  In some cases the digest goes to numerous campuses,
departments, or laboratories; in others it goes to a single individual.
AIList also goes to numerous sites through indirect remailings,
particularly through Usenet redistribution.  If anyone would like to
add to my list, please send a brief message to AIList-Request@SRI-AI.ARPA.

GOVERNMENT AND MILITARY:
Admiralty Surface Weapons Establishment
Air Force Institute of Technology Data Automation
Army Armament Research and Development Command
Army Aviation Systems Command
Army Communications Electronics Command
Army Engineer Topographic Laboratory
Army Materiel Systems Analysis Activity
Defense Communications Engineering Center
National Aeronautics and Space Administration
National Library of Medicine
National Research Council Board on Telecomm.-Comp. Applications
National Science Foundation
Naval Air Development Center
Naval Intelligence Processing System Support Activity
Naval Ocean Systems Center
Naval Personnel Research and Development Center
Naval Research Laboratory
Naval Surface Weapons Center
Norwegian Defence Research Establishment

LABORATORIES AND RESEARCH INSTITUTES:
Aerospace Medical Research Laboratory
Brookhaven National Laboratory
Center for Seismic Studies
Center for Studies of Language and Information
Jet Propulsion Laboratory
Lawrence Berkeley Laboratory
Lawrence Livermore Labs
Los Alamos National Laboratory
MIT Lincoln Laboratory
NASA Ames Research Center
Norwegian Telecommunication Administration Research Institute
Oak Ridge National Laboratory
Sandia National Laboratories
USC Information Sciences Institute

CORPORATIONS AND NONPROFIT ORGANIZATIONS:
ACM SIGART
Advanced Computer Communications
Advanced Information and Decision Systems
Bolt Beranek and Newman Inc.
Compion Corp.
Digital Equipment Corp.
Ford Aerospace and Communications Corp.
GTE Laboratories
General Motors Research
Hewlett-Packard Laboratories
Honeywell, Inc.
Hughes Research
IntelliGenetics
International Business Machines
Kestrel Institute
Linkabit
Litton Systems
Logicon, Inc.
Marconi Research Centre, Chelmsford
Northrop Research Center
Perceptronics
Philips
Rome Air Development Center
SRI International
Science Applications, Inc.
Software A&E
Tektronix, Inc.
Texas Instruments
The Aerospace Corporation
The MITRE Corporation
The Rand Corporation
Tymshare
Xerox Corporation

UNIVERSITIES:
Boston University
Brandeis University
Brown University
California Institute of Technology
Carnegie-Mellon University
Clemson University
Colorado State University
Columbia University
Cornell University
Georgia Institute of Technology
Grinnell College
Harvard University
Heriot-Watt University, Edinburgh
Louisiana State University
Massachusetts Institute of Technology
New Jersey Institute of Technology
New York University
Oklahoma State University
Rice University
Rochester University
Rutgers University
St. Joseph's University
Stanford University
State University of New York
University College London
University of British Columbia
University of California (Berkeley, Davis, UCF, UCI, UCLA, Santa Cruz)
University of Cambridge
University of Delaware
University of Edinburgh
University of Massachusetts
University of Michigan
University of Minnesota
University of North Carolina
University of Pennsylvania
University of South Carolina
University of Southern California
University of Tennessee
University of Texas
University of Toronto
University of Utah
University of Virginia
University of Washington
University of Wisconsin
Vanderbilt
Virginia Polytechnic Institute
Yale University

------------------------------

End of AIList Digest
********************
26-Sep-84 22:52:37-PDT,10082;000000000000
Mail-From: LAWS created at 26-Sep-84 22:49:35
Date: Wed 26 Sep 1984 22:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #126
To: AIList@SRI-AI


AIList Digest           Thursday, 27 Sep 1984     Volume 2 : Issue 126

Today's Topics:
  AI & Business - Literature Sought,
  Expert Systems - Critique Pointer & Teknowledge's M.1
----------------------------------------------------------------------

Date: 20 Sep 84 10:29:09-PDT (Thu)
From: hplabs!sdcrdcf!trwrb!trwspp!jensen @ Ucb-Vax.arpa
Subject: AI for Business

   Article-I.D.: trwspp.582

I hope that I can obtain a list of resources that apply AI
techniques to business.  Such resources would include research
bulletins, software, books, and conferences.  A while back, I
recall an AI for Business Summary being offered; perhaps one of
you still has a copy lying around on disk.  I will pass on
submissions to requesters via mail rather than a net posting.

Thank you very much for your assistance.
James Jensen

[I believe that Syming%B.CC@Berkeley is keeping an AI for Business
summary, as well as a list of interested individuals.  This is
still a suitable topic for AIList, of course.  -- KIL]

------------------------------

Date: Wed, 26 Sep 1984  10:25 EDT
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: request for info: commercialization of ai


Has anyone seen a report entitled "Commercial Applications of Expert
Systems"?  The author is Tim Johnson and it is put out by a company in
London named OVUM.  I'm wondering what perspective the report is
written from and whether or not it is worth tracking down.  Replies
can be sent directly to me at Chunka%mit-oz@mit-mc if general interest
in the topic does not exist.  Thanks,

                                      Chunka Mui

------------------------------

Date: Wed 26 Sep 84 03:44:35-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: A "populist's" view - Jerry Pournelle comments.

In Popular Computing, Nov. 84, p. 59, Jerry writes in his column THE MICRO
REVOLUTION about ARTIFICIAL EXPERTS: The computer as diagnostician has
definite limits.

Worth reading, as Jerry (love him or hate him) is a sharp and insightful
'populist' (consider this a compliment) who tries to bridge the gap
between experts and academia and does a credible job of it.  If you
keep a folder of informative articles about AI, especially ones with
an emphasis on medical applications, you'll want to add this one.

------------------------------

Date: Tue 25 Sep 84 20:06:36-PDT
From: JKAPLAN@SRI-KL.ARPA
Subject: Clarification Regarding Teknowledge's M.1 Product

I recently learned that an article by John Dvorak criticizing our M.1
product in the San Francisco Chronicle 7/29/84 was reproduced and
distributed to the AIlist.  This article presented a distorted and
factually incorrect picture of the Teknowledge product.  The author
made no attempt to contact us for information prior to publishing the
article and as far as we know, has not seen the product. The article
appears to be based solely on information from a brochure, and
hearsay.

Based on the tone and content of the article, it was apparently
written primarily for entertainment value, and so we decided it would
not be fruitful to draft a formal reply.  However, the AIlist might be
interested in a response.  [I added a note to the original article
requesting such a response.  -- KIL]

First about M.1 -

M.1 is a  knowledge engineering tool that enables technical
professionals without prior AI experience to build rule-based
consultation systems. It is designed for rapid prototyping of
large-scale applications, as well as building small-scale systems. The
product includes a four-day hands-on course, extensive documentation,
sample systems, training materials, one year of "hot-line" support,
and maintenance.

M.1 contains a variety of advanced features. Some of interest to the
AIlist types include: certainty factors; a multi-window interactive
debugging environment; explanation facility; list processing; single-
and multi-valued attributes; variables; dynamic overlays of the
knowledge base during consultations; presupposition checking;
and automatic answer "completion". However, the system was carefully
designed so that it can be learned incrementally, i.e. the beginner
doesn't have to understand or use these features.
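
For readers who have not seen a certainty factor before, here is a rough
sketch in Lisp-style notation.  This is illustrative only -- NOT M.1's
actual rule syntax; the rule, attribute names, and combination method
shown are my own invented example (the combination rule is the standard
MYCIN-style one, which M.1 may or may not use):

    ; Invented example, not from the M.1 brochure:
    (setq rule-42
      '(if (and (main-course = poultry) (preference = red))
        then (wine = pinot-noir) cf 0.7))

    ; MYCIN-style combination of two positive certainty factors
    ; supporting the same conclusion:
    (defun combine-cf (cf1 cf2)
      (+ cf1 (* cf2 (- 1.0 cf1))))
    ; (combine-cf 0.7 0.5) ==> 0.85

The point of the combination rule is that independent supporting evidence
raises confidence monotonically but can never exceed 1.0.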

An initial CPU costs $12,500 (not $12,000 as stated in the article),
which includes training.  Additional licenses cost $5,000 with
training, and $2,500 without.

Strategically, M.1 fills a gap between mainframe- or lisp
machine-based tools for AI professionals, and a variety of less
sophisticated systems available to hobbyists.

Turning to the article -


Dvorak makes basically three points:

1. The program is overpriced for personal computer software.

2. The program gives bad advice about wine.

3. Expert systems are too complex to run on micros, at least with M.1.

Let me respond briefly to  each point.

1. M.1 is not targeted to "personal computer owners" the way Wordstar
and VisiCalc are.  M.1 is neither intended nor suitable for mass
distribution.  While M.1 can be used effectively without a graduate
degree in artificial intelligence, it is still quite a distance from
business productivity tools (such as Lotus 1-2-3) for non-technical
computer users.

Rather, it is a tool for technical professionals.  We decided to host
the system on the IBM Personal Computer rather than the VAX or other
environments because (a) we believed this would be more convenient for
our target customers, and (b) it was technically possible without
compromising the product.

M.1 is priced in line with similar systems that run on the IBM
Personal Computer, such as CAD/CAM tools or modelling and simulation
packages.  These systems typically appeal to a specialized audience,
and come with extensive training and support (as does M.1).

Our customers and the trade press understand the value of and
rationale for such systems. Some members of the popular and business
press do not. When we receive inquiries from these latter groups, we
explain the product positioning and provide appropriate references and
data points. We did not have this opportunity with Mr. Dvorak.

2. M.1 comes with a variety of sample knowledge systems that
illustrate various M.1 features and suggest potential areas of
application.  Skipping past extensive consultations in the M.1 brochure
with a Bank Services Advisor and a Structural Analysis Consultant, Mr.
Dvorak reprints an edited transcript of a sample system that provides
Wine Advice, in an attempt to ridicule the quality of the product.

In our brochure, the purpose of the brief wine advisor example is to
illustrate that the user's preferences can be taken into account in a
consultation, and that the user can change his or her mind part way
through a consultation. Initially, the user specifies a preference for
red wine, despite the fact that the meal contains poultry. The M.1
knowledge base naturally recommends a set of red wines.  Mr. Dvorak's
version of the consultation stops at this point. In the balance of the
consultation, the user changes to moderately sweet white wines, and is
advised to try chardonnay, riesling, chenin blanc, or soave.

While it may occasionally provide controversial advice, the wine
advisor sample system was reviewed before release by two California
wine experts, who felt that its advice was quite reasonable.

3.  Regarding Mr. Dvorak's final point, he is simply wrong. Micros in
general, and M.1 in particular, are powerful enough to solve high
value knowledge engineering problems.  Approximately 200 knowledge
base entries (facts and rules) can be loaded at any one time, and can
be overlayed dynamically if larger knowledge bases are required,
making the only practical limit the amount of disk storage. Through
the use of variables and other representational features, the language
is more concise and powerful than most of its predecessors.  Practical
systems such as the Schlumberger Dipmeter Advisor and the PUFF system
at the Pacific Medical Center in San Francisco use knowledge bases
that could fit easily within the M.1 system without overlays.

For pedagogical purposes, we reimplemented a subset of SACON, a system
originally developed at Stanford University using EMYCIN, as a sample
system.  SACON provides advice to structural engineers on the use of a
complex structural analysis Fortran program.  Our sample system
demonstrates that M.1 has sufficient functionality at reasonable speed
to accomplish this task. (The current version does NOT contain the
entire original knowledge base - time and project resource constraints
precluded our doing a complete translation. It includes all questions
and control rules, which account for about 50% of the original system,
but only about half of the judgmental rules, using no overlays. The
reimplementation can run the standard consultation examples from the
SACON literature.)



AIlist readers may be interested to know that M.1 has been selling
very well since its introduction in June. Our customers have been
extremely pleased with the system - many have prototyped serious
applications in a short period of time after taking the course, and at
a cost far below their available alternatives.

For more serious reviews of M.1, may I refer you to

Rosann Stach
Manager of Corporate Development and Public Relations
Teknowledge Inc
525 University Ave
Palo Alto, CA
415-327-6600


                                Jerry Kaplan
                                Chief Development Officer
                                Teknowledge

------------------------------

End of AIList Digest
********************
27-Sep-84 23:57:43-PDT,11170;000000000000
Mail-From: LAWS created at 27-Sep-84 23:55:36
Date: Thu 27 Sep 1984 23:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #127
To: AIList@SRI-AI


AIList Digest            Friday, 28 Sep 1984      Volume 2 : Issue 127

Today's Topics:
  Computer Music - Mailing List,
  Expert Systems - Windows,
  Machine Translation - Natural Languages as Interlinguas,
  Natural Language - Idioms,
  Logic - Induction and Deduction,
  Seminar - Anatomical Analogy for Linguistics
----------------------------------------------------------------------

Date: 26 September 1984 1043-EDT
From: Roger Dannenberg at CMU-CS-A
Subject: Computer Music Mailing List

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        If you are interested in announcements pertaining to computer music
(such as the one you are reading), send mail to Dannenberg@cmu-cs-a and
I'll put you on my mailing list.
        First announcement: there will be a seminar on Monday, October 8,
from 11 to 1 with pre-presentations of 3 talks from the 1984 International
Computer Music Conference.  Please let me know if you plan to attend.

------------------------------

Date: Thu 27 Sep 84 10:09:16-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Windows and Expert Systems

Has anyone else become bothered by the recent apparent equation between
window packages and expert system tools?  The recent spiel on Teknowledge's
M.1 takes care to mention that it provides windows (along with other features).
However, other vendors (for instance all of those at the recent AAAI) seem
to emphasize their window and menu capabilities at the expense of actual
reasoning capacity.  Recent papers on expert systems at both AAAIs and IJCAIs
include the obligatory picture of a screen with all the capabilities being
shown at once (even if they're not really related to the paper's content).
What's going on?
Does a window system really have something substantial to offer expert systems
development?  If so, what is it?  Ultra-high bandwidth for display, so that
the system doesn't have to decide what the user wants to see - it just shows
everything?  Do people get entranced by all the pretty pictures?  Ease of
managing multiple processes (what expert system tools can even employ multiple
communicating processes)?  We've got zillions of machines with window systems
around here, but they seem supremely irrelevant to the process of expert
system development (perhaps because I tend to regard a system that requires
only low-bandwidth communication to be more inherently intelligent - it has
to do more inference to supply missing information).  Can anyone give a solid
justification for windows being an essential part of an expert systems tool?
(Please no one say anything about it being easier to sell tools with flashy
graphics...)

                                                        stan shebs

------------------------------

Date: 26 Sep 1984 09:33-PDT (Wednesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: natural languages as interlinguas for MT

        Sergei Nirenburg's statement that "a natural language and an
MT interlingua have different purposes and are designed differently"
is false and reveals an incorrect premise underlying much linguistic and
AI research.  There is a natural language, spoken between 1000 B.C. and
1900 A.D. and used amongst a scientific community, which was
ambiguity-free (in some senses syntax-free) and which facilitated
automatic inference.  Instead of saying "John gave Mary a book" these
scientists would say "there was a giving event, having as agent John,
who is qualified by singularity... etc."
        I have shown this well-developed system to be equivalent to
certain semantic net systems, and in some cases the ancient language
is even more specific.
        The language is an obscure branch of Indo-Iranian of which there
are no translations, but the originals are extant.
        Natural languages CAN serve as interlingua.

Rick Briggs
briggs@riacs

------------------------------

Date: Thu 27 Sep 84 10:58:36-CDT
From: David Throop <LRC.Throop@UTEXAS-20.ARPA>
Subject: Re: Having no crime rate & other text curiosities

  Continuing the consideration of texts that contain mistakes but are still
comprehensible:
  Another example, this from the Summer '84 issue of Foreign Affairs (p 1077):

  "In nine months... the [Argentine] peso fell in value by more than 400
percent."

------------------------------

Date: 9 Sep 84 10:06:00-PDT (Sun)
From: hplabs!hp-pcd!hpfclk!fritz @ Ucb-Vax.arpa
Subject: Re: Inductive Proof - The Heap Problem
Article-I.D.: hpfclk.75500005

    As an example of improper induction, consider the heap problem.
    A "heap" of one speck (e.g., of flour) is definitely a small heap.
    If you add one speck to a small heap, you still have a small heap.
    Therefore all heaps are small heaps.
                                        -- Ken Laws

That's a little like saying, "The girl next to me is blonde.  The
girl next to her is blonde.  Therefore all girls are blonde."  (Or,
"3 is a prime, 5 is a prime; therefore all odd numbers are prime.")

An observation of 2 (or 3, or 20, or N) samples does *not* an inductive
proof make.  In order to have an inductive proof, you must show that
the observation can be extended to ALL cases.

    [I disagree with Gary's analysis of the flaw.  I didn't say "if
    you add one speck to a one-speck heap", I said that you could add
    one speck to a (i.e., any) small heap.  -- KIL]


Mathematician's proof that all odd numbers are prime:
  "3 is a prime, 5 is a prime, 7 is a prime; therefore, by INDUCTION,
  all odd numbers are prime."

Physicist's proof:
  "3 is a prime, 5 is a prime, 7 is a prime,... uhh, experimental error ...
   11 is a prime, 13 is a prime, ...."

Electrical Engineer's proof:
  "3 is a prime, 5 is a prime, 7 is a prime, 9 is a prime, 11 is a prime..."

Computer Scientist's proof:
  "3 is a prime, 5 is a prime, 7 is a prime,
                               7 is a prime,
                               7 is a prime,
                               7 is a prime,
                               7 is a prime, ..."

Gary Fritz
Hewlett Packard Co
{ihnp4,hplabs}!hpfcla!hpfclk!fritz

------------------------------

Date: Wed 26 Sep 84 10:42:28-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: Induction

There's another name for "induction" on one case: generalization.  Lenat's
AM and the Boyer-Moore theorem prover are both capable of doing
generalizations, and there are probably others that can do it also.
Not too hard really;  if you've set up just the right formalism,
generalization amounts to easily-implemented syntactic mutations (now
all we need is a program to come up with the right formalisms!)

                                                        stan shebs

------------------------------

Date: 17 Sep 84 9:03:48-PDT (Mon)
From: hplabs!pesnta!scc!steiny @ Ucb-Vax.arpa
Subject: Re: induction vs. deduction
Article-I.D.: scc.156

A point about logical induction that has not come up: Charles
Sanders Peirce (who coined the term "pragmatism") argued
that one could never prove anything inductively.  We believe
that every human will die eventually, and we reason to that
conclusion inductively.  We do not, however, have records on
every human that has ever existed, and humans that are still
alive offer no evidence to support the statement "all humans die".

        Peirce (being pragmatic) did not think we should
throw away the principle just because we can't prove anything
with it.  He suggested renaming it "reduction" (and renaming
deduction "abduction").  This would leave the word
"induction" available for those special cases where
we do have all the evidence.
--
Don Steiny - Personetics @ (408) 425-0382
109 Torrey Pine Terr.
Santa Cruz, Calif. 95060
ihnp4!pesnta  -\
fortune!idsvax -> scc!steiny
ucbvax!twg    -/

------------------------------

Date: Wed, 26 Sep 84 17:27:31 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Anatomical Analogy for Linguistics

                BERKELEY COGNITIVE SCIENCE PROGRAM
                            Fall 1984
              Cognitive Science Seminar -- IDS 237A

       TIME:                Tuesday, October 2, 11 - 12:30
       PLACE:               240 Bechtel Engineering Center
       DISCUSSION:          12:30 - 2 in 200 Building T-4

   SPEAKER:        Jerry Sadock, Center for the Advanced  Study
                   in   the  Behavioral  Sciences;  Linguistics
                   Department, University of Chicago

   TITLE:          Linguistics as Anatomy

   ABSTRACT:       The notion of modularity in linguistic  sys-
                   tems  is often supported by invoking an ana-
                   tomical metaphor in which the  various  sub-
                   systems  of the grammar are the analogues of
                   the organs of the body.  The primitive  view
                   of  anatomy  that  is employed supposes that
                   the organs are entirely separate in internal
                   structure, nonoverlapping in function, shar-
                   ply  distinguished  from  one  another,  and
                   entirely autonomous in their internal opera-
                   tion.

                   There is a great deal of suggestive evidence
                   from  language  systems  that  calls many of
                   these assumptions into  question  and  indi-
                   cates  that there are transmodular `systems'
                   that form part of the internal structure  of
                   various  modules,  that there is a good deal
                   of redundancy of function between  grammati-
                   cal  components,  that the boundaries of the
                   modules are unsharp, and that  the  workings
                   of  one module can be sensitive to the work-
                   ings of another.  These facts do  not  speak
                   against  either the basic notion of modular-
                   ity of grammar or  the  anatomical  analogy,
                   but  rather  suggest  that  the structure of
                   grammatical systems is to be compared with a
                   more  sophisticated view of the structure of
                   physical organic systems than has been popu-
                   larly employed.

                   The appropriate analogy is not only biologi-
                   cally more realistic, but also holds out the
                   hope of yielding better accounts of  certain
                   otherwise    puzzling    natural    language
                   phenomena.

------------------------------

End of AIList Digest
********************
 1-Oct-84 10:19:13-PDT,11726;000000000000
Mail-From: LAWS created at  1-Oct-84 10:16:27
Date: Mon  1 Oct 1984 10:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #128
To: AIList@SRI-AI


AIList Digest             Monday, 1 Oct 1984      Volume 2 : Issue 128

Today's Topics:
  Education - Top Ten Graduate Programs,
  Natural Language - ELIZA source request,
  AI Tools - OPS5 & VMS LISPs & Tektronix 4404 AI Machine,
  Bindings - Syntelligence,
  AI Survey - Tim Johnson's Report,
  Expert Systems - John Dvorak's Column & Windows,
  Knowledge Representation - Generalization,
  Machine Translation - Natural Languages as Interlingua
----------------------------------------------------------------------

Date: 22 Sep 84 0:39:37-PDT (Sat)
From: hplabs!sdcrdcf!sdcsvax!daryoush @ Ucb-Vax.arpa
Subject: Top Ten
Article-I.D.: sdcsvax.79

What are the top ten graduate programs in AI?
MIT is first I suppose.

--id

------------------------------

Date: 24 Sep 84 13:41:12-PDT (Mon)
From: hplabs!hpda!fortune!amd!dual!zehntel!zinfandel!berry @ Ucb-Vax.arpa
Subject: Humor - Top Ten
Article-I.D.: zinfande.199

    What are the top ten graduate programs in AI?
                                -- Karyoush Morshedian

To the best of my knowledge, NO AI program has ever graduated from an
accredited degree-granting institution, though I do know of a LISP
program that's a Universal Life Church minister.....


Berry Kercheval         Zehntel Inc.    (ihnp4!zehntel!zinfandel!berry)
(415)932-6900

------------------------------

Date: 26 Sep 84 18:21:17-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Top Ten
Article-I.D.: wdl1.437

     The Stanford PhD program probably ranks in the top 10.  (The MS
program is much weaker).

------------------------------

Date: 29 Sep 84 17:39:34-PDT (Sat)
From: hplabs!hao!seismo!umcp-cs!koved @ Ucb-Vax.arpa
Subject: Re: ELIZA source request
Article-I.D.: umcp-cs.171

I would also like a copy of ELIZA if someone could send it to me.
Thanks.

Larry
koved@umcp-cs or koved@maryland.arpa

Spoken: Larry Koved
Arpa:   koved.umcp-cs@CSNet-relay
Uucp:...{allegra,seismo}!umcp-cs!koved

------------------------------

Date: 26 Sep 84 18:21:31-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Info needed on OPS5
Article-I.D.: wdl1.438

    OPS5 runs in Franz Lisp on the VAX, and can be obtained from
Charles Forgy at CMU.  It can be obtained via the ARPANET, but an agreement
must be signed first.

------------------------------

Date: 26 Sep 84 18:21:46-PDT (Wed)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: VMS LISPS
Article-I.D.: wdl1.439

     And then, there is INTERLISP-VAX, the Bulgemobile of language systems.

------------------------------

Date: 27 Sep 84 10:12:10-PDT (Thu)
From: hplabs!tektronix!orca!iddic!rogerm @ Ucb-Vax.arpa
Subject: Tektronix 4404 AI Machine
Article-I.D.: iddic.1822

For information on the 4404 please contact your nearest Tektronix AIM Sales
Specialist; Tektronix Incorporated.

    Farwest:  Jeff McKenna
              3003 Bunker Hill Lane
              Santa Clara, CA 95050
              (408) 496-496-0800

    Midwest:  Abe Armoni
              PO Box 165027
              Irving, TX. 75016
              (214) 258-0525

  Northwest:  Gary Belonzi
              482 Bedford St.
              Lexington, MA. 02173
              (617) 861-6800

  Southeast:  Reed Phillips
              Suite 104
              3725 National Drive
              Raleigh, NC. 27612
              (919) 782-5624

This posting is to relieve tekecs!mako!janw of fielding responses that she
doesn't have time to answer after her initial posting several weeks ago.

Thank you.

------------------------------

Date: Fri 28 Sep 84 13:29:11-PDT
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: NEW ADDRESS FOR SYNTELLIGENCE

           [Forwarded from the SRI bboard by Laws@SRI-AI.]


Syntelligence is pleased to announce their new Headquarters at

                           100 Hamlin Court
                            P.O. Box 3620
                         Sunnyvale, CA 94088
                             408/745-6666

Effective September 1, 1984.

------------------------------

Date: Thu 27 Sep 84 08:18:14-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Tim Johnson's Report

On AIList today, I saw that someone was asking about the report Tim Johnson
did on Commercial Applications of AI.  It was produced by Ovum Ltd. in England
and is available for about $350 from a place in Portola Valley.  I have the
address at home and can send that to you later.  The report covers AI
research and applications in the USA and UK but also covers the larger research
projects worldwide.  It is a well written and researched report.

Harry Llull

------------------------------

Date: 28 Sep 1984 15:15:09-PDT
From: smith%umn.csnet@csnet-relay.arpa
Subject: John Dvorak as an information source

  I assume that the John Dvorak who wrote the critique of M.1. is the same
one that writes a weekly column in InfoWorld.  He is not what I would consider
a reliable source of technical information about computers.  His columns
usually consist of gossip and unsupported personal opinion.  What he writes
can be interesting but I like to see facts once in a while, too.  I've read
exactly one good column of his -- it was about computer book PUBLISHING rather
than about computers or software.  He looks to me like a talented individual
who spends too much time out of his league, but is respected for it anyway.
This is common in the 'popular' computer media these days, I guess.

Rick.

------------------------------

Date: Fri 28 Sep 84 10:44:28-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Windows and Expert Systems

Reply to Shebs' flames:

No there is no direct relationship between window systems and expert
systems.  However, the goal of these vendors is to sell software
systems that make it easy to CONSTRUCT, DEBUG, and USE expert systems.
We know that high bandwidth between programmer and program makes it
easier to construct and maintain a program.  Similarly, high bandwidth
(properly employed) makes it easier to use a program.  The goal is to
reduce the cognitive load on the user/programmer, not to strive for
maximizing the cognitive load on the program.

Good software is 90% interface and 10% intelligence.

--Tom

------------------------------

Date: Fri 28 Sep 84 11:04:58-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Generalization

Reply to Shebs' other flame:

"Induction...Not too hard really;"

Shebs' comments are very naive.  Of course it isn't too hard to
construct a MECHANISM that sometimes performs inductive
generalizations properly.  However, every mechanism developed thus far
is very ad hoc.  They all rely on "having the right formalism".  In
other words, the programmer implicitly tells the program how to
generalize.  The programmer communicates a set of "biases" or
preferences through the formalism.  Many of us working in inductive
learning suspect that general techniques will not be found until we
have a THEORY that justifies our generalization mechanisms.  The
justification of induction appears to be impossible.  Appeals to the
Principle of Insufficient Reason and Occam's Razor just restate the
problem without solving it.  In essence, the problem is: What is
rational plausible inference?  When you have no knowledge about which
hypothesis is more plausible, how do you decide that one hypothesis IS
more plausible?  A justification of inductive inference must rely on
making some metaphysical assertions about the nature of the world and
the nature of knowledge.  A justification for Occam's razor, for
example, must show why syntactic simplicity necessarily corresponds to
simplicity in the real world.  This can't be true for just any
syntactic representation!  For what representations is it true?

--Tom

------------------------------

Date: Fri 28 Sep 84 14:58:48-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Natural languages as MT interlingua

I would like to hear more about the language mentioned by
briggs@riacs as a natural language suitable for use as an MT
interlanguage. Specifically, what is it called and where is
it documented? Where did he publish his demonstration that
it is equivalent to certain kinds of semantic nets?
I would also be interested to hear in what sense he means that
it is a natural language. Virtually all known natural languages
are ambiguous, in the sense that they contain sentences that are
ambiguous, but that does not mean that they cannot be used unambiguously.
An example is the use of English in mathematical writing: it is
possible to avoid ambiguity entirely by careful choice of syntax
and avoidance of anaphora.  I wonder whether Briggs' language is not
of the same sort: a natural language used in a specialized and restricted
way.

                                        Bill Poser
                                        (poser@su-csli,poser@su-russell)

------------------------------

Date: Fri, 28 Sep 84 15:14:41 PDT
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: Natural Languages

A recent comment was made that natural languages can serve as an
interlingua.  I disagree.  There's an ancient language used by scientists
to communicate that's called "mathematics"... but is that a
"natural" language?   Natural languages have certain features, namely,
ambiguity, reference to complex conceptualizations regarding human
affairs, and abbreviated messages (that is,  you only say a tiny bit
of what you mean,  and rely on the intelligence of the listener to
combine his/her knowledge with the current context to reconstruct
everything you left out).  If that ancient language spoken by Iranian
scientists was unambiguous and unabbreviated,  then it's probably
about as "natural" as mathematics is as a language.  Then, also, there's
LOGLAN,  where,  when you say (in it) "every sailor loves some woman",  you
specify whether each sailor has his own woman or whether everyone
loves the same woman.  Fine, but I'd hate to have to use it as an
everyday "natural" language for getting around.  Natural languages
are complicated because people are intelligent.  The job of AI NLP
researchers is to gain insight into natural languages (and the cognitive
processes which support their comprehension) by working out  mappings
from natural languages into formal systems (i.e., realizable on stupid
machines).  It's hard enough mapping NL into something unambiguous
without mapping it into a language that itself must be parsed to remove
ambiguities and to resolve contextual references, etc.  It's conceivable
that a system could parse by a sequence of mappings into a sequence of
slightly more formal (i.e., less "natural") intermediate languages.  But then
disambiguation, etc., would have to be done over and over again.  Besides,
people don't seem to be doing that.   Natural languages and formal languages
serve different purposes.  English is currently used as an "interlingua"
by the world community,  but that is using the term "interlingua" in a
different sense.  The interlingua we need for NLP research should not
be "natural".

------------------------------

End of AIList Digest
********************
 2-Oct-84 09:27:47-PDT,15108;000000000000
Mail-From: LAWS created at  2-Oct-84 09:22:53
Date: Tue  2 Oct 1984 09:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #129
To: AIList@SRI-AI


AIList Digest            Tuesday, 2 Oct 1984      Volume 2 : Issue 129

Today's Topics:
  Bindings - Syntelligence Address Correction,
  Induction - Induction on One Case,
  Machine Translation - Sanskrit,
  Humor - Onyx BC8820 Stone Block Reader,
  Seminar - Learning in SOAR,
  Conference - Knowledge-Based Command and Control
----------------------------------------------------------------------

Date: 01 Oct 84  1144 PDT
From: Russell Greiner <RDG@SU-AI.ARPA>
Subject: Syntelligence: Address Correction

Syntelligence, an AI company specializing in building
expert systems for business applications, has just moved.
Its new address and phone number are

        Syntelligence
        1000 Hamlin Court          [not 100]
        PO Box 3620
        Sunnyvale, CA 94088
        (408) 745-6666

Dr Peter Hart, its president, can also be reached as
HART@SRI-AI.arpa.  (This net address should only be used for
professional (e.g., AAAI related) reasons.)

------------------------------

Date: Mon 1 Oct 84 14:10:23-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: Induction on One Case

(My my, people seem to get upset, even when I think I'm making
 noncontroversial statements...)

It wasn't clear whether Tom Dietterich (and maybe others) understood
my remark on induction.  I was merely pointing out that "induction on
one case" is indistinguishable from "generalization".  Simple-minded
generalization IS easy.  Suppose I have as input a Lisp list (A B),
(presumably the first in a stream), and I tell my machine to create
some hypotheses about what it expects to see next.  Possible hypotheses
are:

  (A B)         - the machine expects to see (A B) forever
  (?X B)        - the machine expects to see 2nd element B
  (A ?X)        - similarly
  (?X ?Y)       - 2-element lists

Since these are lists, presumably one could get more elaborate...

  (?X ?Y optional ?Z)
  ...

And end up with "the most general hypothesis":

  ?X

All of these patterns can be produced just by knowing how to form
Lisp lists;  I don't think there's any hidden assumptions or biases
(please enlighten me if there are).  I would say that in general,
one can exhaustively generate all hypotheses, when the domains
are completely specified (i.e. a pattern like (<or A B> B) for the
above example has an undefined entity "or" which has nothing to do
with Lisp lists; one would have to extend the domains in which one
is operating).  Generating hypotheses in a more reasonable order is
completely domain-dependent (and no general theory is known).

Getting back to the example, all of the hypotheses are equally
plausible, since there is only one case to work from (unless one
wants to arbitrarily rank these hypotheses somehow; but none can
be excluded at this point).
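
To make the enumeration concrete, here is a minimal Lisp sketch (mine,
not part of Shebs' message) that generates exactly these patterns for a
flat list, given a supply of fresh pattern variables:

    ; Sketch only: each element is either kept or replaced by the
    ; corresponding fresh variable from VARS.
    (defun generalizations (lst vars)
      (if (null lst)
          (list nil)
          (let ((rest (generalizations (cdr lst) (cdr vars))))
            (append (mapcar #'(lambda (r) (cons (car lst) r)) rest)
                    (mapcar #'(lambda (r) (cons (car vars) r)) rest)))))

    ; (generalizations '(a b) '(?x ?y))
    ;  ==> ((A B) (A ?Y) (?X B) (?X ?Y))

This reproduces the four hypotheses above (modulo variable naming); the
"optional ?Z" patterns require extending the pattern language beyond
plain lists, as noted.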

I agree that selecting representations is very hard; there's not
even any consensus about what representations are useful, let alone
about how to select an appropriate one in particular cases.

(Have I screwed up anywhere in this?  I really wasn't intending
to flame...)

                                                stan shebs

------------------------------

Date: 1 Oct 1984 16:01-PDT (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sanskrit

        In response to the flood of messages I received concerning the
ambiguity-free natural language, here is some more information about it.
        The language is a branch of Sastric Sanskrit which flourished
between the 4th century B.C. and the 4th century A.D., although its
beginnings are somewhat older.  That it is unambiguous is without
question.  (I am writing two papers, one for laymen and one for those with
AI background).  A more interesting question is one posed by Dr. Michael
Dyer, that is "is it a natural language?".
        The answer is yes, it is natural and it is unambiguous.  It
would be difficult to call a language that was living and spoken for over a
millennium, with as rich a literature as this language has, anything but a
natural language.  The problem is that most (maybe all) of us are used
to languages like English (one of the worst) or other languages which
are poor vehicles for the transmission of logical data.  We have
assumed that since all known languages have ambiguity, ambiguity must be
a necessary property of natural languages, but there is no reason to
make this assumption.  The complaint that it is awkward to speak
with the precision required to rule out ambiguity is one based on
(I would guess) the properties of English or other common Indo-European
languages.
        If one were to take a specific formulation such as a semantic
net and "read" it in English the result is a cumbersome mass of
detail which nobody would be willing to use in ordinary communication.
However, if one were to take that same semantic net and translate it
into the language I am studying you get (probably) one very long word
with a series of affixes which convey very compactly the actual meaning
of the semantic net.  In other words, translations from this language
to English are of the same nature as those from a semantic net to
English (hence the equivalence to semantic nets), one compact structure
to a long paragraph.
        The facility and ease with which these Indians communicated
indicates that it is possible for a natural language to serve all
purposes of artificial languages based on logic.  If one could say
what one wishes to say with absolute clarity (although with apparent
redundancy) in the same time and with the same ease as you say
part of what you mean in English, why not do so?  And if a population
actually got used to talking in this way there would be much more
clarity and less confusion in our communication.  Sastric Sanskrit
allows you to say WHAT YOU MEAN without effort.  The questions
"Can you elaborate on that?" or "What exactly are you trying to say?"
would simply not come up unless the hearer wished to go to a deeper
level of detail.
        This language was used in much the same way as language found
in technical journals today.  Scientists would communicate orally
and in writing in this language.  It is certainly a natural language.
        As to how this is accomplished, basically SYNTAX IS ELIMINATED.
Word order is unimportant, speaking is thus comparable to adding a
series of facts to a data-base.
        What interests me about this language is:
        1) Many theories derived recently in Linguistics and AI were
           independently in use over a thousand years ago, without
           computers or any need to eliminate ambiguity except for
           precise thinking and communication
        2) A natural language can serve as a mathematical (or artificial)
           language, and thus the dichotomy between the two is false.
        3) There are methods for translating "regular" Sanskrit into
           Sastric Sanskrit, from which much could be learned for NLP
           research.
        4) The possibilities of this language serving as interlingua
           for MT.

        There are no translated texts and it takes Sanskrit experts a
very long time to analyze the texts, so a translation of a full work
in this language is a way off.  However, those interested can get
hold of "Vaiyakarana-Siddhanta-Laghu-Manjusa" by Nagesha Bhatta.

Rick Briggs
NASA Ames

------------------------------

Date: Thu, 27 Sep 84 16:05:37 edt
From: Walter Hamscher <walter@mit-htvax>
Subject: Onyx BC8820 Stone Block Reader

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

Professor Petra Hechtman of the Archaeology Dept has an Egyptian
tombstone written in Hieroglyphs on an Onyx C8002 system running
ONYX IV.II that he needs to read.  The Onyx system that the
block was written with has died (legend has it that it is archived
in the temple of Tymsharin).  He needs to get the data off the
rock soon so that the exact date of Graduate Student Lunches can
be calculated (the most recent prediction fixes the date of the
next "bologna eclipse" as Friday the 28th at noon in the Third Floor
Playroom, hosted by David "Saz" Saslov and Mike "Mpw" Wellman).
According to Data Gene-rock, the original Filer was 1/4 cubit,
6250 spd (strokes per digit), 90 RAs, up to 10K BC.  Anyone who has,
knows of, or has chips off the original device that might be
able to decipher the stone, please contact Prof. Hechtman at
x5848, or at /dev/null@mit-htvax.

------------------------------

Date: Mon 1 Oct 84 10:25:14-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Learning in SOAR

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, October 5, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Paul S. Rosenbloom
             Assistant Professor

ABSTRACT:    Towards Chunking as a General Learning Mechanism

Chunks have long been proposed as a basic organizational unit for
human memory.  More recently chunks have been used to model human
learning on simple perceptual-motor skills.  In this talk, I will
present recent progress in extending chunking to be a general learning
mechanism by implementing it within a general problem solver.
Combining chunking with the SOAR problem-solving architecture
(described by John Laird in the SigLunch of September 28) we can take
significant steps toward a general problem solver that can learn about
all aspects of its own behavior.  The combination of a simple learning
mechanism (chunking) with a sophisticated problem-solver (SOAR)
yields: (1) practice speed-ups, (2) transfer of learning between
related tasks, (3) strategy acquisition, (4) automatic
knowledge-acquisition, and (5) the learning of general macro-operators
of the type used by Korf (1983) to solve Rubik's cube.  These types of
learning are demonstrated for traditional search-based tasks, such as
tic-tac-toe and the eight puzzle, and for R1-SOAR (a reformulation of
a portion of the R1 expert system in SOAR).

This work has been pursued in collaboration with John Laird (Xerox
PARC) and Allen Newell (Carnegie-Mellon University).

------------------------------

Date: 24 Sep 1984 18:13-EDT
From: ABN.CJMERRICK@USC-ISID.ARPA
Subject: Conference - Knowledge-Based Command and Control


                 SYMPOSIUM & EXHIBITION ON "ARTIFICIAL
                      INTELLIGENCE" TO BE HELD IN
                         KANSAS CITY, MISSOURI


                 "THE ROLE OF KNOWLEDGE BASED SYSTEMS
                         IN COMMAND & CONTROL"

                             SPONSORED BY:
                     KANSAS CITY CHAPTER OF AFCEA

                          OCTOBER 17-19, 1984


     The Kansas City Chapter of the Armed Forces Communications and
Electronics Association is proud to announce that it is sponsoring
its Second Annual Symposium and Exhibition to discuss the applicability
of artificial intelligence and knowledge based systems to command and
control requirements, in both the military and commercial environments.
     The Symposium will be enhanced by the presence of hardware and
software exhibits, representing advances in technology related to the
theme.
     Highlights of the Symposium will include noted individuals such
as Dr. Joseph V. Braddock of the BDM Corporation addressing user
perspectives of utilizing knowledge based systems to fulfill command
and  control needs.  Dr. Robert W. Milne of the Air Force Institute
of Technology will address AI technology and its application to
command and control.
     A luncheon presentation will be given by Lieutenant General
Carl E. Vuono, Commander, Combined Arms Center, Fort Leavenworth
and Deputy Commander, Training and Doctrine Command.
     General Donn A. Starry (Ret), Vice President and General Manager,
Space Missions Group of Ford Aerospace and Communications Corporation
will be the guest speaker following the evening meal on Thursday.
     The Symposium and Exhibition will be held over a three-day
period commencing with an opening of the exhibit area and a cocktail
and hors d'oeuvres social on October 17, 1984.  Technical sessions
will begin at 8:00 a.m. on October 18.  The technical program
will consist of two high-intensity panel discussions,
a session in which pertinent papers will be presented, and two guest
lectures.

                          ABBREVIATED AGENDA

     WEDNESDAY, 17 OCTOBER 1984

1200-1700     Check in & Registration
1700-1900     Welcome Social & Exhibits Open

     THURSDAY, 18 OCTOBER 1984

0800-1145     SESSION I - Panel Discussion:  "Status and Forecast of
              AI Technology as it applies to Command and Control"
              Panel Moderator:
                   Mr. Herbert S. Hovey, Jr.
                   Director, U.S. Army Signals Warfare Laboratory
                   Vint Hill Farms Station
                   Warrenton, Virginia  22186

1145-1330     Luncheon/Guest Speaker:
                   Lieutenant General Carl E. Vuono
                   Commander, U.S. Army Combined Arms Center
                   Deputy Commander, Training and Doctrine Command
                   Fort Leavenworth, Kansas  66207

1330-1700     SESSION II - Presentation of Papers

1700-1830     Social Hour

1830-2030     Dinner/Evening Speaker:
                   General Donn A. Starry (Ret)
                   Vice President & General Manager
                   Space Missions Group of Ford Aerospace and
                   Communications Corporation

     FRIDAY, 19 OCTOBER 1984

0800-1200     SESSION III - Panel Discussion:  "User Perspectives of
              Pros and Cons of Knowledge Based Systems in Command and
              Control"
              Panel Moderator:
                   Brigadier General David M. Maddox
                   Commander, Combined Arms Operations Research Activity
                   Fort Leavenworth, Kansas  66027


To make reservations or for further information, write or call:

                   AFCEA SYMPOSIUM COMMITTEE
                   P.O. Box 456
                   Leavenworth, Kansas  66048
                   (913) 651-7800/AUTOVON 552-4721


                   MILITARY POC IS:

                   CPT (P) CHRIS MERRICK
                   CACDA, C3I DIRECTORATE
                   FORT LEAVENWORTH, KANSAS 66027-5300
                   AUTOVON:  552-4980/5338
                   COMMERCIAL:  (913) 684-4980/5338
                   ARPANET:  ABN.CJMERRICK

------------------------------

End of AIList Digest
********************
 3-Oct-84 11:04:26-PDT,13138;000000000000
Mail-From: LAWS created at  3-Oct-84 11:02:08
Date: Wed  3 Oct 1984 10:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #130
To: AIList@SRI-AI


AIList Digest           Wednesday, 3 Oct 1984     Volume 2 : Issue 130

Today's Topics:
  Games - Chess Program,
  Pattern Recognition - Minimal Spanning Trees,
  Books - Tim Johnson's Report,
  Academia - Top Graduate Programs,
  AI Tools - OPS5 & Windows,
  Games - Computer Chess Tournament & Delphi Game
----------------------------------------------------------------------

Date: Tue, 2 Oct 84 21:46:14 EDT
From: "David J. Littleboy" <Littleboy@YALE.ARPA>
Subject: Chess Request

I would like to acquire a state-of-the-art chess program, preferably better
than USCF 1500, to run on a 68000-based machine (an Apollo).  Something
written in any of the usual languages (C, Pascal) would probably be useful.
Since I intend to use it as an opponent for the learning program I am
building, I would also like the sources.  I am, of course, willing to pay for
the program.  Any pointers would be greatly appreciated.  Alternatively, does
anyone know of a commercial chess machine with an RS-232 port?

                                          Thanks much,
                                          David J. Littleboy
                                          Littleboy@Yale
                                          ...!decvax!yale!littleboy

By the way, the basic theoretical claim I start from is that the "problem
space" a chess player functions in is determined not so much by the position
at hand as by the set of ideas, plans, and experiences he brings to bear on
that position.  Thus I view chess as a planning activity, with the goals to
be planned for deriving from a player's experiences in similar positions.

------------------------------

Date: 2 Oct 1984 11:25-cst
From: "George R. Cross" <cross%lsu.csnet@csnet-relay.arpa>
Subject: MST distributions

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

I am interested in references to the following problem:

Suppose we have n points uniformly distributed in a subset S contained in
p-dimensional Euclidean space R^p:

1. What is the distribution of the largest edge length in the Minimum
Spanning Tree (MST) over the n points?  Assume Euclidean distance is
used to define the edge weights.

2. What is the distribution of the lengths of the edges in the MST?

3. What is the distribution of the size of the maximal clique?

Asymptotic results or expected values of these quantities would be
interesting also.  We expect to make use of this information in
cluster algorithms.
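
A quick empirical handle on questions 1 and 2 is Monte Carlo
simulation.  A minimal sketch, assuming present-day Python with NumPy
and SciPy (nothing below is from the original posting):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mst_edge_lengths(n, p, rng):
        """Edge lengths of the Euclidean MST of n uniform points in [0,1]^p."""
        pts = rng.random((n, p))          # n points, uniform in the unit cube
        dists = squareform(pdist(pts))    # pairwise Euclidean distance matrix
        # The sparse result's nonzero entries are the n-1 MST edge weights.
        return minimum_spanning_tree(dists).data

    rng = np.random.default_rng(0)
    longest = [mst_edge_lengths(100, 2, rng).max() for _ in range(500)]
    print(np.mean(longest), np.std(longest))  # empirical view of question 1

Repeating this over many replications gives empirical distributions
against which any asymptotic formulas can be checked.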

Thanks,
        George Cross
        Computer Science
        Louisiana State University

        CSNET: cross%lsu@csnet-relay

------------------------------

Date: Tue 2 Oct 84 09:54:13-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Tim Johnson's Report

The Commercial Application of Expert Systems Technology by Tim Johnson is
a 1984 publication from Ovum Ltd., 14 Penn Road, London N7 9RD, England.
It is also available from IPI, 164 Pecora Way, Portola Valley, Ca. 94025
and sells for $395.  The report is 382 pages and primarily covers expert
systems research in the USA and UK, although it also describes some of the
larger research projects worldwide.

Harry Llull, Stanford University Math/CS Library

------------------------------

Date: 29 Sep 84 20:19:50-PDT (Sat)
From: decvax!ittvax!dcdwest!sdcsvax!daryoush @ Ucb-Vax.arpa
Subject: Re: Top Ten
Article-I.D.: sdcsvax.149

Stanford is definitely one of the 3 best, if not THE best.

--id

------------------------------

Date: 3 Oct 84 11:41:55 EDT
From: BIESEL@RUTGERS.ARPA
Subject: OPS5 info summary.

Thanks are due to all the folks who responded to my request for information
on OPS5. What follows is a summary of this information.

There are at least three versions of OPS5 currently available:

1) DEC Compiler QA668-CM in BLISS, available to two- and four-year
degree-granting institutions for $1000. Documentation:
        AA-GH00A-TE  Forgy's Guide
        AA-BH99A-TE  DEC's User Guide

2) Forgy's version (Charles.Forgy@CMU-CS-A), running under Franz Lisp on
VAXen. A manual is also available from the same source.

3) A T Lisp version created by Dan Neiman and John Martin at ITT
(decvax!ittvax!wxlvax!martin@Berkeley). This version is also supported by
some software tools, but cannot be given away. For costs and procedures
contact John Martin.

Short courses on OPS5 are available from:
        Smart System Technology
        6870 Elm Street
        McLean, VA 22101
        (703) 448-8562

Elaine Kant and Lee Brownston@CMU-CS-A, Robert Farrell@Yale, and Nancy Martin
at Wang Labs are writing a book on OPS5, to be published this spring by
Addison-Wesley.

        Regards,
                Pete

------------------------------

Date: Mon 1 Oct 84 14:35:13-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Summary of Window Responses

I got several replies to my question about the relation between windows
and expert systems.  The consensus seemed to be that since an expert
system development environment is like a programming environment, and
since PEs are known to benefit from having multiple windows available,
windows are an important part of expert system tools.  Incidentally, the
issue of graphics is orthogonal - graphics is useful in a great number
of applications (try describing the weirder geologic formations in words!),
although perhaps not all.

I have a little trouble with both assumptions.  I looked in my nifty
collection of reprints, "Interactive Programming Environments" (Barstow,
Shrobe, and Sandewall, eds., pub. by McGraw-Hill),
and found no research supporting the second assertion.  Its main
support appeared to be anecdotal.  My own anecdotal experience
is that even experienced users spend an inordinate amount of clock
time trying to do something right, but are not aware of just how
much time they're taking (pick a menu item, oops!, undo, try again,
then search all over the screen for 5 chars of text, then go through
an elaborate sequence of ops to grab those chars, paste them in the
wrong place when your mouse hand jiggles, delete, and try again, etc).
It's interesting to note that Winograd's two papers (from 1974 and 1979)
talk about all kinds of things that a PE should have, but with no mention
of graphics anywhere.

The first assertion appears to be true, and is a sad comment on the
sophistication of today's expert system tools.  If expert system
environments are just PEs, why not just supply PEs?  What's the
important difference between a Lisp stack backtrace and a rule
system backtrace?  Why can't today's expert system tools at least
provide a TMS and some detailed explanation facilities?  Why
hasn't anybody included some meta-level knowledge about the tool
itself, as opposed to supplying an inscrutable block of code and
a (possibly correct) user's manual?  I don't understand.  It seems
as though the programming mentality reigns supreme (if you don't
understand that remark, go back and carefully reread Winograd's
1979 paper "Beyond Programming Languages", in CACM and reprinted
in the abovementioned book).

                                                        stan shebs

------------------------------

Date: Tue Oct  2 12:24:29 1984
From: mclure@sri-prism
Subject: reminder of upcoming computer chess tournament in San
         Francisco

    This is a reminder that this coming Sunday (Oct 7) will herald the
beginning of the battle of the titans at the San Francisco Hilton
"continental parlors" room at 1pm.

    Cray Blitz the reigning world champion program will attempt to
squash the vengeful Belle.  Nuchess, a perennial "top-finishing
contender" and descendent of Chess 4.5, wants a piece of the action and
would be very happy to see the Belle/Cray Blitz battle cause both to go
up in a puff of greasy, black smoke, leaving Nuchess as the top dog for
the entire year.

    It promises to be as interesting as it is every year.  You don't
have to be a computer-freak or chess-fanatic to enjoy the event.

    Come on by for a rip-roaring time.

        Stuart

------------------------------

Date: Sun Sep 30 16:02:03 1984
From: mclure@sri-prism
Subject: Delphi 15: cruncher nudges bishop

The Vote Tally
--------------
The winner is: 14 ... Ne8
There were 16 votes. We had a wide mixture. The group seemed to have
difficulty forming a plan. Many different plans were suggested.

The Machine Moves
-----------------
        Depth   Move    Time for search         Nodes      Machine's Estimate
        8 ply   h3       6 hrs, 4 mins         2.18x10^     +4% of a pawn
                (P-KR3)

                Humans                    Move        # Votes
        BR ** -- BQ BN BR BK **       14 ... Ne8        4
        ** BP ** -- BB BP BP BP       14 ... Rc8        3
        BP ** -- BP -- ** -- **       14 ... Nh5        3
        ** -- ** WP BP -- ** --       14 ... Nd7        2
        -- ** -- ** WP ** BB **       14 ... Qd7        2
        ** -- WN -- WB WN ** WP       14 ... Nxe4       1
        WP WP -- ** WQ WP WP **       14 ... Qb6        1
        WR -- ** -- WR -- WK --
             Prestige 8-ply

The machine's evaluation turned from negative to slightly positive.
Apparently it likes this position somewhat but still considers the
position even.

The Game So Far
---------------
1. e4  (P-K4)   c5 (P-QB4)  11. Be2 (B-K2)  Nxe2 (NxB)
2. Nf3 (N-KB3)  d6 (P-Q3)   12. Qxe2 (QxN)  Be7 (B-K2)
3. Bb5+(B-N5ch) Nc6 (N-QB3) 13. Nc3 (N-QB3) O-O (O-O)
4. o-o (O-O)    Bd7 (B-Q2)  14. Be3 (B-K3)  Ne8 (N-K1)
5. c3 (P-QB3)   Nf6 (N-KB3) 15. h3 (P-KR3)
6. Re1 (R-K1)   a6 (P-QR3)
7. Bf1 (B-KB1)  e5 (P-K4)
8. d4  (P-Q4)   cxd4 (PXP)
9. cxd4 (PXP)   Bg4 (B-N5)
10. d5  (P-Q5)  Nd4 (N-Q5)

Commentary
----------
    BLEE.ES@XEROX
        14  ...  Ne8 as
        14  ...  Nh5?; 15. h3 B:f3 (if 15 ... Bd7?; 16. N:e5
        and white wins a pawn) 16. Q:f3 Nf6 (now we've lost
        the bishop pair, a tempo and the knight still blockades
        the f pawn and the white queen is active...)
        (if 16 ... g6?; 16. Bh6 Ng7; 17. g4 and black can't support f5 because
        the light square bishop is gone) while
        14 ... Nd7?; 15. h3 Bh5; 16. g4 Bg6; and black has trouble supporting
        f5. I expect play to proceed:
        15. h3    Bd7
        16. g4    g6
        17. Bh6   Ng7
        18. Qd3   f5 (at last!)
        19. g:f5  g:f5

    JPERRY@SRI-KL
        In keeping with the obvious strategic plan of f5, I
        vote for 14...N-K1.  N-Q2 looks plausible but I would
        rather reserve that square for another piece.

    SMILE@UT-SALLY
        14 ... Nh5.
        Paves the way for f5. Other possibility is Qd7 first. Either
        way I believe f5 is the key (as it often is!).

    REM@MIT-MC
        I'm not much for attacking correctly, so let's prepare
        to double rooks: 14.  ...  Q-Q2 (Qd7) (It also helps a
        K-side attack if somebody else can work out the details.)

    VANGELDER@SU-SCORE
        14. ... Nxe4 (vote)
        In spite of what the master says, White can indefinitely prevent f5 by
        h3, Bd7, g4.  Will the computer find this after Ne8 by Black?
        Stronger over the board is 14 ... Nxe4.  If 15. Nxe4 f5 16. N/4g5 f4
        and Black regains the piece with advantage.  The
        majority will probably not select this move, which may
        be just as well, as attack-by-committee could present
        some real problems.  Nevertheless, the computer
        presumably saw and examined several ply on this line and
        it would be interesting to see what it thinks White's
        best defense is.  An alternate line for White is 15.
        Nxe4 f5 16.  N/4d2 e4 17.  h3 Bh5 18.  Bd4 Bg4!?  19.
        Nxe4 fxe4 20.  Qxe4 Bxf3 21.  gxf3 Rf4.
        There are many variations, but most are not decisive in
        8 ply, so the computer's evaluation function would be
        put to the acid test.

    ACHEN.PA@XEROX
        13 ... Nh5 (keep up the pressure)
        this might provoke 14 g3 Bd7, either 15 Nd2 or h4 to
        start a counter attack.  the black is hoping to exchange
        the remaining knight with queen's bishop 16 ...  Nf4
        then maybe attempt to encircle the white with Qb6
        attacking the weakside behind the pawns.  (note: if 13
        ...  Nh5 can't 14 ...  f5 for the obvious reason)

Solicitation
------------
    Your move, please?

        Replies to Arpanet: mclure@sri-prism, mclure@sri-unix or
        Usenet: ucbvax!menlo70!sri-unix!sri-prism!mclure

------------------------------

End of AIList Digest
********************
 5-Oct-84 10:02:25-PDT,20707;000000000000
Mail-From: LAWS created at  5-Oct-84 09:56:45
Date: Fri  5 Oct 1984 09:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #131
To: AIList@SRI-AI


AIList Digest             Friday, 5 Oct 1984      Volume 2 : Issue 131

Today's Topics:
  Linguistics - Sastric Sanskrit & LOGLAN & Interlinquas
----------------------------------------------------------------------

Date: Wed, 3 Oct 1984  23:55 PDT
From: KIPARSKY@SU-CSLI.ARPA
Subject: Sanskrit has ambiguity and syntax

Contrary to what Briggs claims, Shastric Sanskrit has the same kinds of
ambiguities as other natural languages. In particular, the language
allows, and the texts abundantly exemplify: (1) anaphoric pronouns
with more than one possible antecedent, (2) ambiguous scope of
quantifiers and negation, (3) ellipses, (4) lexical homonymy, (5)
morphological syncretism.  Even the special regimented language in
which Panini's grammar of Sanskrit is formalized (not a natural
language though based on Sanskrit) falls short of complete unambiguity
(see Kiparsky, Panini as a Variationist, MIT Press 1979).  The claim
that Sanskrit has no syntax is also untrue, even if syntax is
understood to mean just word order: rajna bhikshuna bhavitavyam would
normally mean "the beggar will have to become king", bhikshuna rajna
bhavitavyam "the king will have to become a beggar" --- but in any
case, there is a lot more to syntax than word order.

------------------------------

Date: Wed, 3 Oct 84 01:23:07 PDT
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: Sastric Sanskrit


Re: Rick Briggs' comments on a version of Sastric Sanskrit.

Well,  I AM incredulous!  Imagine.  The entire natural language
processing problem in AI has already been solved!  and a millennium ago!
All we need to do now is publish a 'manual' of this language and
our representational problem in NLP is over!  Since this language
can say anything you want,  and "mean exactly what you say"  and
"with no effort",  and since it is unambiguous,  it sounds like
my problems as an NLP researcher are over.

I DO have a few minor concerns (still).  The comment that
there are no translations,  and that it takes Sanskrit scholars
a "very long time"  to figure out what it says,  makes it sound to
me like maybe there are some complex interpretations going on.
Does this mean that a 'parser' of some sort is still needed?

Also,  I'd greatly appreciate a clearer reference to the book (?)
mentioned.  Who is the publisher?  Is it in English?  What year
was it published?  How can we get a copy?

Another problem:  since this language has an "extensive literature" does
that include poetry?  novels?  Are the poems unambiguous?  are there
plays on words?  metaphor?  (Can you say the equivalent of "Religion is
the opiate of the masses"?  and if not, is that natural?  if not, then
how are analogical mappings formed?) satire?  humor?  puns?
exaggeration?  fantasy?  does the language look like a bunch of horn
clauses?  (most of the phenomena in the list above involve AMBIGUITY of
context, beliefs, word senses, connotations, etc. How does the
literature avoid these features and remain literature?)

Finally,  Yale researchers have been arguing that representational
systems for story understanding require explicit conceptual structures
making use of scripts, plans, goals,  etc.  Do such constructs
(e.g. scripts) exist explicitly in the language?

does its literature make use of idioms?
e.g. "John drove Mary [home]"  vs
     "John drove Mary [to drink]"

Also,  why is English "worse" than other languages?  Chinese has
little syntax and it's ambiguous.  Latin has very free word order
with prefixes and suffixes and it's ambiguous.  Both rely heavily on
context and implicit world knowledge.  Early work by Schank
included representing a Mayan dialect (i.e. Quiche') in Conceptual
Dependency.  Quiche seems to have features standard to other natural
languages,  so how is English worse?

In the book "Reader over Your Shoulder", Graves & Hodge  have a humorous
piece about some town councilmen trying to write a leash law.
No matter how they state it,  unhappy assumptions pop up.
e.g.  "No dogs in the park without a leash"  seems to be addressed
to the dogs.  "People must take their dogs into the park on a leash"
seems to FORCE people to drag their dogs into the park (and at what hour?)
even if they don't want to do so. etc etc

what about reference?  does sastric sanskrit have pronouns?
what about IT?  does IT have THEM? etc  if so,  how does it avoid
ambiguous references?  how many different types of pronouns does it
have (if any)?

Let's have some specific examples.  E.g. does it have the equivalent of
the word "like"?  Before you answer "yes",  there's a difference
between "John likes newsweek"  and "John likes chocolate"

In one case we want our computer to infer that John likes to "eat"
chocolate  (not read it)  and in the other case that he likes to
read newsweek (not eat it).    Sure,  I COULD have said
"John likes to eat chocolate" specifically.  but I can abbreviate
that simply to "x likes <object>"  and let the intelligent listener
figure out what I mean.   When I say "John likes to eat chocolate"
do I mean he enjoys the activity of eating,  or that he feels
better after he's eaten?   When I say "John likes to eat
chocolate but feels terrible afterwards"  I used the word "but"
because I know it violated a standard inference on the part of the
listener.  Natural languages are "expectation-based".  Does this
ancient language require the speaker to explicitly state all
inferences & expectations?

Like I said already,  if this ancient language really does what
is claimed,  then we should all dump the puny representational
systems we've been trying to invent and extend over the last
decade and adopt this ancient language as our final say
on semantics.

Recent work by Layman Allen (1st Law & Technology conference)
in normalizing American law shows that the logical connectives
used by lawyers are horribly ambiguous.  Lawyers use
content semantics to avoid noticing these logical ambiguities.
Does this brand of sanskrit have a text of ancient law?  What
connectives did they use?  Maybe the legal normalization problem
has also already been solved.

Did they have a dictionary?  If so, can we see some of the entries?  How
do the dictionary entries combine?  No syntax AT ALL?  Loglan adds
suffixes onto everything and it's plenty awkward.  It has people who
write poems in it and other "literature" but you can probably pack all
loglanners who "generate" loglanese into a single phone booth.

Just how many ancient scholars spoke this Sanskrit?

I look forward to more discussion on this incredible language.

--  A still open-minded but somewhat skeptical inquirer

------------------------------

Date: Thursday,  4-Oct-84 23:59:06-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: An Unambiguous Natural Language?


     There was a recent claim in this digest that a "branch of Sastric
Sanskrit" was an unambiguous natural language.  There are a number of
points I'd like to raise:

(a)  If there are no translated texts, and if it takes a very long
     time for an expert in "ordinary" Sanskrit to read untranslated
     texts, it seems more than likely that the appearance of being
     free from ambiguity is an illusion due to our ignorance.

(b)  Thanks for the reference.  But judging by the title you need to
     know a lot more about Indian languages to read it than most of
     the readers of this digest, and without knowing the publisher one
     would have to be thoroughly at home with the literature to even
     find it.

(c)  It's news to me that Sanskrit wasn't an Indo-European language.
     The Greek-English dictionary I have a copy of keeps pointing to
     Sanskrit roots as if the two languages were related, but what do
     they know?  If Sastric Sanskrit is an Indo-European language, it
     is astonishing that it alone is unambiguous.  It's especially
     astonishing when the one non-Indo-European language of which I
     have even the sketchiest acquaintance (Maaori) isn't unambiguous
     either and when no-one seems to be claiming that Japanese or
     Chinese or any other common living language is unambiguous.

(d)  Dead languages are peculiarly subject to claims of perfection.
     Without a living informant, we cannot tell whether our failure to
     discover another reading means there isn't one or whether it just
     means that we're ignorant of a word sense.  I suppose this is
     point (a) again.

(e)  If a language permits metaphor, it is ambiguous.  The word for
     "see" in ordinary Sanskrit is something like "oide", and I'm told
     that it can mean "understand" as well as "perceive with the eye".
     Do we KNOW that the Sastric Sanskrit words for "see", "grasp",
     and so on were NEVER employed with this meaning?

(f)  We're actually dealing with an ambiguous term here: "ambiguous".
     The following definition is the only one I can think of which is
     not dependent on some "expert's" arbitrary choice:
        a sentence S in a text is ambiguous if
        taking into account assumed common knowledge and the
        context supplied by the rest of the text
        there is some natural language L such that
        S has at least two incompatible translations in L.
     Here's an example: there are four people in a room, A, B, C, D.
     This is the beginning of the text, and nothing else in the text
     lets us judge these points, and we've never heard of A,B,C,D
     before.  A says to D: "we came from X."
     I assume we know exactly what place X is.  Now, does A mean that
        A,B,C and D all came from X?  (reminding D)
        A,B,C came from X?
        A and D came from X? (he knows B and C are listening)
        A and one of B and C came from X?
     We need to distinguish between dual and plural number, and
     between inclusive first person and exclusive first person.  If
     the language L marks the gender of plural subjects, we may need
     to know in the case of A and (B or C but not both) which of B
     and C was intended.  Now consider A mentioning to D "that table",
     assuming that there are several tables in the same room, all of
     the same sort.  We need to know whether the table he is indicating
     is near D (it can't be near A or he'd say "this table") or whether
     it is distant from both A and D.  Does the branch of Sanskrit in
     question make all these distinctions?  Can every tense in it be
     translated to a unique English tense?  Does it have no broad
     colour terms such as the "grue" present in several languages?
     Failing that, by what criterion IS it unambiguous?
     {What's a better definition of ambiguity?  This one strikes
     most people I've offered it to as too strong.}

(g)  Absence of syntax is no guarantee of unambiguity.  Consider the
     phrase "blackbird".  It doesn't matter how we indicate that
     black modifies bird, the source of ambiguity is that we don't
     know whether the referent is some generic bird that happens to
     be black (a crow, say), or whether this phrase is used as the
     name of a species.  In English you can tell the difference by
prosody, but that doesn't work too well with long-dead languages,
     and if you thought it always meant turdus merula you might never
     find anything in the fixed stock of surviving texts to reveal
     the mistake.

(h)  What evidence is there that this language was spoken?  Note that
     if a text in this language quotes someone as speaking in it,
     that still isn't evidence that the language was spoken.  I've
     just been reading a book set in Greece, with Greek characters,
     but the whole thing was in English...  Are there historians
     writing in other languages who say that the language was spoken?

(i)  There is another ambiguous term: "natural" language.  Is Esperanto
     a natural language?  Is Shelta?  The pandits were nobody's fools,
     after all, Panini invented Backus-Naur form for the express
     purpose of describing Sanskrit, and I am not so contemptuous of
     the ancient Indians as to say that they couldn't do a better job
     of designing an artificial language than Zamenhof did.

I'm not saying the language isn't unambiguous, just that it's such a
startling claim that I'll need more evidence before I believe it.

------------------------------

Date: 3 Oct 84 12:57:24-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!sdamos!elman @ Ucb-Vax.arpa
Subject: Re: Sanskrit
Article-I.D.: sdamos.17

Rick,

I am very skeptical about your claims that Sastric Sanskrit is an
unambiguous language.  I also  feel you misunderstand the nature
and consequences of ambiguity in natural human language.

    |        The language is a branch of Sastric Sanskrit which flourished
    |between the 4th century B.C and 4th century A.D., although its
    |beginnings are somewhat older.  That it is unambiguous is without
    |question.

Your judgment is probably based on written sources.  The sources may also
be technical texts.  All this indicates is that it was possible to write
in Sastric Sanskrit with a minimum of ambiguity.  So what?   Most languages
allow utterances which have no ambiguity.  Read a mathematics text.

    |The problem is that most (maybe all) of us are used
    |to languages like English (one of the worst) or other languages which
    |are so poor as vehicles of transmission of logical data.

I think you have fallen victim to the trap of egocentrism.  English is
not particularly less (or more) effective than other languages as a vehicle
for communicating logical data, although it may seem that way to
a native monolingual speaker.

    |        The facility and ease with which these Indians communicated
    |indicates that it is possible for a natural language to serve all
    |purposes of artificial languages based on logic.

How do you know how easily they communicated?   I'm serious.  And
how easily do you read a text on partial differential equations?  An
utterance which is structurally ambiguous may not be the easiest to
read.

    |If one could say what one wishes to say with absolute clarity (although
    |with apparent redundancy) in the same time and with the same ease as
    |you say part of what you mean in English, why not do so?  And if a
    |population actually got used to talking in this way there would be
    |much more clarity and less confusion in our communication.

Here we come to an important point.  You assume that the ambiguity of
natural languages results in loss of clarity.  I would argue that
in most cases the structural ambiguity in utterances is resolved
by other (linguistic or paralinguistic) means.  Meaning is determined
by a complex interaction of factors, of which surface structure is but one.
Surface ambiguity gives the language a flexibility of expression.  That
flexibility does not necessarily entail lack of clarity.  Automatic
(machine-based) parsers, on the other hand, have a very difficult time
taking all the necessary interactions into account and so must rely more
heavily on a reliable mapping of surface to base structure.

    |        As to how this is accomplished, basically SYNTAX IS ELIMINATED.
    |Word order is unimportant, speaking is thus comparable to adding a
    |series of facts to a data-base.

Oops!  Languages may have (relatively) free word order and still have
syntax.   A language without syntax would be the linguistic find of
the century!

In any event, the principal point I would like to make is that structural
ambiguity is not particularly bad nor incompatible with "logical" expression.
Human speech recognizers have a variety of means for dealing with
ambiguity.  In fact, my guess is we do better at understanding languages
which use ambiguity than languages which exclude it.

Jeff Elman
Phonetics Lab, Dept. of Linguistics, C-008
Univ. of Calif., San Diego La Jolla, CA 92093
(619) 452-2536,  (619) 452-3600

UUCP:      ...ucbvax!sdcsvax!sdamos!elman
ARPAnet:   elman@nprdc.ARPA

------------------------------

Date: Friday,  5 Oct 1984 10:15-EDT
From: jmg@Mitre-Bedford
Subject: Loglan, properties of interlinguas, and NLs as interlinguas

        There has been a running conversation regarding the use of an
intermediate language or interlingua to facilitate communication between
man and machine.  The discussion lately has focused on whether or not it
is possible or even desirable for a natural language (i.e., one which was
made for and spoken/written by humans in some historical and cultural
context) to serve in this role.  At last glance it would seem to be a
standoff between the cans and cannots.  It might be interesting to see
if a consensus can at least be reached regarding what an interlingua
might be like and therefore whether any natural languages or formal ones
for that matter would fit or could be made to fit the necessary form.
        It would seem that a candidate language would possess a fair
sample of the following characteristics (feel free to add to or modify
this list):
        1) small number of grammar rules--to reduce the trauma of learning
a new language, simplify the parsing program, and generally speed up the works
        2) small number of speech sounds--to ease learning, and, if well
chosen, improve the distinction between sounds and thus the apprehensibil-
ity of the spoken language
        3) phonologically consistent--for similar reasons as 2) above
        4) relative freedom from syntactic ambiguity--to ease translation
activities and provide an experimental tool for exploring ambiguity in
NLs and thought
        5) graphologically regular/consistent with phonology--to ease the
transition to the interlingua by introducing no new characters and only
simple spelling rules
        6) simple morphology--to improve the recognizability of words and
word types by limiting the structures of legal words to a few and making
word construction regular
        7) resolvability--to aid in machine and human information extraction,
particularly in noisy environments, by combining  well-chosen phonology and
morphology
        8) freedom from cultural or metaphysical bias--to avoid introducing
unintended effects due to specific built-in assumptions about the universe
that may be contained within the language
        9) logical clarity--to ensure the ability to construct the classical
logical connections important to semantically and linguistically useful
expressions
       10) wealth of metaphor--to allow this linguistic feature to be studied
and provide a creative tool for expression

        These features were selected to try to characterize the intent of
a hypothetical designer of an interlingua.  Possibly no product could fully
merge all the features without compromising unacceptably some of the desir-
able traits.  If this list appears unacceptable, make suggestions and/or
additions and deletions until a workable list results.
        It is likely that no current or historical natural language would
combine a sufficient number of the above features to stand out as an obvious
choice to use as interlingua.  Simplicity, regularity, ease of learning,
ease of information extraction, lack of syntactic ambiguity, and the rest
are the earmarks of a constructed language.  It remains to be seen whether a
so-constructed language can be used by humans to express  unrestrictedly the
full range of human thought.
        In response to Dr. Dyer's comment about loglan, I can testify that it
is not all that hard to get around in.  It is a "foreign" language, however,
and thus takes some learning and getting used to.  It does have several of
the features that an interlingua would.  Only experience will ultimately
reveal whether it is "natural" enough to be useful for exploring the rela-
tionship between thought and language and formal enough to be machine-
realizable.

                            -Michael Gilmer
                            jmg@MITRE-BEDFORD.ARPA

------------------------------

End of AIList Digest
********************
 7-Oct-84 09:52:21-PDT,12693;000000000000
Mail-From: LAWS created at  5-Oct-84 10:30:02
Date: Fri  5 Oct 1984 10:19-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #132
To: AIList@SRI-AI


AIList Digest            Saturday, 6 Oct 1984     Volume 2 : Issue 132

Today's Topics:
  Bindings - Query about D. Hatfield,
  Applications - AI and Business,
  AI Literature - List of Sources,
  Academia - Top Graduate Programs,
  Conference - Fifth Generation at ACM 84,
  AI Tools - OPS5 & YAPS & Window Systems & M.1,
  Scientific Method - Induction,
  Seminar - Natural Language Structure
----------------------------------------------------------------------

Date: Wed, 3 Oct 1984  15:54 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: Query about D. Hatfield

      Wed., Aug. 29 Computer Science Seminar at IBM-SJ
      10:00 A.M.  WYSIWYG PROGRAMMING
                D. Hatfield, IBM Cambridge Scientific Center
                Host:  D. Chamberlin


This message appeared some time ago.  [Can someone provide]
any pointers to the speaker, D. Hatfield?  Does he have any papers
on the same subject?  Thanks.

Fanya Montalvo, MIT, AI Lab.

------------------------------

Date: 3 Oct 84 8:39:05-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!noscvax!bloomber @ Ucb-Vax.arpa
Subject: Re: AI for Business
Article-I.D.: noscvax.641

  I would also be interested in pointers to books or articles that
emphasize the business (preferably practical) uses of AI.


                                        Thanks ... Mike
--

Real Life: Michael Bloomberg
   MILNET: bloomber@nosc
     UUCP: [ihnp4,akgua,decvax,dcdwest,ucbvax]!sdcsvax!noscvax!bloomber

------------------------------

Date: Wed, 3 Oct 84 00:05 CDT
From: Jerry Bakin <Bakin@HI-MULTICS.ARPA>
Subject: Keeping up with AI research

I am interested in following trends and research in AI.  What do active
AI'ers feel are the important journals, organizations and conferences?

Thanks,

Jerry Bakin -- Bakin@HI-Multics


[I have sent Jerry the list of journals and conferences compiled by
Larry Cipriani and published in AIList V1 N43.  In short,

    AI Magazine
    AISB Newsletter
    Annual Review in Automatic Programming
    Artificial Intelligence
    Behavioral and Brain Sciences
    Brain and Cognition
    Brain and Language
    Cognition
    Cognition and Brain Theory
    Cognitive Psychology
    Cognitive Science
    Communications of the ACM
    Computational Linguistics
    Computational Linguistics and Computer Languages
    Computer Vision, Graphics, and Image Processing
    Computing Reviews
    Human Intelligence
    IEEE Computer
    IEEE Transactions on Pattern Analysis and Machine Intelligence
    Intelligence
    International Journal of Man Machine Studies
    Journal of the ACM
    Journal of the Assn. for the Study of Perception
    New Generation Computing
    Pattern Recognition
    Robotics Age
    Robotics Today
    SIGART Newsletter
    Speech Technology

    IJCAI   International Joint Conference on AI
    AAAI    American Association for Artificial Intelligence
    TINLAP  Theoretical Issues in Natural Language Processing
    ACL     Association for Computational Linguistics
    AIM     AI in Medicine
    MLW     Machine Learning Workshop
    CVPR    Computer Vision and Pattern Recognition (formerly PRIP)
    PR      Pattern Recognition (also called ICPR)
    IUW     Image Understanding Workshop (DARPA)
    T&A     Trends and Applications (IEEE, NBS)
    DADCM   Workshop on Data Abstraction, Databases, and Conceptual Modeling
    CogSci  Cognitive Science Society
    EAIC    European AI Conference

Would anyone care to add a list of organizations?  -- KIL]

------------------------------

Date: Wed, 3 Oct 84 13:31:08 
From: Bob Woodham <woodham%ubc.csnet@csnet-relay.arpa>
Subject: Top Graduate Programs

I cannot resist offering my contribution, but first three comments:

 1. A strict linear ordering is rather meaningless so I've simply listed
    schools alphabetically within two broad categories.
 2. Not surprisingly, given my location, I've expanded things to
    all of North America.  There are good programs outside the continent
    but I'm not qualified to comment.
 3. If your favourite school is missing, let that indicate my ignorance
    rather than a slight.  Since this is roughly the advice I give our own
    students, I'd like to hear more.

Category I:   Major Strength in all Areas of AI (alphabetic order)

CMU, MIT, Stanford

Category II:  Major Strength in at least one Area of AI, adequate overall
              (alphabetic order)

Illinois, McGill, Penn, Rochester, Rutgers, Texas (at Austin), Toronto,
UBC, Yale

There are other schools with strengths, or emerging strengths, that are
worth considering.  Thankfully, I'm already beyond the requested number
of ten.  Any of the above schools could be an excellent choice, depending
on the particular area of interest.

------------------------------

Date: 3 Oct 1984 14:24-PDT
From: scacchi%usc-cse.csnet@csnet-relay.arpa
Subject: ACM 84

Just a short note to point out that the 1984 ACM Conference in San
Francisco has a number of sessions on AI and Fifth Generation
technologies. In particular, there are at least three sessions that
focus on the broader social consequences that might arise from
the widespread adoption and use of AI systems. The three sessions
include:

1. "The Workplace Impacts of Fifth Generation Computing -- AI and Office
Automation" on tuesday (9 Oct 84) morning

2. "Social and Organizational Consequences of New Generation Technology"
on tuesday afternoon.

3. "Social Implications of Artificial Intelligence" on wednesday
afternoon.

If you are able to attend the ACM 84 conference and you are interested
in discussing or learning about social analyses of AI technology
development, then you should try to attend these sessions.

-Walt-

(Scacchi@Usc-cse via CSnet)

------------------------------

Date: 2 Oct 84 16:03:48-PDT (Tue)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: obtaining OPS-5
Article-I.D.: wdl1.458

     OPS-5 is obtained from Charles Forgy at CMU, reached at the following
address.  Do not contact me regarding this.

Forgy, Charles L. (CLF)                              CHARLES.FORGY@CMU-CS-A
   Carnegie-Mellon University
   Computer Science Department
   Schenley Park
   Pittsburgh, Pennsylvania 15213
   Phone: (412) 578-3612

------------------------------

Date: Wed, 3 Oct 84 23:39:58 edt
From: mark@tove (Mark Weiser)
Subject: ops5 and yaps.

For those of you interested in ops5, don't forget YAPS.  Yaps was
described by Liz Allen of Maryland at the '83 AAAI.

Yaps, yet another production system, uses Forgy's high-speed
shortcuts for left-hand sides which fall within ops5's limited
legal lhs form, but yaps also allows fully general left-hand sides.
Yaps' second advantage over ops5 is that it is embedded in
the Franz lisp flavors system (also from Maryland), so that
one can have several simultaneous yaps objects and send them
messages like add-a-rule, add-object-to-database, etc.
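
For readers who have not seen a production system packaged as an
object, here is a toy sketch of that idea.  It is not YAPS (or ops5);
all names are made up, and it is written in present-day Python purely
for illustration:

    class RuleSystem:
        """Toy forward-chaining production system; one instance per database."""
        def __init__(self):
            self.rules = []    # (condition, action) pairs
            self.db = set()    # working memory of facts

        def add_a_rule(self, condition, action):
            self.rules.append((condition, action))

        def add_object(self, fact):
            self.db.add(fact)

        def run(self):
            """Fire rules until no rule adds a new fact."""
            changed = True
            while changed:
                changed = False
                for condition, action in self.rules:
                    new = {action(f) for f in self.db if condition(f)}
                    if not new <= self.db:
                        self.db |= new
                        changed = True

    rs = RuleSystem()
    rs.add_a_rule(lambda f: f.startswith("parent:"),
                  lambda f: "ancestor:" + f.split(":", 1)[1])
    rs.add_object("parent:tom-bob")
    rs.run()
    print(rs.db)    # {'parent:tom-bob', 'ancestor:tom-bob'}

Because each RuleSystem instance carries its own rules and database,
several can coexist and be addressed separately, which is the point
of embedding the system in an object framework like flavors.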

For more information, mail liz@maryland.

Spoken: Mark Weiser     ARPA:   mark@maryland
CSNet:  mark@umcp-cs    UUCP:   {seismo,allegra}!umcp-cs!mark

------------------------------

Date: 1 Oct 84 18:21:18-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Windows and Expert Systems
Article-I.D.: wdl1.451

    I've noticed this lately too; I've also seen the claim that ``windows were
developed ten years ago by the AI community'', but the early Alto effort at
PARC, which I saw demonstrated in 1975 by Alan Kay, was not AI-oriented; they
were working primarily on improved user interfaces, including window systems.

                                                John Nagle

------------------------------

Date: 30 Sep 84 8:30:02-PDT (Sun)
From: decvax!mcnc!unc!ulysses!burl!clyde!watmath!water!rggoebel@Ucb-Vax.arpa
Subject: Re: Clarification Regarding Teknowledge's M.1 Product
Article-I.D.: water.20

I've just read what amounts to an advertisement for Teknowledge's
M.1 software product.   I can't believe there isn't something to
be criticized in a product that comes from such an infant technology.
I'd be interested to know what's wrong with M.1?  Will Teknowledge
give it away to universities to teach students about expert systems?
Is SRI-KL using M.1 for anything (note origin of original message)?
On a lighter note, what is novel about a software system that supports
``variables?''

Randy Goebel
Logic Programming and Artificial Intelligence Group
Computer Science Department
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1
UUCP:   {decvax,ihnp4,allegra}!watmath!water!rggoebel
CSNET:  rggoebel%water@waterloo.csnet
ARPA:   rggoebel%water%waterloo.csnet@csnet-relay.arpa

[I am not aware of any SRI use of M.1, nor do I know of anyone at SRI
who has a financial interest in it.  Many people around the country
have mailboxes on systems where they once worked or otherwise have
incidental access; I assume that is the case here.  An SRI group has
recently come out with its own micro-based expert system toolkit,
SeRIES-PC, a PROSPECTOR derivative.  -- KIL]

------------------------------

Date: 1 Oct 84 22:21:20-PDT (Mon)
From: hplabs!hpda!fortune!wdl1!jbn @ Ucb-Vax.arpa
Subject: Re: Re: Clarification Regarding Teknowle
Article-I.D.: wdl1.453

     I'd like to see them offer a training version of the program for $50 or so
which allowed, say, a maximum of 50 rules, enough to try out the system but
not enough to implement a production application.  This would get the tool
(and the technology) some real exposure.

                                John Nagle

------------------------------

Date: Wed 3 Oct 84 00:05:12-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: re: Induction

Well I guess I don't understand Stan Shebs' point regarding induction
very well.  I agree with everything he said in his message: It is
indeed possible to generate all possible generalizations of some fact
within some fixed, denumerable domain of discourse.  The problem of
induction is to infer PLAUSIBLE beliefs from a finite set of examples.
Shebs is correct in saying that from any finite set of examples, a
very large (usually infinite) set of generalizations can be generated.
He is also correct in saying that--in the absence of any other
knowledge or belief--all of these generalizations are equally
plausible.  The problem is that in common-sense reasoning, all of
these generalizations are not equally plausible.  Some seem (to
people) to be more plausible than others.  This reflects some hidden
assumptions or biases held by people about the nature of the common
sense world.
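
The combinatorics behind this are easy to make concrete.  A toy
sketch, assuming present-day Python (an illustration of my own, not
from either posting): a single example with n attributes already has
2^n "dropping-condition" generalizations, each perfectly consistent
with the example, so consistency alone cannot rank them:

    from itertools import combinations

    example = {"color": "red", "size": "large", "shape": "cube"}

    def generalizations(ex):
        """All 2^n ways of replacing attribute values with a wildcard."""
        keys = list(ex)
        for r in range(len(keys) + 1):
            for dropped in combinations(keys, r):
                yield {k: ("?" if k in dropped else v) for k, v in ex.items()}

    for g in generalizations(example):
        print(g)    # 8 hypotheses, all equally consistent with the example

Whatever makes "red things" seem more plausible than "large red
cubes" as a generalization has to come from a prior bias, not from
the data alone.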

------------------------------

Date: Thu, 4 Oct 84 15:17:51 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Natural Language Structure

                      BERKELEY COGNITIVE SCIENCE PROGRAM
                                  Fall 1984
                    Cognitive Science Seminar -- IDS 237A

             TIME:                Tuesday, October 9, 11 - 12:30
             PLACE:               240 Bechtel Engineering Center
             DISCUSSION:          12:30 - 2 in 200 Building T-4

         SPEAKER:        Gilles Fauconnier, Linguistics Dept, UC  San
                         Diego & University of Paris

         TITLE:          Roles,  Space  Connectors  &  Identification
                         Paths

         ABSTRACT:       Key aspects of natural language organization
                         involve  a  general  theory  of  connections
                         linking mental constructions.   Logical  and
                         structural  analyses  have  overlooked  this
                         important  dimension,  which  unifies   many
                         superficially    complex    and    disparate
                         phenomena.  I will focus here  on  the  many
                         interpretations  of  descriptions and names,
                         and suggest a reassessment of  notions  like
                         rigidity,  attributivity,  or  ``cross-world
                         identification.''

------------------------------

End of AIList Digest
********************
 8-Oct-84 10:17:47-PDT,16005;000000000000
Mail-From: LAWS created at  8-Oct-84 10:07:46
Date: Mon  8 Oct 1984 09:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #133
To: AIList@SRI-AI


AIList Digest             Monday, 8 Oct 1984      Volume 2 : Issue 133

Today's Topics:
  Bindings - John Hosking Query,
  Workstations - Electrical CAD/CAE & TI LISP Machine,
  AI Tools - Graph Display,
  Expert Systems - Liability,
  Humor - Theorem Proving Contest,
  Comments - Zadeh & Poker,
  Seminar - First Order Logic Mechanization
----------------------------------------------------------------------

Date: Saturday,  6-Oct-84  2:12:41-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: References wanted

Anyone know where I can find anything by John Hosking,
now of Auckland University New Zealand?  Said to be in
expert systems/knowledge representation field.

------------------------------

Date: 3 Oct 84 17:08:44-PDT (Wed)
From: hplabs!intelca!qantel!dual!amd!turtlevax!ken @ Ucb-Vax.arpa
Subject: Electrical CAE software/hardware
Article-I.D.: turtleva.541

We've been gathering information about CAD/CAE for electrical/computer
engineering and have been deluged with a foot's worth of literature.
No on makes the entire package of what we want, which includes
schematic entry, hierarchical simulation, timing verification, powerful
functional specification language, finite-state machine generator, PAL
primitives, PLA and PROM high-level language specification compiling
down to JEDEC format, driver for a Data I/O or more dependable PROM/PAL
programmer, transient and frequency analysis (SPICE works well here),
symbolic, analytical, and graphical mathematics, etc.

We've accepted the fact that we will need to get several packages of
software, but are prepared to buy no more than 1 extra piece of
hardware, if we can't get software to run on our VAX or Cadlinc
workstations.

Has anyone used any of the available products?  Does anyone have any
recommendations?

Following is a list of suppliers of CAE tools of some sort, for which I
managed to get some literature, and is in no way guaranteed to be
complete:

Altera
Assisted Technology
Avera Corporation
Cad Internet, Inc.
Cadmatics
Cadnetix
Cadtec
CAE Systems
Calma
Chancellor Computer Corporation
Control Data
Daisy
Design Aids, Inc.
Futurenet
GenRad
HHB Softron
Inference Corp.
Intergraph
Interlaken Technology Corp.
Mentor
Metalogic, Inc.
Metheus
Mirashanta
Omnicad Corp.
Phoenix
Racal-Redac
Signal Technology, Inc.
Silvar-Lisco
Step Engineering
Symbolics
Teradyne
Valid
Vectron
Versatec
Via Systems
VLSI Technology, Inc.
--
Ken Turkowski @ CADLINC, Palo Alto, CA
UUCP: {amd,decwrl,flairvax,nsc}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA

------------------------------

Date: Fri 5 Oct 84 16:23:15-PDT
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: TI LISP MACHINE

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

Texas Instruments invites ACM attendees (and AIC-ers) to see the new
TI LISP machine demo-ed at the

        San Francisco Hilton
        333 O'Farrell Street
        Imperial Suite Room #1915

        Monday, October 8, 1984
        5:00pm - 8:00pm

Refreshments and hors d'oeuvres.  Bring your ACM badge for admission.

...margaret

------------------------------

Date: Sat 6 Oct 84 23:56:50-PDT
From: Scott Meyers <MEYERS@SUMEX-AIM.ARPA>
Subject: Wanted:  info on printing directed graphs

I am faced with the need to come up with an algorithm for producing
hardcopy of a directed graph, i.e. printing such a graph on a lineprinter
or a V80 plotter.  Rather than just plopping the nodes down helter-skelter,
I will have an entry node to the graph which I will place at the far left
of the plot, and then I will want to plot things so that the edges
generally point to the right.  If anyone has solved this problem or can
give me pointers to places where it has been solved, or can offer any
other assistance, I would very much like to hear from you.  Thanks.

Scott

[Scott could also use a routine printing graphs top to bottom if
that is available.  -- KIL]

------------------------------

Date: Sun, 7 Oct 84 13:47:09 pdt
From: Howard Trickey <trickey@diablo>
Subject: printing graphs

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I did a program that takes a graph description and produces a TeX input
file which in turn produces a reasonably nice looking graph on the
Dover (\special's are used to draw lines at arbitrary angles;  I can
use Boise by specifying only rectilinear lines, but it doesn't look as
good).  There's no way to use it as is for the output devices mentioned
in the previous message, but the algorithms I used may be of interest.

There can be different types of nodes, each drawn with a
user-specified TeX macro.  The graph description says which nodes there
are and of what type, and what edges there are.  Edges go to and from
symbolically specified points on nodes.  The output looks best when
the graph is acyclic or nearly acyclic, since that's what my graphs
are, so I didn't spend time on other cases.

The program isn't robust enough or easy enough to use for general use,
but I can point people to it. If you need the capability badly enough,
it's not too difficult to get used to.  It's written in Franz Lisp.

        Howard Trickey
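
A common first step for this kind of left-to-right layout, and a
reasonable starting point for the earlier request, is longest-path
layering of an acyclic graph: each node's column is the length of the
longest path reaching it, so every edge points rightward.  A minimal
sketch in present-day Python (the names and details are illustrative,
not from the program described above):

    from collections import defaultdict

    def layer_nodes(edges):
        """Assign each node a column = longest path length from a source."""
        succ = defaultdict(list)
        indeg = defaultdict(int)
        nodes = set()
        for u, v in edges:
            succ[u].append(v)
            indeg[v] += 1
            nodes.update((u, v))
        layer = {n: 0 for n in nodes}
        ready = [n for n in nodes if indeg[n] == 0]   # entry nodes: far left
        while ready:
            u = ready.pop()
            for v in succ[u]:
                layer[v] = max(layer[v], layer[u] + 1)
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
        return layer

    print(layer_nodes([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
    # a in column 0; b and c in column 1; d in column 2

Ordering the nodes within each column to reduce edge crossings is the
harder second step.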

------------------------------

Date: 3 Oct 84 12:46:11-PDT (Wed)
From: decvax!cwruecmp!atvax!ncoast!rich @ Ucb-Vax.arpa
Subject: AI decision systems - What are the risks for the vendor?
Article-I.D.: ncoast.386

The rapid advance of Artificial Intelligence Software has caused me to
wonder about some of the possible legal problems.

SITUATION:  We are a software vendor that develops an AI software package.
        This package has been tested and appears to be correct in design and
        logic.  Additionally, the package indicates several alternative
        solutions as well as stating that there could be alternatives that
        are overlooked.

        What risk from a legal standpoint does the developer/vendor have to the
        user IF they follow the recommendation of the package AND the decision
        is proven to be incorrect several months later?

I would appreciate your opinions and shall post the compiled responses
to the net.

From:                                  |   the.world!ucbvax!decvax!cwruecmp!
  Richard Garrett @ North Coast Xenix  |       {atvax!}ncoast!rich
  10205 Edgewater Drive: Cleveland, OH |...................................
   (216) 961-3397             \ 44102  |   ncoast (216) 281-8006 (300 Baud)

------------------------------

Date: Sat 6 Oct 84 14:01:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Liability

Just as damning as using an incompetent [software] advisor is
failing to use a competent one.  If a doctor's error makes you a
cripple for life, and if he had available (and perhaps even used)
an expert system counseling a better course of treatment, is he
not guilty of malpractice?  Does the doctor incur a different
liability than if he had used/not used a human consultant?

The human consultant would normally bear part of the liability.
Since you can't sue an expert system, do you sue the company
that sold it?  The programmer?  The theoretician who developed
the algorithm?  I'm sure there are abundant legal precedents for
all of the above.

For anyone with the answers to the above, here's an even more
difficult problem.  Systems for monitoring and interpreting
electrocardiograms are commonly adjusted at the "factory" to
match the diagnostic style of the purchasing physician.  Suppose
that the doctor requests that this be done, or even does it
himself.  Suppose further that he is incompetent at this type
of diagnosis (after all, he's buying a system to do it for him),
and that customization to match his preferences can be shown to
degrade the performance of the software.  Is he liable for operating
the system at less than full capability?  I assume so.  Is the
manufacturer liable for making the adjustment, or for providing
him the means of doing it himself?  I would assume that also.
What are the relative liabilities for all parties?

                                        -- Ken Laws

------------------------------

Date: 4 Oct 1984  09:51 EDT (Thu)
From: Walter Hamscher <WALTER%MIT-OZ@MIT-MC.ARPA>
Subject: GSL sponsored Theorem Proving Contest

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

               DATE: Friday, 5 October, 12 noon
               PLACE: 3rd Floor Playroom
               HOST: Reid Simmons

          REAGAN vs. MONDALE THEOREM PROVING CONTEST

To help the scientific community better assess this year's
presidential candidates, GSL (in conjunction with the Laboratory
for Computer Research and Analysis of Politics) proudly presents
the first Presidential Theorem Proving Contest.  The candidates
will have 10 minutes to prepare their proofs, 10 minutes to
present, and then 5 minutes to criticise their opponents' proofs.
A pseudorandom number generator will be used to determine the
order of presentation.  The candidates will be asked to
prove the following theorem:

* Let (a + a + a ...) be a conditionally convergent series.
        1   2   3
  Show by construction that there exists a rearrangement of
  the a  such that
       i
            lim      (a + ... a ) = 0.
          n -> inf     1       n

Note:
  To increase public interest in this contest, the theorem
  will actually be phrased in the following way:

  Let (deficit    + deficit    + deficit    ...) be a
              1980         1981         1982

  series with both positive and negative terms.
  Rearrange the terms so that:

            lim      (deficit    + ... deficit    ) = $ 0.00
         year -> inf         1980             year
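
The intended construction is the classical greedy one from Riemann's
rearrangement theorem: take positive terms while the partial sum is
at or below the target, negative terms while it is above; since the
terms of a conditionally convergent series tend to zero, the partial
sums converge to the target.  A sketch in present-day Python (purely
illustrative):

    import itertools

    def rearrange_to(target, terms, n_out):
        """Greedily rearrange a conditionally convergent series toward target."""
        pos = (t for t in terms() if t > 0)
        neg = (t for t in terms() if t < 0)
        s = 0.0
        for _ in range(n_out):
            s += next(pos) if s <= target else next(neg)
        return s

    # The alternating harmonic series 1 - 1/2 + 1/3 - ... is
    # conditionally convergent.
    terms = lambda: ((-1) ** (n + 1) / n for n in itertools.count(1))
    print(rearrange_to(0.0, terms, 100000))   # partial sum near 0

Whether either candidate can produce this in ten minutes is left to
the electorate.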

------------------------------

Date: 2 Oct 84 21:50:35-PDT (Tue)
From: hplabs!ames!jaw @ Ucb-Vax.arpa
Subject: Re: Humor & Seminar - Slimy Logic
Article-I.D.: ames.548


     This B-Board article [on slimy logic] is a master parody, right down
to the "so to speak" mannerism.  Thanks for the entertainment!

     I took a couple of courses from Professor Zadeh at Berkeley in the 70s,
not just in Fuzzy Logic, but also formal languages, where we all struggled
with LALR(1) lookahead sets.  The fuzzy controversy was raging then, with
Prof. William Kahan, numerical analyst, being Zadeh's arch-enemy.  Kahan was a
natural devil's advocate, himself none too popular for raving on, in courses
on data structures, a bit muchly about the way CDC 6400 Fortrash treated
roundoff of the 60th bit.  Apparently, there's some bad blood over the size
of Zadeh's grants (NSF?) for his fuzzy baby.  They both have had tenure for
years, so maybe a pie-throwing contest would be appropriate.

     Anyway, looks like the fuzzy stuff is now making the rounds at MIT.
Zadeh, who ironically wrote the book on linear systems (circa 1948), at
least got the linguistics department hopping with the fuzzies, influencing
the Lakoffs (George, mainly) to trade in their equally ad hoc transformational
grammars for fuzzy logic.  Kinda soured me on natural language theory, too.
I mean, is there life after YACC?

     Old Lotfi has left an interesting legacy via his children.  Zadeh's
daughter, I understand, is a brilliant lawyer.  One son, after getting his
statistics Ph.D. at 20 or so, claims to have draw poker figured out.
Bluffing is dealt with by simple probability theory.  As I remember,
"Winning Poker Systems" is one of those "just-memorize-the-equivalent-of-
ten-phone-numbers-for-instant-riches" books.  He worked his way through school
with funds won in Emeryville poker parlors.  Not too shabby, but not too
fuzzy, either ...

        -- James A. Woods  {ihnp4,hplabs,philabs}!ames!jaw  (jaw@riacs.ARPA)


[Dr. Zadeh also invented the Z-transform used in digital signal processing
and control theory.  -- KIL]

------------------------------

Date: 5 Oct 84 18:31:33-PDT (Fri)
From: hplabs!hao!seismo!rochester!rocksanne!sunybcs!gloria!colonel @
      Ucb-Vax.arpa
Subject: Re: fuzzy poker
Article-I.D.: gloria.578

    One son, after getting his statistics Ph.D. at 20 or so, claims to
    have draw poker figured out. ...

When I was working with the SUNY-Buffalo POKER GROUP, we managed to
verify some of N. Zadeh's tables with hard statistics.  Anybody who's
interested can find some of our results in Bramer's anthology _Computer
Game-Playing: Theory and Practice_ (1983).
--
Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 05 Oct 84  1318 PDT
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Continuing Seminar - FOL & First Order Logic Mechanization

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Seminar on FOL: a mechanized interpretation of logic
presented by Richard Weyhrauch

Time:  4:15 to 6:00
Date:  Alternate Tuesdays beginning October 9
Place: Room 252 Margaret Jacks Hall

The topic of this seminar is a description of FOL, a collection of structures
that can be used to provide a mechanized interpretation of logic.  We will
present specific examples of interest for logic, philosophy and artificial
intelligence to illustrate how the FOL structures give formal solutions,
or at least shed light on, some classical problems.  We will also describe
the details of FOL, a computer program for constructing these structures.
This provides a link between logic and AI.

Mechanization is an alternative foundation to both constructive and
classical logic.  I have always found constructive foundations
unconvincing.  Taken by themselves, they fail to explain how we can
understand classical semantics well enough to make the distinction.  Even
more -- a philosophically satisfactory account of reasoning must explain why,
in the comparatively well-behaved case of mathematical foundations, the
classical arguments carry conviction for practicing mathematicians.

On the other hand the use of set theoretic semantics also seems to require
infinite structures to understand elementary arguments.  This conflicts
with the simple observation that people understand these arguments and they
are built from only a finite amount of matter.

Mechanization provides a semantics that is both finitist and at the same
time allows the use of classical reasoning.

------------------------------

Date: Sat, 6 Oct 84 13:56:04 pdt
From: Vaughan Pratt <pratt@Navajo>
Subject: FOL seminar

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

    On the other hand the use of set theoretic semantics also seems to
    require infinite structures to understand elementary arguments.  This
    conflicts with the simple observation that people understand these
    arguments ...

In my day it was not uncommon for students to reason about all the reals in a
finite amount of time - in fact it was even required for exams, where you only
had three hours.  Whatever has modern mathematics come to?

    ... and they [people] are built from only a finite amount of matter.

By weight and volume, yes, but with elementary particles breeding like
rabbits one sometimes wonders about parts count.  Now here's a problem
spanning particle physics and number theory: if there exists such a thing
as an elementary particle, and if there are a fixed finite number of them in an
uncharged hydrogen atom at absolute zero, is that number prime?
-v

------------------------------

End of AIList Digest
********************
 8-Oct-84 23:11:41-PDT,19482;000000000000
Mail-From: LAWS created at  8-Oct-84 23:06:43
Date: Mon  8 Oct 1984 23:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #134
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Oct 1984      Volume 2 : Issue 134

Today's Topics:
  Seminars - AI Control Design & Fault Diagnosis & Composite Graph Theory,
  Lectures - Logic and AI,
  Program - Complexity Year at MSRI
----------------------------------------------------------------------

Date: Mon 8 Oct 84 09:31:31-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Seminar - AI Control Design

From the IEEE Grid newsletter for the SF Bay Area:

Some very exciting new ideas on the role of AI and expert systems in
control design will be presented at the Oct. 25 meeting of the Santa
Clara Valley Control Systems, Man and Cybernetics Society.

The talk, by Dr. Thomas Trankle and Lawrence Markosian of Systems
Control Technology, will report work in progress to develop an AI
system that implements a linear feedback control designer's expert
knowledge.  This AI system is a planning expert system written in LISP,
and has knowledge of linear control design rules and an interface
with a control CAD package.

The LISP code represents the design rules as operators that have
goals, preconditions, and side effects.  Higher-level operators
or "scripts" represent expert design procedures.  The control
design process takes the form of a recursive goal-directed search,
aided by the expert designer's heuristics.
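
[For concreteness, a rough sketch of the idea: design rules as operators
with goals, preconditions, and side effects, searched recursively.  The
sketch is in Python rather than the talk's LISP, and every rule name here
is invented for illustration, not taken from the talk.]

    from dataclasses import dataclass, field

    @dataclass
    class Operator:
        name: str
        goal: str                 # condition this operator can achieve
        preconditions: list       # subgoals that must hold first
        effects: list = field(default_factory=list)   # extra side effects

    def achieve(goal, state, operators):
        """Recursive goal-directed search over the design state."""
        if goal in state:
            return True
        for op in (o for o in operators if o.goal == goal):
            if all(achieve(p, state, operators) for p in op.preconditions):
                state.add(goal)
                state.update(op.effects)
                return True
        return False

    # Hypothetical control-design rules:
    rules = [
        Operator("design-compensator", "loop-stable",
                 ["plant-model-known", "gains-chosen"]),
        Operator("identify-plant", "plant-model-known", []),
        Operator("choose-gains", "gains-chosen",
                 ["plant-model-known"], effects=["bandwidth-set"]),
    ]
    state = set()
    print(achieve("loop-stable", state, rules), sorted(state))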

Cocktails at 6:30 pm, dinner ($11) at 7:00, presentation at 8:00.
Rick's Swiss Chalet, 4085 El Camino, Palo Alto
Reservations by Oct. 24, Council Office, (415) 327-6622.

------------------------------

Date: Mon 8 Oct 84 09:48:09-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Reasoning About Fault Diagnosis with LES

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, October 12, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Walter Perkins
             Lockheed Palo Alto Research & Development

ABSTRACT:    Reasoning About Fault Diagnosis with LES

The Lockheed Expert System (LES) is a generic framework for helping
knowledge engineers solve problems in diagnosing, monitoring,
designing, checking, guiding, and interpreting.  Many of the ideas of
EMYCIN were incorporated into its design, but it was given a more
flexible control structure.  In its first "real" application, LES
was used to guide less-experienced maintenance personnel in the fault
diagnosis of a large electronic signal-switching network.  LES used
not only the knowledge of the expert diagnostician (captured in the
familiar form of "IF-THEN" rules), but also knowledge about the
structure and function of the device under study to perform rapid
isolation of the module causing the failure.  In this talk we show how
the topological structure of the device is modeled in a frame
structure and the troubleshooting rules of the expert are conveniently
represented using LES's case grammar format.  We also explain how
"demons" are used to setup an agenda of relevant goals and subgoals.
The system was fielded in November 1983, and is being used by Lockheed
technicians.  A preliminary evaluation of the system will also be
discussed.  LES is being applied in a number of other domains which
include design verification, satellite communication,
photo-interpretation, and hazard analysis.
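
[To make the combination concrete, here is a toy sketch, assuming invented
facts and rule names (this is not LES code): IF-THEN rules forward-chained
over a fact base, with "demons" that set up an agenda of relevant goals as
particular facts arrive.]

    from collections import deque

    rules = [   # (IF-conditions, THEN-conclusion)
        ({"no-signal(module3)", "power-ok(module3)"}, "fault(module3)"),
    ]
    demons = {  # fact -> goals queued on the agenda when it is asserted
        "no-signal(module3)": ["check power-ok(module3)"],
    }
    facts, agenda = set(), deque()

    def assert_fact(f):
        if f in facts:
            return
        facts.add(f)
        agenda.extend(demons.get(f, []))      # demon posts relevant goals
        for conds, concl in rules:            # forward-chain the rules
            if conds <= facts:
                assert_fact(concl)

    assert_fact("no-signal(module3)")
    assert_fact("power-ok(module3)")
    print(sorted(facts), list(agenda))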

Paula

------------------------------

Date: Sat 6 Oct 84 15:26:34-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Seminar - Composite Graph Theory

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

AFLB talk
10/11/84 - Joan Feigenbaum (Stanford):

            Recognizing composite graphs is equivalent to
                      testing graph isomorphism

In this talk I will explore graph composition from a complexity-theoretic
point of view.  Given two graphs G1 and G2, we construct the
composition G = G1[G2] as follows: For each node in G2, insert a copy
of G1.  If two copies correspond to nodes that are adjacent in G2,
then draw in all possible edges x -- y such that x is in one copy and
y is in the other.  A graph that can be expressed as the composition
of two smaller graphs is called composite and one that cannot is
called irreducible.
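
[A small sketch of the construction just described, with a graph given as
a (vertices, edges) pair; the encoding is ours, chosen only to make the
definition concrete.]

    from itertools import product

    def compose(G1, G2):
        """Insert a copy of G1 for each node of G2; whenever two nodes
        of G2 are adjacent, join their copies completely."""
        (V1, E1), (V2, E2) = G1, G2
        V = set(product(V2, V1))                  # (G2-node, G1-node)
        E = {((b, x), (b, y)) for b in V2 for (x, y) in E1}  # inside copies
        E |= {((a, x), (b, y))                    # between adjacent copies
              for (a, b) in E2 for x in V1 for y in V1}
        return V, E

    P2 = ({1, 2}, {(1, 2)})     # a single edge
    V, E = compose(P2, P2)      # an edge composed with an edge gives K4
    print(len(V), len(E))       # 4 vertices, 6 edge pairs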

Composite graphs have a great deal of structure and their abstract
mathematical properties have been studied extensively.  In particular,
Harary and Sabidussi have characterized the relationships between the
automorphism groups of G1 and G2 and the automorphism group of their
composition.  Graph composition has been used by Garey and Johnson and
Chv\'atal to study NP-complete problems.  Garey and Johnson used it to
derive upper bounds on the accuracy of approximation algorithms for
graph coloring.  Chv\'atal showed that the Hamiltonian circuit problem
remains NP-complete even if the input graph is known to be composite.
In this talk, I consider what seems to be a more basic question about
composite graphs; namely, how difficult are they to recognize?

The main result I will give is that testing whether a graph is
composite is equivalent to testing whether two graphs are isomorphic.
In the proof that recognizing composite graphs is no harder than
testing graph isomorphism, I will give an algorithm that either
declares a graph irreducible or finds a non-trivial decomposition.
This distinguishes graph decomposition from integer factorization,
where primality testing and factoring are not known to have the same
complexity.  The inherent difficulty of the recognition problem for
composite graphs gives some insight into why some difficult graph
theoretic problems, such as Hamiltonian circuit, are no easier even if
the inputs are known to be composite.  Furthermore, assuming P does
not equal NP, graph isomorphism is one of the most important problems
for which neither a polynomial time algorithm nor a proof that there
cannot be such an algorithm is known.  Perhaps examining a problem
that is equivalent to it will yield insight into the complexity of the
graph isomorphism problem itself.  For example, if all irreducible
graphs have succinct certificates, then graph isomorphism is in Co-NP.

If there is time, I will also show that for cartesian multiplication,
another way to construct product graphs, the recognition problem is in
P.  This talk presents joint work with Alex Schaffer.

***** Time and place: October 11, 12:30 pm in MJ352 (Bldg. 460) ****

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
                                                - Andrei Broder

------------------------------

Date: Mon, 8 Oct 84 15:00:32 edt
From: minker@maryland (Jack Minker)
Subject: Lectures - Logic and AI at Maryland, Oct. 22-26


                     FINAL ANNOUNCEMENT
                            WEEK
                             of
       LOGIC and its ROLE in ARTIFICIAL INTELLIGENCE
                             at
                 THE UNIVERSITY OF MARYLAND
                    OCTOBER 22-26, 1984


The Mathematics and  Computer  Science  Departments  at  the
University  of Maryland at College Park are jointly sponsor-
ing a Special Year in  Mathematical  Logic  and  Theoretical
Computer Science.  The week of October 22-26 will be devoted
to Logic and  its  role  in  Artificial  Intelligence.   The
titles and abstracts of the five distinguished lectures that
are to be presented are as follows:


                    Monday, October 22:

      RAYMOND REITER (University of British Columbia)

   LOGIC FOR SPECIFICATION: DATABASES, CONCEPTUAL MODELS
          AND KNOWLEDGE REPRESENTATION LANGUAGES.


AI systems and databases have  a  feature  in  common:  they
require  representations  for  various  aspects  of the real
world.  These representations are meant to be  queried  and,
in  response to new information about the world, modified in
suitable ways.   Typically,  these  query  and  modification
processes require reasoning using the underlying representa-
tion of the world as premises.  So, it  appears  natural  to
use   a  suitable  logical  language  for  representing  the
relevant features of the world, and  proof  theory  for  the
reasoning.  This is not the normal practice in databases and
AI. The representations used assume a variety of forms, usu-
ally  bearing  little  or  no  resemblance to logic.  In AI,
examples of such representation  systems  include:  semantic
networks,  expert  systems,  and  many  different  knowledge
representation languages such as KRL, KL-ONE, FRL.  In data-
bases,  example  representation  systems  are the relational
data model, and various conceptual or semantic  models  like
TAXIS  and the entity-relationship model. The point of these
representation systems is that they provide their users with
computationally  efficient  ways  of representing, and using
the knowledge about an application domain.  The natural role
of  logic  in  databases and AI is a language for specifying
representation systems.  On this view, one must  distinguish
between  the  abstract  specification,  using  logic, of the
knowledge content of a database or AI application,  and  its
realization  as  a  representation system.  This distinction
has pleasant consequences:

      1.  The  logical  specification  provides  a  rigorous
      semantics  for the representation system realizing the
      specification.

      2. One can prove  the  correctness  of  representation
      systems with respect to their logical semantics.

      3. By taking seriously the problem of logically speci-
      fying an application, one discovers some rich and fas-
      cinating epistemological issues, e.g., the centrality of
      non-monotonic reasoning for representation systems.


                   Tuesday, October 23:

            JOHN McCARTHY (Stanford University)

               MATHEMATICS OF CIRCUMSCRIPTION


Circumscription (McCarthy 1980, 1984) is a  method  of  non-
monotonic  reasoning proposed for use in artificial intelli-
gence. Let A(P) be a sentence expressing  the  facts  "being
taken into account", where P stands for a "vector" of predi-
cates regarded as variable.  Let E(P,x) be a  wff  depending
on  a  variable x and the Ps.  The circumscription of E(P,x)
is a second order formula in P expressing the  fact  that  P
minimizes  lambda  x.E(P,x)  subject to the facts A(P).  The
non-monotonicity arises, because augmenting  A(P)  sometimes
reduces  the conclusions that can be drawn.  Circumscription
raises mathematical problems similar to those that arise  in
analysis  in  that  it involves minimization of a functional
subject  to  constraints.   However,  its  logical   setting
doesn't  seem  to  permit  direct  use  of  techniques  from
analysis.  Here are some open questions that will be treated
in the lecture.

      1. What is the relation between minimal models and the
      theory generated by the circumscription formula?

      2. When do minimal models exist?

      3. The circumscription formula is second order.   When
      is it equivalent to a first order formula?

      4.  There  are  several  variants  of  circumscription
      including  successive circumscriptions and prioritized
      circumscription.  What are the relations  among  these
      variants?
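
[For reference, the second-order formula can be written out; the following
is adapted from the schema in McCarthy (1980) and may differ in detail from
the form used in the lecture:

    A(P) \wedge \forall P'.\, [\, A(P') \wedge
         \forall x.\,(E(P',x) \rightarrow E(P,x))
         \rightarrow \forall x.\,(E(P,x) \rightarrow E(P',x)) \,]

i.e., P satisfies the facts, and any P' satisfying them whose E is
contained in that of P has exactly the same E -- so P minimizes
lambda x.E(P,x) subject to A(P).]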

      References:

      McCarthy, John (1980):
      "Circumscription - A Form of Non-Monotonic Reasoning",
      Artificial   Intelligence,  Volume  13,  Numbers  1,2,
      April.

      McCarthy, John (1984):
      "Applications of Circumscription to Formalizing Common
      Sense  Knowledge".   This  paper is being given at the
      1984 AAAI conference on non-monotonic reasoning and is
      being submitted for publication to Artificial Intelli-
      gence.


                     Wednesday, October 24:

            MAARTEN VAN EMDEN (University of Waterloo)

      STRICT AND LAX INTERPRETATIONS OF RULES IN LOGIC PROGRAMMING


      The  strict  interpretation  says that only that is ad-
      mitted  which  is  explicitly allowed by a rule. The lax
      interpretation says that only that is excluded which  is
      explicitly  disallowed.  This  distinction  is  impor-
      tant in mathematics and in  law,  for  example.  Logic
      programs   also  are  susceptible  to both interpreta-
      tions. We discuss  the  use  of  fixpoint   techniques
      to  determine Herbrand models of  logic  programs.  We
      find that least fixpoints and least models  correspond
      to  the strict interpretation  and  characterize  suc-
      cessful  finite  computations   of   logic   programs.
      Greatest  fixpoints  and greatest models correspond to
      the lax interpretation and  are  closely   related  to
      negations inferred by finite failure and to terms con-
      structed by certain infinite computations.
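
[A minimal illustration of the contrast, assuming a propositional logic
program encoded as (head, body) pairs; this is our sketch, not material
from the lecture.]

    def T(program, I):
        """Immediate-consequence operator: heads whose bodies hold in I."""
        return {head for head, body in program if body <= I}

    def lfp(program):                  # least fixpoint: iterate up from {}
        I = set()
        while T(program, I) != I:
            I = T(program, I)
        return I

    def gfp(program, atoms):           # greatest fixpoint: iterate down
        I = set(atoms)
        while T(program, I) != I:
            I = T(program, I)
        return I

    prog = [("q", set()), ("p", {"p"})]  # q.  p :- p.
    print(lfp(prog))                     # {'q'}: strictly, p not admitted
    print(gfp(prog, {"p", "q"}))         # {'p','q'}: laxly, p not excluded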


                      Thursday, October 25:

                JON BARWISE (Stanford University)

                        CONSTRAINT LOGIC.


      Constraint Logic is based on a semantics that grew out
      of  situation  semantics,  but  on a syntax similar to
      that from first-order logic.  The semantics is not car-
      ried out in set theory, as is usual in logic, but in a
      richer theory I call Situation Theory, a theory  about
      things  like  situations, roles, conditions, types and
      constraints.  While the syntax is not so unusual look-
      ing,  the  connection between the syntax and semantics
      is much more dynamic than it is in traditional  logic,
      since  the interpretation assigned to a given *use* of
      some expression will depend on context, in particular,
      on  the  history of the "session".  For example, vari-
      ables are interpreted as denoting roles, but different
      uses  of  a  given  variable x may denote increasingly
      constrained roles as a session proceeds.  This is  one
      feature  that  makes Constraint Logic interesting with
      regard to AI  in  general  and  with  regard  to  non-
      monotonic logic in particular.


                       Friday, October 26:

           LAWRENCE HENSCHEN (Northwestern University)

      COMPILING CONSTRAINT-CHECKING PROGRAMS IN DEDUCTIVE DATABASES.


      There are at least two kinds of formulas in the inten-
      sional  database  which  should always be satisfied by
      the  interpretations  corresponding  to  the   various
      states  of  the  database -- definitions and integrity
      constraints.  In our approach, formulas  defining  new
      relations  are  used in response to queries to compute
      portions of those defined relations; such formulas are
      therefore  automatically  satisfied  by the underlying
      database state.  On  the  other  hand  integrity  con-
      straints may need to be checked each time the database
      changes.  Of course, we believe there are  significant
      advantages  in  being  able  to express integrity con-
      straints in a non-procedural way, such as  with  first
      order  logic.   However, reevaluating an entire first-
      order  statement would be wasteful as normally only  a
      small portion of the database needs to be checked.  We
      present (resolution-based) techniques  for  developing
      from   first-order   statements  efficient  tests  for
      classes of updates.  These tests can be  developed  at
      database creation time, hence are compiled, and can be
      applied before a  proposed  update  is  made  so  that
      failure does not require backing out.
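
[An illustrative miniature of the general idea, not of the resolution-based
method itself: for a constraint such as "every employee belongs to some
department", a test compiled for the update class "insert employee(e)"
touches only the tuples mentioning e, instead of re-evaluating the whole
first-order statement.  The relation names are invented.]

    employees, dept = set(), {("smith", "d1")}

    def check_insert_employee(e):
        """Compiled test: look only at dept tuples for e."""
        return any(x == e for (x, _) in dept)

    def insert_employee(e):
        if not check_insert_employee(e):         # applied before the update,
            raise ValueError("constraint fails")  # so no backing out needed
        employees.add(e)

    insert_employee("smith")     # accepted: ("smith", "d1") is in dept
    # insert_employee("jones")   # would be refused: no dept tuple for jones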


     Lectures will be given at:

                  MWF 11:00 AM - 12:30 PM
                  TTH 10:00 AM - 11:30 AM

     Location: Mathematics Building, 3rd Floor Room Y3206

     The lectures are open to the public.  If  you  plan  to
attend  kindly  notify  us  so  that we can make appropriate
plans  for  space.  We regret  that all  funds  available to
support junior faculty and graduate students have been allo-
cated.  For additional information contact:

                        Jack Minker
               Department of Computer Science
                   University of Maryland
                   College Park, MD 20742
                       (301) 454-6119
                      minker@maryland

------------------------------

Date: Mon, 8 Oct 84 15:24:48 pdt
From: ann%ucbernie@Berkeley
Subject: Program - Complexity Year at MSRI

      [Forwarded from the Univ. of Wisconsin by Udi@WISC-RSCH.]
           [Forwarded from the SRI bboard by Laws@SRI-AI.]


                         COMPLEXITY YEAR AT
               MATHEMATICAL SCIENCES RESEARCH INSTITUTE


     A year-long research program in computational complexity will take
place at the Mathematical Sciences Research Institute, Berkeley, California,
beginning in August, 1985.  Applications are solicited for memberships in
the Institute during this period.  The Institute will award eight or more
postdoctoral fellowships to new and recent Ph.D.'s who intend to participate
in this program.  These fellowships are generally for the entire year, but
half-year awards are also possible.  It is hoped and expected that members
at the more senior level will come with partial or full support from sab-
batical leaves and other sources.  Memberships for any period are possible,
although, for visits of less than three months, Institute support is limited
to awards to help offset living expenses.


     The Program Committee for the complexity year consists of Richard Karp
and Stephen Smale (co-chairmen) and Ronald Graham.  The program will emphasize
concrete computational problems of importance either within mathematics and
computer science or in the application of these disciplines to operations
research, numerical computation, economics and other fields.  Attention will
be given both to the design and analysis of efficient algorithms and to the
inherent computational complexity of problems.  Week-long workshops are planned
on topics such as complexity theory and operations research, complexity theory
and numerical analysis, algebraic and number-theoretic computation, and
parallel and distributed computation.  Programs in Mathematical Economics
and in Geometric Function Theory will take place concurrently with the
Computational Complexity program.


     Address inquiries and applications to:

                Calvin C. Moore, Deputy Director
                Mathematical Sciences Research Institute
                2223 Fulton St., Room 603
                Berkeley, California   94720

     Applicants' files should be completed by January 1, 1985.

     The Institute is committed to the principles of Equal Opportunity and
Affirmative Action.

------------------------------

End of AIList Digest
********************
10-Oct-84 11:09:11-PDT,13458;000000000001
Mail-From: LAWS created at 10-Oct-84 11:07:42
Date: Wed 10 Oct 1984 11:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #135
To: AIList@SRI-AI


AIList Digest           Wednesday, 10 Oct 1984    Volume 2 : Issue 135

Today's Topics:
  Expert Systems - NL Interfaces & Training Versions,
  AI Reports - Request for Sources & Computer Decisions Article,
  News - TI Lisp Machines & MCC,
  AI Tools - Printing Directed Graphs,
  Law - Liability for Expert Systems
----------------------------------------------------------------------

Date: 7 Oct 84 22:43:39-PDT (Sun)
From: hplabs!sdcrdcf!trwrba!cepu!ucsbcsl!discolo @ Ucb-Vax.arpa
Subject: Writing natural language/expert systems software.
Article-I.D.: ucsbcsl.172

I will be writing a simple expert system in the near future and was
wondering about the advantages and disadvantages of writing something like
that in Prolog or Lisp.  I seem to prefer Prolog, even though I don't
know either one very well yet.  Are there any other languages out there
which are available under 4.2BSD for this purpose?

I would appreciate replies via mail.  Thanks.

uucp: ucbvax!ucsbcsl!discolo
arpa: ucsbcsl!discolo@berkeley
csnet: discolo@ucsb
USMail: U.C. Santa Barbara
        Department of Computer Science
        Santa Barbara, CA  93106
GTE: (805) 961-4178

------------------------------

Date: 9 Oct 84 3:42:10-PDT (Tue)
From: hplabs!kaist!kiet!sypark @ Ucb-Vax.arpa
Subject: Natural Language Processing Systems
Article-I.D.: kiet.232

Please send me information about natural language processing systems
that serve as machine translators or as I/O interfaces for expert systems.
What I want is as follows:
        1. An overview of the system
        2. Is source code available?
        3. What is the price?

------------------------------

Date: 9 Oct 84 09:14 PDT
From: Feuerman.pasa@XEROX.ARPA
Subject: Training version of Expert System Tools

John Nagle brings up a good idea when talking about M.1.  One major
problem in trying to investigate various Expert System Building Tools is
that they are very expensive to buy just to find out whether they
actually lend themselves well to solving a problem.  One never really
can find out what it is like to use a system from a canned demo or user
guides.  The idea of having a training version (a stripped down version
that doesn't allow full-sized applications) could give someone enough
experience with the system to allow them to know what sorts of
application a tool is good for.  (Undoubtedly this would be viewed as a
bad marketing ploy; why would a vendor offer a cheap system that would
probably only keep someone from buying the full-fledged expensive
version?)

With that comment, I pessimistically ask:  Does anyone know of any tool
out there that has such a stripped down training version?

--Ken <Feuerman.pasa@XEROX.ARPA>.

------------------------------

Date: 9 Oct 1984 16:32:15 EDT (Tuesday)
From: Charles Howell <m15434@mitre>
Subject: Various Technical Reports


I would like to know what Technical Reports  are  available  from
some of the leading centers for research in AI and related fields
(how's that for a broad topic?).  Any  addresses  of  Publications
Offices (or whatever) that have a catalog and ordering / purchase
information will be appreciated. Implicit in this  request  is  a
request  for  suggestions  about  what  places  are  putting  out
interesting reports; any and all suggestions will  be  cheerfully
accepted!  I'll collect the answers and post them to the AIList if
there is much response.

Thanks,
Chuck Howell      Howell at MITRE

------------------------------

Date: Tue 9 Oct 84 22:54:27-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Computer Decisions Article

I just ran across an AI-in-business article in the August issue
of Computer Decisions.  It features a roundtable of 14 consultants
and business bigwigs.  Phone numbers and reader service numbers
are given for 18 AI vendors, and mention is made of an annual
AI report -- AI Trends '84, a description of the technologies and
profile of 50 key vendors by DM Data Inc., Scottsdale AZ, $195,
(602) 945-9620.  The article includes advice on getting started
in AI (buy some Lisp machines, hire some hackers and AI experts,
and expect some failures), a short glossary (including Lisp,
a new language ...), and a short bibliography (including The
Mythical Man-Month).

                                        -- Ken Laws

------------------------------

Date: Tue 9 Oct 84 15:19:30-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: news from Austin: TI Explorer, MCC, and more

[ from the Austin American Statesman, p. D6 - Oct 9, 84 ]

            TI Explorer finds new path
        =================================

Texas Instruments in Austin has landed a major business prize: a
multi-million-dollar order for up to 400 of its highly sophisticated Explorer
symbolic processing systems from the Laboratory for Computer Science at MIT.
The computers will be bought over the next 2 years to establish the world's
largest network of LISP machines involved in computer research.  TI officials
said the order is significant in view of the fact that only about 1,000 of
the specialized computers are in existence. TI plans to deliver 200 machines
in 1985 and 200 in 1986.

     Boeing joins MCC as 19th member of the consortium
   ====================================================

... paying a sign-up fee of $500,000.  The cost for joining goes up
to $1-million on Jan 1.

   There are 4 separate research programs at MCC, with a combined annual
budget of more than $50 million.  Boeing reportedly has joined only one
program thus far, an effort to find new ways to connect complex computer
chips with the equipment the chips are supposed to control, but is
considering joining the other three as well.

MCC's managers are especially eager for Boeing to join the artificial
intelligence program.  They believe Boeing's participation in that expensive
program would draw other aerospace companies to it, spreading out the expense
and making it a cheaper deal for everyone involved.

Boeing is the fourth major aerospace defense contractor to become an MCC
member [following Rockwell, Lockheed, and Martin Marietta].

[ in other news:  real estate prices and traffic jams are coming along nicely,
thank you.   the city is being sued by the state for polluting the river and
trying to sue everyone connected with building 2 nuclear power reactors, which
are WAY overdue and WAY over-budget, and not close to being finished.  Austin
is still trying to sell its 16% of the project, and given that nobody wants to
buy it, is close to pushing for abandoning the whole project.    So you really
don't want to come here .....    (-: I don't make the news, only report it ]

------------------------------

Date: Tue 9 Oct 84 17:49:39-PDT
From: PENTLAND@SRI-AI.ARPA
Subject: TI's new Lisp Machines

News about TI's new Lisp Machines:

Timing figures are in units of 1/60th of a second.
Both the TI and the 3600 had 1Mword memory and a 300Mbyte disk.

op              TI      3600            comment
---------------------------------------------------------------
bitblt          270     441     shows basic memory cycle time
floating pt     23      17      //,* about the same, TI has 25 bit number
cons            25-40   17-40   depends somewhat on paging
paging          225-280 160-450 same transfer rate, seek time 50% more for TI
create flavor
  instance      140     52      not fully microcoded yet
send msg        52      21      not fully microcoded yet
function call   31      16      not fully microcoded yet
32bit floating  33      17      includes consing in TI machine

It appears that by the April delivery date, the TI will be the equal of a
3600.  It is already much faster than an LMI, Cadr or LM2 (I ran these
benchmarks on an LM2; it was 1/2 to 1/5 the speed of the TI in all cases).
Ask for the benchmark programs if you are interested in details.

------------------------------

Date: Mon, 8 Oct 84 16:58 CDT
From: Jerry Bakin <Bakin@HI-MULTICS.ARPA>
Subject: Re: Wanted: info on printing directed graphs

Some friends of mine came up with such a program.  I have included its
first comment below.

It is written in Pascal, somewhere; I have a version I rewrote (i.e.,
force translated) into Multics PL/I.  If you can use either one, let me
know.  We do not support FTP, so if there is a wide demand for this, I
may ask someone else to take it off my hands.

There might be a small problem: they are currently selling some of their
software, and I have to find out if this is a portion of that software.

Even if it is, the following provides a source for more information.


(* TRPmod - A routine to print N-ary trees on any character printer.  This
   routine takes as input an arbitrary N-ary tree, some interface routines, and
   assorted printer parameters and writes a pictorial representation of that
   tree using an output routine provided in the call to treeprint.  The tree is
   nicely formatted and is divided into vertical stripes that can be taped
   together after printing.  Options exist to print the tree backwards or
   upside down if desired.

   The algorithm for treeprint originally appeared in "Pretty-Printing of
   Trees", by Jean G. Vaucher, Software-Practice and Experience, Vol. 10,
   pages 553-561 (1980).  The algorithm used here has been modified to support
   N-ary tree structures and to have more sophisticated printer format control.
   Aside from a common method of constructing an ancillary data structure and
   some variable names, they are now very dissimilar.

   treeprint was written by Ned Freed, Kevin Carosso, and Douglas
   Grover at Harvey Mudd College. (714) 621-3219 (ask for the Mathlib
   Director)  *)

------------------------------

Date: 6 Oct 84 8:51:42-PDT (Sat)
From: decvax!mcnc!idis!cadre!geb @ Ucb-Vax.arpa
Subject: re: liability for expert systems
Article-I.D.: cadre.57

This is a subject that we are quite interested in as we
develop medical expert systems.  There has been no court
case nor precedent nor law covering placement of blame
in the cases of errors in expert systems.  The natural
analogy would be medical textbooks.  As far as I know,
no author of a textbook has been found liable for errors
that resulted in mistreatment of a patient.  Therefore,
the logical liability should lie with the treating physician
to properly apply the knowledge.

Having said this, it is best to recognize that customs such
as this were developed in a much different society of 100
years ago.  Now every possible person in a case is considered
fair game and undoubtedly until a court rules or legislation
is passed, you must consider yourself at risk if you distribute
an expert system.  Unfortunately, there is no malpractice
insurance available for programmers and you will find a clause
in just about any other insurance that you might carry that
states that the insurance you have doesn't cover any lawsuits
stemming from the practice of your profession.  Sorry.

------------------------------

Date: 10 October 1984 0854-PDT (Wednesday)
From: bannon@nprdc (Liam Bannon (UCSD Institute for Cognitive Science))
Reply-to: bannon <sdcsla!bannon@nprdc>
Subject: Liability and Responsibility wrt expert systems

        I was interested in the messages raising the issue of where
responsibility lies if a person follows the advice of an AI system and
it turns out to be wrong, or where the person disregards the computer
system's advice but the system turns out to be right (AI Digest V2#133).
        I am not a lawyer or AI system builder,
but I am interested in some of the social dimensions
of computing, and have been concerned about how expert systems might
actually be used in the work environment.  There have been few
full-length papers on this topic, to my knowledge.  One that I have
found interesting is that by Mike Fitter and Max Sime "Creating
Responsive Computers: Responsibility and Shared Decision-Making" which
appeared in the collection H. Smith and T. Green (Eds.) Human
Interaction with Computers (Academic Press, 1980).  They point out "the
possibility that a failure to use a computer might be judged negligent
if, for example, a physician neglected to ask a question, the answer
to which was crucial to a diagnosis, AND a computer system would have
asked the question." This hinges on a famous 1928 case in the US, called
the T.J. Hooper, where a tugboat owner was found negligent for not having
radio sets on them, thus not hearing radio reports of bad weather which
would have made them seek safety avoiding the loss of the barges
which the tugs had in tow - this despite the fact that at that
time radio was only used by one tugboat company!
        This raises a host of interesting questions about how expert
systems could/should be used, especially in medicine, where the
risks/benefits are highest. Comments?
                                        -liam bannon

------------------------------

End of AIList Digest
********************
11-Oct-84 09:59:40-PDT,15257;000000000001
Mail-From: LAWS created at 11-Oct-84 09:56:05
Date: Thu 11 Oct 1984 09:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #136
To: AIList@SRI-AI


AIList Digest           Thursday, 11 Oct 1984     Volume 2 : Issue 136

Today's Topics:
  AI Tools - LMI (Uppsala) Prolog & Kahn's DCG's,
  Law - Liabilities of Software Vendors,
  Games - Preliminary Computer Chess Results,
  Psychology - Distributed Intelligence,
  Linguistics - Sastric Sanskrit,
  Conference - Computational Linguistics Call for Papers
----------------------------------------------------------------------

Date: 8 Oct 84 13:53:26-PDT (Mon)
From: hplabs!sdcrdcf!trwrba!logico!burge @ Ucb-Vax.arpa
Subject: LMI (Uppsala) Prolog + Kahn's DCG's: User Experiences
Article-I.D.: logico.124

Does anyone have any experiences to relate about "LM-Prolog", implemented in
Zetalisp at the University of Uppsala by Ken Kahn and Mats Carlsson? And/or
of the DCG and "Grammar Kit" that comes with it? (We've been using the DEC-11
implementation for several years, but now it's time to expand...)

Also, our site is new to the net, and if anyone could send me previous
items, it would help me find out what all has been happening out there...!!

--John Burge                                                    [818] 887-4950
LOGICON, Operating Systems Division, 6300 Variel #H, Woodland Hills, Ca. 91367

------------------------------

Date: Wed, 10 Oct 84 13:55:07 cdt
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Liabilities of Software Vendors

Maybe I am being naive or something, but I don't see why AI software should
be different from any other when it comes to the liability of the vendor.
My attorney has written me a boilerplate contract that contains a clause
something to the effect that "vendor is not liable for third-party or
consequential damages that result from the use of the product."
Doesn't that take care of the problem?  If not, maybe I had better find
an expert attorney system.

------------------------------

Date: Wed 10 Oct 84 01:57:07-PDT
From: Donald S. Gardner <GARDNER@SU-SIERRA.ARPA>
Subject: Preliminary computer chess results

The computer chess championship is almost over and BELLE has severely
bitten the dust.  This special purpose hardware (with ~1600 integrated
circuits and a PDP-11/23) first tied a program called Phoenix running
on a VAX-11/780 and then was beaten by NuChess running on a CRAY 1M.
NuChess was the program previously called Chess 4.7 and was the champion
until 1980, when it was beaten by BELLE.

The first place winner during the fourth round was declared to be
the program CRAY BLITZ running on a cluster of 4 (FOUR) CRAYs.
This system checks in at 420 million instructions per second.
Now, CRAY time costs approximately $10,000 per hour per computer and
each game lasts around 5 hours; four machines for 5 hours is some
$200,000 per game, so the tournament adds up to a cool $1M in computer
time!  Of course that is in "funny money", but still impressive.  There was
also a program from Canada which ran on 8 Data General computers (Novas and
an Eclipse), two more CRAYs (80 mips each), two Amdahl computers (10 &
13 mips), one CDC Cyber 176 (35 mips) and a Burroughs 7800 (8 mips).

------------------------------

Date: 9 Oct 84 11:53:24 PDT (Tuesday)
From: Jef Poskanzer <Poskanzer.PA@XEROX.ARPA>
Reply-to: SocialIssues^.PA@XEROX.ARPA
Subject: Distributed Intelligence

          [Excerpted from Human-Nets Digest by Laws@SRI-AI.]

By Erik Eckholm
New York Times

    Computer buffs call it "flaming."  Now scientists are documenting
and trying to explain the surprising prevalence of rudeness,
profanity, exultation and other emotional outbursts by people when
they carry on discussions via computer.  [...]  "It's amazing," said Kiesler.
"We've seen messages sent out by managers - messages that will be seen
by thousands of people - that use language normally heard in locker rooms."

[...] in addition to calling each other
more names and generally showing more emotion than they might face to
face, people "talking" by computer took longer to agree, and their
final decisions tended to involve more risks than those reached by
groups meeting in person.  [...]

    "This is unusual group democracy," said Sara Kiesler, a
psychologist at Carnegie-Mellon.  "There is less of a tendency for
one person to dominate the conversation, or for others to defer to the
one with the highest status."  [...]

------------------------------

Date: 9 Oct 1984 11:09-PDT (Tuesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit

        I would like to respond to recent criticisms concerning Sastric
Sanskrit.
        Firstly, Kiparsky is confusing Sanskrit in general with Sastric
Sanskrit.  His example, "bhikshuna rajna..." is NOT Sastric Sanskrit but
plain ordinary Classical Sanskrit.  I did not mean to imply that lack of
word order is a sufficient condition for unambiguity, only that it is
an indication.
        As to Dr. Dyer's comments: Yes, a parser will be needed due to
the difficulty with translations but this is due to the nature of what
one translates into.  In the case of English, the difference between
the two languages creates the difficulty in translation, not inherent
complexities in Sastric Sanskrit. The work I mentioned was edited by
Pandit Sabhapati Sharma Upadhyaya in Benares, India and published
recently (1963) by the Chowkhamba Sanskrit Series Office.  Also, there
is something like a concept of scripts in that subsets of discourse
(possibly nested) are marked off ("iti" clauses) and therefore the
immediate context is defined.
        My comments about English stem from its lack of case.  Languages
like Latin are potentially capable of rendering logical formulation
with less ambiguity since a mapping from its syntactic cases can be made
to a set of "semantic cases", depending on how good the case system is.
Sanskrit has 8 (including the vocative), and a correspondence (though
not complete) is made between the cases of Classical Sanskrit and the
"karakas" or auxiliary actions in grammatical Sastric Sanskrit.  For
example, the dative case is "usually" mapped onto the semantic case
"recipient" but not always.  The exceptions make up the extension
from the commonly known language and the Sastra.
        An example is in order:

        "Caitra cooks rice in a pot" is expressed ordinarily in
Sanskrit as
        "Caitra: sthaalyaam taNDulam pacati" (double vowels indicate
length, capitals indicate retroflex)

        In Sastric Sanskrit:

        sthaliiniSTataNDulaniSTa:
        viklittijanakashcaitraabhinnaashrayako: vyaapaara:

        which translates into English:

        "There is an activity(vyaapaara:) , subsisting in the pot,
        with agency residing in one substratum not different from
        Caitra, which produces the softening which subsists in rice."

        The vocabulary is the same as in Classical Sanskrit, with
the addition of terms such as "none other than", and "not different
from".  Syntax is eliminated in the sense that the sentence is read
as "there is an abstract activity" with a series of "auxiliary
activities" which "agree" semantically with vyaapaara:.  Thus each
agreement here ends with ah: which indicates its SEMANTIC agreement with
the abstract activity.  What I am saying is that each "karaka" is
equivalent to a semantic net triple, which can be stored away as
eg. "activity, agent, none other than Caitra" etc.

        Thirdly, the first two points of O'Keefe's have been addressed.
Sanskrit is definitely Indo-European but its daughter languages
inherited the verbal roots(dhatus) not the methodology of its
grammarians.  Even though no other(that I know of) natural language
has found it worthwhile to pursue the developement of unambiguous
languages for a thousand years or so, one  parallel can be found:
recent work in natural language processing.  The difference is
that THEY used it in ordinary communication and AI techniques have
computer processing in mind.  Even though the language is dead there
are theoretical works which deal specifically with unambiguity.
After reading these, even though you may argue that ambiguity exists
(I'd like to see those arguments), you must concede that total
precision and an escape from syntax and ambiguity were primary aims
of these scientists.  I find that interesting in itself.  It is
a possible indication that we do actually think "in semantic nets"
at some deep level.  Point e) again is a confusion with regular
Sanskrit.  The example of 4 people in a room A,B,C,D would not
be a problem in this language.  Since precision is required in
utterances (see the example above), one would simply not say
"we came from X"; one would say "there was an activity connected
to a coming-activity, having as object X and having agency residing
in none other than (we 2, we 3 etc.)."  The number would have to
be specified.  "Blackbird" would be specified as either "a color-event
residing in a bird" or "blackbird" would be taken as a primitive
nominal.

        Lastly, Jeff Elman's criticisms.  A comparison between
mathematics and Sastra is not a fair one.  Sastric texts have
been written in the domains of Science, Law, Mathematics, Archery,
Sex, Dance, Morality...  I wonder how these texts could be written
in mathematical formalisms; the Sastric language is, however,
beautifully and elegantly suitable for these texts (Sastra means
basically "scientific").  I disagree with the statement that
"Surface ambiguity gives the language a flexibility of expression.
That flexibility does not necessarily entail lack of clarity."
Even if ambiguity adds flexibility I do not see how it follows
that clarity is maintained.  If there are 4 people in the room and
one says "we", that is less clear than the case where the language
necessitates saying we 3.  I also disagree with "...structural
ambiguity is not particularly bad nor incompatible with 'logical'
expression."  Certainly ambiguity is a major impediment to designing
an intelligent natural language processor.  It would be very desirable
to work with a language that allows natural flexibility without
ambiguity.  And I still maintain that the language is syntax free,
word order or no word order.  And maybe this is the linguistic
find of the century.

        One last point about metaphor, poetry etc.  As an example
to illustrate these capabilities in Sastric Sanskrit, consider
the "bahuvrihi" construct (literally "man with a lot of rice")
which is used currently in linguistics to describe references outside of
compounds.  "Bahuvrihi" is itself an example, literally "bahu"-many
"vrihi" rice.  Much rice is taken here as he who possesses a lot of
rice, and in Classical Sanskrit different case endings can make
"bahu-vrihi" mean "he or she who wants a lot of rice" , "is on a
lot of rice" etc.  Aha! Ambiguity?  Only in Classical, in Sastric
Sanskrit the use of semantic cases instead of syntactic do
not allow any ambiguity.

Rick

------------------------------

Date: 8 Oct 1984 11:10:37 PDT
From: Bill Mann <MANN@USC-ISIB.ARPA>
Subject: Conference - Computational Linguistics Call for Papers


                        CALL FOR PAPERS

23rd Annual Meeting of the Association for Computational Linguistics

                         8-12 July 1985
                      University of Chicago
                        Chicago, Illinois


This international conference ranges over all of computational linguistics,
including understanding, generation, translation, syntax and parsing,
semantics, natural language interfaces, speech understanding and generation,
phonetics, discourse phenomena, office support systems, author assistance,
and computational lexicons.  Its scope is intended to encompass
the contents of an Applied Natural Language Processing Conference as well as
one on Theoretical Issues in Natural Language Processing.  In short, we are
striving for comprehensiveness.

The meeting will include presented papers, system demonstrations, and, on
8 July, a program of computational linguistics tutorials.

Authors should submit, by 18 January 1985, 6 copies of an extended summary
(6 to 8 pages) to William C. Mann, ACL85 Program Chairman, USC/ISI,
4676 Admiralty Way, Marina del Rey, CA 90292, USA; (213)822-1511;
mann@isib.

The summaries should describe completed work rather than intended work, and
should indicate clearly the state of completion and validation of the
research reported, identify what is novel about it, and clarify its status
relative to prior reports.

Authors will be notified of acceptance by 8 March 1985.  Full length
versions of accepted papers prepared on model paper must be received,
along with a signed copyright release notice, by 26 April 1985.

All papers will be reviewed for general acceptability by one of
the two panels of the Program Committee identified below.  Authors
may designate their paper as either an Applications Paper or a
Theory Paper; undesignated papers will be distributed to one or
both panels.


Review Panel for Applications Papers:

Timothy Finin           University of Pennsylvania
Ralph Grishman          New York University
Beatrice Oshika         System Development Corporation
Gary Simons             Summer Institute of Linguistics
Jonathan Slocum         MCC Corporation

Review Panel for Theory Papers:

Robert Amsler           Bell Communications Research
Rusty Bobrow            Bolt Beranek and Newman
Daniel Chester          University of Delaware
Philip Cohen            SRI International
Ivan Sag                Stanford University


Those who wish to present demonstrations of commercial, developmental,
and research computer programs and equipment specific to computational
linguistics should contact Carole Hafner, College of Computer Science,
Northeastern University, 360 Huntington Avenue, Boston MA 02115, USA;
(617)437-5116 or (617)437-2462; hafner.northeastern@csnet-relay.  For
planning purposes, we would like this information as early as possible,
but certainly before 30 April.

Local arrangements will be handled by Martha Evens, Computer Science
Department, Illinois Institute of Technology, Chicago, IL 60616, USA;
(312)567-5153 or (312)869-8537; evens@sri-ai.

For other information on the conference, on the 8 July tutorials, and
on the ACL more generally, contact Don Walker (ACL), Bell Communications
Research, 445 South Street, Morristown, NJ 07960, USA; (201)829-4312;
bellcore!walker@berkeley.

Please note that the dates of the conference will allow people to
attend the National Computer Conference, which will be held in Chicago
the following week.

========================================================================

                        PLEASE POST

                        PLEASE REDISTRIBUTE

------------------------------

End of AIList Digest
********************
12-Oct-84 23:38:37-PDT,15064;000000000001
Mail-From: LAWS created at 12-Oct-84 23:35:03
Date: Fri 12 Oct 1984 23:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #137
To: AIList@SRI-AI


AIList Digest           Saturday, 13 Oct 1984     Volume 2 : Issue 137

Today's Topics:
  Fuzzy Logic - Query,
  AI Literature - The AI Report,
  AI Tools - OPS5 & LM-Prolog & VMS PSL 3.2,
  Lisp Machines - TI Explorers,
  Games - ACM Chess Tournament & Chess Planning,
  Seminar - Knowledge Based Software Development,
  Conference - AI Society of New England
----------------------------------------------------------------------

Date: 10 Oct 84 8:55:33-PDT (Wed)
From: hplabs!intelca!qantel!dual!fortune!polard @ Ucb-Vax.arpa
Subject: Fuzzy logic references wanted
Article-I.D.: fortune.4472

Would anyone be kind enough to send me (or post) a list of readings
that would serve as an introduction to fuzzy logic?

                        Thank you,
                        Henry Polard

Henry Polard (You bring the flames - I'll bring the marshmallows.)
{ihnp4,cbosgd,amd}!fortune!polard
N.B: The words in this posting do not necessarily express the opinions
of me, my employer, or any AI project.

------------------------------

Date: Thu 11 Oct 84 21:09:44-PDT
From: ROBINSON@SRI-AI.ARPA
Subject: Omission

Your list of AI information resources omits a significant
publication:

        The Artificial Intelligence Report

published by Artificial Intelligence Publications.

------------------------------

Date: 11 Oct 84 13:38:27 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Addendum to OPS5 list.

Some readers of this list pointed out a couple of omissions in the OPS5
summary posted a few days ago; thanks are due them for the additional
material.
A version of OPS5 called OPS5E, running on the Symbolics 3600, is available from
        Verac, Inc.
        10975 Torreyana Road, Suite 300
        San Diego, CA 92121
Prices: $3000 object code, $10000 source, $1000 one year support.
There is also a version for the Xerox D series machines (Dandelion, Dolphin,
Dorado) available from
        Science Applications International Corp.
        1200 Prospect St.
        P.O.Box 2351
        La Jolla, CA 92038
        (619) 454-3811
Price: $2000.

------------------------------

Date: 12 Oct 84 09:23 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Re: LM-Prolog, Grammar Kit

My experiences using LM-Prolog have been very positive but I am surely
not an unbiased judge (being one of the co-authors of the system).   (I
am tempted to give a little ad for LM-Prolog here, but will refrain.
Interested parties can contact me directly.)

Regarding the Grammar Kit, the main thing that distinguishes it from
other DCGs is that it can continuously maintain a parse tree.  The tree
is drawn as parses are considered and parts of it disappear upon
backtracking.  I have found this kind of dynamic graphic display very
useful for explaining  Prolog and DCGs to people as well as debugging
specific grammars.

------------------------------

Date: Thu 11 Oct 84 07:16:44-MDT
From: Robert R. Kessler <KESSLER@UTAH-20.ARPA>
Subject: PSL 3.2 for Vax VMS

                        PSL 3.2 for Vax VMS

We are pleased to announce that Portable Standard LISP (PSL) version 3.2 is
now available for Vaxen running the VMS operating system.  PSL is about the
power, speed and flavor  of Franz LISP or  MACLISP, with growing  influence
from Common  LISP.  It  is recognized  as an  efficient and  portable  LISP
implementation with  many  more capabilities  than  described in  the  1979
Standard LISP Report.  PSL's main  strength is its portability across  many
different  systems,   including:   Vax  BSD   Unix,   Extended   Addressing
DecSystem-20 Tops-20, Apollo DOMAIN  Aegis, and HP  Series 200.  A  version
for the IBM-370 is in beta test, a Sun version is 90% complete and two Cray
versions are being used on an experimental basis.  Since PSL generates very
efficient code, it is an ideal delivery vehicle for LISP based applications
(we can  also provide  PSL reseller  licenses for  binary only  and  source
distributions).

PSL is distributed for the  various systems with executables, all  sources,
an approximately  500 page  manual and  release notes.   The release  notes
describe how to install the system and how to rebuild the various  modules.
We are charging  $750 for the  Vax/VMS version of  PSL for Commercial  Site
licenses.  Non-profit institutions and all  other versions of PSL will  not
be charged a license fee.  We are also charging a $250 tape or $350  floppy
distribution fee for each system.

PSL is in heavy use at Utah, and by collaborators at Hewlett-Packard, Rand,
Stanford, Columbia and over  200 other sites.   Many existing programs  and
applications have been  adapted to  PSL including  Hearn's REDUCE  computer
algebra system and GLISP, Novak's object oriented LISP dialect.  These  are
available from Hearn and Novak.

To obtain a copy of the license  and order form, please send a NET  message
or letter with your US MAIL address to:

Utah Symbolic Computation Group Secretary
University of Utah - Dept. of Computer Science
3160 Merrill Engineering Building
Salt Lake City, Utah 84112

ARPANET: CRUSE@UTAH-20
USENET:  utah-cs!cruse

------------------------------

Date: Thu, 11 Oct 84 03:52:11 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: TI Lisp Machines.

The recent article from PENTLAND@SRI-AI has some interesting benchmark
data.  I am looking seriously at Lisp Machines for purchase in the near
future, so I went around to the Xerox, Symbolics and LMI people at ACM
84.  I was told by the LMI folks that they were OEMs for the TI
machines.  (The machines do look almost identical.)  So I didn't chat
with the TI folks -- perhaps a mistake.  If LMI does OEM their machines
to TI, why the difference in performance?  Perhaps someone in the know
can clarify this.

If anyone out there with comparative experience in these various
machines can say a few words on what they think are the relative merits
of each vendor's product it would be quite helpful to prospective
buyers.  I came away with little substantive basis for comparison from
talking with the salesmen.  Most of them were high on pretension, low
on comprehension and quite adept at parrying questions.

As an incidental note, I found at the conference that Lisp and Prolog
are now available under PRIMOS on Prime computers.  A positive side-
effect of the increased interest in AI is the widening spectrum of
environments supporting AI languages, an important factor for soft-
ware producers looking for a wide market.

                                            Harry Weeks
                                            (Weeks@UCBpopuli)

P.S.
I just happened to read the latest Datamation today [10/11] and it
contains a news article which also provides some information on the
TI machines.

------------------------------

Date: Thu 11 Oct 84 23:52:54-PDT
From: PENTLAND@SRI-AI.ARPA
Subject: TI Lispm Timings - Clarification

Re: TI Lisp Machine timings

People have criticized me for the recently circulated comparison of TI
and Symbolics machines; mistaking the simple, rough timings I ran on
the TI and Symbolics machines for serious benchmarks.  I am surprised
that anyone thinks that benchmarking a machine can be as simple as the
comparison I did, which was limited by a need for extreme brevity.
I therefore want to make clear that the timings I ran were ROUGH, QUALITATIVE
measures of very limited portions of the machines' performance, and
bear only a VERY ROUGH, ORDER-OF-MAGNITUDE RELATIONSHIP TO THE TRUE
PERFORMANCE of the machines.  That is, there is NO warranty of
accuracy for such simple tests.  Serious benchmarking has yet to be
done.
        Alex Pentland

------------------------------

Date: Fri 12 Oct 84 16:49:27-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: TI Explorers for MIT

Mike Green of Symbolics told us that MIT's "multi-million-dollar order" is
essentially a gift from TI to MIT.  He said that MIT has confirmed this.
Apparently, TI is donating 200 machines to MIT and giving them the option to
buy another 200 at $28K each over the next two years.  However, TI is working
to get DARPA to pay for the second 200!  If this is true, I just may "order"
a few hundred myself.

Dallas Webster
CMP.BARC@UTexas-20.ARPA

------------------------------

Date: 11 Oct 84 12:06:18 EDT
From: Feng-Hsiung.Hsu@CMU-CS-VLSI
Subject: ACM Chess Tournament

           [Forwarded from the CMUC bboard by Laws@SRI-AI.]

The following message was posted on usenet:

     The standings follow.  Ties were broken by considering the sum of
the opponents' scores.  Since 'Bebe' and 'Fidelity X' deadlocked here, the
sum of the opponents' opponents' scores was tallied.  Dead heat again, so
by fiat, Fidelity walked home with the second-place trophy, as Bebe had
finished second at ACM '83.  (At least, I think this is what happened, the
groggy hardcore having disbanded at 1 am.)
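
For readers unfamiliar with this tie-break, here is a rough sketch in
Python of the rule as described: rank on raw score, break ties on the sum
of each program's opponents' scores, and break remaining ties on the sum
of the opponents' opponents' scores.  The pairings below are invented for
illustration; they are not the actual ACM '84 draw.

    # Sketch of the tie-break described above (hypothetical pairings).
    scores = {'Cray Blitz': 4, 'Bebe': 3, 'Fidelity X': 3, 'Chaos': 3}
    opponents = {
        'Cray Blitz': ['Bebe', 'Fidelity X'],
        'Bebe':       ['Cray Blitz', 'Chaos'],
        'Fidelity X': ['Cray Blitz', 'Chaos'],
        'Chaos':      ['Bebe', 'Fidelity X'],
    }

    def tie_break_key(player):
        opp_sum = sum(scores[o] for o in opponents[player])
        opp_opp_sum = sum(scores[oo]
                          for o in opponents[player]
                          for oo in opponents[o])
        # higher is better at every level of the tie-break
        return (scores[player], opp_sum, opp_opp_sum)

    for place, prog in enumerate(sorted(scores, key=tie_break_key,
                                        reverse=True), start=1):
        print(place, prog)

With these invented pairings, Bebe and Fidelity X remain tied even after
both levels of tie-break -- which is exactly the situation that had to be
settled "by fiat."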

     There were surprises, including a disappointing showing by Belle.
I shall leave game commentary to the experts.  Mike Valvo and Danny Kopec
emceed the fourth round, and several other masters were in attendance,
including former World Juniors champ Julio Kaplan.

     Blitz was running on a 420 MIP four-barrel Cray XMP-48, computing
100K nodes per second (Belle does 160K).  Bebe is a custom bit-slice micro,
with hardware assist for various functions.  Fidelity is a commercial 6 MHz
6502, and International Software Experimental is David Levy's Apple II.

        Program         Rating      Score
        -------------   ----------  -----
        Cray Blitz      2150        4
        Fidelity X      1900        3
        Bebe            1927        3
        Chaos           1714        3
        Belle           2200        2.5
        Nuchess         2100        2
        Phoenix         1910        2
        Novag X         1970        2
        Int. Soft. X    2022 (est)  2
        Schach 2.7      N/A         1.5
        Ostrich         1475        1
        Awit            1600        1
        Merlin          N/A         1
        Xenarbor        N/A         0

------------------------------

Date: Tue, 9 Oct 84 21:07:34 edt
From: krovetz@nlm-mcs (Bob Krovetz)
Subject: chess and planning

A very nice paper on a program that uses planning in making chess
moves is:

 "Using Patterns and Plans in Chess", Dave Wilkins, Artificial
  Intelligence, Vol. 14, 1980.

The program is called PARADISE, and has found a mate that was 19 ply
deep!


-Bob (Krovetz@NLM-MCS)

------------------------------

Date: 11 Oct 1984 1306-EDT
From: Scott Dietzen <DIETZEN@CMU-CS-C.ARPA>
Subject: Seminar - Knowledge Based Software Development

           [Forwarded from the CMUC bboard by Laws@SRI-AI.]

                           Friday, October 12
                           2:00 PM in Wean 5409

              Knowledge Based Software Development in FSD
                          Robert Balzer
                             USC/ISI


        Our group is pursuing the goal of an automation-based software
development paradigm.  While this goal is still distant, we have embedded
our current perceptions and capabilities in a prototype (FSD) of such a
software development environment.  Although this prototype was built
primarily as a testbed for our ideas, we decided to gain insight by
using it, and have added some administrative services to expand it from
a programming system to a computing environment currently being used by
a few ISI researchers for all their computing activities.  This "AI
operating system" provides specification capabilities for Search,
Coordination, Automation, Evolution and Inter-User Interaction.

        Particularly important is evolution, as we recognize that useful
systems can only arise, and remain viable, through continued evolution.
Much of our research is focused on this issue and several examples will
be used to characterize where we are today and where we are headed.
Naturally, we have started to use these facilities to evolve our system
itself.

------------------------------

Date: Thu, 11 Oct 84 17:43:05 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: Conference - AI Society of New England


               The Sixth Annual Conference of the
         Artificial Intelligence Society of New England

                        Oct. 26-27, 1984


It is time once again for our legendary annual AISNE meeting!  In
keeping  with our time-honored tradition, we will have an invited
speaker for Friday night, with panel  discussions  and  talks  by
students on Saturday.

Accommodations on Friday night will be informal. Bring a sleeping
bag,  and we can find you a place to stay. If you want us to find
you a place, tell Doug Stumberger at Boston University  how  many
bodies  you  have.  Note: If you have a faculty representative at
your institution, they can pass this information on to  Doug  for
you in order to minimize long distance phone calls. (If you don't
know who your faculty rep. is, it's probably the person who  dis-
tributed  this  announcement.)  There is no admission charge, and
no formal registration is necessary, though if you need informal
accommodations for Friday night, please let Doug know.


The event will be held at:

                 Department of Computer Science
                        Boston University
                       111 Cummington Street
                           Boston, MA

The Program is:

                         Friday, Oct. 26

8:00 pm. Invited Talk by David Waltz (Brandeis University)
         "Massively Parallel Models and Hardware for AI"

9:00 pm. Libational Social Hour

                       Saturday, Oct. 27:

10:00 am. Panel discussion chaired by Elliot Soloway (Yale)
                 "Intelligent Tutoring Systems"

11:30 am. Talks on Academic Research Projects (15 min. each)

12:30 pm. Lunch

2:00 pm. Panel discussion chaired by Michael  Lebowitz  (Columbia U.)
                    "Natural Language - What Matters?"

3:30 pm. More Talks

4:30 pm. AISNE Business Meeting


Program Coordinator:                     Local Coordinator:

Wendy Lehnert                           Douglas Stumberger
COINS                                   Department of Computer Science
University of Massachusetts             111 Cummington Street
Amherst, MA 01003                       Boston, MA 02215
413-545-3639                            617-353-8919

csnet: lehnert@umass-cs                 csnet: des@bostonu
                                        bitnet: csc10304@bostonu

------------------------------

End of AIList Digest
********************
14-Oct-84 19:47:18-PDT,16737;000000000000
Mail-From: LAWS created at 14-Oct-84 19:45:18
Date: Sun 14 Oct 1984 19:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #138
To: AIList@SRI-AI


AIList Digest            Monday, 15 Oct 1984      Volume 2 : Issue 138

Today's Topics:
  Metadiscussion - Citing AIList,
  AI - Definition,
  Linguistics - Mailing List & Sastric Sanskrit & Language Evolution,
  Conference - SCAIS
----------------------------------------------------------------------

Date: 14 Oct 84 19:56:17 EDT
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: AILIST as a source of info....


Many recent AILIST discussions have fascinated me, and I'm sure that
at some point in the near future I'll be using information presented
here for a paper or two.  Just exactly how do I credit an electronic
bboard in a research paper?  And who (e.g., moderator, author of
info, etc.) do I give credit to?
                                        -Allen LUTINS@RU-BLUE


[Certainly the author must be credited.  I am indifferent as to
whether AIList is mentioned since I consider the digest just a
communication channel by which authors circulate their unpublished
ideas.  (You wouldn't cite Ma Bell or your Xerox copier.)  This
viewpoint is intended to avoid copyright difficulties.  On the
other hand, a reference to AIList might help someone look up the
full context of a discussion.  Does any librarian out there know
a good citation form for obscure newsletters, etc., that the
reader could not be expected to track down by name alone?  -- KIL]

------------------------------

Date: 14 Oct 84 14:49:51 EDT
From: McCord @ DCA-EMS
Subject: Model for AI Applications


Since the beginning, some intelligence, albeit explicit and highly
focused, has been built into nearly every program written.  This is
obviously not the "artificial" intelligence we now talk about, market, and
sell.  Surely, to be worthy of the title "artificial" intelligence,
an AI application must exhibit some minimum characteristics such as
a specified level of control over its environment, the ability to learn,
and its transportability or adaptability to related applications.
Has anyone developed a model of an AI application that may be used to
discriminate between "programs" and "artificial" intelligence?

Also, does anyone have any comments on Dr. Frederick Brooks's (of
The Mythical Man-Month fame) pragmatic approach ("Intelligence
Amplification (IA) Is Better Than Artificial Intelligence (AI)")
to AI?

------------------------------

Date: Fri, 12 Oct 84 17:09:29 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: natural language mailing list


        Does anyone know of a mailing list devoted solely to
linguistics/computational linguistics?

douglas stumberger
csnet: des@bostonu
bitnet: csc10304@bostonu

------------------------------

Date: Thu 11 Oct 84 18:15:59-MDT
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Sastric Sanskrit

Coming from India and having learnt a little bit of Sanskrit, let me make a
few comments to add to Rick Briggs's claims.  I do not know for a fact if
Sastric Sanskrit is unambiguous.  In fact, I have not heard of it before.
But its unambiguity seems plausible.

First of all, as to the history of Sanskrit.  It is an Indo-European
language but it has an independent line of development from all the
languages spoken outside the Indian subcontinent, i.e., all its daughters
are spoken, to the best of my knowledge, only in the subcontinent.  Not
only its dhatus but its methodologies have been inherited by its daughters.
Even the Dravidian languages (the other family of languages spoken in the
subcontinent which are not daughters of Sanskrit) have been influenced by
its methodologies.  For example, the first formal grammar of my own mother
tongue, which is not born of Sanskrit, was written in Sanskrit
Panini-style.

Strictly speaking, neither Sanskrit nor its daughters has a fixed word order.
The sophisticated case system makes it possible to communicate without word
order.  The subject and object are identifiable from their own cases
independent of their position in a sentence.  Incidentally, the cases are
merely a convenience.  The prepositions (which become suffixes in Sanskrit
and its daughters) serve the same purpose, though they are more verbose.
However, the role of various words in a sentence is not always
independently identifiable.  This leads to ambiguity rather than
unambiguity.  Kiparsky's example
        "rajna bhikshuna bhavitavyam"
has BOTH the meanings
        "the beggar will have to become the king"
and
        "the king will have to become the beggar"
The latter meaning is normally understood, because it interprets the
sentence in the word order "subject-object-verb" which is the most
frequently used.  This kind of ambiguity is the exception rather than the
standard.  I would say it occurs in not more than 5% of normal
prose.  It is resolved by resorting to the "natural" word order.

Sastric Sanskrit is a subset of normal Sanskrit, i.e., every sentence of
Sastric Sanskrit is also a sentence of normal Sanskrit.  This also means
that Sastric Sanskrit did not evolve naturally on its own, but was the
result of probably hundreds of years of research to eliminate ambiguity in
communication.  It should be possible for the initiated and knowledgeable
to dig up the research that went into the development of this subset.

What seems to be important is whether an unambiguous subset of a language
can be formed by merely imposing rules on how sentences can be formed.  I
am not convinced of that, but I also cannot say it is impossible.
Ancient Indian scholars had a curious mixture of dogma and reason.  One
cannot take their claims at their face value.

If an unambiguous subset of Sanskrit could be developed, it should also be
possible for all the languages.  What is special about Sanskrit is that the
redundancy needed to disambiguate the language could be added in Sanskrit
without substantial loss of convenience.  In English, adding this
redundancy leads to a lot of awkwardness, as Briggs's examples show.

Uday Reddy

------------------------------

Date: 12 Oct 84 09:30 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Language Evolution

This discussion of Sanskrit leads me to ask the question of why
languages have evolved  the way they have.  Why have they moved away
from case?  Generalizing from the only example I know of (Old Norse to
Modern Swedish) I wonder why distinctions that seem useful have
disappeared.
For example, Old Norse had singular, plural, and dual (when two people
were involved).  Why would such a distinction come into a language and
then disappear hundreds of years later?  Why did Sastric Sanskrit die?

[Otto Jespersen (1860-1943), the famous Danish linguist, studied such matters
at a time when classical Greek and Latin were very much in vogue and
modern languages with few cases, genders, tenses, and moods were
considered retrogressive.  He held the opposed view that English and
Chinese were the most advanced languages, and that the superiority
of modern languages stems from seven characteristics:

  1) Shorter forms, easier and faster to speak.  The Gospel of St.
     Matthew contains 39k syllables in Greek, 33k in German, 29k
     in English.

  2) Fewer forms to burden memory.  Gothic habaida, habaides,
     habaidedu, and 12 other forms map to just "had" in English.

  3) Regular formation of words.

  4) Regular syntactic use of words.

  5) Flexible combinations and constructions.  Danish "enten du
     eller jeg har uret" is straightforward, whereas the inflected
     "either you or I am wrong" or "either you are wrong, or I"
     is awkward.

  6) Lack of repetitious concord.  Latin "opera virorum omnium
     bonorum veterum" expresses plurality four times, genitive
     case four times, and masculine gender twice; the English
     "all good old men's works" has no such repetition.

  7) Ambiguity is eliminated through regular word order.

Jespersen designed his own artificial language, Novial, after working
on a modified (and never adopted) form of Esperanto called Ido.

For more information, see Peter Naur's Programming Languages, Natural
Languages, and Mathematics in the December 1975 issue of Communications
of the ACM.  -- KIL]

------------------------------

Date: Sat, 13 Oct 84 13:39:03 PDT
From: Southern California AI Society <scais@UCLA-LOCUS.ARPA>
Subject: Conference - SCAIS

I noticed the announcement of AISNE on AIList.  Since SCAIS is
inspired by AISNE,  it seems appropriate to announce it in
AIList also.   Here goes:

************************************************************************
                     1ST MEETING OF SCAIS

SCAIS -- Southern California Artificial Intelligence Society
         (Pronounced "skies".)

The purpose of SCAIS is to help create an AI community spirit  among  AI
researchers  and research labs in the Southern California area.  (As far
south as San Diego and as far  north  as  Santa  Barbara,  but  probably
concentrated in the greater LA area.)

SCAIS is inspired by AISNE (AI Society of New England).  AISNE meets  at
least once a year, at locations such as Yale, MIT, UMass, Stony Brook,
etc. in the New England area.  (See prior AIList announcement of AISNE.)

Our  first  SCAIS meeting is intended to give everyone an opportunity to
meet other active AI researchers and  graduate  students  in  the  area.
Short  talks  on  research projects will be given by students and AI lab
project leaders, who will describe what AI research is going on  in  the
area.   In  addition,  we  hope  to  generate a list of the names, phone
numbers, net mailing addresses, and research interests of the attendees.
If  our  first SCAIS meeting is successful, future meetings will then be
held on a periodic basis at different sites.

SCAIS  is  intended  for serious AI researchers and graduate AI students
who reside in S.  Calif., who are working  in  the  field  and  who  are
interested  in learning about the research of others in the 'greater' LA
area.  SCAIS is NOT intended as a forum for industrial recruiting or for
interested  on-lookers.   Attendance  at  our  first  SCAIS  meeting  is
expected to be 100-150 people and is by invitation only.

AI researchers in the S.  Calif.  area  can  request  an  invitation  by
contacting:  SCAIS-REQUEST@UCLA-CS.ARPA or SCAIS-REQUEST@UCLA-LOCUS.ARPA
(or ...!ucla-cs!scais-request on uucp).  You should include  your  name,
affiliation, address, net-address, phone number, and research area.
************************************************************************

    (almost complete) AGENDA of 1st SCAIS Conference

(Oct 29, 8:00am-7:00pm, California Room, UCLA Faculty Center)

8:00 - 8:30    Morning OPEN HOUSE at UCLA AI Lab & Demos
8:30 - 8:40    Michael Dyer -- Welcome and Overview of UCLA AI
8:40 - 10:15   SESSION #1

     UCLA  (75 min)
     ==============
     Sergio Alvarado (stu) -- "Comprehension of Editorial Text"
     Uri Zernik (stu)    --  "Adult Language Learning"
     Erik Mueller (stu)  --  "Daydreaming and Story Invention"
     Charlie Dolan (stu)  --  "Reminding and Analogy"
     Judea Pearl -- "Learning Hidden Causes from Raw Data"
     Ingrid Zuckerman (stu) -- "Listener Model for Generation
                             of Meta-Technical Utterances in Math Tutoring"
     Rina Dechter (stu) -- "Mechanical Generation of Heuristics for
                            Constraint-Satisfaction Problems"
     Tulin Mangir -- "Applications of Expert Systems to CAD and CAT of VLSI"
     Vidal -- "Reconfigurable Logic Knowledge Representation and
               Architectures for Compact Expert Systems"

     Aerospace Corp. (20 min)
     ========================
     Steve Crocker  --  "Overview"
     Paul Mazaika -- "False Event Elimination"
     Ann Brindle  --  "Automated Satellite Control"
     John Helly -- "Representational Basis for A Distributed Expert System"
     *Break*  (coffee & danish)  10:15 - 10:30

10:30 - 11:50  SESSION #2

     UC Irvine (60 min)
     ==================
     Pat Langley -- "Overview of UCI AI Research"
     Rogers Hall (stu) -- "Learning in Multiple Knowledge Sources"
     Student (w/ Rick Granger) --  " NEED TITLE "

     IBM (20 min)
     ============
     John Kepler -- "Overview of IBM Scientific Center Activities in AI"
     Gary Silverman -- "The Robotics Project"
     Alexander Hurwitz -- "Intelligent Help for Computer Systems"

11:50 - 1:10   LUNCH  (Sequoia Rooms 1,2,3 in Faculty Center)
1:10 - 2:40    SESSION #3

     USC/ISI (90 min)
     ================
     Kashif Chaudhry (stu) -- "The Advance Robot Programming Project"
     Shari Naberschnig (stu) -- "The Distributed Problem Solving Project"
     Yigal Arens  --   "Natural Language Understanding Research at USC"
     Ram Nevatia  --   "Overview of Computer Vision Research at USC"
     Dan Moldovan --  "Parallel Processing in AI"

     Jack Mostow -- "Machine Learning Research at ISI"
     Bill Mann -- "Natural Language Generation Research at ISI"
     Norm Sondheimer -- "Natural Language Interface Research at ISI"
     Tom Kaczmarek -- "Intelligent Computing Environment Research at ISI"
     Bob Neches -- "Expert Systems Research at ISI"
     Bob Balzer -- "Specification-Based Programming Research at ISI"
     *Break*        2:40 - 2:55    (coffee & punch)

3:00 - 4:20    SESSION #4

     Hughes AI Center (20 min)
     =========================
     D. Y. Tseng -- "Overview of HAIC Activities"

     JPL (20 min)
     ====================
     Steven Vere -- "Temporal Planning"
     Armin Haken -- "Procedural Knowledge Sponge"
     Len Friedman   "Diagnostics and Error Recovery"

     TRW (20 min)
     ============
     Ed Taylor -- "AI at TRW"

     Rand Corp (20 min)
     ==================
     Phil Klahr     --  "Overview of Rand's AI Research"
                        "AI in Simulation"
     Henry Sowizral --  "Time Warp"
                        "ROSIE: An Expert System Language"
     Don Waterman   --  "Explanation for Expert Systems"
                        "Legal Reasoning"
     Randy Steeb    --  "Cooperative Intelligent Systems"
     *Break*        4:20 - 4:40  (coffee & punch)

4:40 - 6:00    SESSION #5

     UC San Diego  (20 min)
     ======================
       Paul Smolensky -- "Parallel Computation:  The Brain and AI"
       Paul Munro -- " Self-organization and the Single Unit:  Learning
                       at the Neuronal Level"

     SDC  (20 min)
     =============
       Dan Kogan -- "Intelligent Access to Distributed Data Management"
       Robert MacGregor -- "Logic-Based Knowledge Management System"
       Beatrice T. Oshika -- "User Interfaces:  Speech and Nat. Lang."

     Cal State, Fullerton (10 min)
     =============================
     Arthur Graesser -- "Symbolic Procedures of Question Answering"

     Rockwell Science Center (5 min)
     ===============================
     William Pardee -- "A Heuristic Factory Scheduling System"

     General Research Corp (5 min)
     =============================
     Jim Kornell -- "Analogical Inferencing"

     Northrup  (5 min)
     =================
     Steve Lukasis   -- "NEED TITLE"

     Aerojet (5 min)
     ===============
     Ben Peake  --  "NEED TITLE"

     Litton (5 min)
     ==============
     speaker  -- "NEED TITLE"

     Logicon (5 min)
     ===============
     John Burge -- "Knowledge Engineering at Logicon"


6:00 - 7:00    GENERAL MEETING OF SCAIS MEMBERS

     SCAIS Panel & General Meeting
          possible themes:
               * Assessment - Where from here?
               * State of AI in S. Calif.
               * Organization of SCAIS
               * Future Hosting
               * Univ - Industry connections
               * Software - hardware community sharing
               * Arrival of IJCAI-85 in LA
               * LA AI Consortium/Institute ???

7:00 - 7:30    Evening OPEN HOUSE at UCLA AI Lab & Demos
               (3677 Boelter Hall)

> 7:30pm       Interested parties may form groups and dine
               at various restaurants in Westwood Village

------------------------------

End of AIList Digest
********************
17-Oct-84 11:02:53-PDT,18973;000000000000
Mail-From: LAWS created at 17-Oct-84 10:59:47
Date: Wed 17 Oct 1984 10:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #139
To: AIList@SRI-AI


AIList Digest           Wednesday, 17 Oct 1984    Volume 2 : Issue 139

Today's Topics:
  Seminars - Monotonic Processes in Language Processing
    & Qualitative Analysis of MOS Circuits
    & Knowledge Retrieval as Specialized Inference
    & Juno Graphics Constraint Language
    & PECAN Program Development System
    & Aesthetic Experience,
  Symposium - Complexity of Approximately Solved Problems,
  Course - Form and Meaning of English Intonation
----------------------------------------------------------------------

Date: Wed, 10 Oct 84 16:09:03 pdt
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Seminar - Monotonic Processes in Language Processing

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984

           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, October 16, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Martin Kay, Xerox Palo Alto Research Center;
                Center  for the Study of Language and Infor-
                mation, Stanford University

TITLE:          Monotonic Processes in Language Processing

ABSTRACT:       Computation  proceeds  by  manipulating  the
                associations  between  (variable)  names and
                values  in  accordance  with  a  program  of
                rules.  If an association, once established,
                is never changed,  then  the  process  as  a
                whole is monotonic.  More intuitively, mono-
                tonic processes can add arbitrary amounts of
                detail  to  an  existing  picture so long as
                they never change  what  is  already  there.
                Monotonic  processes underlie several recent
                proposals in linguistic theory  (e.g.  GPSG,
                LFG  and  autosegmental  phonology)  and  in
                artificial intelligence (logic programming).
                I  shall  argue  for seeking monotonic solu-
                tions to linguistic problems wherever possi-
                ble  while  rejecting  some  arguments  fre-
                quently made for the policy.
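
As a rough illustration of the central idea (mine, not Kay's): a monotonic
process may repeat or refine an established association, but never revise
one -- much as a unification-based grammar adds consistent feature values.
A minimal sketch in Python, with an invented linguistic feature:

    # A binding, once established, may be restated consistently
    # but never changed -- the monotonicity condition.
    class MonotonicEnv:
        def __init__(self):
            self._bindings = {}

        def bind(self, name, value):
            old = self._bindings.get(name)
            if old is not None and old != value:
                raise ValueError('%s is already bound to %r' % (name, old))
            self._bindings[name] = value

    env = MonotonicEnv()
    env.bind('number', 'singular')
    env.bind('number', 'singular')      # adding the same detail is fine
    try:
        env.bind('number', 'plural')    # would change the existing picture
    except ValueError as e:
        print('rejected:', e)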

------------------------------

Date: 15 Oct 1984  11:17 EDT (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Qualitative Analysis of MOS Circuits

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday   October 17, 1984    4:00pm  8th floor playroom

Brian C. Williams
Qualitative Analysis of MOS Circuits

With the push towards sub-micron technology, transistor models have
become increasingly complex.  The number of components in
integrated circuits has forced designers' efforts and skills towards
higher levels of design.  This has created a gap between design
expertise and the performance demands increasingly imposed by the
technology.  To alleviate this problem, software tools must be developed
that provide the designer with expert advice on circuit performance and
design.  This requires a theory that links the intuitions of an expert
circuit analyst with the corresponding principles of formal theory (i.e.,
algebra, calculus, feedback analysis, network theory, and
electrodynamics), and that makes each underlying assumption explicit.

Temporal Qualitative Analysis is a technique for analyzing the
qualitative large signal behavior of MOS circuits that straddle the line
between the digital and analog domains.
Temporal Qualitative Analysis is based on the
following four components:  First, a qualitative representation is
composed of a set of open regions separated by boundaries.  These
boundaries are chosen at the appropriate level of detail for the
analysis.  This concept is used in modeling time, space, circuit state
variables, and device operating regions.  Second, constraints between
circuit state variables are established by circuit theory.  At a finer
time scale, the designer's intuition of electrodynamics is used to
impose a causal relationship among these constraints.  Third, large
signal behavior is modeled by Transition Analysis, using continuity and
theorems of calculus to determine how quantities pass between
regions over time.  Finally, Feedback Analysis uses
knowledge about the structure of equations and the properties of
structure classes to resolve ambiguities.
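
As a rough illustration of the first and third components (my sketch, not
Williams' actual representation): a quantity space is a set of named
regions separated by boundaries, and continuity restricts a quantity to
moving only between adjacent regions.  The region names are invented.

    # Hypothetical quantity space for a transistor gate voltage.
    REGIONS = ['below threshold', 'at threshold', 'above threshold']

    def legal_transitions(region):
        i = REGIONS.index(region)
        # a continuous quantity can only move to an adjacent region
        return [REGIONS[j] for j in (i - 1, i + 1)
                if 0 <= j < len(REGIONS)]

    print(legal_transitions('below threshold'))   # -> ['at threshold']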

------------------------------

Date: 1 Oct 1984 13:27-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Knowledge Retrieval as Specialized Inference

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


On Thursday, October 11th, at 10:30 a.m., Alan Frisch, from the
Cognitive Studies Programme, University of Sussex, Brighton, England
and from the Department of Computer Science,  University of Rochester,
Rochester, New York, will speak at the 3rd floor large conference room
at BBN, 10 Moulton Street in Cambridge.

         Knowledge Retrieval as Specialized Inference

  Artificial intelligence reasoning systems commonly employ a
  knowledge  base module that stores a set of facts expressed
  in a representation  language  and provides  facilities  to
  retrieve  these  facts.  Though  there  has  been a growing
  concern  for  formalization  in  the  study   of  knowledge
  representation,  little  has  been  done  to  formalize the
  retrieval process.  This research remedies the situation in
  its  study  of  retrieval  from  abstract  specification to
  implementation.

  Viewing retrieval as a highly specialized inference  process
  that attempts to derive a queried fact from the set of facts
  in the knowledge base enables techniques of formal logic  to
  be  used  in  abstract  specifications.   This talk develops
  alternative specifications for an idealized version  of  the
  retriever incorporated in the ARGOT natural language system,
  shows how  the  specifications  capture  certain  intuitions
  about  retrieval,  and uses the specifications to prove that
  the retriever  has  certain  properties.   A  discussion  of
  implementation  issues  considers an inference method useful
  in both retrieval and logic programming.

------------------------------

Date: 15 October 1984 1240-EDT
From: Staci Quackenbush at CMU-CS-A
Subject: Seminar - Juno Graphics Constraint Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Name:   Greg Nelson
Date:   October 22, 1984
Time:   3:30 - 4:30 p.m.
Place:  WeH 5409
Title:  "An Overview of Juno"


Connect a computer to a marking engine, and you have a drawing instrument
of unprecedented precision and versatility.  Already some graphics
artists have given up their T-squares and pens for the new world of raster
displays, pointing devices, and laser printers.  But they face a serious
difficulty: to exploit the power and generality of the computer requires
programming.  We can't remove this difficulty, but we can smooth it by
programming in the language of the geometry of images rather than in the
low-level language of some particular representation for images.

These considerations led to the design of Juno, an interactive and
programmable graphics system.  The first basic principle of Juno's design
is that geometric constraints be the mechanism for specifying locations.
For example, a Juno program might specify that points A, B, and C be
collinear and that the distance from A to B equal the distance from
B to C; the interpreter will solve these constraints by numerical methods.
The second principle of the design is that the text of a Juno program be
responsive to the interactive editing of the image that the program produces.
For example, to create a program to draw an equilateral triangle, you don't
type a word: you draw a triangle on the display, constrain it to be
equilateral, and command Juno to extract the underlying program.

------------------------------

Date: Tue 16 Oct 84 09:46:34-PDT
From: Susan Gere <M.SUSAN@SU-SIERRA.ARPA>
Subject: Seminar - PECAN Program Development System

        EE380/CS310 Computer Systems Laboratory Seminar

Time:  Wednesday, October 17,  4:15 p.m.
Place:  Terman Auditorium

Title:  PECAN: Program Development Systems that Support Multiple Views

Speaker:  Prof. Steven Reiss,  C.S.D. Brown University


This talk describes the PECAN family of program development systems.
PECAN is a generator that is based on simple description of the
underlying language and its semantics.  Program development systems
generated by PECAN support multiple views of the user's program.  The
views can be representations of the program, its semantics and its
execution.  The current program views include a syntax-directed
editor, a Nassi-Schneiderman flow graph, and a declaration editor.
The current semantic views include expression trees, data type
diagrams, flow graphs, and the symbol table.  Execution views include
the interpreter control and a stack and data view.  PECAN is designed
to make effective use of powerful personal machines with
high-resolution graphics displays, and is currently implemented on
APOLLO workstations.
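
As a rough sketch of the multiple-view idea (assuming nothing about
PECAN's real internals): one underlying program representation drives
several derived views, each of which is refreshed whenever the program
is edited.

    class Program:
        def __init__(self, views):
            self.views = views
            self.text = ''

        def edit(self, new_text):
            # every edit re-derives each registered view
            self.text = new_text
            for view in self.views:
                view.refresh(self)

    class TokenView:
        def refresh(self, program):
            print('tokens:', program.text.split())

    class SymbolView:
        def refresh(self, program):
            print('symbols:', sorted(set(program.text.split())))

    p = Program([TokenView(), SymbolView()])
    p.edit('x := x + y')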

------------------------------

Date: Tue, 16 Oct 84 16:56:22 pdt
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Aesthetic Experience

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, October 23, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Thomas  G.  Bever,  Psychology   Department,
                Columbia University

TITLE:          The Psychological basis of aesthetic experi-
                ence:  implications for linguistic nativism

ABSTRACT:       We define the notion of Aesthetic Experience
                as   a   formal   relation   between  mental
                representations:   an  aesthetic  experience
                involves  at least two conflicting represen-
                tations that are  resolved  by  accessing  a
                third  representation.   Accessing the third
                representation releases  the  same  kind  of
                emotional  energy as the 'aha' elation asso-
                ciated with discovering the  solution  to  a
                problem. We show how this definition applies
                to  various  artforms,  music,   literature,
                dance.   The  fundamental aesthetic relation
                is similar to the  mental  activities  of  a
                child  during  normal cognitive development.
                These considerations explain the function of
                aesthetic  experience:  it elicits in adult-
                hood the characteristic mental  activity  of
                normal childhood.

                The fundamental activity revealed by consid-
                ering the formal nature of aesthetic experi-
                ence involves developing  and  interrelating
                mental  representations.   If  we  take this
                capacity  to  be  innate  (which  we  surely
                must),   the question then arises whether we
                can account for the phenomena that are  usu-
                ally argued to show the unique innateness of
                language as a mental organ.  These phenomena
                include  the  emergence of a psychologically
                real grammar,  a critical  period,  cerebral
                asymmetries.     More    formal   linguistic
                properties may be accounted for as partially
                uncaused (necessary) and partially caused by
                general  properties  of  animal  mind.   The
                aspects  of  language  that may remain unex-
                plained (and therefore non-trivially innate)
                are  the  forms of the levels of representa-
                tion.

------------------------------

Date: Mon 15 Oct 84 11:32:31-EDT
From: Delores Ng <NG@COLUMBIA-20.ARPA>
Subject: Symposium - Complexity of Approximately Solved Problems

       SYMPOSIUM ON THE COMPLEXITY OF APPROXIMATELY SOLVED PROBLEMS


                             APRIL 17-19, 1985


                        Computer Science Department
                            Columbia University
                            New York, NY  10027


SUPPORT:  This symposium is supported by a grant from the System Development
Foundation.

SCOPE:  This multidisciplinary symposium focuses on problems which are
approximately solved and for which optimal algorithms or complexity results
are available.  Of particular interest are distributed systems, where
limitations on information flow can cause uncertainty in the solution
of problems.  The following is a partial list of topics: distributed
computation, approximate solution of hard problems, applied mathematics,
signal processing, numerical analysis, computer vision, remote sensing,
fusion of information, prediction, estimation, control, decision theory,
mathematical economics, optimal recovery, seismology, information theory,
design of experiments, stochastic scheduling.

INVITED SPEAKERS: The following is a list of invited speakers.

L. BLUM, Mills College                  C.H. PAPADIMITRIOU, Stanford University
J. HALPERN, IBM                         J. PEARL, UCLA
L. HURWICZ, University of Minnesota     M. RABIN, Harvard University and
                                                  Hebrew University
D. JOHNSON, AT&T - Bell Laboratories    S. REITER, Northwestern University
J. KADANE, Carnegie-Mellon University   A. SCHONHAGE, University of Tubingen
R. KARP, Berkeley                       K. SIKORSKI, Columbia University
S. KIRKPATRICK, IBM                     S. SMALE, Berkeley
K. KO, University of Houston            J.F. TRAUB, Columbia University
H.T. KUNG, Carnegie-Mellon University   G. WASILKOWSKI, Columbia University and
                                                        University of Warsaw
D. LEE, Columbia University             A.G. WERSCHULZ, Fordham University
M. MILANESE, Politecnico di Torino      H. WOZNIAKOWSKI, Columbia University
                                                     and University of Warsaw


CONTRIBUTED PAPERS:  All appropriate papers for which abstracts are contributed
will be scheduled.  To contribute a paper send title, author, affiliation, and
abstract on one side of a single 8 1/2 by 11 sheet of paper.


         TITLES AND ABSTRACTS MUST BE RECEIVED BY JANUARY 15, 1985


PUBLICATION:  All invited papers will appear in a new journal, JOURNAL OF
COMPLEXITY, published by Academic Press, in fall 1985.

REGISTRATION:  The symposium will be held in the Kellogg Conference Center on
the Fifteenth Floor of the International Affairs Building, 118th Street and
Amsterdam Avenue.  The conference schedule and paper abstracts will be
available at the registration desk.  Registration will start at 9:00 a.m.
There is no registration charge.

FOR FURTHER INFORMATION:  The program schedule for invited and contributed
papers will be mailed by about March 15 only to those responding to this
account with the information requested below.  If you have any questions,
contact the Computer Science Department, Columbia University, or call
(212) 280-2736.


To help us plan for the symposium please reply to this account with the
following information.


Name:                                   Affiliation:

Address:


 ( )  I will attend the Complexity Symposium.
 ( )  I may contribute a paper.
 ( )  I may not attend, but please send program.

------------------------------

Date: Mon 15 Oct 84 22:43:20-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Course - Form and Meaning of English Intonation


                        COURSE ANNOUNCEMENT

        Mark Liberman and Janet Pierrehumbert of AT&T Bell Laboratories
will give a course sponsored by the Linguistics Department and the Center
for the Study of Language and Information entitled:


                FORM AND MEANING OF ENGLISH INTONATION


Place: Seminar Room, CSLI, Stanford University
Dates: Monday 5 November - Saturday 17 November
Hours: MWF      16:30-18:00
       TTh      16:30-18:00 & 19:30-21:30
       Sat      10:00-12:30 & 14:00-17:00


A brief description follows:

(1) What

Participants will learn to describe and interpret the stress, tune and
phrasing of English utterances, using a set of systematically arranged
examples, given in the form of transcripts, tapes and pitch contours.
The class will also make use of an interactive real-time pitch detection
and display device.

We will provide a theory of English intonation patterns and their
phonetic interpretation, in the form of an algorithm for generating
synthetic F0 contours from underlying phonological representations.
We will investigate the relation of these patterns to the form, meaning
and use of the spoken sentences that bear them, paying special attention to
intonational focus and intonational phrasing.

Problem sets will develop or polish participants' skills in the exploration
of experimental results and the design of experiments.

(2) Who

No particular background knowledge will be presupposed, although
participants will have to acquire (if they do not already have) at least
a passive grasp of many technical terms and concepts. Thus, it will
be helpful to have had experience (for instance) with at least some of
the terms "hertz" (not the car company), "fricative," "copula," "lambda
abstraction," "gradient vector." Several kinds of people, from engineers
through linguists and psychologists to philosophers, should find the course's
contents interesting. However, we will angle the course towards participants
who want to study the meaning and use of intonation patterns, and we hope
that a significant fraction of the course will turn into a workshop on this
topic.

(3) Registration

Pre-registration is not mandatory, but if you expect to attend
it would be helpful if you would let Bill Poser (poser@su-csli) know.

Stanford students wishing to take the course for credit may enroll
for a directed reading with Paul Kiparsky or Bill Poser.

------------------------------

End of AIList Digest
********************
17-Oct-84 22:43:05-PDT,16535;000000000001
Mail-From: LAWS created at 17-Oct-84 22:39:17
Date: Wed 17 Oct 1984 22:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #140
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Oct 1984     Volume 2 : Issue 140

Today's Topics:
  Applications - Agriculture & Biofeedback,
  AI Tools - InterLisp-D DBMS & OPS5 & OPS5E & Verac & Benchmarks,
  Law - Liability of Software Vendors,
  Metadiscussion - List Citations
----------------------------------------------------------------------

Date: Tue, 16 Oct 84 11:45:49 cdt
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: AI applications in agriculture

I would like to know of any work in applying AI techniques to improve
agricultural production.  Tou at Florida and Michalski at Illinois had
some things going; what is the status of these projects?  Is there
anything else going on?

Thanks in advance for any help you can give me.

Walt Rudd
Department of Computer Science
298 Coates Hall
Louisiana State University
Baton Rouge, Louisiana 70803
rudd@lsu

------------------------------

Date: 3-Oct-84 23:53 PDT
From: William Daul / Augmentation Systems Div. / McDnD <WBD.TYM@OFFICE-1.ARPA>
Subject: PC <--> Biofeedback Instrument Link (info wanted)

A friend has asked me to see if I can uncover some information for him.
So...here goes...

   He wants to connect an EEG biofeedback instrument to a personal computer
   (IBM or APPLE).  He hasn't decided on which.

   1.  What are the necessary components of such a system (hard disk, disk
   controller, etc)?

   2.  He wants to get a spectrum analysis (FFT) of the recordings, both real
   time and compressed.  Does anyone know of existing software he could use?

   Emre Konuk
   MRI
   555 Middlefield Rd.
   Palo Alto, CA.  94301
   Tel: 415-321 3055 -- wk
        415-856 0872 -- hm

I suspect he would like to know if anyone knows of existing groups doing similar
work.  If you have information, you can send it to me "electronically" and I
will pass it on to him.  Thanks,  --Bi//  (WBD.TYM@OFFICE-2.ARPA)
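
The core of the FFT question is standard regardless of instrument.  A
minimal sketch in Python (the sampling rate and test signal are invented
stand-ins, not tied to any particular EEG hardware):

    import numpy as np

    fs = 128.0                            # assumed samples per second
    t = np.arange(0, 4.0, 1.0 / fs)       # four seconds of data
    signal = np.sin(2 * np.pi * 10 * t)   # stand-in for an alpha rhythm

    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    # sum spectral power over a band of interest
    alpha = power[(freqs >= 8) & (freqs <= 13)].sum()
    print('power in the 8-13 Hz alpha band:', alpha)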

------------------------------

Date: 15 Oct 84 16:55:43 PDT (Monday)
From: Cornish.PA@XEROX.ARPA
Subject: InterLisp-D based Database Management Systems

I would like information on any Database Management Systems that are
implemented in InterLisp-D.  More generally, I'd like literature
pointers to the issues of Database Management in AI.

Thank you,

Jan Cornish

------------------------------

Date: 14 Oct 1984 21:00-EST
From: George.Wood@CMU-CS-G.ARPA
Subject: Another OPS5 Version

There is also a Common Lisp version of OPS5, running on VAX/VMS Common Lisp,
PERQ (Spice) Lisp, Data General's Common Lisp for the MV 4000/8000/10000
series, and Symbolics 3600 in Common Lisp mode. This version was derived
from Forgy's Franz Lisp implementation by George Wood (GDW@CMU-CS-PS1)
with help from Dario Giuse (Dario.Giuse@CMU-CS-SPICE) on the PERQ
version and standardization.

Sorry this missed the original call for information.

------------------------------

Date: 16 Oct 84 14:35 PDT
From: Tom Perrine <tom@LOGICON.ARPA>
Subject: OPS5E and Verac

Verac has moved. The new address is:
        Verac
        9605 Scranton Rd. Suite 500
        San Diego, CA 92121
        Attn: Pete Paine
        (619)457-5550

I believe that you must already have OPS5 before you can get OPS5E,
which is OPS5E(xtended).  It runs on all of the Symbolics machines, and
(now) also the TI Explorer.

------------------------------

Date: 15 October 1984 18:32-EDT
From: George J. Carrette <GJC @ MIT-MC>
Subject: LMI, TI, and Lisp Benchmarks.

Note: Comments following are due to George Carrette and Ken Sinclair,
      hackers at LMI, mostly covering specific facts which have been
      disclosed in previous announcements in "the trades."

* As far as benchmarks are concerned we would suggest that people
  at least wait until RPG publishes his results, which we consider to
  be the most serious effort to honestly represent the speed capabilities
  of the various machines.

* TI and LMI OEM arrangements.
  (1) LMI buys NuMachines on an OEM basis from TI. To these LMI adds
      the LAMBDA processor, software to support multiple LAMBDA and
      68000 Unix processors to run together on the NuBus, sharing
      disks, ethernet, and other devices.
  (2) LMI has a license to build NuMachines.
  (3) It was a technology transfer agreement (license) between LMI and TI that
      led to the transfer of technology to TI which was the basis of
      the Explorer.
  (4) LMI has an OEM agreement to purchase Explorers from TI.
      To these we will add our own microcode, optimizing compiler,
      and other products to be announced.


[Thank you very much for the reliable information.  I'm afraid most of
us don't keep up with the trade press, and messages like yours are a
great help.

A reader providing benchmarks a year ago (some of RPG's old benchmarks,
in fact) was chastised for not waiting for RPG's report.  At the time,
I had never heard of RPG; I assume many other people still have not.
If he hurries he may be able to benchmark the machines before the
good citizens of Palo Alto start using them for doorstops.  Meanwhile,
I see no harm in someone publishing timing statistics as long as he
offers to provide the code involved.

One further note: the benchmarks recently published in AIList were
originally circulated privately.  It was at my request that they
were made available to the list.  I thank Dr. Pentland for letting
me pass them along, and I regret any inconvenience he may have had
as a result.  -- KIL]

------------------------------

Date: Fri, 12 Oct 84 13:26:18 EDT
From: Stephen Miklos <Miklos@YALE.ARPA>
Subject: Liability of software vendors


>     "Maybe I am being naive or something, but I don't see why
>     AI software should
>     be different from any other when it comes to the liability of the vendor.
>     My attorney has written me a boilerplate contract that contains a clause
>     something to the effect that "vendor is not liable for third-party or
>     consequential damages that result from the use of the product."
>     Doesn't that take care of the problem?  If not, maybe I had better find
>     an expert attorney system."

Afraid not. Product liability can jump over the middleman (here the
doctor) and is not a contractually-based liability, thus contract terms
between the software vendor and the doctor or hospital cannot prevent
the liability from attaching. If the aggrieved party sued the doctor,
the doctor could not turn around and sue the software vendor (due to
the limitation of liability clause given above) but the aggrieved party
could sue the software vendor directly and avoid the contract
limitation (since he never signed any contract with the vendor).

  So much for standing to sue. As far as actual liability is concerned,
it becomes dicey.  Products liability relies on a product, used in the
normal way it is intended to be used, causing some kind of
injury. It seems to me that the cause of the injury is the doctor's
reliance on the software, and therefore the doctor is the "proximate
cause." If, however, the particular software product becomes widely
used by doctors, the causation seems to shift. A reason for this might
be that a single doctor trying out a new piece of technology is
responsible for taking greater care to make sure it works than is a
doctor who is doing what is accepted in the medical community. For
instance, a medical malpractice charge can be avoided by proving that
all the doctor's actions were such as would be recommended by the
medical community in touch with the state of the art.

So, an experimental medical program ought to be safe--the doctor is
the guilty party for fooling around with experimental stuff while
treating a patient (at least without getting a waiver). But an
established program that has a deeply hidden bug in it is the stuff
plaintiffs' fortunes are made on.

By the way, you are not naive in assuming that an AI program will not
be treated differently by the courts than a regular program. But what
the AI program is trying to do--make judgments, diagnose illnesses, god
knows what all else--will introduce the risk of injury. No one is
going to be killed by a defective copy of Visi-calc.

****Disclaimer****--> I got my law degree back in '79, but I am not
now, and never have been, a practising attorney in any jurisdiction.
(I did pass the Connecticut Bar Exam.) These remarks are not to be
construed as legal advice, and should not be relied on as such by
anyone. These remarks are also not necessarily the opinions of my
employer, or of Mario Cuomo, whom I have never met.

                                  Stephen J. Miklos
                                  Cognitive Systems
                                  New Haven, CT

------------------------------

Date: Mon 15 Oct 84 08:48:26-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: AI List--Crediting Ideas From AI List

My first reaction to the question about how to cite something from AI List
is that it is an organized form of communication.  That is, there are dates,
volumes, numbers, an electronic place etc.  To me, this is what distinguishes
it from "just a communication channel" like the telephone or the xerox
copier.  I view AI List much closer to the journals but in electronic format.
Therefore if I were to cite something from AI List, I would use the format
for journal articles: author, possibly topic for title of comment, AI List
for title; the number, volume, and date of the list; and one additional
item, the electronic address.  If these lists are going to be kept and
can be looked up and referred to, I would recommend as complete a citation
as possible.

If AI List is viewed as more closely related to informal communications between
researchers, then the format would be that which one uses when referring
to a conversation or personal letter.  However, to me that would indicate
that another person would not have access to the primary discussion.

Harry Llull, Mathematical and Computer Sciences Library, Stanford University.

------------------------------

Date: 15-Oct-84 14:10 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2.ARPA>
Subject: Re: AILIST as a source of info....

From: Allen <Lutins@RU-BLUE.ARPA>

  Many recent AILIST discussions have fascinated me, and I'm sure that at some
  point in the near future I'll be using information presented here for a paper
  or two.  Just exactly how do I credit an electronic bboard in a research
  paper?  And who (i.e. moderator, author of info, etc.) do I give credit to?

This reminds me of Ithiel de Sola Pool's lament in note 8 to a paragraph in his
chapter on electronic publishing in Technologies of Freedom (Belknap Harvard
1983):

   "... The character of electronic publishing is illustrated by the problem of
   citing the information in this paragraph, which came from these interest
   group exchanges themselves.  Shall I cite the Arpanet list as from Zellich
   at Office-3?"

I am NOT an expert on obscure citations, so I can freely throw out the
following suggestion using Allen Lutins' original query for an
example.  "12345" would be the message ID if any had been provided:

   Lutins, Allen, "AILIST as a source of info...." message 12345 of 14 Oct
   1984 19:56 EDT, Lutins@RU-BLUE.ARPA or AIList Digest, V2 #138, 15 Oct 1984,
   AIList@SRI-AI.ARPA.

 -- kirk

[Alas, the title of a message is not a good identifier.  Many of the
messages in the AIList mailbox have meaningless titles (e.g., Re:
AIList Vol. 2, No. 136) or titles appropriate to some other bboard.
Some even have no titles.  I commonly supply another title as a service
to readers and as aid to my own sorting of the messages.  The title
sent out to Arpanet readers may thus differ from the title Usenet
readers see before I get the messages.  -- KIL]

------------------------------

Date: 15 October 1984 2252-PDT (Monday)
From: bannon@nprdc (Liam Bannon (UCSD Institute for Cognitive Science))
Reply-to: bannon <sdcsla!bannon@nprdc>
Subject: citing information on electronic newsboards

Allen Lutins' query about how to cite information obtained from AIList
interests me, as I have confronted this issue recently. I sent out a
query on netnews on "computer-mediated social interaction" (it even
got on this List) and received a number of interesting replies. I just
sent out a note on the "results" to net.followup, including quotations
from several msgs sent to me. I don't identify authors explicitly,
partly because of requests for anonymity. (I have however privately
acknowledged the contributions, and certainly do not try to pass them
off as being my own work.) I think this is ok for a net reply, but as
I am writing a large paper on the topic, I have decided to explicitly
ask all the people that I quote  a) for permission to quote them, and
b) for permission to include their names with the quotes.

As to citing AIList, or net.general, or whatever, some of the msgs
sent to me were also broadcast to a newsgroup, others
were sent privately over the net to me, so I am unsure how to
cite them.  It is an interesting issue though, as if credit is
not given properly for ideas that first appeared on the net, then
there is a danger that people will be reluctant to share ideas on
the net until after "official" publication, thus destroying the
vitality of the net. I'll go ask some librarians to see if they
have any thoughts. I would be interested in other people's opinions
on the issue.
-liam bannon (bannon@nprdc)

------------------------------

Date: Tue, 16 Oct 1984  14:08 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: AILIST as a source of info....


    [Certainly the author must be credited.  ... ]

I'm not a librarian but have had some experience in citing obscure
references.  I think it can be cited just like a newsletter is cited;
after all, it is a newsletter: citing author, title, newsletter name,
Vol., and No.; maybe method of publication (ARPANET).  It is a form of
publication, though informal, just like a newsletter.  As for
copyright, I don't see that there is any problem since none of the
authors I've seen have ever copyrighted their material.  I'm assuming
it's fair game for copying, but that scientific (or literary) protocol
would oblige us to credit authors.

Fanya


[The welcome message I send out to each new subscriber states:

  List items should be considered unrefereed working papers, and
  opinions to be those of the author and not of any organization.
  Copies of list items should credit the original author, not
  necessarily the AIList.  The list does not assume copyright, nor does
  it accept any liability arising from remailing of submitted material.

The phrase "working papers" (which is also used by the SIGART newsletter)
is intended to mean that the author is not ready to officially publish
the material and thus is not surrendering copyright.  This might not
hold up in court, but it does establish the context in which people have
been submitting their material.

I have not been as strict as some list moderators in protecting authors
against unauthorized copying.  (The Phil-Sci list is/was particularly
strict about this.)  I have treated AIList as just another bboard that
happens to have a distributed readership.  I have forwarded items to
AIList from university bboards (as well as physical bboards), and I have
no objection to similar copying in return.  I would draw the line at
some major journal or copyrighted book quoting directly from the list
without at least asking the readership whether anyone objected to the
copying.  As I do not hold copyright, however, it really makes no
difference where I draw the line.  If someone copies material and the
author sues, the resolution will be up to a judge.  All that I can do
is to clarify the intention that should be ascribed to submitters in
the absence of other declarations.  -- KIL]

------------------------------

End of AIList Digest
********************
18-Oct-84 10:36:00-PDT,15673;000000000001
Mail-From: LAWS created at 18-Oct-84 10:33:29
Date: Thu 18 Oct 1984 10:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #141
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Oct 1984     Volume 2 : Issue 141

Today's Topics:
  LISP - Common Lisp Flavors,
  AI Tools - OPS5 & Benchmarks,
  Linguistics - Language Evolution & Sastric Sanskrit & Man-Machine Language,
  AI - The Two Cultures
----------------------------------------------------------------------

Date: Thursday, 18 Oct 1984 06:11:24-PDT
From: michon%closus.DEC@decwrl.ARPA  (Brian Michon  DTN: 283-7695 FPO/A-3)
Subject: Common Lisp Flavors

Is there a flavor package for Common Lisp yet?

------------------------------

Date: 18 Oct 84 02:26 PDT
From: JonL.pa@XEROX.ARPA
Subject: OPS5 & Benchmarks

Two points, inspired by issue #140:

1) Xerox has a "LispUsers" version of OPS5, which is an unsupported
transliteration into Interlisp-D of the public Franz version of a year
or so ago.  As far as I know, this version is also in the public
domain.  [Amos Barzilay and I did the translation "in a day or so",
but we have no interest in further debugging/supporting it.]


2)   Richard Gabriel is out of the country at the moment; but I'd like
to take a paragraph or two to defend his benchmarking project, and
report on what I witnessed at the two panel discussion sessions it
sponsored -- one at AAAI 83 and the other at AAAI 84.  The latter was
attended by about 750+ persons (dwindling down to about 300+ in the
closing hours!).  In 1983, no specific timing results were released,
partly because many of the machines under consideration were undergoing
a very rapid rate of development; in 1984, the audience got numbers
galore, more perhaps than they ever wanted to hear.  I suspect that the
TI Explorer is also currently undergoing rapid development, and numbers
taken today may well be invalid tomorrow (Pentland mentioned that).
     The point stressed over and over at the two panel sessions is that
most of these benchmarks were picked to monitor some very specific facet
of Lisp performance, and thus no single number could adequately compare
two machines.  In the question/answer session of 1983, someone tried to
cajole some such simplistic ratio out of Dr. Gabriel, and his reply is
worth reiterating: "Well, I'll tell you -- I have two machines here, and
on one of the benchmarks, they ran at the same speed; but on another
one, there was a factor of 13 difference between them.  So, now, which
number do you want?  One, or Thirteen?"
     One must also note that many of the more important facets for
personal workstations were ignored -- primarily, I think, because it's so
hard to figure out a meaningful statistic to monitor for them, and
partly because I'm sure Dick wanted to limit somewhat the scope of his
project.  How does paging figure into the numbers?  if paging is
factored out, then what do the numbers mean for a user who is frequently
swapping?  What about local area network access to shared facilities?
What about the effects of GC?  I don't know anyone who would feel
comfortable with someone else's proposed mixture of "facets" into a
Whetstone kind of benchmark; it's entirely possible that the
variety of facet mixtures found in Lisp usage is much greater than that
found in Fortran usage.  [Nevertheless, I seem to remember that the
several facets reported upon by Pentland are at the core of almost any
Lisp (or, rather, ZetaLisp-like Lisp) -- function call, message passing,
and Flavor creation -- so he's not entirely off the wall.]
     In summary, I'd say that both manufacturers and discerning buyers
have benefited from the discussions brought about by the Lisp timings
project; the delay on publication of the (voluminous!) numbers has had
the good effect of reminding even those who don't want to be reminded
that *** a single number simply will not do ***, and that "the numbers",
without an understanding analysis, are meaningless.  Several of the
manufacturers' representatives even admitted during the 1984 panel
sessions that their own priorities had been skewed by monitoring facets
involved in the Lisp system itself, and that seeing the RPG benchmarks
as "user" rather than "system" programs gave them a fresh look at the
areas that needed performance enhancements.
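
A toy illustration of the point (invented numbers: the 1.0 and 13.0
below merely echo the anecdote above, and *ratios* is a hypothetical
name of my own):

    ;; Hypothetical per-benchmark speed ratios of machine A to machine B.
    (defvar *ratios* '(1.0 0.8 2.5 13.0 1.2))

    ;; A single "average" hides the factor-of-13 spread entirely.
    (defun mean (xs) (/ (reduce #'+ xs) (length xs)))

    (mean *ratios*)                    ; => 3.7
    (list (reduce #'min *ratios*)
          (reduce #'max *ratios*))     ; => (0.8 13.0)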


-- Jon L White --

------------------------------

Date: 18 October 1984 0646-PDT (Thursday)
From: mbr@nprdc
Reply-to: mbr@NPRDC
Subject: Re: Timings

I along with about 8 million others heard RPG (Richard Gabriel) talk
at AAAI this year and at the Lisp Conference both this year and 2
years ago, so the benchmarks are around. I dunno if he has the
results on line (or for that matter what his net address is--
he was at LLL doing common lisp for the S1 last I heard), but
someone in net land might know, and a summary could be posted to
AIList mayhaps?

Mark Rosenstein


[Dr. Gabriel is on the net, but I will let him announce his own
net address if he wishes to receive mail on this subject.  -- KIL]

------------------------------

Date: 15 Oct 1984 09:40-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Language Evolution - Comments

For what it's worth:

Any language in use by a significant number of speakers is under
constant evolution.  When I studied ancient Greek only singular and
plural were taught; dual was considered useful only for very old texts,
e.g. Homer or before.  The explanation for this was twofold:

        1) as the language was used, it became cumbersome to worry about
           dual when plural would suffice.  The number of endings for
           case, gender and so on is very large in ancient Greek; having
           dual just made things worse.

        2) similarly, as ancient Greek became modern Greek, case to a
           large extent vanished.  Why?  Throughout its use, Greek
           evolved many special forms for words which were heavily used,
           e.g. to be, presumably because no one took the time to speak
           the complete original form, and so the written form changed
           as well.

I pose two further questions:

        1) Why would singular, dual, and plural evolve in the first
           place?  Why not a tri and quad as well?  Dual seems to be
           (at least to me) very unnatural.

        2) I would prefer English to ancient Greek principally because
           of the lack of case endings and conjugations.  It is very
           difficult to express certain new ideas, e.g. the concept of a word
           on its own with no sex or case, in such a language.  Why
           would anyone consider case useful?

                                                        -Todd K.

------------------------------

Date: 15 Oct 1984 09:52-PDT (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Re: Language Evolution


        Why do languages move away from case?  Why did Sastric Sanskrit
die?  I think the answer is basically entropy.  The history of
language development points to a pattern in which linguists write
grammars and try to enforce the rules(organization), and the tendency
of the masses is to sacrifice elaborate case structures etc. for ease
of communication.
        One of the reasons Panini codified the grammar of Sanskrit so
carefully is that he feared a degeneration of the language, as was
already evidenced by various "Prakrits" or inferior versions of
Sanskrit spoken by servants etc.  The Sanskrit word for barbarian
was "mleccha" which means "one who doesn't speak Sanskrit"; culture
and high civilization were equated with language.  Similarly English
"barbarian" is derived from the greek "one who makes noises like
baa baa" i.e. who doesn't speak Greek.
        Current Linguistics has begun to actually aid this entropy by
paying special attention to slang and casual usage (descriptive vs.
prescriptive).  Without some negentropy from the linguists, I fear
that English will degenerate further.

Rick Briggs

------------------------------

Date: Monday, 15-Oct-84 19:32:13-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: Sastric Sanskrit again

     Briggs' message of 9 Oct 84 makes things a lot clearer.
The first thing is that Sastric Sanskrit is an artificial language,
very like Fitch's "unambiguous English" subset (he is a philosopher
who has a paper showing how this rationalised dialect is clear
enough that you can do Natural Deduction proofs on it directly).

     One thing he confuses me about is case.  How is having case
a contribution to unambiguity?  What is the logical difference
between having a set of prepositions and having a set of cases?
Indeed, most languages that have cases have to augment them with
prepositions because the cases are just too vague.  E.g. English
has a sort of possessive case "John's", but when we want to be
clear we have to say "of John" or "for John" or "from John" as
the case may be.  Praise of Latin is especially confusing, when
you recall that (a) that language hasn't got a definite article
(it has got demonstratives) and (b) the results of a certain
church Council had to be stated in Greek because of that ambiguity.
If you can map surface case to semantic case, surely you can map
prepositions to semantic case?
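
The parallel is easy to make concrete.  A toy sketch (the table
entries are illustrative guesses, and real prepositions are of
course many-to-many):

    ;; A crude preposition -> semantic-case table, exactly parallel
    ;; to a surface-case -> semantic-case table.
    (defvar *prep-case* '((of   . genitive)
                          (for  . benefactive)
                          (from . ablative)
                          (with . instrumental)))

    (defun semantic-case (prep)
      (cdr (assoc prep *prep-case*)))

    (semantic-case 'with)   ; => INSTRUMENTAL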

     The second thing which Briggs makes clear is that Sastric
Sanskrit is unbelievably long-winded.  I do not believe that it can
ever have been spontaneously spoken.

     The third thing is that despite this it STILL isn't unambiguous,
and I can use his own example to prove it.

     He gives the coding of "Caitra cooks rice in a pot", and
translates it back into English as "There is an activity(vyaapaara:),
subsisting in the pot, with agency residing in one substratum not
different from Caitra, which produces the softening which subsists in
rice."  Is Caitra BOILING the rice or STEAMING it?  It makes a
difference!  Note that this doesn't prove that Sastric Sanskrit
can't describe the situation unambiguously, only that it contains at
least one ambiguous sentence.  Then too, suppose I wanted to
translate this into Greek.  I need to know whether or not to use
the middle voice.  That is, is Caitra cooking the rice for HIMSELF,
or for someone ELSE?  Whichever choice I make in my translation, I
run the risk of saying something which Briggs, writing Sastric
Sanskrit, did not intend.  So it's ambiguous.

     Now that Briggs has made things so much clearer, I would be
surprised indeed if AI couldn't learn a lot from the work that
went into the design of Sastric Sanskrit.  Actually using their
formalism for large chunks of text must have taught its designers
a lot.  Though if "blackbird" really is specified as "a colour-event
residing in a bird" the metaphysical assumptions underlying
it might not be immune to criticism.

     A final point is that we NEED languages which are capable
of coding ambiguous propositions, as that may be what we want to
say.  If Briggs see Caitra cooking some rice in a pot, he may
not KNOW whether it is for Caitra or for another, so if Briggs
is going to tell me what he sees, he has to say something I may
regard as ambiguous.  Similarly, when a child says "Daddy ball",
that ambiguity (give me the ball?  bounce the ball? do something
surprising with the ball?) may be exactly what it means to say;
it may have no clearer idea than that it would like some activity
to take place involving Daddy and the ball.  A language which is
incapable of ambiguous expression is suited only to describing
mathematics and other games.

------------------------------

Date: 16 Oct 84 11:01:49-CDT (Tue)
From: "Roland J. Stalfonovich" <rjs%okstate.csnet@csnet-relay.arpa>
Subject: AI Natural Language

Much has been said in the last few notes about old or forgotten human
languages.  This brings up an interesting point.
Has anyone thought of making (or is there currently) a 'standard' language
for AI projects?  Not a programming language, but rather a language for
man-machine communication (that is the whole hope of AI, after all).

Several good choices exist and have existed for several generations.  The
languages of Esperanto and Unifon are two good choices for study.
Esperanto was devised around the turn of the century for the purpose of
becoming the international language of the world.  At this it has
obviously failed, but that does not mean it is without merit.
Its advantages of an organized verb conjugation and easy noun and pronoun
definition make it a good choice for an 'easily implemented' language.
Unifon is a simplification of English.  It involves the replacement of the
26 characters of the English alphabet by a set of 40 characters representing
the 40 phonemes (thus the name) of the English language.  This would allow
the implementation of the language for speech synthesis (a pet project of many
research groups).

There are many more languages, and I am sure that everyone has his or her
own favorite.  But for the criteria of being easily implemented on a computer
in both the printed and spoken form, Esperanto and/or Unifon should be
seriously considered.

------------------------------

Date: Mon 15 Oct 84 11:22:35-PDT
From: BARNARD@SRI-AI.ARPA
Subject: The Two Cultures of AI

It seems to me that there are two quite separate traditions in AI.
One of them, which I suppose includes the large majority of AI
practitioners, is devoted to rule-based deductive methods for problem
solving and planning. (I would include most natural language
understanding work in this category, as well.)  The other, which
occupies a distinctly minority position, is concerned with models of
perception --- especially visual perception.  It is my experience that
the followers of these two traditions often have trouble
communicating.

I want to suggest that this communication problem is due to the
fundamental difference in the kinds of problems with which these two
groups of people are dealing.  The difference, put simply, is that
"problem solving" is concerned with how to find solutions to
well-posed problems effectively given a sufficient body of knowledge,
while "perception" is concerned with how to go beyond the information
given.  The solution of a well-defined problem, once it is known, is
known for certain, assuming that the knowledge one begins with is
valid.  Perception, on the other hand, is always equivocal.  Our
visual ability to construct interpretations in terms of invariant
properties of physical objects (shapes, sizes, colors, etc.) is not
dependent on sufficient information, in the formal logical sense.

As a researcher in perception, I have to admit that I am often annoyed
when problem-solving types insist that their formal axiomatic methods
are universal in some sense, and that they essentially "define" what
AI is all about.  No doubt they are equally annoyed when I complain
about the severe limitations of the deductive method as a model of
intelligence, and relentlessly promote the inductive method.  I'll
end, therefore, with a plea for tolerance, and for a recognition that
intelligence may, and in fact must, incorporate both "ways of
knowing."

------------------------------

End of AIList Digest
********************
19-Oct-84 10:02:03-PDT,15572;000000000001
Mail-From: LAWS created at 19-Oct-84 09:57:57
Date: Fri 19 Oct 1984 09:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #142
To: AIList@SRI-AI


AIList Digest            Friday, 19 Oct 1984      Volume 2 : Issue 142

Today's Topics:
  Applications - Biofeedback Instrument Link,
  LISP - Common Lisp Flavors,
  AI Tools - TI Expert System Development Tool & Benchmarks,
  Linguistics - Languages and Cases,
  Knowledge Representation - Universal Languages,
  Administrivia - Sites Receiving AIList & Net Readership
----------------------------------------------------------------------

Date: 18 Oct 84 14:28:34 EDT
From: kyle.wbst@XEROX.ARPA
Subject: Biofeedback Instrument Link

The John F. Kennedy Institute For Handicapped Children (707 North
Broadway, Baltimore, Maryland 21205 phone 955-5000) has done work in
this area. Contact Lynn H. Parker, or Dr. Michael F. Cataldo. They have
also published in things like  Journal of Behavioral Medicine.

Dr. D. Regan at Dalhousie University Department of Psychology Halifax,
N.S. B3H 4J1 has also done a lot in this area including the real time
Fourier analysis in a feedback loop. You can read about his work in the
Dec. 1979 issue of Scientific American (Vol. 241, No. 6 around p. 144 as
I recall).

There are some people at Carnegie-Mellon University with experience in this
area. You may try contacting A. Terry Bahill (Bioengineering) or Mark B.
Friedman (Psychology and EE). They may also be able to put you in touch
with Mata Loevner Jaffe, with whom they worked about 4 years ago at the
Pittsburgh Home for Crippled Children. I think she left full-time
status at HCC and is now a professor at the University of
Pittsburgh.

If you want historical info, look in the literature for a system called
PIAPACS (I forget what the acronym stands for now) that was developed by
Lear Siegler Co. in Michigan for test pilots at Edwards Air Force Base
in California in the mid 1960's.

And finally there is the historical work at Cambridge Air Force Research
Labs in the early 1960's to put a man in a feedback loop to use
amplitude modulation of the brain waves (alphas) to send Morse code via
a PDP-8 (to clean up the signals and do some limited pattern
recognition) to a teletypewriter to transmit the first message
"CYBERNETICS". Shortly thereafter, Barbara Brown (I'm not sure of the
first name here) at the VA Hospital in Los Angeles used BFT techniques
to have subjects control lights and small model railroad trains.

Earle.

P.S. The ultimate source of commercially available hardware and software
in this area would be the TRACE Center at the University of Wisconsin at
Madison.

------------------------------

Date: Thu, 18 Oct 1984  17:53 EDT
From: Steven <Handerson@CMU-CS-C.ARPA>
Subject: Common Lisp Flavors


I am working on Flavors as part of the Spice Lisp project at CMU.  Although a
prototype system has been finished, we are currently in the process of
redesigning the thing from the ground up in an attempt to make it more modular
and portable [we've pretty much trashed the idea of a "white-pages"
(manual-level) object-oriented interface for now].  Could be another month.

-- Steve <Handerson at CMU-CS-C>

------------------------------

Date: Thu, 18 Oct 84 11:43:53 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: Expert System Development Tool from TI

[From the October 1984 issue of Systems & Software magazine, page 50]

  TI AI tool prompts users to develop application


With many companies now entering the artificial-intelligence business, the
question, "Are there enough AI experts to write the programs?" has been
raised.  The answer is that Ph.D.s in AI are no longer needed to write expert
systems because several expert-system-development tools are available,
including one just introduced by Texas Instruments.

To ensure that AI tools can be used by nonexperts, Texas Instruments has
introduced a first-of-a-kind tool that prompts users for all information
needed to develop an expert system.  The Personal Consultant is a menu- and
window-oriented system that develops rule-based, backward-chaining expert
systems on the TI Professional Computer under the MS-DOS operating system...

------------------------------

Date: 19 October 1984 12:07-EDT
From: George J. Carrette <GJC @ MIT-MC>
Subject: LMI, TI, and Lisp Benchmarks.

Glad to be of some help. The main problem I had with Pentland's note
was the explanatory comments which were technically not as informative
as they could have been. Let me take a moment to review them:

(1) BITBLT. This result has more to do with the different amounts
    of microcode dedicated to such things and the micro instruction
    execution speed. Both the TI and 3600 have a simple and fast
    memory bus talking to similar dynamic ram technology. (On the
    other hand the LAMBDA has a cache and block/read capability)
(2) FLOATING POINT. Unless TI has extensively reworked the seldom-used
    small-floating-point-number code from what LMI sent them, it is the
    case that small floats are converted into longs inside the microcode
    and then converted back.
(2)(3) CONS & PAGING. ??? Would be more interesting to know how long
    a full-gc of a given number of mega-conses takes. That bears more on the
    real overall cost of consing and paging.
(4) MAKE-INSTANCE. Could indeed be improved on both the TI and the 3600.
    People who need to make instances fast and know how usually resort
    to writing their own %COPY-INSTANCE, since overhead of system default
    MAKE-INSTANCE depends a lot on sending :INIT methods and other parsing
    and book-keeping duties.
(5)(6) SEND/FUNCALL. These are fully microcoded, although improvements are
    possible. There are some fundamental differences between the
    LMI/TI micro architecture and the 3600 when it comes to function
    calling though. In a "only-doing-function-calls-but-no-work"
    kind of trivial benchmark there are good reasons why a
    LMI/TI architecture will never equal a 3600 architecture.
(7) 32bit floating. Similar comment as applied to small floats,
    there wasn't any 32-bit floating point number representation in the
    code before, the floating point numbers were longer than 32 bits total.

Then there was a reference to "it is already much more than an LMI,
CADR or LM2."  First of all, the Explorer *is* an LMI product; secondly,
the main product line based on the LMI-LAMBDA has some fundamentally
different features -- pageable microstore, a lisp->microcode compiler,
plenty of room for user-loadable microstore, an SMD disk interface,
multiple-processor software support, and a physical memory cache --
which can very strongly and materially change the performance of many
applications of interest in both AI research and practice.
If you need raw performance in simulation, vision research, or array
processing, the classic way to go is special microcode or special-purpose
hardware. The rule may be that simple operations (such as one may find
in trivial benchmarks) done many times call for specialization. The
LAMBDA has better support
for microcode development (more statistics counters, micro history,
micro stack, micro store, the possibility of
doing lambda->lambda debug using multiple-processor lambda configuration,
paging microcode good for patching during development) than any other
lispmachine. Of course, it does have a high degree of microcode
compatibility with the Explorer, which does suggest some possible ways
to do things probably of interest more to applying technology than to pure
get-it-up-the-first-time research.

-gjc

------------------------------

Date: Thu 18 Oct 84 15:44:36-MDT
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Languages and cases

When we discuss why cases have disappeared, we should also consider why
they have appeared.  It is clear that they have appeared as "naturally" as
they have disappeared.  Which of these represents a rise in "entropy"?

A reasonable explanation seems to be that cases have appeared for the sake
of convenience and brevity.  Before their proliferation, probably
prepositions and suppositions were used.  Eventually, the cases became such
a burden that people moved away from their complexity.  Don't we see the
same trend in programming languages?

Uday Reddy

------------------------------

Date: Fri 19 Oct 84 09:22:50-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Universal Languages (again!)

Before worrying about a universal language for man-machine communication,
we need a universal mechanism for knowledge representation!  After all,
the external language cannot include concepts (words) for things that
are not internally expressible.  And while there have been numerous
claimants for the status of a UKRL (Universal Knowledge Representation
Language) (one of my own projects included), there are none that can
really qualify, except perhaps on the basis of Turing-equivalence.
Perhaps the best overall candidate is some kind of logical formalism,
but as one makes the formalism more general, it seems to become
more content-free.  Seems to me (from examination of the literature)
that the search for a UKRL was very active about 3-5 years ago, but
that now everybody has given it up as being the wrong thing to look
for (does anybody who was there disagree with this analysis?).

These days, I'm inclined to believe that one might establish conditions
for *sufficiency* in a KRL.  There's the obvious condition that the
KRL should be Turing-equivalent.  Less obviously perhaps, the KRL should
also have the means of automatically translating expressions written
using that KRL to ones in some other KRL.  Also, the KRL should have
complete knowledge of itself (the second condition probably implies
this).  There may be other reasonable conditions (such as some condition
stating that KRL expressions should have some explicit relation to things
in the "real world"), but I think the three above should be a minimum.
Notice that they also make the question of a *single* UKRL irrelevant.
Two sufficiently powerful KRLs can translate themselves back and forth
freely, so neither is more "universal" than the other.  Notice also
that any given KRL must have knowledge of at least one other KRL, in
order to facilitate the translation process.  When such KRLs are
available, then we can profitably think about standard ways of
communicating (to ease the poor humans' difficulty with handling
69 KRLs all at once!)

                                                        stan shebs

ps I haven't actually seen any research along these lines (although
Genesereth and Mackinlay made some suggestive remarks in their AAAI-84
paper).  Is anybody out there looking at KRL translation, or maybe
something more specific, like OPS5 <-> Prolog?
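
To make the shape of the problem concrete, a toy sketch (the two rule
notations below are made up for illustration -- nothing like real OPS5
or Prolog syntax -- and rule->clause is a hypothetical name):

    ;; Forward-rule form:  (rule <name> (if <cond>...) (then <action>))
    ;; Clause form:        (<action> :- <cond>...)
    (defun rule->clause (rule)
      (destructuring-bind (rule-kw name (if-kw . conds) (then-kw action)) rule
        (declare (ignore rule-kw name if-kw then-kw))
        `(,action :- ,@conds)))

    (rule->clause '(rule r1 (if (bird ?x) (small ?x)) (then (flies ?x))))
    ;; => ((FLIES ?X) :- (BIRD ?X) (SMALL ?X))

A real translator would presumably also have to carry over the semantics
of conflict resolution and negation, which is where the interesting work is.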

------------------------------

Date: Thu 11 Oct 84 10:44:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Sites Receiving AIList

Readers at the following sites have responded to my Sep. 26 list of
AIList recipients (Volume 2, No. 125), or have since signed up for
the digest.  (There are still many other sites, of course, particularly
on Usenet.  I have also had contact with individuals who receive
the digest but cannot respond via the net.)


Army Ballistic Research Laboratory
Army Missile Command
Defense Communications Agency
DoD Computer Security Center
Edwards Air Force Base

Arthur D. Little, Inc.
Battelle Northwest (Pacific Northwest Laboratory)
Bell Communications Research
Interactive Systems Corporation
Lockheed
Microelectronics and Computer Corporation
Varian Associates

Case Western Reserve University
Dundee College of Technology, Scotland
Indiana University
Northeastern University
Southern Methodist University
Stockton State College
University of California at San Diego
University of Illinois at Urbana
University of Waterloo
Washington University in St Louis


My apologies to any sites I previously misspelled, including

Naval Personnel Research and Development Center
Naval Research Laboratory
Naval Surface Weapons Center
University of Rochester

                                        -- Ken Laws

------------------------------

Date: Wed, 10 Oct 84 01:50:10 edt
From: bedford!bandy@mit-eddie
Subject: Net Readership

     [Forwarded from the Human-Nets digest by Laws@SRI-AI.]

        Date: Mon, 8 Oct 84 14:28 EDT
        From: TMPLee@MIT-MULTICS.ARPA

        Has anyone ever made an estimate (with error bounds) of how
        many people have electronic mailboxes reachable via the
        Internet?  (e.g., ARPANET, MILNET, CHAOSNET, DEC ENET, Xerox,
        USENET, CSNET, BITNET, and any others gatewayed that I've
        probably overlooked?)  (included in that of course group
        mailboxes, even though they are a poor way of doing business.)

Gee, my big chance to make a bunch of order of magnitude
calculations.... [...]

USENET/DEC ENET: 10k machines, probably on the order of 40 regular
users for the unix machines and 20 for the "other" machines so that's
100k users right there.

  [Rich Kulaweic (RSK@Purdue) notes 15k users on 40 Unix machines
  at Purdue, with turnover of several thousand per year.  -- KIL]

BITNET: something like 100 machines and they're university machines in
general, which implies that they're HEAVILY overloaded, 100-200
regular active users for each machine - 10k users.

  [A news item in the latest CACM mentions 200 hosts at 60 sites,
  soon to be expanded to 200 sites worldwide.  A BITNET information
  center is also being developed by a consortium of 500 U.S.
  universities, so I expect they'll all get nodes soon.  -- KIL]

Chaos: about 100-300 machines, 10 users per machine (yes, oz and ee
are heavily overloaded at times, but then there's all those unused
vaxen on the 9th floor of ne43). 1k users for chaosnet.

I think that we can ignore csnet here (they're all either on usenet or
directly on internet anyway...), so they count for zero.

ARPA/MILNET: Hmm... This one is a little tougher (I'm going to include
the 'real' internet as a whole here), but as I remember, there are
about 1k hosts. Now, some of the machines here are heavily used
(maryland is the first example that pops to mind) and some have
moderate loads (daytime - lots of free hardware at 5am!), let's say
about 40 regular users per machine -- another 10k users.

I dare not give a guesstimate for Xerox.

  [Murray.PA@Xerox estimates 4000 on their Grapevine system.  -- KIL]

So it's something on the order of 100k users for the community. [...]
Well, it could be 50k people, but these >are< order of magnitude
calculations...
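
(Tallying the guesses above, in Lisp for the sport of it -- the inputs
are themselves order-of-magnitude guesses:

    ;; usenet/enet + bitnet + chaosnet + arpa/milnet, in users
    (+ 100000 10000 1000 10000)   ; => 121000, i.e. ~100k

so 100k is the right order.)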

  [Mark Crispin (MRC@Score) notes that there are 10k addressable
  mailboxes at Stanford, but that the number of active users is
  perhaps only a tenth of this.  Andy's final estimate might be
  inflated or deflated by such a factor.  -- KIL]

Now that I've stuck my neck out giving these estimates, I'm waiting
for it to be chopped off.

        andy beals
        bandy@{mit-mc,lll-crg}

------------------------------

End of AIList Digest
********************
20-Oct-84 22:19:08-PDT,14557;000000000001
Mail-From: LAWS created at 20-Oct-84 22:13:42
Date: Sat 20 Oct 1984 22:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #143
To: AIList@SRI-AI


AIList Digest            Sunday, 21 Oct 1984      Volume 2 : Issue 143

Today's Topics:
  Programming Languages - Buzzwords,
  AI Tools - LISP Machine Benchmarks,
  Linguistics - Language Evolution & Sastric Sanskrit,
  Seminar - Transformational Grammar and AI,
  PhD Oral: Theory-Driven Data Interpretation
----------------------------------------------------------------------

Date: 19 October 1984 22:52-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: buzzwords for different language types

Could someone out there please tell me the usual catch phrases for
distinguishing between languages such as C, Pascal, Ada on one hand
and languages such as LISP on the other?

Is it "structured" vs "unstructured"?  List vs ??

Thanks.

------------------------------

Date: Fri 19 Oct 84 13:08:44-PDT
From: WYLAND@SRI-KL.ARPA
Subject: LISP machine benchmarks

A thought for the day on the AI computer benchmark controversy.

We need a single, simple measure for machine quality in order to
decide which machine to buy.  It must be simple and general
because these are typically intended to be used as general
purpose AI research machines where we cannot closely define and
confine the application.

We already have one single, simple measure called price.  If
there is no *simple* alternative number based on performance,
others (i.e. those funding the effort) will use price as the only
available measure, and we will have to continually struggle
against it using secondary arguments and personal opinion.

It should be possible to create a simple benchmark measure.  It
will -- of necessity -- be highly abstracted and crude.
This has been done for conventional computer systems: the acronym
MIPs is now fairly common, for good or ill.  Yes, there are
additional measures, but they are used in addition to simple ones
like MIPs.

We need good, extensive benchmarks for these machines: they will
point out the performance bugs that are unique to particular
designs.  After we do the benchmarks, however, we need to boil them
down to some simple number we can use for general-purpose
comparison to place in opposition to price.
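
A minimal sketch of what such a boiled-down figure might look like --
one conventional choice is the geometric mean of per-benchmark speed
ratios against a reference machine (the ratios below are invented):

    ;; Ratios of candidate to reference machine; > 1.0 means faster.
    (defun geometric-mean (xs)
      (expt (reduce #'* xs) (/ 1.0 (length xs))))

    (geometric-mean '(1.0 0.8 2.5 13.0 1.2))   ; => ~2.0

Like MIPs, it is crude by construction; the point is only to have some
defensible single number to set against price.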

------------------------------

Date: 19 Oct 84 10:32 PDT
From: Schoppers.pa@XEROX.ARPA
Subject: The Future of the English Auxiliary

In response to Ken Kahn's question on language evolution, my own theory
is that the invasion of a language by foreign cultures, or vice versa,
has a lot to do with how simple a language becomes: cross-cultural
speakers tend to use only as much as absolutely necessary for them to
consider themselves understood. The English spoken in some communities,
eg "Where they goin'?" (missing an auxiliary), "Why he be leavin'?"
(levelling the auxiliary), "He ain't goin' nowhere" (ignoring double
negatives), etc may well be indicative of our future grammar. On the
other hand, "Hey yous" for plural "you" (in Australia), and "y'all"
(here), are pointing towards disambiguation. Well, there does have to be
a limit to the simplification, lest we "new-speak double-plus ungood".
Then again, "ain't" can mean any one of "am not", "aren't", "isn't",
"haven't", "hasn't" --- effectively replacing both the primary English
auxiliaries (to be, to have) in all their conjugations! United States
"English", being the lingo of the melting pot, will probably change
faster than most.

Marcel Schoppers
Schoppers@XEROX

------------------------------

Date: Fri 19 Oct 84 15:23:26-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Cases & Evolution of Natural Language

Has anybody at all researched the origins of language?  I'm not an expert
on the subject, but I do know that the languages of aboriginal tribes
are extraordinarily complicated, as languages go.  But they probably
don't give us much clue to what the earliest of languages were like.
If you believe that the earliest of languages arose along with human
intelligence, then you can suppose that the most primitive languages
had a separate "word" for each concept to be expressed.  Such concepts
might include what would correspond to entire sentences in a modern
language.  Thus the most primitive languages would be completely
non-orthogonal.  When intelligence developed to a point where the
necessary vocabulary was just too complex to handle the wide range
of expressible concepts, then perhaps some individuals would start
grouping primitive sounds together in different ways (the famous
chimpanzee and gorilla language experiments suggest that other primates
already have this ability), resulting in the birth of syntactic
rules.  Obvious question:  can all known languages be derived
as some combination of arbitrarily bizarre syntactic/semantic rules?
(I would guess so, based on results for mathematical languages)

Word cases can then be explained as one of the last concepts to
be factored out of words.  In the most ancient Indo-European languages,
for instance, prepositions are relatively infrequent, although
the notions of subject, object, verb, and so forth have already
been separated into separate words.  Perhaps in the future, singular
and plural numbers will be separated out also (anyone for "dog es"
instead of "dogs"?).

                                                        stan shebs

------------------------------

Date: 19 Oct 1984 15:17-PDT (Friday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit

        Firstly, the language is NOT artificial.  There is a LITERATURE
which is written in this language.  It is different from toy artificial
languages like Fitch's in that for three thousand years scientists
communicated and wrote texts in this language.  There are thus
two aspects which are interesting and relevant; one is that research
such as I have been describing was carried out in its peculiar context,
the other is that a natural language can function as an unambiguous,
inference-generating language without sacrificing simplicity or
stylistic beauty.
        The advantage of case is that (assuming it is a good case system)
you have a closed set with which a correspondence can be made with a
closed set of semantic cases, whereas prepositions can be combined in
a multitude of ways and classifying prepositions is not easy.
Secondly, the fact that prepositions are not attached to the word
allows a possibility for ambiguity: "a boat on the river near the
tree" could be "a boat on the (river near the tree)" or "a boat (on the
river) near the tree". Attaching affixes directly to words allows you
(potentially) to express such a sentence without ambiguity.  The Sastric
approach is to allow one to express a sentence as a series of "facts",
each agreeing with "activity".  Prepositions would not allow this.
If one hears "John was killed", some questions come to mind: who did
it, how, why.  These are actually the semantic cases agent, instrument,
and semantic ablative (apaadaanakaaraka). Instead of "on" and "near"
one would say "there is a proximity, having as its substratum an
instance of boatness... etc." in Sastric Sanskrit.  The real question
is "How good a case system is it?".  Mapping syntactic case to semantic
is much easier than mapping prepositions since a direct correspondence
is found automatically if you have a good case system, whereas
prepositions do not lend themselves to easy classification.
        Again, Sanskrit is NOT long-winded; it is the English
translation which is, since their vocabulary and methodology were more
exact than those of English.
        "Caitra cooks rice in a pot" is not represented ambiguously.
Since it is not specified whether the rice is boiled, steamed, or fried
the correct representation should include the fact that the means of
softening the rice is unspecified, and the language does have the
ability to mark slots as unspecified (anabhihite).  Actually, cooking is
broken down even further (if-needed) and since rice is cooked by boiling
in India, that fact would be explicitly stated.  The question is how deep
a level of detail is desired, Sanskrit maintains: as far as is necessary but
"The notion 'action' cannot be applied to the solitary point reached by
extreme subdivision", i.e. only to the point of semantic primitives.
A sentence with ambiguity like "the man lives on the Nile" is made up
in Sastric of the denotative meaning (the man actually lives on the
river) and the implied meaning (the man lives on the bank of the Nile).
The latter is the default meaning unless it is actually specified
otherwise.  There is a very complex theory of implication in the
literature, but sentences with implied meanings are discouraged because:
"when purport (taatparya) is present, any word may signify any meaning",
thus the Sastric system where implied meanings are made explicit.
        I do not agree that languages need to tolerate ambiguity,
in fact that is my main point.  One can take a sentence like
"Daddy ball" and express it as an imperative of  "there is a
desire of the speaker for an unspecified activity involving the ball
and Daddy."  By specifying what exactly is known and what is unknown,
one can represent a vague mental notion as precisely as is possible.
But do we really need to allow such utterances?  Would something
humanistic be lost if children simply were more explicit?  Children
in this culture are encouraged to talk this way by adults engaging
in "baby talk".  All this points to the fact that the language you
speak has a tremendous influence on the your mental make-up.  If
a language more specific than english was spoken, our thoughts would
be more clear and ambiguity would not be needed.
        I conclude with another example:

  Classical Sanskrit--> raama: araNye baaNena baalinam jaghaana (Rama
  killed Baalin in the forest with an arrow) --->
  raamakartRkaa araNyaadhikaraNikaa baaNakaraNikaa praaNaviyogaanukuulaa
  parokSHaatiitakaalikii baalinkarmakaa bhaavanaa (There is an activity
  relating to the past beyond the speaker's ken, which is favourable to
  the separation of life, which has the agency of Rama, which has the
  forest as locus, Baalin as object, and which has the arrow as the
  implement.)

Note that each word represents a semantic case with its instantiation
(e.g., raama-kartRkaa, having Rama as agent), with the verb "kill"
(jaghaana) being represented as an activity which is favourable
(anukuulaa) to the separation (viyoga) of praana (life).  Thus the
sentence is a list of assertions with no possibility of ambiguity.
Notice that Sanskrit expresses the notion in 42 syllables (7 words)
and English takes 75 syllables (43 words).  This ratio is fairly
indicative of the general case.
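
The analysis above drops almost directly into a frame-style
representation.  A minimal Lisp sketch (my own rendering -- the slot
names merely transliterate the semantic cases, and nothing here comes
from the Sanskrit literature itself):

    ;; The Rama sentence as (semantic-case . filler) assertions
    ;; about one activity (bhaavanaa).
    (defvar *rama-sentence*
      '((activity   . life-separation)    ; praaNaviyoga
        (tense      . past-beyond-ken)    ; parokSHaatiita
        (agent      . rama)               ; kartR
        (locus      . forest)             ; adhikaraNa
        (object     . baalin)             ; karman
        (instrument . arrow)))            ; karaNa

    (defun filler (kaaraka sentence)
      (cdr (assoc kaaraka sentence)))

    (filler 'agent *rama-sentence*)   ; => RAMA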

Rick Briggs

------------------------------

Date: 19 Oct 1984  15:41 EDT (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Transformational Grammar and AI

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


        Transformational Grammar and Artificial Intelligence:
                        A View from the Bridge

                            Robert Berwick

It has frequently been suggested that modern linguistic theory is
irreconcilably at odds with a ``computational'' view of human
linguistic abilities.  In part this is so because grammars were
thought to consist of large numbers of explicit rules.  This talk
reviews recent developments in linguistic theory showing that, in
fact, current models of grammar are quite compatible with a range of
AI-based computational models.  These newer theories avoid the use of
explicit phrase structure rules and fit quite well with such
lexically-based models as ``word expert'' parsing.


Wednesday   October 24  4:00pm      8th floor playroom

------------------------------

Date: 19 Oct 84 15:35 PDT
From: Dietterich.pa@XEROX.ARPA
Reply-to: DIETTERICH@SUMEX-AIM.ARPA
Subject: PHD Oral: Theory-Driven Data Interpretation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

        PHD ORAL:       TOM DIETTERICH
                        DEPARTMENT OF COMPUTER SCIENCE

                        2:30PM OCTOBER 25
                        SKILLING AUDITORIUM


               CONSTRAINT PROPAGATION TECHNIQUES FOR
                 THEORY-DRIVEN DATA INTERPRETATION


This talk defines the task of THEORY-DRIVEN DATA INTERPRETATION (TDDI)
and investigates the adequacy of constraint propagation techniques for
performing it.  Data interpretation is the process of applying a given
theory T (possibly a partial theory) to interpret observed facts F and
infer a set of initial conditions C such that from C and T one can infer
F.  Most existing data interpretation programs do not employ an explicit
theory T, but rather use some algorithm that embodies T.  Theory-driven
data interpretation involves performing data interpretation by working
from an explicit theory.  The method of local propagation of constraints
is investigated as a possible technique for implementing TDDI.  A model
task--forming theories of the file system commands of the UNIX operating
system--is chosen for an empirical test of constraint propagation
techniques.  In the UNIX task, the "theories" take the form of programs,
and theory-driven data interpretation involves "reverse execution" of
these programs.  To test the applicability of constraint propagation
techniques, a system named EG has been constructed for the "reverse
execution" of computer programs.  The UNIX task was analyzed to develop
an evaluation suite of data interpretation problems, and these problems
have been processed by EG.  The results of this empirical evaluation
demonstrate that constraint propagation techniques are adequate for the
UNIX task, but only if the representation for theories is augmented to
include invariant facts about the programs.  In general, constraint
propagation is adequate for TDDI only if the theories satisfy certain
conditions: local invertibility, lack of constraint loops, and tractable
inference over propagated values.
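
As a rough illustration of "reverse execution" by local propagation (a
toy sketch, not the EG system itself: one invertible operator per step,
no constraint loops, and invert-step is a made-up name):

    ;; Given theory T: result = (op x known-arg), and observed fact F:
    ;; the value of result, locally invert OP to recover condition C: x.
    (defun invert-step (op result known-arg)
      ;; Each operator carries its own local inverse.
      (ecase op
        (+ (- result known-arg))
        (* (/ result known-arg))))

    ;; Observed y = 10 under T: y = (+ x 3)  =>  C: x = 7
    (invert-step '+ 10 3)   ; => 7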

------------------------------

End of AIList Digest
********************
24-Oct-84 12:01:22-PDT,17822;000000000001
Mail-From: LAWS created at 24-Oct-84 11:57:33
Date: Wed 24 Oct 1984 11:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #144
To: AIList@SRI-AI


AIList Digest           Wednesday, 24 Oct 1984    Volume 2 : Issue 144

Today's Topics:
  Courses - Decision Systems & Introductory AI,
  Journals - Annotated AI Journal List,
  Automatic Programming - Query,
  AI Tools - TI Lisp Machines & TEK AI Machine,
  Administrivia - Reformatting AIList Digest for UNIX,
  Humor - Request for Worst Algorithms,
  Seminars - Metaphor & Learning in Expert Systems &
      Representing Programs for Understanding
----------------------------------------------------------------------

Date: Tue 23 Oct 84 13:33:06-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Responses to Decision Systems course.

Several individuals have requested further information on the course
in decision systems I teach at Stanford (advertised in AILIST a few
weeks ago).  Some of the messages I received came from non-ARPANET
sites, and I have had trouble replying electronically.  I would
appreciate getting a message from anyone who has requested information
from me and has not yet received it.  Please include a US (paper) mail
address for my reply.

Thanks,
Sam Holtzman
(HOLTZMAN@SUMEX or P.O. Box 5405, Stanford, CA  94305)

------------------------------

Date: 22 Oct 1984 22:45:40 EDT
From: Lockheed Advanced Software Laboratory@USC-ISI.ARPA
Subject: Request for information

A local community college is considering adding an introductory course in
AI to its curriculum.  Evening courses would be of benefit to a large
community of technical people interested in the subject.  The question
is what will be the benefit to first and second year students.

If anyone knows of any lower division AI courses taught anywhere, could
you please drop me a line over the net.

Also, course descriptions on introductory AI classes, either lower or
upper division, would be appreciated.

Comments on the usefulness or practicality of such a course at this level
are also welcome.

                                Thank You,
                                Michael A. Moran
                                Lockheed Advanced Software Laboratory

                                address: HARTUNG@USC-ISI

------------------------------

Date: Tue, 23 Oct 84 11:34 CDT
From: Joseph_Hollingsworth <jeh%ti-eg.csnet@csnet-relay.arpa>
Subject: annotated ai journal list


I am interested in creating an annotated version of the AI-related journal list
that was published in AIList V1 N43.  I feel that this annotated list would be
beneficial for those persons who do not have easy access to the journals
mentioned in the previously published list, but who feel that some of them may
apply to their work.

I solicit information about each journal in the following form (which I will
compile and release to the AIList if there is enough interest shown).

1) Journal Name
2) Subjective opinion of the type of articles that frequently appear in that
   journal (short paragraph or so).
3) Keywords and phrases that characterize the articles/journal, (don't let
   formalized keyword lists hinder your imagination).
4) The type of scientist, engineer, technician, etc. that the journal
   would benefit.
5) Address of journal for subscription correspondence, (include price too,
   if possible).

Please send this information to
Joe Hollingsworth at
  jeh%ti-eg@csnet-relay  (if you are on the ARPANET)
  jeh@ti-eg              (if you are on the CSNET; I am on the CSNET)


The following is the aforementioned list of journals:

AI Magazine
AISB Newsletter
Annual Review in Automatic Programming
Artificial Intelligence
Artificial Intelligence Report
Behavioral and Brain Sciences
Brain and Cognition
Brain and Language
Cognition
Cognition and Brain Theory
Cognitive Psychology
Cognitive Science
Communications of the ACM
Computational Linguistics
Computational Linguistics and Computer Languages
Computer Vision, Graphics, and Image Processing
Computing Reviews
Human Intelligence
IEEE Computer
IEEE Transactions on Pattern Analysis and Machine Intelligence
Intelligence
International Journal of Man Machine Studies
Journal of the ACM
Journal of the Association for the Study of Perception
New Generation Computing
Pattern Recognition
Robotics Age
Robotics Today
SIGART Newsletter
Speech Technology

------------------------------

Date: 23 October 1984 22:28-EDT
From: Herb Lin <LIN @ MIT-MC>
Subject: help needed on automatic programming information

I need some information on automatic programming.

1.  How complex a problem can current automatic programming systems
handle?  The preferred metric would be complexity as measured by the
number of lines of code that a good human programmer would use to
solve the same problem.

2.  How complex a problem will future automatic programming systems be
able to handle?  Same metric, please.  Of course, who can predict the
future?  More precisely, what do the most optimistic estimates
predict, and for what time scale?

3.  In 30 years (if anyone is brave enough to look that far ahead),
what will automatic programming be able to do?

Please provide citable sources if possible.

Many thanks.

------------------------------

Date: 22 Oct 1984 12:07:39-PDT
From: William Spears <spears@NRL-AIC>
Subject: TI Lisp machines


     The AI group at the Naval Surface Weapons Center is interested in the new
TI Lisp Machine. Does anyone have any detailed information about it? Thanks.

                                       "Always a Cosmic Cyclist"
                                        William Spears
                                        Code N35
                                        Naval Surface Weapons Center
                                        Dahlgren, VA 22448

------------------------------

Date: 22 Oct 84 08:10:32 EDT
From: Robert.Thibadeau@CMU-RI-VI
Subject: TEK AI Machine

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I have good product literature on the Tektronix 4404 Artificial
Intelligence System (the workbook for their people).  This appears
to be a reasonable system which supports Franz Lisp, Prolog,
and Smalltalk-80.  It uses a 68010 with floating point hardware
and comes standard with a 1024^2 bit map, 20mb disk, floppy,
centronics 16 bit port, RS232, 3-button mouse, ethernet interface,
1 mbyte RAM, and a Unix OS.  The RAM upgrades at least 1 more mbyte
and you can have a larger disk and streaming tape. The major thing
is that the price (retail without negotiation) is $14,950 complete.
It is apparently real, but I don't know this system first hand.
The product description is all I have.

------------------------------

Date: Sat, 20 Oct 84 23:10:53 edt
From: Douglas Stumberger <des%bostonu.csnet@csnet-relay.arpa>
Subject: reformatting AILIST digest for UNIX


        For those of you on Berkeley UNIX installations, there is a
program available which makes the slight modifications to AIList Digest
necessary to put it in the correct format for a "mail -f ...".  This
allows using the UNIX mail system functionality to maintain your AIList
digest files.

For a copy of the program, send net mail to:

douglas stumberger
csnet:  des@bostonu

------------------------------

Date: Mon 22 Oct 84 10:30:00-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: worst algorithms as programming jokes

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

After reading the recent complaint(s) about those people who slow down the
system with their silly programs to sort a 150-element list, and talking with
a friend, I came up with the following dumb idea:

A lot of emphasis is understandably put on good, efficient algorithms, but
couldn't we also learn from bad, terrible algorithms?  I have heard that Dan
Friedman at Indiana collects elegant LISP programs that he calls LISP poems.
To turn things upside down, how about LISP jokes (more generally, programming
jokes)?  I'm pretty sure most if not all programmers have some day (night)
burst into laughter on encountering an algorithm that is particularly dumb,
and funny for the same reason.

I don't know whether anyone ever collected badgorithms (sorry, that was the
worst name I could find), so I suggest that you bright guys send me your
favorite entries.

To qualify as a badgorithm, the following conditions should be met:
(if you don't like them, send me your suggestions for a better definition)

1. It *is* an algorithm in the sense described by Knuth Vol 1.
2. It *does* solve the problem it addresses. Entering the Knuth-Bendix
   algorithm as a badgorithm for binary addition is illegal (though I admit it
   is somewhat funny).
3. Though it solves the problem, it must do so in an essentially clumsy way.
   Adding loops to slow down the algorithm is cheating. In some sense a
   badgorithm should totally miss the right structure to approach the problem.
4. The hopeless off-the-track-ness of a badgorithm should be humorous for
   someone knowledgeable about the problem addressed. We are not interested
   in alborithms, right?  Just being the second or third best algorithm for
   a problem is not enough to qualify (think of the "common sense" algorithm
   for finding a word in a text as opposed to the Boyer-Moore algorithm, or of
   the numerous ways to evaluate a polynomial as opposed to Horner's rule;
   there is nothing to laugh at in those cases). There is nothing funny in just
   being an O(n^(3/(pi^3)-1/e)) algorithm, I think.
5. It should be described in a simple, clear way. Remember that the best jokes
   are the shortest ones. I'm sure there are enough badgorithms for well-known
   problems (classical list manipulation, graph theory, arithmetic,
   cryptography, sorting, searching, etc). Please don't enter algorithms
   to solve NP problems unless you have good reasons to think they are
   interesting in our sense.




If anyone out there is willing to send me an entry, please send the following:

* a simple description of the problem (the name is enough if it's a well-known
  problem).
* a verbal description of the badgorithm if possible.
* a programmed version of the badgorithm (in LISP preferably). This is not
  necessary if your verbal description makes it clear enough how to write
  such a program, but still it would be nice.
* a description of a good algorithm for the same problem in case most people
  are not expected to be familiar with one. Comparing this to the badgorithm
  should help us in seeing what's wrong with the latter, and I would say that
  this could have good educational value.


To start things, let me enter my favorite badgorithm (I call it "stupid-sort"):

* the problem is to sort a list, according to some "order" predicate.
* well, that's easy. just generate all permutations of the list, and then
  check whether they are "order"ed. would you bet that someone in CS105
  does actually use this one ?
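
  A programmed version, per my own guidelines above (a sketch only; the
  names ORDEREDP, PERMUTATIONS and STUPID-SORT are just ones I made up):

    ;; stupid-sort: search the n! permutations for an ordered one.
    ;; O(n * n!) even when lucky.
    (defun orderedp (lst pred)
      "True if no adjacent pair of LST is out of order under PRED."
      (or (null lst)
          (null (cdr lst))
          (and (not (funcall pred (second lst) (first lst)))
               (orderedp (cdr lst) pred))))

    (defun permutations (lst)
      "A list of all permutations of LST."
      (if (null lst)
          (list nil)
          (mapcan #'(lambda (x)
                      (mapcar #'(lambda (p) (cons x p))
                              (permutations (remove x lst :count 1))))
                  lst)))

    (defun stupid-sort (lst pred)
      "Sort LST by hunting through PERMUTATIONS for an ordered one."
      (find-if #'(lambda (p) (orderedp p pred)) (permutations lst)))

    ;; (stupid-sort '(3 1 2) #'<)  =>  (1 2 3)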

  [I once had to debug an early version of the BMD nonparametric
  package.  It found the min and max of a vector by sorting the
  elements ...   (Presumably most users would also request the
  median and other sort-related statistics.)  For a particularly
  slow sort routine see the Hacker's Dictionary definition of JOCK,
  quoted in Jon Bentley's April Programming Pearls in CACM.  -- KIL]


I understand perfectly that some people/organizations do not wish to have their
names associated with badgorithms, but please don't refrain from entering
something because of that. I swear that if you request it there will be no
trace of the origin of the entry if I ever compile a list of them for personal
or public use (you know, "name withheld by request" is the usual trick).

jean-luc

------------------------------

Date: 17 Oct 1984 16:25-EDT
From: Andrew Haas at BBNG.ARPA
Subject: Seminar - Metaphor

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Next week's BBN AI seminar is on Thursday, October 25th at 10:30
AM in the 3rd floor large conference room.  Bipin Indurkhya of
the University of Massachusetts at Amherst will speak on "A
Computational Theory of Metaphor Comprehension and Analogical
Reasoning".  Abstract follows.

   Though the pervasiveness and importance of metaphors in
natural languages are widely recognised, not much attention has
been given to them in the fields of Artificial Intelligence and
Computational Linguistics.  Broadly speaking, a metaphor can be
characterized as the application of terms belonging to a source
domain in describing a target domain.  A large class of such
metaphors is based on a structural analogy between the two domains.

   A computational model of metaphor comprehension was proposed
by Carbonell which required an explicit representation of a
mapping from the terms of the source domain to the terms of the
target domain.  In our research we address ourselves to the
question of how one can characterize this mapping in terms of the
knowledge of the source and the target domains.

       In order to answer this question, we start from Gentner's
theory of Structure-Mapping.  We show limitations of Gentner's
theory and propose a theory of Constrained Semantic Transference
[CST] that allows part of the structure of the source domain to
be transferred to the target domain coherently.  We will then
introduce two recursive operators, called Augmentation and
Positing Symbols, that make it possible to create new structure
in the target domain constrained by the structure of the source
domain.

     We will show how CST captures several cognitive properties
of metaphors and then discuss its limitations with regard to
computability and finite representability.  If time permits, we
will use CST as a basis to develop a theory of Approximate
Semantic Transference which can be used to develop computational
models of the cognitive processes involved in metaphor
comprehension, metaphor generation, and analogical reasoning.

------------------------------

Date: Tue 23 Oct 84 10:45:51-PDT
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Learning in Expert Systems

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


DATE:        Friday, October 26, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Li-Min Fu
             Electrical Engineering

ABSTRACT:    LEARNING OBJECT-LEVEL AND META-LEVEL KNOWLEDGE IN EXPERT SYSTEMS

A high performance expert system can be built by exploiting machine
learning techniques.  A learning method has been developed that is
capable of acquiring new diagnostic knowledge, in the form of rules,
from a case library.  The rules are designed to be used in a
MYCIN-like diagnostic system in which there is uncertainty about data
as well as about the strength of inference and in which the rules
chain together to infer complex hypotheses.  These features greatly
complicate the learning problem.

In machine learning, two issues that can't be overlooked are
efficiency and noise.  A subprogram, called "Condenser," is designed
to remove irrelevant features during learning and so improve
efficiency.  It works well when the number of features used to
characterize training instances is large.  One way of removing noise
associated with a learned rule is to seek a state with minimal
prediction error.

Another subprogram has been developed to learn meta-rules which guide
the invocation of object-level rules and thus enhance the performance
of the expert system using the object-level rules.

Embodying all the ideas developed in this work, an expert program
called JAUNDICE has been built, which can diagnose the likely cause
and mechanisms of jaundice in a patient.  Experiments with JAUNDICE
show that the developed theory and method of learning are effective
in a complex and noisy environment where data may be inconsistent,
incomplete, and erroneous.

Paula

------------------------------

Date: Tue, 23 Oct 84 00:08:10 cdt
From: rajive@ut-sally.ARPA (Rajive Bagrodia)
Subject: Seminar - Representing Programs for Understanding

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

                      Graduate Brown Bag Seminar:

                Representing Programs For Understanding
                                  by
                              Aaron Temin

                         noon  Friday Oct. 26
                               PAI 3.36


        Automatic help systems would be much easier to generate than
        they are now if the same code used to create the executable
        version of a program could be used as the major database for
        the help system.  The desirable properties of such a program
        representation will be discussed.  An overview of MIRROR,
        our implementation of those properties, will be presented with
        an explanation of why MIRROR works.  It will also be argued
        that functional program representations are inadequate for the
        task.


If you are interested in receiving mail notifications of graduate brown bag
seminars in addition to the bboard notices, please send a note to
                            briggs@ut-sally

------------------------------

End of AIList Digest
********************
27-Oct-84 22:12:24-PDT,16286;000000000000
Mail-From: LAWS created at 27-Oct-84 22:06:38
Date: Sat 27 Oct 1984 21:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #145
To: AIList@SRI-AI


AIList Digest           Saturday, 27 Oct 1984     Volume 2 : Issue 145

Today's Topics:
  Administrivia - Usenet Disconnection,
  AI Languages - Buzzwords,
  Expert Systems - Logic-Based Expert Systems & Critique,
  Humor - Expert Systems & Recursive Riddle & Computational Complexity,
  Algorithms - Bad Algorithms as Programming Jokes,
  Seminars - Nonmonotonic Inference & Mathematical Language,
  Symposium - Expert Systems in the Government
----------------------------------------------------------------------

Date: Sat 27 Oct 84 21:36:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Usenet Disconnection

The SRI-UNIX host that has been the AIList gateway between Arpanet
and Usenet has been undergoing system changes.  This broke the
connection about a week ago, and I do not know how soon communication
will be restored.  Meanwhile the discussion continues asynchronously
in the two networks.

                                        -- Ken Laws

------------------------------

Date: Mon 22 Oct 84 11:18:59-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Re: buzzwords for different language types

My favorite buzzwords are "low-level" for C, Pascal, and Ada, and
"high-level" for Lisp  :-)

But seriously, one can adopt a very abstract (i.e. applicative/functional)
programming style or a very imperative (C-like) style when using Lisp.
On the other hand, adopting an applicative style in C is difficult (yes,
I've tried!).  So Lisp is certainly more versatile.  Also, Lisp's direct
representation of programs as data facilitates the construction of
embedded languages and the writing of program-analysing programs, both
important activities in the construction of AI systems.  On the other
hand, both of these are time-consuming, if not difficult to do in C or
Pascal.
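
For instance (a sketch only -- COUNT-CALLS is a name I just made up,
not anybody's system):

    ;; A Lisp program is just a list, so a program-analysing program
    ;; is ordinary list manipulation:
    (defvar *expr* '(+ (* x x) (* y y)))   ; a piece of code, as data

    (defun count-calls (op form)
      "Count the calls to OP anywhere in FORM."
      (if (atom form)
          0
          (+ (if (eq (car form) op) 1 0)
             (reduce #'+ (mapcar #'(lambda (f) (count-calls op f))
                                 (cdr form))))))

    ;; (count-calls '* *expr*)  =>  2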

Incidentally, these remarks largely apply to Prolog also (although Prolog
doesn't make it easy to do "low-level" programming).

                                                        stan shebs

------------------------------

Date: Thu 25 Oct 84 20:59:56-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Logic-based Expert Systems

Regarding expert system tools: would anyone like to offer some reasoned
opinions regarding the suitability of logic-based systems for such?
I have no strong definition of "logic-based" to offer, but I have in
mind as prime examples MRS from Stanford and DUCK from SST which provide
interfaces to LISP, forward and back chaining, and various
extra-logical functions to make life easier for the system builder.  I
am interested in large systems (1000+ rules desirable) and the control
and performance problems and solutions that people have found.  Can
such systems be built successfully?  What techniques to constrain
search have been tried and worked/failed?  Any references?

Charles Petrie

------------------------------

Date: Sun, 21 Oct 84 20:28:24 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Expert system critique.

An article appears in the current (November/December) issue of
``The Sciences'' (New York Academy of Sciences) by Hubert and
Stuart Dreyfus of Berkeley.  The article ``Mindless Machines''
asserts that `computers don't think like experts, and never
will,' invoking, in part, Plato's ``Euthyphro'' (Euthyphro is
a theologian queried by Socrates as to the true nature of
piety) as an allegory.  The basic assertion is that so-called
expert systems reason purely from rules, whereas human experts
intuit from rules using the vast experience of special cases.
They cite this `intuition' as being an insurmountable barrier
to building intelligent machines.
                                            Harry Weeks
                                            (Weeks@UCBpopuli)

------------------------------

Date: Fri 26 Oct 84 06:46:39-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: is there an Expert-System like that ?? (-:

[ cartoon from InfoWorld, Nov 5, 84, page 7]

( 2 ladies having tea in the 'parlor', chatting.  With a somewhat perplexed
  expression, one stares at a small dirt-pile on the carpet, while the
  obvious hostess explains with a smug grin:)

        "I thought he was a vacuum cleaner salesman.  He came in,
         sprinkled dirt on the carpet and then tried to sell me a
         software program that would show me how to clean it up."

------------------------------

From: gibson@unc.UUCP (Bill Gibson)
Subject: Recursive Riddle

               [Forwarded from Usenet by SASW@MIT-MC.]


   How many comedians does it take to tell a Light Bulb Joke ?

   Two - one to say,
   "How many comedians does it take to tell a Light Bulb Joke?
    Two - one to say,
    "How many comedians does it take to tell a Light Bulb Joke?
     Two - one to say,
     "How many comedians does it take to tell a Light Bulb Joke?
      Two - one to say,
      "How many comedians does it take to tell a Light Bulb Joke?
         ...
                 and one to ask nonsense riddles."
         ...
       and one to ask nonsense riddles."
      and one to ask nonsense riddles."
     and one to ask nonsense riddles."
    and one to ask nonsense riddles."
   and one to ask nonsense riddles.

 - from the parallel process of -     Bill Gibson

------------------------------

Date: Wed 24 Oct 84 19:13:16-PDT
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: minor correction on my msg on "badgorithms"

After reading the message again, I *do* find interesting and unusual an
O(n^(3/(pi^3) - 1/e)) algorithm. I'd be real glad to see, and maybe even
touch, one.

------------------------------

Date: Thu, 25 Oct 84 07:38 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: worst algorithms as programming jokes

A very interesting idea, but "badgorithm" as a label should have been
strangled at birth.

How about "algospasm"?

Mark

------------------------------

Date: Thu, 25 Oct 84 08:14:31 cdt
From: "Duncan A. Buell" <buell%lsu.csnet@csnet-relay.arpa>
Subject: Bad Algorithms

Jean-Luc Bonnetain suggests worst algorithms (badgorithms) as programming
jokes.  In a similar vein, with interests in winning the Cold War by
shipping some of these to the Soviet Union, what is the slowest possible
way to sort a list of N items?  The only requirement (this problem may
not be well-defined yet, but I'm sure people could produce subproblems
that are) should be one to the effect that no state or sequence of
states is ever repeated, and that the method does actually, at some
future date, sort the list.

As an example of how to think about this, consider generating the permutations
of N things, then comparing the existing list against each permutation.
How slowly, then, can we generate the permutations of N things?  We could
isolate one element, generate permutations of N-1 things, and then insert
the isolated element in N different places.  Ignoring the symmetry of the
situation, we could isolate a second element and continue (is this cheating
on the rule?).  And generating permutations of N-1 things?
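
A sketch of that insertion scheme in LISP (INSERT-AT and
SLOW-PERMUTATIONS are names of my own invention):

    ;; Permutations of N things from permutations of N-1 things:
    ;; hold one element out, then insert it into each of the N gaps.
    (defun insert-at (x lst k)
      "A copy of LST with X inserted before position K."
      (if (zerop k)
          (cons x lst)
          (cons (car lst) (insert-at x (cdr lst) (1- k)))))

    (defun slow-permutations (lst)
      (if (null lst)
          (list nil)
          (mapcan #'(lambda (p)
                      (let ((results '()))
                        (dotimes (k (1+ (length p)) (nreverse results))
                          (push (insert-at (car lst) p k) results))))
                  (slow-permutations (cdr lst)))))

    ;; Comparing the list against each of the N! results in turn does,
    ;; at some future date, sort the list.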

------------------------------

Date: 25 Oct 84 09:58 PDT
From: Kahn.pa@XEROX.ARPA
Subject: Re: Badgorithms in AIList Digest   V2 #144

The examples of badgorithms that come to mind (including sorting by
selecting an ordered permutation, finding min and max by sorting, or for
that matter defining the last element of a list as the CAR of the reverse
of the list, or testing for an empty intersection by computing the entire
intersection and then seeing if it's empty) all have in common that they
make use of existing constructs that do what is desired and much more.  I
think that these are very reasonable PROGRAMS even if they normally
correspond to bad ALGORITHMS.
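
For instance, in LISP:

    (defun last-element (l) (car (reverse l)))
    ;; A perfectly readable PROGRAM: REVERSE does what is needed and
    ;; much more.  Read naively it conses an entire new list to fetch
    ;; one element -- a badgorithm -- yet it is clear, correct, and an
    ;; easy target for transformation into the obvious single CDR walk.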
   The point is that various projects in program transformation
(especially partial evaluation) take as input such programs and
automatically transform them into programs that correspond to very
reasonable algorithms.  Also, true fans of logic programming who believe
that an algorithm = logic + control use sort as ordered permutation as
their classic example.  They add control anontations that cause the
permutation activity to be coroutined with the order selection.
  I'm looking forward to the day when one can write programs that if
interpreted naively correspond to badgorithms and yet  are either
tranformed automatically or interpreted cleverly enough so that they run
like a bat out of hell.

------------------------------

Date: 24 Oct 1984 10:35-EDT
From: MVILAIN at BBNG.ARPA
Subject: Seminar - Nonmonotonic Inference

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"A Non-Monotonic Inference System"
James W. Goodwin, University of Linkoping.

BBN Laboratories, 10 Moulton St, Cambridge.
Third floor conference room, 10:30 AM.
Tuesday October 30th.


We present a theory and implementation of incomplete non-monotonic
reasoning. The theory is inspired by the success of inference systems
based on dependency nets and reason maintenance. The process of inference is
conceived as a monotonic accumulation of constraints on belief sets.
The "current database" is just the set of constraints accumulated so far;
the current beliefs are then required to be a set which satisfies all the
constraints in the current database, and contains no beliefs which are not
forced by those constraints. Constraints may also be thought of as reasons, or
as dependencies, or (best) simply as individual inference steps.

This approach allows an inference to depend on aspects of the current state
of the reasoning process. In particular, an inference may support P on the
condition that Q is not in the current belief set. This sense of
non-monotonicity is conveniently computable (by reason maintenance), so the
undecidability of Non-monotonic Logic I and its relatives is avoided. This
makes possible a theory of reasoning which is applicable to real agents, such
as computers, which are compelled to arrive at some conclusion despite
inadequate time and inadequate information. It supports a precise idea
of "reasoned control of reasoning" and an additive representation for control
knowledge (something like McCarthy's Advice Taker idea).
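
[A toy rendition of that last idea -- a sketch only; the constraint
format and the functions below are illustrative inventions, not
Goodwin's system:

    ;; A constraint is (in-list out-list consequent): believe the
    ;; consequent when every IN belief is held and no OUT belief is.
    (defvar *constraints*
      '((((bird tweety)) ((penguin tweety)) (flies tweety))))

    (defun fire-once (beliefs constraints)
      "One pass: add each consequent whose constraint is satisfied."
      (dolist (c constraints beliefs)
        (destructuring-bind (in out conseq) c
          (when (and (subsetp in beliefs :test #'equal)
                     (null (intersection out beliefs :test #'equal))
                     (not (member conseq beliefs :test #'equal)))
            (push conseq beliefs)))))

    (defun closure (beliefs constraints)
      "Iterate FIRE-ONCE to a fixpoint.  No retraction here: a real
    reason-maintenance system must also handle OUT beliefs that
    arrive later and invalidate earlier conclusions."
      (let ((new (fire-once beliefs constraints)))
        (if (equal new beliefs) beliefs (closure new constraints))))

    ;; (closure '((bird tweety)) *constraints*)
    ;;    =>  ((flies tweety) (bird tweety))
]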

------------------------------

Date: 26 Oct 84 15:47:53 EDT
From: Ruth.Davis@CMU-RI-ISL1
Subject: Seminar - Mathematical Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Date:  Monday, October 29
Title:  PRL:  Practical Formal Mathematics
Speaker:  Joe Bates, Cornell University
Time:  1:30 pm
Location:  4605 WEH



             PRL: Practical Formal Mathematics
                     Joseph Bates
                  Cornell University

PRL is a family of development environments which are designed to
support the construction, validation, execution, and communication of
large bodies of mathematics text (e.g., books on graph algorithms or
group theory).  The design of these systems draws on work in many
areas, from philosophy to Lisp hackery.  Tuesday, Constable will speak
on certain issues in the choice of PRL's mathematical language.  I
will present, in detail, the most significant aspects of the current
system architecture, and will suggest directions for future work.

------------------------------

Date: 26 Oct 1984  9:27:12 EDT (Friday)
From: Marshall Abrams <abrams@mitre>
Subject: Symposium - Expert Systems in the Government

I am helping to organize a Symposium on Expert Systems in the Federal
Government. In addition to papers, I am looking for people to serve on
the program committee and the conference committee, and to serve as
reviewers and session chairmen. The openings on the conference committee
include local arrangements, publicity, and tutorials.

Please contact me or the program chairman (or both by net-mail) with
questions and suggestions. The call for papers follows.

Call for Papers

Expert Systems in Government Conference

October 23-25, 1985

THE CONFERENCE objective is to allow the developers and implementers
of expert systems in government agencies to exchange information and
ideas first hand for the purpose of improving the quality of
existing and future expert systems in the government sector.
Artificial Intelligence (AI) has recently been maturing so rapidly
that interest in each of its various facets, e.g., robotics, vision,
natural language, supercomputing, and expert systems, has acquired
an increasing following and cadre of practitioners.

PAPERS are solicited which discuss the subject of the conference.
Original research, analysis and approaches for defining  expert
systems issues and problems such as those identified in the
anticipated session topics, methodological approaches for analyzing
the scope and nature of expert system issues, and potential
solutions are of particular interest.  Completed papers are to be no
longer than 20 pages including graphics and are due 1 May 1985.
Four copies of papers are to be sent to:

Dr. Kamal Karna, Program Chairman
MITRE Corporation W852
1820 Dolley Madison Boulevard
McLean, Virginia  22102
Phone (703) 883-5866
ARPANET:  Karna @ Mitre

Notification of acceptance and manuscript preparation instructions
will be provided by 20 May 1985.

THE CONFERENCE is sponsored by the IEEE Computer Society and The
MITRE Corporation in cooperation with The Association for Computing
Machinery, The American Association for Artificial Intelligence and
The American Institute of Aeronautics and Astronautics National
Capital Section.  This conference will offer high quality technical
exchange and published proceedings.

It will be held at Tyson's Westpark Hotel, Tysons Corner, McLean,
VA, suburban Washington, D.C.


TOPICS OF INTEREST

The topics of interest include (but are not limited to) expert systems
in the following application domains:

 1.  Professional:           Accounting, Consulting, Engineering,
                             Finance, Instruction, Law, Marketing,
                             Management, Medicine

 2.  Office Automation:      Text Understanding, Intelligent
                             Systems, Intelligent DBMS

 3.  Command & Control:      Intelligence Analysis, Planning,
                             Targeting, Communications, Air Traffic
                             Control

 4.  Exploration:            Space, Prospecting, Mineral, Oil,
                             Archeology

 5.  Weapon Systems:         Adaptive Control, Electronic Warfare,
                             Star Wars, Target Identification

 6.  System Engineering:     Requirements, Preliminary Design,
                             Critical Design, Testing, and QA

 7.  Equipment:              Design Monitoring, Control, Diagnosis,
                             Maintenance, Repair, Instruction

 8.  Project Management:     Planning, Scheduling, Control

 9.  Flexible Automation:    Factory and Plant Automation

10.  Software:               Automatic Programming, Specifications,
                             Design, Production, Maintenance and
                             Verification and Validation

11.  Architecture:           Single, Multiple, Distributed Problem
                             Solving Tools

12.  Imagery:                Photo Interpretation, Mapping, etc.

13.  Education:              Concept Formation, Tutoring, Testing,
                             Diagnosis, Learning

14.  Entertainment and       Intelligent Games, Investment and
     Expert Advice Giving:   Finances, Retirement, Purchasing,
                             Shopping, Intelligent Information
                             Retrieval

------------------------------

End of AIList Digest
********************
27-Oct-84 22:22:24-PDT,16425;000000000000
Mail-From: LAWS created at 27-Oct-84 22:19:14
Date: Sat 27 Oct 1984 22:10-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #146
To: AIList@SRI-AI


AIList Digest            Sunday, 28 Oct 1984      Volume 2 : Issue 146

Today's Topics:
  Report - CSLI Description,
  Linguistics - Indic Interlingua & Evolution & Shastric Sanscrit,
  Seminars - Knowledge and Common Knowledge & Gestalt Tutorial &
    AI and Real Life
----------------------------------------------------------------------

Date: Wed 24 Oct 84 18:33:02-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Institute Description - CSLI

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                        NEW CSLI REPORT

Report No. 16, ``The Center for the Study of Language and Information,'' has
just been published. It describes the Center and its research programs. An
online copy of this report can be found in the <CSLI> directory in the file
``Report-No-16.Online.'' In addition to this report, the <CSLI> directory
contains other valuable information about the Center and Turing.  To obtain
a printed version of Report No. 16, write to Dikran Karagueuzian, CSLI,
Ventura Hall, Stanford 94305 or send net mail to Dikran at Turing.

------------------------------

Date: Sun, 21 Oct 84 20:06:59 pdt
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Indic interlingua.

If I recall correctly, the continuing colloquy on Sastric Sanskrit was
motivated by the desire for a natural interlingua for machine trans-
lation.  Pardon my ignorance, but I do not see the efficacy of trans-
lating a language first into something like Sastric Sanskrit with its
concomitant declensional, conjugational and euphonic complexity, then
from there into the target language.  Are not less complex (and less
verbose) formalisms more appropriate, not being weighted with aesthe-
tic amenities and cultural biases?  If Sastric Sanskrit is otherwise
being offered as a paradigm for such a formalism, a more detailed in-
sight into its grammar is needed.

Another facet of the colloquy is its focus on ambiguity in the rela-
tionship of semantic elements (viz. words) in sentences.  There is
also the problem of determining unambiguously the meaning of a word,
when in natural languages words often have more than one meaning de-
pending on context.  Is Sastric Sanskrit unique in its vocabulary as
well as its grammar that each word has but one precisely circumscribed
meaning, and how eclectic and deep is this vocabulary?  Certainly the
professed unequivocality of the syntax is an aid to determining mean-
ings of the words whose interrelationship is thus well defined, but
it would seem preferable not to rely on context or on clumsy defining
clauses in an interlingua.

As an aside on ambiguity being requisite for a literature in a lan-
guage, I might proffer two opinions.  A great writer is often charac-
terized by his ability to mold sentences which have an uncommon flui-
dity and expressivity -- would an unambiguous language allow such
freedom?  Great poetry invokes thoughts and emotions which defy written
expression through the use of rhythm and juxtaposition of disparate
images through words set in defiance of strict grammatical precepts.
Further, the beauty of prose or poetry lies in good part in the use
of ambiguity.  Especially in poetry, distilling many emotions into
a compact construction is facilitated by ambiguity, either semantic
or phonetic.  The beauty of poetry is a very different one from the
beauty of logic or mathematics.

                                                Harry Weeks
                                                (Weeks@UCBpopuli)

------------------------------

Date: Mon, 22 Oct 84 10:06 EDT
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: language evolution


Marcel Schoppers (AIList Digest V2 #143) seems to suggest that certain
dialects (e.g. those which include "Why he be leavin'?" and  "He ain't goin'
nowhere") are the result of forces which SIMPLIFY the grammar of a language:

     ".. my own theory is that the invasion of a language by foreign cultures,
     or vice versa, has a lot to do with how simple a language becomes:
     cross-cultural speakers tend to use only as much as absolutely necessary
     for them to consider themselves understood."

The analyses that I have seen show that such dialects are just as complex,
linguistically, as the standard dialect.  They are just complex in different
ways.  As I understand it, simplified PIDGIN languages quickly evolve into
complex CREOLE languages - all it takes is one generation of native speakers.

Tim

------------------------------

Date: Wed 24 Oct 84 23:23:56-PDT
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: linguistics

        I would like to respond to several linguistic questions discussed
recently. First, in response to Rick Briggs' re-assertion that Shastric Sanskrit
is a natural language, his claim that there was a literature written in it
and that it was in use for over three thousand years is simply irrelevant.
The same could perfectly well be true of an artificial language. There is
literature written in Esperanto, an artificial language which is also used
for scientific communication. It is perfectly possible that Esperanto will
remain with us for thousands of years. But we all know that it is an artificial
language. What makes it artificial is that it was consciously designed
by a human being; it did not evolve naturally.
        This leads to the question of whether Shastric Sanskrit is a natural
language. It looks like it isn't. Rather, it is an artificial language
based on Sanskrit that was used for very limited purposes by scholars. I
challenge Rick Briggs to present evidence that (a) it was in use for anything
like 3000 years; (b) that anyone ever spoke it; (c) that even in written form
it was used extensively at any period; (d) that it was not always restricted
to scholars just as mathematical language is today.
        There has also been some speculation about the historical development
of languages. One idea presented is that languages evolve from morphologically
complex to morphologically simple. This is just not true. It happens to be
true of a number of the Indo-European languages with which non-linguists are
most familiar, but it is not true in general. Second, someone claimed that
the languages of "aboriginal people" (I assume he means "technologically
primitive") are complex and badly organized, and that languages evolve
as people become technologically more advanced. This was a popular idea
in the early nineteenth century but was long ago discarded. We know of no
systematic differences whatever between the languages spoken by primitive
people and those spoken by technologically advanced people. There is no
evidence that language evolves in any particular direction.
        Next, Briggs mistakenly characterizes linguists as prescriptivists.
That is quite false. In fact, the prescriptivists are mainly English and
Literature people or non-academics like William Safire. Linguistics is
non-prescriptive by definition since we are interested in describing what
occurs in natural language and characterizing the possible natural languages.
        Finally (here comes a minor FLAME), why don't you guys read some
serious Linguistics books or ask a linguist instead of posting ignorant
speculation about linguistic issues? Some of us do Linguistics for a
living and there is extensive technical literature on many of these
questions. If I want to, say, know about algorithms I don't sit
around guessing. I look it up in a book on algorithms or ask a computer
scientist.


------------------------------

Date: Thu, 25 Oct 1984  00:09 PDT
From: KIPARSKY@SU-CSLI.ARPA
Subject: Even "shastric" Sanskrit is ambiguous

Take the example "Caitra is cooking rice in a pot". It is ambiguous in
both Sanskrit and English as to whether it is the rice that is in the
pot, or Caitra himself. Clearly the "shastric" paraphrase "There is an
activity subsisting in a pot..."  doesn't resolve this ambiguity. That
can only be done by distinguishing between subject- and object-
oriented locatives (which, incidentally, some natural languages do).
The reason why the Sanskrit logicians' paraphrases don't make that
distinction is that they follow Panini in treating locatives, like all
other karakas, simply as arguments of the verb.  In general, shastric
paraphrases, though certainly very explicit and interesting, are by no
means an "unambiguous language". What they make explicit about the
meanings of Sanskrit sentences is limited by the interpretations
assigned to those sentences by the rules of Panini's grammar.  This
grammar introduces only such semantic categories as are needed to
account for the distribution of Sanskrit grammatical formatives.  So
shastric paraphrases wind up leaving some of the ambiguities of the
corresponding ordinary Sanskrit sentences unresolved.

This sentence and its shastric paraphrase are ambiguous in other ways
as well, namely with regard to aspect ("cooks" or "is cooking"), and
definiteness ("the pot" or "a pot"). These categories don't play a
role here though they do in other areas of Sanskrit.  E.g.  the
generic/progressive distinction is important in derived nouns, where
English in turn ignores it: Sanskrit has two words for "driver",
depending on whether the activity is habitual/professional or not; a
shastric paraphrase might make the distinction explicit for such nouns.

The prevalence of this logicians' system of paraphrasing should not be
exaggerated, by the way. There is no evidence of it having been around
for anything like 3000 years(!), and it is not, to my knowledge, used in
any "literature" other than technical works on philosophy.

------------------------------

Date: Thu, 25 Oct 84 10:28 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: reply to schoppers@xerox

   'United States "English", being the lingo of the melting pot,
    will probably change faster than most.'

The historical linguists tell us that, in fact, when groups of speakers
physically move and establish a new language group, as has happened here in
the US, the 'new' dialect actually changes more slowly than the original
language group's, in this case British English.  As simple evidence, witness the
fact of the diverse English dialects in the British Isles versus the far more
homogeneous regional dialects in the US.  There is also textual evidence from
poetry (rhythm, etc) showing that present day American English has preserved
the patterns of Middle English and early Modern English whereas present day
British English has changed.
-Godden@gmr

[Note that the availability of national radio and television broadcasts in
this century may be altering the evolution of modern dialects.  -- KIL]

------------------------------

Date: 25 Oct 1984 15:54-EDT
From: AHAAS at BBNG.ARPA
Subject: Seminar - Knowledge and Common Knowledge

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

There will be an AI seminar at 10:30 AM Friday November 2, in the
3rd floor large conference room. Abstract follows:


     Knowledge and Common Knowledge In Distributed Environments

                Yoram Moses, Stanford University


Knowledge plays a fundamental role in distributed environments.  An
individual in a distributed environment, be it a person, a robot, or a
processor in a network, depends on his knowledge to drive his
decisions and actions. When individuals' actions have an effect on one
another, it is often necessary that their actions be coordinated. This
coordination is achieved by a combination of having a predetermined
common plan of some kind, and communicating to expand and refine it.
The states of knowledge that are relevant or necessary in order to
allow the individuals to successfully carry out their individual plans
vary greatly according to the nature of the dependence of their plans
on the actions of others.

This work introduces a hierarchy of states of knowledge that a system may
be in.  We discuss the role of communication in ``improving'' the system's
state of knowledge of a given fact according to this hierarchy. The
strongest notion of knowledge that a group can have is Common Knowledge.
This notion is inherent in agreements and coordinated simultaneous actions.
We show that common knowledge is not attainable in practical systems, and
present a variety of relaxations of common knowledge that are attainable
in many cases of interest.  The relationship between these issues and
communication and action in a distributed environment is made clear through
a number of well known puzzles.
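
(In the now-standard notation -- a gloss, not the abstract's own text:
E(p) means "everyone in the group knows p", and the hierarchy is

        E(p),  E(E(p)),  E(E(E(p))),  ...

with common knowledge C(p) the conjunction of p and all levels E^k(p).
One round of guaranteed message delivery climbs at most one level, so
over channels that can lose messages no finite protocol attains C(p)
-- the coordinated-attack puzzle.)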

This talk should be of interest for people interested in distributed
algorithms, communication protocols, concurrency control and AI.  This work
is joint with Joe Halpern of IBM San Jose.

------------------------------

Date: Fri, 26 Oct 84 15:56:32 pdt
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Gestalt Tutorial

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

SPEAKER:        Steven E. Palmer, Psychology Department and
                Cognitive  Science Program, UC Berkeley

TITLE:           ``Gestalt Then and Now: A Tutorial Review''


TIME:                Tuesday, October 30, 11 - 12:30
PLACE:               240 Bechtel Engineering Center
DISCUSSION:          12:30 - 2 in 200 Building T-4

ABSTRACT:       I will present an overview of the nature and
                importance  of  the Gestalt approach to per-
                ception and cognition with  an  emphasis  on
                its  relation  to  modern  work in cognitive
                science. First I will discuss the nature  of
                the  contribution  made by Gestalt psycholo-
                gists in the  historical  context  in  which
                they worked.  Then I will trace their influ-
                ence on some current work in cognitive  sci-
                ence:  textural segmentation (Julesz, Beck &
                Rosenfeld), Pragnanz (Leeuwenberg,  Palmer),
                soap-bubble    systems   (Marr   &   Poggio,
                Attneave,  Hinton),  and  global  precedence
                (Navon, Broadbent, Ginsberg).


Beginning with this talk, the Cognitive Science Seminar will periodically
present tutorials as a service to its interdisciplinary audience.  Each
tutorial will review the ideas in some research area for workers outside
that area.

------------------------------

Date: Thu, 25 Oct 84 15:17:33 EDT
From: "Martin R. Lyons" <991@NJIT-EIES.MAILNET>
Subject: Seminar - AI and Real Life

                     ARTIFICIAL INTELLIGENCE AND REAL LIFE

     "Artificial Intelligence and Real Life", a talk by Paul Levinson of The
New School for Social Research, will be one of several topics discussed as
part of the Second Colloquium on Philosophy and Technology.  The event is
co-sponsored by the Media Studies Program of the New School for Social Research
and the Philosophy & Technology Studies Center at the Polytechnic Institute of
New York.  The talk will be held at the New School's 66 W. 12th St. Building,
NYC, Monday November 12th, at 8pm, and the general public is invited.
Admission is free.

     I am passing this info on for Paul Levinson, the aforementioned speaker.
He can be reached directly at this site as:
Lev%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA or
@MIT-MULTICS.ARPA:Lev@NJIT-EIES.Mailnet

     Please do not address inquiries to me, as all the info I have is above.

 MAILNET: Marty@NJIT-EIES.Mailnet
 ARPA:    Marty%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA
 USPS:    Marty Lyons, CCCC/EIES @ New Jersey Institute of Technology,
          323 High St., Newark, NJ 07102    (201) 596-2932
 "You're in the fast lane....so go fast."

------------------------------

End of AIList Digest
********************
30-Oct-84 22:20:11-PST,16234;000000000001
Mail-From: LAWS created at 30-Oct-84 22:15:52
Date: Tue 30 Oct 1984 22:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #147
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Oct 1984    Volume 2 : Issue 147

Today's Topics:
  LISP - Function-itis & Comparison with C,
  Algorithms - Pessimal Algorithms & Real Programmers,
  Seminars - Robot Navigation & Accessibility of Analogies & Student Models
----------------------------------------------------------------------

Date: Sun 28 Oct 84 17:40:10-PST
From: Shawn Amirsardary <SHAWN@SU-SCORE.ARPA>
Subject: Lisp Function-itis

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Lisp with its very elegant  syntax suffers from acute function-itis.   When
adhering to the traditional  lispish style of using  very few setq and  god
forbid even  fewer progs,  you  end up  with about  a  million and  a  half
functions that get called from usually  only one place.  Of course LET  and
lambda help, but not that much.  My question is does anybody know of a good
method for ordering and perhaps even naming the little suckers?  In  pascal
you have to define  procedures before you  use them, but  the lack of  such
restrictions in lisp means that functions are all over the place.  What  is
the cure?
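
The best I've come up with so far is LABELS, which at least lets a
one-caller helper live lexically inside its only caller, settling the
ordering and some of the naming (a sketch; SQUASH-WHITESPACE is just an
invented example) -- but that only goes so far:

    (defun squash-whitespace (chars)
      "Collapse each run of whitespace in CHARS to a single space."
      (labels ((whitep (c) (member c '(#\Space #\Tab #\Newline)))
               (walk (cs prev-white)
                 (cond ((null cs) '())
                       ((whitep (car cs))
                        (if prev-white
                            (walk (cdr cs) t)
                            (cons #\Space (walk (cdr cs) t))))
                       (t (cons (car cs) (walk (cdr cs) nil))))))
        (walk chars nil)))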

                                --Shawn

------------------------------

Date: Tue, 30 Oct 84 21:42:58 -0200
From: eyal@wisdom (Eyal mozes)
Subject: Re: different language types

  > But seriously, one can adopt a very abstract (i.e.
  > applicative/functional) programming style or a very imperative
  > (C-like) style when using Lisp.  On the other hand, adopting an
  > applicative style in C is difficult (yes, I've tried!).  So Lisp
  > is certainly more versatile.

Really!! I've never yet seen an "imperative" non-trivial LISP program
which is not impossible to read, full of bugs which nobody knows how to
correct, and horribly time-consuming (most of UNIX's VAXIMA is a good
example of what I mean). Writing "imperative" style in LISP is a
programming equivalent of "badgorithms".

You can be as abstract as you want to be in C or Pascal.  I don't think
there is anything for which you can't come up with a good program in C,
if your writing style is good (and if it isn't, no language will help).
Of course, there are some activities, especially in some areas of AI,
which are made much easier by the functional style of LISP, and its
representation of programs as data - but even this wouldn't be true for
*all* AI systems. But in terms of versatility, I don't think there can
be much question about the big advantage of C, Pascal, and languages of
this type.

------------------------------

Date: 29 Oct 84 14:41 PST
From: JonL.pa@XEROX.ARPA
Subject: Pessimal Algorithms ("Badgorithms"?)

The following "badgorithm" comes from a practical joke, perpetrated many
years ago; although it is not a naturally occurring "badgorithm", it does
have a humorous side.

In high school, I worked part-time for a university computing center,
programming on an IBM 650 (don't ask how many years ago!).  [One really
shouldn't pooh-pooh the 650 -- it was the world's first List processing
computer!  Explication follows].  Its main memory consisted of 2000
10-digit decimal words, stored on a rotating magnetic drum; there were
40 rotational channels on the drum, with each channel holding 50 words
(one complete revolution).  Since the various instructions took a
variable amount of time, it would be most unwise to sequence the
instructions by merely incrementing each instruction address by 1:
an instruction which took more time than the interval between two
successive words in a "channel" would thus be blocked for one full
drum rotation.  An instruction consisted of an operator, an operand
address, and a "next instruction address" (i.e., a CDR link in Lisp
parlance); thus one could assign the sequencing of instructions to be
"optimal" in that the successor of an instruction at word offset A (mod
50) would be A+n (mod 50), where n is the time, in fiftieths of a drum
rotation, required for the execution of the operator stored at A.
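
(In modern terms the placement rule is one line of arithmetic; a
sketch, with the 50-words-per-channel figure from above, and function
names invented here:)

    ;; Optimal placement: the next instruction sits just where the
    ;; drum will be when the current one finishes.  The pessimizing
    ;; version lands one word short of that, so the drum must turn
    ;; 49/50 of a revolution to reach it.
    (defun soap-next (offset exec-time) (mod (+ offset exec-time) 50))
    (defun suds-next (offset exec-time) (mod (+ offset exec-time -1) 50))

    ;; (soap-next 10 3) => 13     (suds-next 10 3) => 12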

The IBM 704/709 series had a machine language assembler called SAP, for
"Symbolic Assembly Program"; the 650 had SOAP, for "Symbolic Optimal
Assembly Program".  One would speak of "Soaping" a program, meaning to
assemble a symbolic deck of cards into a self-loading machine-language
deck.  My cohorts and I dubbed the obvious pessimizing version of SOAP
as SUDS, for "Symbolic Un-optimal Disassembly System" (it would assign
the "next instruction" to a word offset just 1 short of optimal, and
thus would slow down the resultant object code by up to a factor of 50).

As a gag, we SUDS'd the SOAP deck, and left it for others to use.
Imagine the consternation when a program that normally took 10 minutes
to assemble suddenly began taking over an hour!  Of course, we were
quickly found out, and SUDS was relegated to the circular hacks file.

-- Jon L White --

------------------------------

Date: 22 Oct 84 16:18:41 EDT
From: Michael.Jones@CMU-CS-SPICE
Subject: Real Programmers

           [Excerpted from the CMU bboard by Laws@SRI-AI.]

[I regret having to truncate this, but the original was too long to
distribute on AIList.  I have decided to proceed with the following
because it fits in with other recent AIList messages.  -- KIL]


     A recent article devoted to the *macho* side of programming
     made the bald and unvarnished statement:

                Real Programmers write in Fortran.

     [...]
     I feel duty-bound to describe,
     as best I can through the generation gap,
     how a Real Programmer wrote code.
     I'll call him Mel,
     because that was his name.

     I first met Mel when I went to work for Royal McBee Computer Corp.,
     a now-defunct subsidiary of the typewriter company.  [...]
     Mel's job was to re-write
     the blackjack program for the RPC-4000.  [...]
     The new computer had a one-plus-one
     addressing scheme,
     in which each machine instruction,
     in addition to the operation code
     and the address of the needed operand,
     had a second address that indicated where, on the revolving drum,
     the next instruction was located.
     In modern parlance,
     every single instruction was followed by a GO TO!  [...]

     Since Mel knew the numerical value
     of every operation code,
     and assigned his own drum addresses,
     every instruction he wrote could also be considered
     a numerical constant.
     He could pick up an earlier "add" instruction, say,
     and multiply by it,
     if it had the right numeric value.
     His code was not easy for someone else to modify.

     I compared Mel's hand-optimized programs
     with the same code massaged by the optimizing assembler program,
     and Mel's always ran faster.
     That was because the "top-down" method of program design
     hadn't been invented yet,
     and Mel wouldn't have used it anyway.
     He wrote the innermost parts of his program loops first,
     so they would get first choice
     of the optimum address locations on the drum.
     The optimizing assembler wasn't smart enough to do it that way.

     Mel never wrote time-delay loops, either,
     even when the balky Flexowriter
     required a delay between output characters to work right.
     He just located instructions on the drum
     so each successive one was just *past* the read head
     when it was needed;
     the drum had to execute another complete revolution
     to find the next instruction.  [...]
     Mel called the maximum time-delay locations
     the "most pessimum".  [...]

     Perhaps my greatest shock came
     when I found an innocent loop that had no test in it.
     No test. *None*.
     Common sense said it had to be a closed loop,
     where the program would circle, forever, endlessly.
     Program control passed right through it, however,
     and safely out the other side.
     It took me two weeks to figure it out.

     The RPC-4000 computer had a really modern facility
     called an index register.
     It allowed the programmer to write a program loop
     that used an indexed instruction inside;
     each time through,
     the number in the index register
     was added to the address of that instruction,
     so it would refer
     to the next datum in a series.
     He had only to increment the index register
     each time through.
     Mel never used it.

     Instead, he would pull the instruction into a machine register,
     add one to its address,
     and store it back.
     He would then execute the modified instruction
     right from the register.
     The loop was written so this additional execution time
     was taken into account --
     just as this instruction finished,
     the next one was right under the drum's read head,
     ready to go.
     But the loop had no test in it.

     The vital clue came when I noticed
     the index register bit,
     the bit that lay between the address
     and the operation code in the instruction word,
     was turned on--
     yet Mel never used the index register,
     leaving it zero all the time.
     When the light went on it nearly blinded me.

     He had located the data he was working on
     near the top of memory --
     the largest locations the instructions could address --
     so, after the last datum was handled,
     incrementing the instruction address
     would make it overflow.
     The carry would add one to the
     operation code, changing it to the next one in the instruction set:
     a jump instruction.
     Sure enough, the next program instruction was
     in address location zero,
     and the program went happily on its way.

     I haven't kept in touch with Mel,
     so I don't know if he ever gave in to the flood of
     change that has washed over programming techniques
     since those long-gone days.
     I like to think he didn't.
     In any event,
     I was impressed enough that I quit looking for the
     offending test,
     telling the Big Boss I couldn't find it.  [...]
     I didn't feel comfortable
     hacking up the code of a Real Programmer."


         -- Source: usenet: utastro!nather, May 21, 1983.


[The Cray is so fast it can execute an infinite loop in three minutes?
This machine might beat it!  -- KIL]

------------------------------

Date: 28 Oct 1984  14:43 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Robot Navigation

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday, October 31;      4:00pm;     8th Floor Playroom

Navigation for Mobile Robots

Rodney A. Brooks

There are a large number of interesting questions in how to build a
mobile robot capable of navigating through unknown surroundings in
order to complete some desired task. Issues include obstacle avoidance
using local observations, overall path planning, registration with a
map and building a map from observations. There is a lot of ongoing
and promising work on the first two of these problems. Less has been
done on the last two.  Registration work has been most successful with
detailed a priori maps in two domains: (1) indoor uncluttered areas
with flat walls giving unambiguous geometric clues, and (2) areas with
reliably identifiable and accurately locatable landmarks visible over
a large area.  Re-registration with maps generated from a robot's own
observations has mainly been successful in two modes: (1) incremental
re-registration involving small motions from a known location, or (2)
in an environment with active beacons providing reliably identifiable
and locatable landmarks.

This talk focuses on some of the issues in building a map from
unreliable observations and in re-registering the robot to that map
much later, again using unreliable observations. In particular we
consider a new map representation, the requirements on the
representations of the world produced by vision, the role of
landmarks, and whether other sensors such as compasses or inertial
navigation systems are needed.

COMING SOON: Kent Pitman [Nov 7], Ryszard Michalski [Nov 14],
             Phil Agre   [Nov 28]

------------------------------

Date: 29 Oct 1984 14:10-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Accessibility of Analogies

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"Mental Models of Electricity"

Yvette Tenney, BBN Laboratories
Hermann Hartel, University of Kiel, West Germany

BBN Laboratories, 10 Moulton St, Cambridge.
Third floor large conference room, 10:30 AM.
Monday November 5th.


The presentation will consist of two short talks that were part of a
conference on Representations of Students' Knowledge in Electricity and
the Improvement of Teaching, held in Ludwigsburg, Germany this fall.

Talk 1:  Yvette Tenney (in collaboration with Dedre Gentner)
         "What makes analogies accessible:  Experiments on the
         water-flow analogy for electricity."

         In analogy, knowledge can be transferred from a known
         (base) domain to a target domain, provided the learner
         accesses the analogy.  We used the water-electric current
         analogy to test the hypothesis that prior familiarity
         with the base domain (Experiment 1) and pre-training
         on the base domain (Experiment 2) increase the
         likelihood of noticing the analogy.  Results showed
         that greater knowledge of the base domain did not
         increase accessibility, although it did increase the
         power of the analogy if detected.

Talk 2:  Hermann Hartel
         "The electric circuit as a system:  A new approach."
         [...]

------------------------------

Date: Tue 30 Oct 84 09:28:59-PST
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Student Models

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, November 2, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Derek Sleeman
             School of Education & HPP

ABSTRACT:    The PIXIE Project: The Inference and Use of Student
             (user) Models

For a decade or more the importance of having accurate student models
to guide Intelligent Tutoring Systems (ITSs) has been stressed.  I
will give an overview of the several types of models which have been
inferred and will talk in some detail about a system which infers
overlay models and Pixie which uses process-orientated models.
Currently, all these techniques effectively determine whether the
current user's behaviour falls within a previously defined
model-space.  The focus of some current work is to see whether these
techniques can be extended to be more data-sensitive.  (Analogous
issues arise when an ITS or ES is attempting to reason with an
incomplete database.)

Issues which arise in the use of models to control (remedial)
dialogues will be addressed.

The seminar will conclude with an overview of the fieldwork shortly to
be undertaken.  PIXIE now runs on a PC (in LISP) and several of these
machines will be used to "diagnose" the difficulties which high school
students have with Algebra and maybe Arithmetic.  It is envisaged that
PIXIE will be used to screen several classes, and that the class
teachers will remediate students on the basis of the diagnostic
information provided by PIXIE.  These sessions will then be analyzed
to determine how "real" teachers remediate; remedial subsystem(s) for
PIXIE will then be implemented.




Paula

------------------------------

End of AIList Digest
********************