
From:	COMSAT          2-NOV-1984 22:05  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a005985; 1 Nov 84 13:34 EST
Date: Thu  1 Nov 1984 09:35-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #148
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 2 Nov 84 21:55 EST


AIList Digest            Thursday, 1 Nov 1984     Volume 2 : Issue 148

Today's Topics:
  Linguistics - Bibliography Request,
  AI Tools - Workstations under $50K,
  News - IJCAI Awards,
  Linguistics - Sastric Sanskrit,
  Conference Review - Southern California AI Society,
  Seminar - Coherence in Life Stories
----------------------------------------------------------------------

Date: 30 Oct 1984 20:15:33 EST
From: Miroslav Benda <BENDA@USC-ISI.ARPA>
Subject: linguistics bibliography

Several years ago, Gazdar & Klein published a "Bibliography of Contemporary
Linguistic Research", which was an indexed guide to papers and books on
generative linguistics.  Is there anything similar online somewhere?
Something that is kept up to date (and not several years behind, like
"Bibliographie Linguistique").

Miroslav Benda
Boeing Computer Services

------------------------------

Date: Wed, 31 Oct 84 13:45:48 pst
From: (Marvin Erickson [pnl]) erickson@lbl-csam
Subject: AI Workstations under $50K

I am interested in comments on the performance of low cost (under $50K) AI
Workstations.  Applications include expert system development and Landsat
image processing.  I am particularly interested in the availability of tools
for either application that run under Common Lisp on a PERQ and provide
object-oriented capabilities in addition to Lisp.
Ron Melton
Battelle/Pacific Northwest Laboratory
(509) 375-2932
erickson@lbl-csam

------------------------------

Date: Tue, 30 Oct 84 10:43:32 pst
From: Alan Mackworth <mack%ubc.csnet@csnet-relay.arpa>
Subject: IJCAI Awards

               The IJCAI Award for Research Excellence

          The Board of Trustees of  International  Joint  Confer-
     ences  on  Artificial Intelligence Inc. is proud to announce
     the establishment of The IJCAI Award for Research Excellence
     to  honour  sustained  excellence in Artificial Intelligence
     research.  The Award will be made every second year, at  the
     International  Joint  Conference on Artificial Intelligence,
     to a scientist who has carried out a program of research  of
     consistently   high  quality  yielding  several  substantial
     results.  If the research program has been carried out  col-
     laboratively  the  award may be made jointly to the research
     team.

          The Award carries with it a certificate and the sum  of
     $1,000  plus  travel and living expenses for the IJCAI.  The
     researcher(s) will be invited to deliver an address  on  the
     nature and significance of the results achieved.  Primarily,
     however, the award carries the honour of having  one's  work
     selected by one's peers as an exemplar of sustained research
     in the maturing science of Artificial Intelligence.

          We hereby call for nominations for The IJCAI Award  for
     Research  Excellence  to be made at IJCAI-85 in Los Angeles.
     The  accompanying note on  Selection  Procedures  for  IJCAI
     Awards provides the relevant details.


                   The Computers and Thought Award

          The Computers and Thought  Lecture  is  given  at  each
     International Joint Conference on Artificial Intelligence by
     an outstanding young scientist in the  field  of  artificial
     intelligence.  An award of $1,000 and payment for travel and
     subsistence expenses are provided to the recipient.  Nomina-
     tion  and  selection  procedures  are  outlined  in the note
     Selection Procedures for IJCAI Awards.  The Lecture is given
     one evening during the Conference, and the public is invited
     to attend.  The Lectureship was established  with  royalties
     received  from  the  book  Computers  and Thought, edited by
     Feigenbaum and Feldman;  it is currently supported by income
     from IJCAI funds.

          Past recipients of this honour have been Terry Winograd
     (1971), Patrick Winston (1973), Chuck Rieger (1975), Douglas
     Lenat (1977), David Marr (1979), Gerald Sussman  (1981)  and
     Tom Mitchell (1983).

          Nominations are invited for The Computers  and  Thought
     Award  to  be  made  at IJCAI-85 in Los Angeles. The note on
     Selection Procedures for IJCAI Awards covers the  nomination
     procedures to be followed.


                Selection Procedures for IJCAI Awards

          Nominations for The Computers and Thought Award and The
     IJCAI  Award for Research Excellence are invited from all in
     the Artificial Intelligence  international  community.   The
     procedures are the same for both awards.

          There should be a nominator and a  seconder,  at  least
     one  of whom should not have been in the same institution as
     the nominee.  The nominee must agree to be  nominated.   The
     nominators should prepare a short submission less than 2,000
     words for the voters, outlining the nominee's qualifications
     with respect to the criteria for the particular award.

          The award selection committee is the union of the  Pro-
     gram,  Organizing  and Conference Committees of the upcoming
     IJCAI and the Board of  Trustees  of  IJCAII  with  nominees
     excluded.   Nominations should be submitted before March 31,
     1985 to the IJCAI-85 Conference Chair:

                    Alan Mackworth
                    Department of Computer Science
                    University of British Columbia
                    Vancouver, B.C. V6T 1W5
                    Canada                  Tel. (604) 228-4893

                    Net Addresses   CSnet:   mack@ubc
                                    ARPAnet: mack%ubc@CSNet-Relay
                                    UUCP:    mack@ubc-vision

------------------------------

Date: 29 Oct 1984 10:04-PST (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Re: AIList Digest   V2 #146


        I have been challenged to defend some of my recent assertions.
Bill Poser should be more careful when he criticizes ("Finally, Briggs
mistakenly characterizes linguists as prescriptivists").  I said the
exact opposite on AIList of Thursday, Oct. 18: that linguistics has
become descriptive rather than prescriptive.  My own humble opinion is
that non-prescriptive linguistics will be the death of English.
        With regards to machine translation, the "aesthetic amenities"
could be an advantage rather than a disadvantage, since it might be
possible to encode poetic constructions in the interlingua; otherwise
many subtleties will be lost in translation.  The Sanskrit scholars
have done a great deal of work formulating a mechanism for expressing
natural-language entities unambiguously.  All I am saying is that it
would be unwise to sweep under the carpet millennia (yes, millennia) of
research without attempting to learn something from it.
        Word ambiguity exists in Classical Sanskrit but is not a serious
problem in the Sastra, since the level of representation of meaning
is usually below the word level.  While Caitra remains Caitra, "cook"
becomes a process of softening, etc.  By going one level of representation
deeper, ambiguity between two possible meanings of the same word
is avoided.
        The use of Sastric Sanskrit can be dated back at least as far as
Patanjali's Mahabhashya (1st millennium B.C.).  The tradition continued through
Bhartrhari (the Vakyapadiya), Kaundabhatta, Dikshita (Vaiyakarana-
bhusanasara) and (in the 19th century) Nagesha
(Vaiyakaranasiddhantamanjusa).  That it was spoken is evidenced from the
fact that many Sastric works are actually transcripts of long dialogues
between the different "schools" (e.g. the grammarians and the logicians).
Their arguments were expressed in Sastric Sanskrit.  Arguing about
whether or not it was actually spoken is similar to  asking the same
of the Platonic dialogues.  Admittedly, its use was limited to
the scientific community to a large extent.  The same can be said about
the type of language used in today's scientific community, with its
own specialized jargon and style.  Is Mr. Poser suggesting that
this also is not a natural language?
        I do not understand exactly what Kiparsky means when he asserts
that there is ambiguity in whether Caitra or the rice is in the pot.
What resides in the pot is a "locality", which has as its object "rice".
Caitra is the agent of that activity; in no way can he be construed to
be in the pot.  Nothing is said about where Caitra is; I suppose he
could be in the pot, but the notion of unspecified slots being filled
in by defaults would be used.  Normally, the agent of cooking is not
in the pot, and if he were, it would probably be explicitly specified.
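The default-filling Briggs appeals to is essentially the frame idea familiar from AI knowledge representation.  As a rough illustration only (Python, and every slot name here, is a hypothetical choice of mine, not anything from the Sastric literature), a cooking event with defaulted unspecified slots might look like this:

```python
# A toy semantic-case frame for "Caitra cooks rice in a pot".
# Slots given explicitly override the defaults; unspecified
# slots are filled in by default, as Briggs describes.
DEFAULTS = {"agent_location": "outside the pot"}

def make_cooking_frame(agent, obj, locus, **explicit):
    frame = {"action": "cook", "agent": agent,
             "object": obj, "locus": locus}
    # Fill any slot the sentence left unspecified from the defaults.
    for slot, default in DEFAULTS.items():
        frame[slot] = explicit.get(slot, default)
    return frame

frame = make_cooking_frame("Caitra", "rice", "pot")
print(frame["agent_location"])   # → outside the pot
```

An explicit statement ("Caitra, in the pot, cooks rice") would simply override the defaulted slot rather than rely on it.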
        With regards to definiteness ("the pot" or "a pot"):  "pot" is
defined as that which has potness (ghatatvam) in it.   More exactly,
a pot (or any other object) is defined by three terms (the determinant
of meaninghood (shaakyatavacchedaka) is made up of three elements). The first
is the genus (potness), the second is the form (or akrti) ("having a
conch-like appearance about its neck [kambugrivadimatvam]"), and
the third is the individual (pot "ghata").
        I think that much of the confusion results from too close
a correspondence being assumed from Classical to Sastric Sanskrit.
Much of so called "ambiguity" does not exist as the words themselves
are discarded for deeper representation.  Syntactic cases are also
changed when they are expressed as semantic cases since "over a fire"
can mean "by means of a fire" in the case of cooking.
        Let me state exactly what Sastric Sanskrit is: it is
"the most sophisticated stage of the development of Sanskrit
(through Vedic, Classical, etc.), in which a very sophisticated
philosophy of 'the meaning of a sentence' was developed, and in
which unambiguity was striven for and obtained to a large extent."
By "large extent" I mean that it is as unambiguous as a description
in semantic nets (say, conceptual dependency); in fact it is more
precise.  What I suggest is that the Linguistics and AI communities,
and especially those who are involved in both, take a very close
look at the Sastric methodology and its philosophy, with natural
language processing in mind.  The Sastric scholars did much research
into how the mind perceives the meaning of words, and it is
surprising how little exposure that work has gotten.

Rick Briggs

------------------------------

Date: Tue, 30 Oct 84 10:18:47 PST
From: Scott Turner <srt@UCLA-LOCUS.ARPA>
Subject: SCAIS Review


  The first meeting of the Southern California AI Society was a major success,
with over 200 people from all walks of life (and industry too :-) attending.
The event was held at the Faculty Center at UCLA, and arrangements were
very comfortable.

  The agenda included almost 8 hours of talks by over 50 speakers.  This
rather long format was intended to allow all the participants to become
familiar with AI activities all around Southern California, but the great
length proved to be a drawback.  By the end of the day the crowd had thinned
considerably.

  Most of the talks were short overviews of ongoing work, but among the
more interesting talks were Rogers Hall of UC Irvine, "Learning in Multiple
Knowledge Sources", Erik Mueller of UCLA "Daydreaming and Story Invention",
and Chuck Williams of Inference Corp., "ART:  Automated Reasoning Tool."

  A short business meeting was held after the talks were finished, where
preparations for IJCAI-85 were discussed and an interim governing board for
SCAIS was selected (i.e., people volunteered).  In all likelihood future
SCAIS meetings will occur monthly or bi-monthly at rotating hosts.  Each
host will showcase its AI activities and invite speakers on a selected
topic.  This format will give SCAIS members a chance to visit all the local
AI Labs over the course of the year, without unduly straining the capacity of
any single Lab.

  After the business meeting there was a demonstration session in the UCLA
AI Lab, hosted by the infamous UCLA Airheads.  Erik Mueller demonstrated his
Daydreamer, Sergio Alvarado demonstrated OpEd, a program that models reading
editorials, Uri Zernik demonstrated GATE, the UCLA Graphical AI Tools
Environment, Charlie Dolan demonstrated Aesop, a program that learns planning
knowledge from reading Aesop's fables, and a number of other students
demonstrated other software and current work.

    Scott R. Turner
    UCLA Computer Science Department
    3531 Boelter Hall, Los Angeles, CA 90024
    ARPA:  srt@UCLA-LOCUS.ARPA
    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt

------------------------------

Date: Wed, 31 Oct 84 17:33:44 pst
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Coherence in Life Stories

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

    TIME:                Tuesday, November 6, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Charlotte Linde, STRUCTURAL SEMANTICS

TITLE:          ``The Creation of Coherence in Life Stories:
                Commonsense  Philosophy and Special Explana-
                tory Systems''

This talk reports on a study of the creation  of  coherence
in oral life stories.  Such coherence is not a property of a
particular life, but rather an achievement of the speaker in
constructing  the story.  Studying the creation of coherence
permits us to examine the  implicit  assumptions  which  are
made  about the nature of socially accepted reasons for life
decisions.  For example, when a  speaker  tells  us  how  he
became  an optometrist, the way he makes this story coherent
can give us insight into folk beliefs about  proper  causes,
the nature of accident, etc.

The first level of the creation of coherence is the level of
implicit,  commonsense philosophical categories, such as, in
English, causality, accident, continuity and  discontinuity.
Speakers  must  show that their lives exhibit proper reasons
for major decisions.  If they can not frame their stories as
exhibiting  such  causality,  they must then analyze them as
involving accident or discontinuity.  Stories about accident
or  discontinuity  tend  to  be  organized  to show that the
accident is purely local, that is, that only one small  part
of  an  otherwise  well-motivated life is accidental.  Simi-
larly, discontinuity is managed by a variety of  strategies,
such  as  discontinuity  as  only apparent, discontinuity as
sequence, and discontinuity as  metacontinuity.   All  these
strategies  work  to  show that the speaker's life is not as
discontinuous as it might look.

A more complex level of coherence is the level  of  explana-
tory  systems.   These  are  non-expert  versions of various
expert systems in the  culture,  such  as  popular  Freudian
theory,  behaviorism,  feminism, and astrology.  The systems
at this level all presuppose the categories of the  previous
level.  That is, they all assume the existence of causality,
but specify possible causes  which  are  somewhat  different
from the causes permitted by the commonsense system.

------------------------------

End of AIList Digest
********************

From:	COMSAT          5-NOV-1984 21:28  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a009554; 5 Nov 84 13:50 EST
Date: Mon  5 Nov 1984 10:05-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #149
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 5 Nov 84 21:12 EST


AIList Digest             Monday, 5 Nov 1984      Volume 2 : Issue 149

Today's Topics:
  AI Societies - SCAIS,
  AI Tools - LISP/PROLOG Availability & CAI authoring systems & OPS5 examples,
  AI Literature - Technical Publication Addresses & Linguistics Bibliography,
  Programming - Malgorithms & Programming Style
----------------------------------------------------------------------

Date: Thu, 1 Nov 84 12:34:55 PST
From: Scott Turner <srt@UCLA-LOCUS.ARPA>
Subject: SCAIS

Before I personally get flooded with mail, please send all requests
concerning joining SCAIS, etc., to scais-request@ucla-locus.

    Scott R. Turner
    UCLA Computer Science Department
    3531 Boelter Hall, Los Angeles, CA 90024
    ARPA:  srt@UCLA-LOCUS.ARPA
    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt

------------------------------

Date: 2 Nov 1984 14:06-EST
From: CDR Jeff Ackerson (ACKERSON@USC-ISI)
Subject: LISP/PROLOG Availability

Would like to know if anyone knows of the availability of either a
LISP or PROLOG environment that will run on an Altos 586-40.
Currently running Xenix.

------------------------------

Date: 2 Nov 84 13:16-EDT (Fri)
From: Malmros (Fs Hill) <malmros%umass-ece.csnet@csnet-relay.arpa>
Subject: CAI authoring systems

     Perhaps someone out there can help me.  I'm looking for a CAI
authoring system and all the ones I've seen so far have been absolute
dogs.  They're pedagogically simplistic and they're all geared to the
same old tutorial/drill-practice kind of application.  Does anyone know
of something more exciting that would be of sufficient pedagogical
quality for use at the college level?  I'm writing to AILIST because I
don't know where else to go.  My address is:

     malmros.umass-ece@csnet-relay.arpa

     thanks very much.

------------------------------

Date: 4 Nov 1984 10:01:09 EST (Sunday)
From: Charles Howell <m15434@mitre>
Subject: OPS5 examples


I am developing a small KBS using OPS5.  My first goal (if you'll
pardon the expression...) is simply to "get up to speed" on OPS5.
When learning other languages/systems, I've found examples to  be
very  helpful.   Does  anyone  have  any examples of working OPS5
systems that they can send me or give me pointers to?   If  there
is  much  response,  I'll be happy to collect them and distribute
them (or collect and distribute pointers, as the  case  may  be).
If  you  have an example that you wouldn't mind my using (the KBS
is for a graduate course in  AI)  but  you  don't  wish  to  have
distributed,  I'll  of course not include it in the distribution.
I hope my system will be a bit more stable from now on,  so  that
the turnaround on distributing the OPS5 examples isn't as long as
it was for the technical publications addresses...

Thanks very much,
Chuck Howell
The MITRE Corporation

------------------------------

Date: 4 Nov 1984  9:48:08 EST (Sunday)
From: Charles Howell <m15434@mitre>
Subject: Technical Publication addresses


A  month  or  so  ago,  I  posted  a  request  for  addresses  of
institutions publishing technical reports related to AI.  Several
people responded; thanks! Several people also requested a copy of
the  collected  list  of  addresses.   Unfortunately,  the file I
collected these messages in has been destroyed, along with a  lot
of  my other files... and, of course, the most recent backup that
is usable predates the creation of this file.   If you would
like  a  copy  of this list, please let me know.  I apologize for
the delay in responding to those who already sent such a request.

Chuck Howell
The MITRE Corporation

------------------------------

Date: Fri 2 Nov 84 09:00:29-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Linguistics Bibliography

There is an index published by Sociological Abstracts which is called
Language and Language Behavior Abstracts.  I believe it is a quarterly
publication.  It is also an online database available on at least Dialog
and BRS.  I have not made a complete evaluation of these abstracts for
their relevance to computer science, AI etc.  However this week at the
Online conference in San Francisco I did learn that the index does
have a section on mathematical and computational linguistics.  A very
quick search with the descriptor term artificial intelligence came
up with 45 hits.  However much of the material was in the older part
of the database.  I was told there has been some changes in scope and
as of now I am not sure whether their scope has expanded in the areas
of interest of AIList readers or the scope has been limited.

Harry Llull, Computer Science Library Stanford University.

------------------------------

Date: 31 Oct 1984 08:36-CST
From: SAC.LONG@USC-ISIE.ARPA
Subject: Badgorithms


I have noticed much discussion on natural languages and AI processors
with difficulties in linguistic interpretation due to varied
meanings (?!?).  Well, my PC (personal consideration) on the use of
'badgorithm' is that it is a poor construct, given the similarity and
pronounceability of its original word, 'algorithm'.  In view of this
I would like to submit in its place a modified parsed version:

                           'badorithm'

It seems to me the substituted word is much simpler to pronounce and
has more of an audio similarity to 'algorithm'.  It is not my intent to
bring any discredit or defamation to the coiner of 'badgorithm', but
only to present what may be a more useful form of the original idea.
Of course, such things are a matter of personal taste to a great extent.

What do you think?

------------------------------

Date: Fri 2 Nov 84 14:28:15-CST
From: CMP.BARC@UTEXAS-20.ARPA
Subject: "badgorithms" vs. "algospasms"

I know this is an election year, but perhaps we need more than two choices
on some important issues.  How about "malgorithms" or is that too easy?

Dallas Webster

------------------------------

Date: Wed 31 Oct 84 09:55:03-MST
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Programming Style in Lisp and C

(Oh no, not more "programming style" flames!)

I'm a little puzzled why "imperative style" in Lisp is so much worse
than the same style in C or Pascal.  There's a difference between
abstraction and imperative code.  Last year I wrote a large quantity
of graphics C code and attempted to make it as functional as possible.
I ended up resorting to amazing numbers of preprocessor macros (is this
abstraction?  why not just use the macro processor (it's computationally
adequate) and dump the C compiler?) in order to get the polymorphic
functional style, and it was pretty messy.  Why should one have to tinker
with pointers explicitly, or allocate storage manually?  I have yet
to see a nontrivial C program that dispensed entirely with assignment
statements.

There are at least two reasons for doing "imperative style" in Lisp.
The first reason is to reduce the complexity of programs.  True functional
style requires that you *always* pass around all data that you will
ever use.  For instance, I/O parameters can never be defaulted;
all of them must always be passed to the I/O functions.  I've
heard very few people actually advocating that in practice (although
many are quick to advocate it in theory...).  Another way to put it is that
global variables are completely disallowed!  The second reason is that
Lisp kernels generally have to be coded for von Neumann machines, and so their
code tends to look more "imperative" in nature (of course, I'm assuming
that one is doing Lisp-in-Lisp).  The PSL kernel definitely has an
atypical coding style, but of course you can't implement LAMBDA using
LAMBDA directly!

In any case, I've only seen a few Lisp programs that were totally
without side-effect operators, and those were small examples.  I'd
be interested to hear of a major system being done in a true functional
style (Steele's RABBIT compiler for Scheme is the closest candidate
I know).  Side effects have their place, albeit a rather small one...
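Shebs's point that true functional style forces all state to be threaded through every call can be illustrated outside Lisp as well.  Here is a minimal Python sketch (my own hypothetical names, not anything from the digest) contrasting a mutated global with explicitly passed state:

```python
# Imperative style: a global table mutated in place (the analogue
# of SETQ on a global variable).
totals = {}

def record_imperative(item, amount):
    totals[item] = totals.get(item, 0) + amount

# Functional style: the whole table is passed in and a new one is
# returned; the caller must always thread the state explicitly.
def record_functional(table, item, amount):
    new_table = dict(table)   # no mutation of the argument
    new_table[item] = new_table.get(item, 0) + amount
    return new_table

t = {}
t = record_functional(t, "apples", 3)
t = record_functional(t, "apples", 2)
print(t)   # → {'apples': 5}
```

The functional version is easier to reason about, but every caller in the chain now carries the table as a parameter, which is exactly the complexity cost Shebs describes.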

                                                        stan shebs

------------------------------

Date: Wed 31 Oct 84 10:16:48-MST
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: What To Do With All Those Functions

I use lots and lots of small functions in Lisp programs also, and have
adopted a sort of semi-systematic depth-first scheme for ordering them
in a file.  That is, if A calls B and C, and B calls D, and C calls
E and F, then I put them in the order A B D C E F.  The rationale is
that B and D (for instance) form a unit, and should therefore be
grouped together.  If a function is used in several places, I usually
put it close to the first place.  If it's used in *lots* of places,
it's a utility, and therefore goes in a separate utilities file.
Files should be kept relatively small (<1000 lines), and should have
plenty of "separating" documentation that divides larger files into
several parts.
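The ordering Shebs describes is a depth-first preorder walk of the call graph, with repeated functions kept at their first appearance.  A minimal Python sketch (the graph is his A/B/C/D/E/F example; the function names are mine):

```python
def source_order(call_graph, root):
    """Return Shebs's file ordering: each function followed
    immediately by the helpers it introduces (depth-first),
    with functions used in several places kept at the first
    place they appear."""
    order = []

    def visit(fn):
        if fn in order:        # already placed near its first use
            return
        order.append(fn)
        for callee in call_graph.get(fn, []):
            visit(callee)

    visit(root)
    return order

# A calls B and C; B calls D; C calls E and F.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
print(source_order(graph, "A"))   # → ['A', 'B', 'D', 'C', 'E', 'F']
```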

Don't think of functions as a burden;  they are an advantage.  There's
no limit on name lengths, and their cost is trivial, so you can name
them to be very mnemonic (such as "get-first-item-and-mung").  This
is a great aid for debugging.  Also, programs will be easier to modify
later on (and save scrolling work for the text editor!).

                                                        stan shebs

------------------------------

Date: Wed 31 Oct 84 14:18:32-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Functionitis

I thought that there were several books answering questions of the sort
Shawn Amirsardary posed and that all decent Universities offered courses
that dealt with such questions.  The subject is called "programming
methodology".  Since the questions are age-old and so are the answers to
them, I will be brief.

What do you do with "million and a half functions that get called from
usually one place"?  You use "abstraction".  If you are trying to
understand a program by tracing it, you are NOT using abstraction.  In that
case you would naturally prefer SETQ's to functions.  But, if you know how
to use abstraction, you would hate SETQ's.

How do you order function definitions?  You organize them into "modules",
which are also called "classes", "forms", "clusters", or "packages" in
various contexts.  If you are using a state-of-the-art language, it should
support modules.  Otherwise, you can still organize the functions into
modules on your own.
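Reddy's advice to "organize the functions into modules on your own" can be approximated even in a language chosen only for illustration; this Python sketch (all names hypothetical) groups a data structure and every function that touches it behind one abstraction, instead of scattering SETQ-style globals:

```python
class Inventory:
    """One 'module': the table plus all operations on it.
    Callers use add/count and never touch the table directly."""

    def __init__(self):
        self._counts = {}          # hidden representation

    def add(self, item, n=1):
        self._counts[item] = self._counts.get(item, 0) + n

    def count(self, item):
        return self._counts.get(item, 0)

inv = Inventory()
inv.add("widget", 2)
print(inv.count("widget"))   # → 2
```

Understanding a program then means understanding each module's interface, not tracing SETQs through unrelated code.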

------------------------------

Date: Fri 2 Nov 84 14:33:58-CST
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Re: Lisp Function-itis

Would you consider trading your LAMBDA (or 3600 or Dandelion) for a NORMA
(Normal Order Reduction Machine Architecture), and using a purely functional
language that supports modules (SASL)?

Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

End of AIList Digest
********************

From:	COMSAT          8-NOV-1984 05:27  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a023203; 7 Nov 84 19:56 EST
Date: Wed  7 Nov 1984 15:47-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #150
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 8 Nov 84 05:20 EST


AIList Digest            Thursday, 8 Nov 1984     Volume 2 : Issue 150

Today's Topics:
  AI Tools - Prolog Availability,
  AI Education - Getting Started in AI & CAI Authoring,
  Linguistics - Interlinguae,
  Seminars - Knowledge Representation and Problem Solving & Vision &
    Knowledge Editing & Automatic Program Debugging,
  Conferences -  Software Maintenance & Security And Privacy
----------------------------------------------------------------------

Date: Mon, 5 Nov 84 21:22:53 mst
From: "Arthur I. Karshmer" <arthur%nmsu.csnet@csnet-relay.arpa>
Subject: Prolog Availability


Our vax-11/750 running UNIX 4.2 is newly installed and we would very much
like to locate PROLOG for it. We would appreciate any help in finding
a version of PROLOG for our system. Further, we are using a number of
DEC pro-350 systems under Venix/11. The version of PROLOG we currently
have for these systems is badly brain damaged - is there any help
available in this area?

------------------------------

Date: 6 Nov 84 09:32:25 PST (Tuesday)
From: cherry.es@XEROX.ARPA
Subject: Getting started in AI

I am looking for any pointers which may help me get started in LISP.
Utility programs, applications programs, etc. will be helpful so that I
can analyze the source to better understand what I am trying to
accomplish.  Most of the literature I have read on the topic of AI makes
the assumption that the reader is quite proficient in the LISP
environment.  While I'm not new to programming, the LISP environment is
new to me.

My purpose for utilizing AI will be as an engineering aid for product
yield improvement.

Cherry.es@Xerox.Arpa

------------------------------

Date: 5 November 1984 1311-PST (Monday)
From: psotka@nprdc
Reply-to: psotka@NPRDC
Subject: CAI Authoring

I too would like to hear about good CAI authoring systems.  Several
commercial systems that run on VAXen, CYBERs, and other stuff are really
good for their purpose -- linear CAI.  The real question, it seems to me,
is how to use the marvelous computational power of personal
Lisp machines to do CAI authoring.  What kinds of facilities would one want?
Natural language interpreters; graphic simulation systems for rapid
prototyping; expert systems for explaining;  complex knowledge representation.
ETC.   Could such a system be designed now to produce instruction as
effective as one on one tutoring by an expert?  Would the author (the person
using the system to develop instruction) have to be an expert in the area
being taught (and an expert teacher, too)??


[For one viewpoint on nonlinear CAI, see Jacques Hebenstreit's article,
Computers in Education: The French Experience (1970--1984), in the
Fall issue of Abacus.  -- KIL]

------------------------------

Date: Sat, 3 Nov 84 15:49:16 pst
From: Bill Poser <poser@SU-Russell>
Subject: Interlinguae

        I think that it is Rick Briggs who should read his own
writing more carefully. The relevant portion of Briggs' comment
runs as follows:

"Current Linguistics has begun to actually aid this entropy by paying
special attention to slang and casual usage (descriptive vs. prescriptive).
Without some negentropy from the linguists, I fear that English will
degenerate further."

The use of the inchoative "has begun" in the first sentence clearly
presupposes that Linguistics has hitherto been prescriptive.  (I.e.,
Linguists have only just begun to pay special attention to slang
and casual speech; they have just begun to engage in descriptive, as
opposed to prescriptive, linguistics.)  So although it is quite true
that Briggs recognizes that there is now a descriptive element to
Linguistics, he is claiming (whether he intended to or not) that
Linguistics has been prescriptive and still is predominantly
prescriptive, and that it would be appropriate for linguists to be
more prescriptive. My point, which I believe still stands, was that
what we call Linguistics is not at all prescriptive and has not been
in the past.  Modern Linguistics (by which I mean Linguistics since
the mid-nineteenth century) is by definition not prescriptive.
Moreover, the traditions of prescriptive grammar and Linguistics have
been essentially independent for a very long time.
        Polemic aside, there is a real issue here. Briggs is
claiming that there is such a thing as degeneration of languages.
Now it is certainly true that some people use language more effectively
than others, whether we measure effectiveness in terms of aesthetics or
clarity or what. And it may be that the mean effectiveness of language use
over a population varies with time, e.g. as literacy rises and falls,
although I know of no objective demonstration of such a claim. But
that does not mean that the *language* degenerates--only that its use
degenerates. The issue is whether historical change in language results
in degeneration of the language. This is certainly an empirical issue,
but I am not aware of any evidence that such degeneration takes place.
Features of one generation's casual style often become features of a
subsequent generation's formal style. There is just no evidence that
any historical stage of a language is less useful or more ambiguous
or whatever than any other. Different languages (and different social
and geographic dialects and historical stages of the same language)
differ in what information they present obligatorily or briefly,
but there is no evidence that there are statements that can be made
in one language that cannot be translated into another language, although
the expression of a given piece of information in one language may be
more or less cumbersome than in the other. In sum, while it is very
common for people to believe that their language is deteriorating and
look back to some golden age in which the language was just right,
the notion that there is such a thing as degeneration of a language
(short of the special case of "language death" that sometimes
occurs when a language has only a few speakers left) is one that
has never been substantiated.
        Finally, to return to my challenge to Briggs to show that
Shastric Sanskrit is a natural language, he argues that the
existence of dialogues written in it demonstrates that it was spoken,
suggesting that raising the issue of whether this demonstrates that
it was actually spoken is equivalent to raising the issue of whether
the Platonic dialogues were actually spoken.
It is quite possible to write dialogues that never took place, and
moreover to write them in a style that would never have been used
in actual speech, so the existence of written dialogues in and of
itself is not compelling. In fact, if I am not mistaken, the Platonic
dialogues are not believed to be actual transcripts of spoken
dialogues. In the case of Greek we have lots of other evidence that
the language was spoken, and the language of the dialogues is not so
different from other forms of the language, so I would not argue that the
Platonic dialogues could not have been spoken. But Shastric Sanskrit
differs sufficiently from other forms of Sanskrit that one must consider
seriously the possibility that the dialogues written in it were never actually
spoken. The existence of dialogues in the language certainly shows that
it had a broader semantics than, say, the language of mathematical discourse,
but it doesn't show that Shastric Sanskrit was actually a spoken language.
        But let's go one step further. Suppose that Briggs is right and
some people actually spoke Shastric Sanskrit, perhaps even all the time.
The mere fact that it could be spoken wouldn't mean that it wasn't artificial.
People speak Esperanto too. I reiterate: a language is artificial if it
was consciously designed by human beings. The use to which an artificial
language is put says nothing about its artificiality. (I'll back down
just a bit here. We should probably be willing to give a language status
as a natural language (in one sense) if, although it is the result
of conscious design, it is subsequently learned as a native language
by human children. This learnability would presumably show that the
language's properties are those of a natural language, although
it happens that it did not evolve naturally.)
        I still think that Shastric Sanskrit is an artificial derivative
of Sanskrit used for specialized scientific purposes, not a natural language.
Briggs asks whether I would deny the language of scientific discourse
the status of natural language. As I indicated in my very first message
on this topic, yes I would, at least the language of mathematics. The
language of mathematics is a specialized derivative of normal language
that contains special constructions that in some cases violate strong
syntactic constraints of the natural base. Consider the "such that"
construction in English mathematical language, for example.
        I suspect that it is pointless to quibble endlessly about
whether or not a given form of specialized language is natural
or not -- we'll just end up worrying about at what point we say
that the specialized language departs sufficiently from its
source to differentiate them. But the real point, and the one that
I have been trying to make from the outset, is simple and, I
think, untouched. It is possible to create specialized languages based
on natural languages that are more precise, less ambiguous, etc., conceivably
even perfect in these respects, and therefore better candidates for
machine translation interlinguae, but there is no known natural language
which in its ordinary form has these properties.

------------------------------

Date: Fri 2 Nov 84 11:57:10-PST
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Seminars - Knowledge Representation, Problem Solving, Vision

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A couple of researchers from IBM Yorktown will be at HPP next Thursday
(11/8/84).  They will present two short 20 minute talks starting at
10 am on distributed computing (AI and systems) research at their
research facility.  Anyone who is interested in listening to their
talks and/or talking to them should show up at that time.  Details are
given below:

Time: 10 am
Day: Thursday (11/8/84)
Place: Welch Road conference room (HPP, 701 Welch Rd., Bldg C)
Speakers: Sanjaya Addanki and Danny Sabbah
Abstracts:

*Abstract1*

Knowledge Representation and Parallel Problem Solving:

          While there has  been much research on  "naive sciences" and
          "expert  systems" for  problem-solving  in  complex domains,
          there is a large class of  problem solving tasks that is not
          covered by these efforts.  These tasks (e.g. intelligent de-
          sign in complex domains) require  systems to go beyond their
          high level rules into deeper levels of knowledge down to the
          "first principles"  of the  field. For example,  new designs
          often  hinge on  modifying  existing  assumptions about  the
          world. These  modifications cause changes in  the high level
          rules about the world.   Clearly, the processes of identify-
          ing the modifications to be made and deducing the changes to
          the rules require deeper levels of knowledge.

          We propose  a hierarchical,  prototype-based scheme  for the
          representation and interpretation of the different levels of
          knowledge  required by  an  intelligent  design system  that
          functions in a world of complex devices. We choose design as
          the target  task because it  requires both the  analysis and
          synthesis of solutions and thus covers much of problem solv-
          ing.  This work is a part of a larger effort in developing a
          parallel approach to complex problem solving.

*Abstract2*

Vision:

In this short overview of current interest in Computer Vision at Yorktown,
we will be discussing issues in:

    a) Incorporation of complex shape representation (e.g. Extended Gaussian
Images) into parallel visual recognition systems.
    b) Improvement of recognition behavior through the incorporation of
multiple sources of information (e.g. contour, motion, texture)
    c) A possible mechanism for focus of attention in highly parallel,
connectionist vision systems  (an approach to indexing into a large data
base of objects in such vision systems).

Detailed solutions will be sparse as the work is beginning and is just through
the proposal stage.  The issues, however, are relevant to any visual
recognition system.

------------------------------

Date: 5 Nov 1984  13:04 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Knowledge Editing

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


        Wednesday,  Nov 7       4:00pm      8th floor playroom

               CREF: A Cross-Referenced Editing Facility
                 for the Knowledge Engineer's Assistant


                             Kent M. Pitman


I will present a critical analysis of a tool I call CREF (Cross
Referenced Editing Facility), which I developed this summer at the Human
Cognition Research Laboratory of the Open University in Milton Keynes,
England. CREF was originally designed to fill a very specific purpose in
the KEA (Knowledge Engineer's Assistant) project, but appears to be of
much more general utility than I had originally intended and I am
currently investigating its status as a ``next generation'' general
purpose text editor.

CREF might be described as a cross between Zmacs, Zmail, and the Emacs
INFO subsystem. Its capabilities for cross referencing, summarization,
and linearized presentation of non-linear text put it in the same family
as systems such as NLS, Hypertext, and Textnet.

------------------------------

Date: Mon, 5 Nov 84 10:20:54 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Automatic Program Debugging

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]


     Heuristic and Formal Methods in Automatic Program Debugging
                                  by
                          William R. Murray

                         noon  Friday Nov. 9
                               PAI 3.38

  I will discuss the implementation of an automatic debugging system for
pure LISP functions written to solve small but nontrivial tasks.  It  is
intended to be the  expert module of  an intelligent tutoring  system to
teach LISP.  The debugger uses both heuristic and formal methods to find
and correct bugs  in student  programs.   Proofs  of correctness  of the
debugged definitions are generated for  verification by the Boyer  Moore
Theorem Prover.

   Heuristic methods are used  in algorithm identification, the  mapping
of stored functions to student functions, the generation of verification
conditions, and in the localization  of bugs.   Formal methods  are used
in a  case  analysis  which  detects  bugs,  in  symbolic  evaluation of
functions, and in the verification of results.  One of the main roles of
the theorem prover is to represent intensionally an infinite database of
all possible rewrite rules.

 - Regards,
      Bill

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPERS - CONFERENCE ON SOFTWARE MAINTENANCE -- 1985

Conference On Software Maintenance -- 1985

   Washington, D.C., Nov. 11-13

The conference will be sponsored by the Association For Women in Computing, the
Data Processing Management Association, the Institute for Electrical &
Electronics Engineers, Inc., the National Bureau of Standards and the Special
Interest Groups on Software Maintenance in cooperation with the Special Interest
Group on Software Engineering.

Papers are being solicited in the following areas:

   controlling software maintenance
   software maintenance careers and education
   case studies -- successes and failures
   configuration management
   maintenance of distributed, embedded, hybrid and real-time systems
   debugging code
   developing maintenance documentation and environments
   end-user maintenance
   software maintenance error distribution
   software evolution
   software maintenance metrics
   software retirement/conversion
   technology transfer
   understanding the software maintainer

Submission deadline is Feb. 4, and 5 double-spaced copies are required.  Papers
should range from 1,000 to 5,000 words in length.

The first page must include the title and a maximum 250-word abstract; all the
authors' names, affiliations, mailing addresses and telephone numbers; and a
statement of commitment that one of the authors will present the paper at the
conference if it is accepted.

Submit papers and panel session proposals to: Roger Martin (CMS-85), National
Bureau of Standards, Building 225, Room B266, Gaithersburg, Md. 20899

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPER -- 1985 Symposium On Security And Privacy

1985 Symposium On Security And Privacy

   Oakland, Ca., April 21-24

The meeting is being sponsored by the Technical Committee on Security and
Privacy and the Institute of Electrical & Electronics Engineers, Inc.

Papers and panel session proposals are being solicited in the following areas:

   security testing and evaluation
   applications security
   network security
   formal security models
   formal verification
   authentication
   data encryption
   data base security
   operating system security
   privacy issues
   cryptographic protocols

Send three copies of the paper, an extended abstract of 2,000 words, or a
panel proposal by Dec. 14 to:

   J.K. Millen
   Mitre Corp.
   P.O. Box 208
   Bedford, Mass. 01730

Final papers will be due by Feb. 25 in order to be included in the proceedings.

------------------------------

End of AIList Digest
********************

From:	COMSAT          9-NOV-1984 22:35  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000200; 9 Nov 84 14:54 EST
Date: Fri  9 Nov 1984 10:53-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #151
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 9 Nov 84 22:28 EST


AIList Digest             Friday, 9 Nov 1984      Volume 2 : Issue 151

Today's Topics:
  AI Hardware - Fujitsu Facom Alpha
  AI Literature - Journal of Intelligent Systems
    & Artificial Intelligence Markets & Machine Intelligence News Digest,
  Algorithms - Taxonomy and Uses of Malgorithms,
  Program Description - Social Impacts of Computing, UC-Irvine
----------------------------------------------------------------------

Date: Fri, 9 Nov 1984  13:20 EST
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: AI Hardware - Fujitsu Facom Alpha


In a recent issue of Electronic News (I think it was October), there
was an article on AI systems which was interesting.  After the usual
discussion about Symbolics, LMI, and Xerox lisp machines, the article
discussed a Fujitsu machine called the "facom alpha" which was priced
at 90K and which Gary Moskovitz of Xerox described as a "back-end
processor to a main frame."  Now it doesn't seem that 90K for a
back-end processor is much of a bargain, but I think the idea of a
very fast Lisp processing back end for a mainframe is worth looking
at.  To be able to use a 3600 or a Lambda as a development environment
but know that one could ultimately use a mainframe as the execution
environment would, I think, make big business look more kindly upon
potential AI projects.

Has anyone out there seen the Fujitsu machine or know anything about
it?  I'd like to hear whatever information, thoughts, rumors, etc.
people have on it.  If there is a Fujitsu person out there, I'd be
interested in hearing from you.

I'd also like to know what thoughts people have on this topic:
Lisp back ends for mainframes that are roughly comparable to the
various Lisp machines, as opposed to the single-user workstations that
are used now.  Is anyone working on such a thing here in the U.S.?

Thanks,

     Chunka Mui
     Chunka%mit-oz@mit-mc

------------------------------

Date: Mon 5 Nov 84 14:53:19-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Journal of Intelligent Systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I have received very brief information on a journal to appear in 1985.
The Journal of Intelligent Systems will be published quarterly by Freund
Publishing House Ltd., London, England, for $120 per year.  The editors
are Frank George, Les Johnson, and Mike Wright of Brunel University,
Uxbridge, England.  The managing editor is Mrs. Alison Lovejoy, AA
Publishing Services, London, England.  The Aims and Scopes are described
as follows:

...to provide research and review papers on an interdisciplinary level, where
the focal point is the field of intelligent systems.  This field includes:
the empirical study and modelling of natural intelligent systems (human
beings and also relevant studies in evolutionary theory and biology);
the theoretical analysis of possible systems which could display intelligence;
the development and enhancement of intelligent systems (e.g. learning theories);
the design of intelligent systems (or the application of intelligent-systems
concepts to the design of semi-intelligent machines); and the philosophical
aspects of the field of intelligent systems.

It is believed that technological advances in such areas as robotics and
knowledge based systems are facilitated by interdisciplinary communication.
Additionally, those sciences which are concerned with the understanding of
human intelligence stand to gain by such a dialogue.

In keeping with the interdisciplinary intent of the journal, papers will be
written for a general professional readership.  It is therefore important
that technical jargon should be avoided or, if used, should be made
explicit....


An editorial board of 20 is being formed at present.  If anyone has any
information or opinions about this publication, please let me know.
Does it sound like something I should order for the Math/CS Library?

Harry Llull

------------------------------

Date: Wed 7 Nov 84 11:24:31-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artificial Intelligence Markets

I just got a flier from AIM Publications, P.O. Box 156, Natick, MA 01760.
They are planning a newsletter, Artificial Intelligence Markets, to
track the AI business world starting in January 1985.  The price is
$255 regular, $195 charter, $380 2-year, and $550 3-year for 12 issues
per year of eight pages each.

The flier claims this to be the ONLY publication dedicated to covering
commercial AI (also DoD and Fifth Generation coverage).  Perhaps they
aren't aware of the AI Report from AI Publications (95 First St., Los
Altos, CA  94022), or of the Georgia Tech (?) newsletter described in
AIList about six months ago.  I've also heard recently of an "AI and
its Applications" newsletter, but have no details.

The flier does mention levels of AI investment by U.S. companies, and
claims that the current AI market of $125 million (36% software, 12%
intelligent robots, 52% LISP workstations) will expand to $4,440 million
by 1990: 43% software (7% LISP, 13% expert system tools, 5% natural
language, 8% programming languages, 8% military, 2% other), 15%
intelligent robots, 28% LISP workstations, 11% other processors, and
3% AI communications.

                                        -- Ken Laws

------------------------------

Date: Wed 7 Nov 84 15:32:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Machine Intelligence News Digest

From the November issue of IEEE Spectrum, p. 123:

Yet another newsletter covering the field of artificial intelligence
has been announced, but this time it comes from the United Kingdom.
Machine Intelligence News Digest is the first British news publication
to monitor artificial intelligence on a monthly basis.  It will
concentrate on the existing and potential applications of AI and
their impact on the industrial and commercial world.  It will also
include a calendar of events and a publication review section.
Regular coverage will be given to artificial vision and speech
recognition, AI languages such as LISP, integrating intelligent
machines with computer-aided systems, and AI research programs.
These include the DARPA effort in the United States, the fifth-
generation project in Japan, and the Esprit program in France.

The monthly newsletter costs 110 pounds annually ($140).  Subscription
information is available by writing the publisher, Oyez Scientific
and Technical Services Ltd., Bath House, 3rd Fl., 56 Holborn Viaduct,
London EC1A 2EX, England; or calling 01-236-4080.

------------------------------

Date: 7 Nov 84 14:06:29 EST
From: BIESEL@RUTGERS.ARPA
Subject: Taxonomy of malgorithms.

Now that the concept of malgorithms has been defined it behooves us as
serious scientists to classify the different kinds of malgorithms, to
write learned papers in obscure journals, and to generally do everything
to bring scholarly respectability to this heretofore underrecognized area
of computer science. The following is a modest contribution to the
establishment of a taxonomy of malgorithms.

The notion of an optimal algorithm is an old one, and the definition
of optimality in time, say, or in storage is straightforward. The little
"o" and the big "O" notation is well established and suffices to define
the complexity of an algorithm (except for a constant or two), and thus
permits the comparison of two algorithms for the same problem. The optimal
algorithm is therefore simply that algorithm which has the lowest
time complexity for any given problem. Often it is possible to prove
mathematically that the best possible algorithm for a given class of problems
cannot do better than some lower bound.
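
By way of a small, hypothetical illustration (the example and function names
are invented here, not part of the discussion above): summing the first n
integers can be done with an O(n) loop or with Gauss's O(1) closed form, and
the big-O comparison is exactly what lets us call the second one optimal.

```python
def sum_loop(n):
    """O(n) time: add the integers one at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """O(1) time: Gauss's closed form n(n+1)/2."""
    return n * (n + 1) // 2

# Both produce the same answer; only one is optimal in time complexity.
assert sum_loop(100) == sum_closed_form(100) == 5050
```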

The converse of this, the worst possible algorithm, is not as easily defined.
Is the worst possible algorithm one that never finishes, while wiping
out every piece of storage and tying up your computer until you unplug
it? Or, more insidious, does this algorithm appear to run normally,
generate recognizable output, but produce results that are subtly wrong,
so wrong as to cause maximum damage when the results are used?

If we restrict our considerations only to those algorithms that
actually produce the correct result, but do so in the longest possible
time, we run into other problems. The concept of 'longest possible time'
is ill-defined, since we do not know the temporal extent of the
universe. Neglecting for the moment the relatively trivial problem
of how to keep a computer running forever (a hardware problem, and
therefore not worthy of our consideration), we still need to
define some upper bounds on the time intervals we are considering.

Assumption 1: The universe will exist forever.
Definition 1: Any algorithm that runs forever before it produces the
correct result is a member of the class "Aleph Zero". Extensions
to algorithms that take longer than this are made in the obvious way
(i.e. classes aleph one etc.). The development of such an algorithm
is left as an exercise to the reader.

Assumption 2: The universe will exist until some terminal
climactic event.
Definition 2: Any algorithm that runs a finite amount of time,
and produces its output at the last moment of existence, is a
member of the class "Gabriel". (Members of non-Christian
religions may wish to substitute a climactic event of their
own choice.)

While the classes thus far defined would appear to specify
theoretical upper bounds for malgorithm execution times, some
practitioners may be concerned with malgorithms that take into
account the limitations of present hardware configurations. While
this kind of pandering to mechanical strictures is abhorrent
to every theoretician, some precedents exist in the literature,
and we will accordingly briefly touch upon the subject here.

Suppose we have devised a malgorithm which can run an arbitrary
amount of time before producing its result. The task now becomes
one of maximizing this time, subject to the constraints formed
by the finite MTBF of the hardware, and the equally finite tolerance
threshold of the person waiting for the result.

Definition 3: Any malgorithm which produces its output at the last
possible instant before either the hardware fails, or the user
terminates the program is a member of class "Epsilon".

As an aside, malgorithms of this class will usually require some
additions to the operating system to recognize an attempt to
cancel the program execution. Hardware modifications, in the
form of energy storage systems to permit the program to
print its output after the frustrated user has pulled the power
plug, will probably also be necessary.

It should be noted that malgorithms of class "Epsilon" have
an unfortunate flaw: since they produce output whenever they are
terminated by the user, they are also the fastest possible
algorithms for any problem, being limited only by the speed with
which the user can pull the plug. Once malgorithms of this class
have become established, future work in computational speedup
will likely focus on fast switches for power cutoff.

Now that we have defined some upper bounds on theoretical
malgorithm performance, we would like to define some additional
classes of actual malgorithms, primarily for taxonomic purposes.

The classes below are only a beginning, and the reader is invited
to contribute additional definitions and examples to the discussion.
The classes are not maximal or minimal in any sense, but merely define
some categories of malgorithms. Example malgorithms should be easily
recognized as falling into one or another of the classes defined.

Definition 4: Malgorithms which employ recursion to solve a problem
for which there exists a closed form solution are members of class
"Fibonacci".

Definition 5: Malgorithms which solve a problem by exhaustive generation
of all permutations, when there is any alternative solution, are
members of class "Salesman".

Definition 6: Malgorithms which apply a general algorithm to the wrong
size problem are members of class "Heapsort".
Example: Heapsort applied to the list 1,3,2.

Definition 7: Malgorithms for Monte-Carlo solutions to analytic
functions are members of class "Pi".

Definition 8: Malgorithms which provide a solution to a problem by
solving a more complex isomorphic problem are members of the class
"Gauss".
Example: Multiplication of two numbers by adding their logarithms.

Definition 9: Malgorithms which perform redundant computations
are members of class "Sheep".
Example: Determining the number of sheep in a herd by counting
the number of legs and dividing by four.
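
Two of these classes are easy enough to exhibit in code. The sketch below
(in Python, with invented function names; a minimal illustration rather
than anything from the definitions above) shows a class "Fibonacci"
malgorithm beside the closed form it studiously ignores, and a class
"Sheep" malgorithm for counting a herd.

```python
from math import sqrt

def fib_malgorithm(n):
    """Class "Fibonacci": naive double recursion -- exponentially
    many calls for a problem with a closed-form (Binet) solution."""
    if n < 2:
        return n
    return fib_malgorithm(n - 1) + fib_malgorithm(n - 2)

def fib_closed_form(n):
    """Binet's closed form, computed in constant time."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

def sheep_malgorithm(herd):
    """Class "Sheep": count the legs, then divide by four."""
    legs = sum(4 for _ in herd)   # four redundant counts per sheep
    return legs // 4

assert fib_malgorithm(10) == fib_closed_form(10) == 55
assert sheep_malgorithm(["ewe"] * 7) == 7
```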

It should be noted that the classes proposed here are neither
exhaustive, nor are they mutually exclusive. Most current
programs contain algorithms which upon inspection are really
malgorithms that fall into one or more of the classes here
defined. It is our devout hope that this short note will lead to
a more intensive investigation of this much neglected area of
computer science. The author is convinced that this area
can provide subject matter for several Ph.D. dissertations
at the more mathematically rigorous institutions of higher
learning, and wishes to express his gratitude to the contributors
to the Ailist, who have given the impetus for this important work.


Biesel@Rutgers.ARPA

------------------------------

Date: Thu, 8 Nov 84 11:30:22 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: data structures + malgorithms =

It is clear that there are many more malgorithms than problems, if for no
other reason than we all have more solutions than problems.  The real
science of malgorithms is to find really useful applications of good
malgorithms beyond the trivial ones of classroom and textbook examples.

To my surprise there really are such uses, or there is at least one.
The other day I was talking with an attorney about copyrighting programs.
He says that in copyright cases in which there is some question of
authorship, judges are often impressed by "fingerprints" embedded in
software.  The usual kind of fingerprint is a copyright notice
buried in block 0 of an index file, variable names which form a code
for the author's name, etc.  But he says the most effective fingerprints
are sections of code so poorly designed and written that their inclusion
in the software must have been intentional, since nobody would be stupid
enough to use such sloppy techniques in their normal practice.
In court, to prove you wrote the program, you simply point out the bad parts
to the judge and claim that, since you are an expert, the only way that code
could have gotten there was by your intentionally inserting a fingerprint.

A nice side effect of this technique is that we now have a good excuse to
give to grad students and others who discover malgorithms in our programs.  We
simply say that we are preparing to protect our copyright.

So here we have the birth of a new discipline.  Not only do we have the
design and analysis of malgorithms; we now have applications of malgorithms
as well.  The question is, are there any other applications?

------------------------------

Date: 3 Nov 1984 1201-PST
From: Rob-Kling <Kling%UCI-20B@UCI-750a>
Subject: Program Description - Social Impacts of Computing, UC-Irvine

                                CORPS

                        Graduate Education in
            Computing, Organizations, Policy, and Society
               at the University of California, Irvine

     This graduate concentration at the University of California,
Irvine provides an opportunity for scholars and students to
investigate the social dimensions of computerization in a setting
which supports reflective and sustained inquiry.

     The primary educational opportunities are PhD concentrations in
the Department of Information and Computer Science (ICS) and MS and
PhD concentrations in the Graduate School of Management (GSM).
Students in each concentration can specialize in studying the social
dimensions of computing.

     The faculty at Irvine have been active in this area, with many
interdisciplinary projects, since the early 1970's.  The faculty and
students in CORPS have approached these questions with methods drawn
from the social sciences.

     The CORPS concentration focuses upon four related areas of
inquiry:

 1.  Examining the social consequences of different kinds of
     computerization on social life in organizations and in the larger
     society.

 2.  Examining the social dimensions of the work and organizational
     worlds in which computer technologies are developed, marketed,
     disseminated, deployed, and sustained.

 3.  Evaluating the effectiveness of strategies for managing the
     deployment and use of computer-based technologies.

 4.  Evaluating and proposing public policies which facilitate the
     development and use of computing in pro-social ways.


     Studies of these questions have focussed on complex information
systems, computer-based modelling, decision-support systems, the
myriad forms of office automation, electronic funds transfer systems,
expert systems, instructional computing, personal computers, automated
command and control systems, and computing at home.  The questions
vary from study to study.  They have included questions about the
effectiveness of these technologies, effective ways to manage them,
the social choices that they open or close off, the kind of social and
cultural life that develops around them, their political consequences,
and their social carrying costs.

     CORPS studies at Irvine have a distinctive orientation -

(i) in focussing on both public and private sectors,

(ii) in examining computerization in public life as well as within
      organizations,

(iii) in examining advanced and common computer-based technologies "in
      vivo" in ordinary settings, and

(iv) in employing analytical methods drawn from the social sciences.



         Organizational Arrangements and Admissions for CORPS


     The CORPS concentration is a special track within the normal
graduate degree programs of ICS and GSM.  Admission requirements for
this concentration are the same as for students who apply for a PhD in
ICS or an MS or PhD in GSM.  Students with varying backgrounds are
encouraged to apply for the PhD programs if they show strong research
promise.

     The seven primary faculty in the CORPS concentration hold
appointments in the Department of Information and Computer Science and
the Graduate School of Management.  Additional faculty in the School
of Social Sciences, and the program on Social Ecology, have
collaborated in research or have taught key courses for CORPS
students.  Research is administered through an interdisciplinary
research institute at UCI which is part of the Graduate Division, the
Public Policy Research Organization.

Students who wish additional information about the CORPS concentration
should write to:

          Professor Rob Kling (Kling@uci)
          Department of Information and Computer Science
          University of California, Irvine
          Irvine, Ca. 92717
          714-856-5955 or 856-7403

                                or to:

          Professor Kenneth Kraemer (Kraemer@uci)
          Graduate School of Management
          University of California, Irvine
          Irvine, Ca. 92717
          714-856-5246

------------------------------

End of AIList Digest
********************

From:	CSVPI          11-NOV-1984 05:22  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a001015; 11 Nov 84 1:49 EST
Date: Sat 10 Nov 1984 22:23-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #152
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 11 Nov 84 05:17 EST


AIList Digest            Sunday, 11 Nov 1984      Volume 2 : Issue 152

Today's Topics:
  Misc. - Band Name,
  Machine Translation - Aymara as Interlingua,
  Linguistics - Sastric Sanskrit & Language Degeneration,
  Knowledge Representation - Problem Solving Representations,
  Seminars - Rule-Based Debugging System & PROLOG Data Dependency Analysis,
  Conference - IJCAI-85
----------------------------------------------------------------------

Date: 10 Nov 84 18:53:18 PST (Saturday)
From: Mark Sabiers <Sabiers.es@XEROX.ARPA>
Reply-to: Sabiers.es@XEROX.ARPA
Subject: The names of bands

The enclosed message came through net.music (uucp) and Info-Music
(ARPA).  Thought it was appropriate to this list.

Mark


  Subject: Artificial Intelligence
  Date: 8 Nov 84 04:46:12 GMT
  Organization: AT&T Bell Labs, Holmdel NJ
  From: "N.BRISTOL" <bristol@hou2h.uucp.ARPA>

  Has anyone heard of a band called
  Artificial Intelligence?  I heard a
  tune on the radio and I would like to know more
  about the band.
  RSVP by mail or the net, I don't care.

  Gil Bristol
  AT&T Consumer Products
  Neptune, NJ
  hou2h!bristol

------------------------------

Date: Fri Nov  9 1984 13:22:59
From: Yigal Arens <arens%usc-cse.csnet@csnet-relay.arpa>
Subject: Strange new languages


As if we didn't have enough trouble with Sastric Sanskrit, last Wednesday's
LA Times contains a story about wonderful advances in machine translation
using an Indian language (Aymara) which "according to some historians [was
constructed by] wise men from scratch, by logical, premeditated design, as
early as 4,000 years ago."  How "some historians" know this remains a
mystery, all the more so since according to the article there are hardly any
written records of the language.

Anyway, a Bolivian mathematician, Ivan Guzman de Rojas, has devised a system
for machine translation using this language as a "bridge".

        "Sitting at a computer terminal, Guzman de Rojas demonstrates by
         typing a tricky Spanish sentence: `La mujer que vino ayer tomo
         vino.'  Less than a second after he pushes a button, five
         translations flash on the screen and roll off a printer.  The
         English reads: `the woman who came yesterday drank wine.'

        "The system is remarkable, according to US and Canadian experts, not
         only for its speed and versatility, but its ability to sort out
         ambiguities.  Other systems, they say, cannot distinguish between
         uses of the word `vino' - which can mean `came' or `wine' - without
         an awkward modification of the computer logic."

The article is full of inaccuracies concerning machine translation.

It claims that Wang has recently given Guzman de Rojas $50,000 plus a
$100,000 computer "to refine his system."

Anybody know more about this?

Yigal Arens
USC

------------------------------

Date: 8 Nov 1984 11:59-PST (Thursday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Sastric Sanskrit & Language Degeneration


        By "has begun", I meant since the mid-nineteenth century.
Since the time frame I have been writing in is measured in millennia,
one century qualifies as "has begun".
        Anyway, I wonder what Bill Poser means by saying:
"But that does not mean that the *language* degenerates--only that
its use degenerates."  If a language is abused to a large extent by
its speakers, has it not degenerated?  What seems to be implied
here is that there is some abstract "language" prototype which
exists independent of use.  If this is so, violations of this prototype
are degeneration.  This is exactly the point of view of Panini etc.
The Indian and Greek cultures considered language to be a primary
component of culture (in the Indian case, language IS culture: the word
Aryan originally meant one who spoke the Aryan language, i.e. Sanskrit).
To illustrate what I mean by degeneration, consider a group of
primitives who begin to use language.  They begin with primitive
grunts to signify essential notions such as "food".  Later, they find
that the machinery of the language does not allow the expression of
concepts.  Thus the language evolves and evolves.  The ultimate
evolution is reached when a language can express all notions in the
realms of the physical, emotional, conceptual, and spiritual in
a concise, unambiguous way.  Sastric Sanskrit may indeed be that
language (or close to it).  Now the less lofty of the population
find no need to use such words as "none other than", "agreeing with
no other", "activity conducive towards existence" etc. (these are words
in Sastric Sanskrit).  So they cease to use the complex machinery
and revert to simple formations to express what they need to.
If there is no prescription, or encouragement in the educational
process, to stick to the higher form of the language, the more popular
masses (consider television) will exert pressure on the less numerous
scholarly class, and the language will begin to revert.
This is exactly what happened to Sanskrit.  The "Prakrits" and "Apabrahmshas"
eventually turned Sanskrit into Hindi, Bengali etc., which do not
have the sophisticated machinery Sastric Sanskrit had.  In other words,
where one word in the Sastra signified a concept, an entire sentence
is now needed in the degenerated form of the language.  I believe this
is also the pattern which Proto-Indo-European followed, and which
English is following now.

        Once again, Sastric Sanskrit is a natural language.  But what
exactly is a natural language?  Is it the existence of native speakers
(as Bill Poser suggests), or is it something about the nature of the
language itself?  Whether consciously or not, linguists and NLP
people think of natural languages as necessarily being ambiguous
and very different from the predicate calculus.  What the existence of
the Sastra indicates is that the definition of natural language
should be changed.  I would say that a natural language is one which
1) is used, and
2) has the ability to express naturally all the various aspects
of the natural world.
Thus, if Esperanto were used in a culture, it would be a natural
language.  Mathematics cannot naturally express poetic notions; it
is defined over only a small aspect of the natural world.  Sastric
Sanskrit (so I have been told by Sanskrit experts) had (and may still
have) native speakers.  It is also capable of expressing anything any
other natural language can express.  You can write philosophy or
poetry in the Sastra.  I challenge anybody to find a sentence in
any language which cannot be expressed using the machinery of
Sastric Sanskrit.
        I think the real point is that the Sastra is a bridge
between the natural and artificial and challenges common notions
of what the boundary is.  One conclusion I would make is that
it is possible for a child to be raised speaking totally
unambiguously from birth and never suffer from lack of expression
or cumbersomeness.  As an interlingua, Sastra would be great
because it can codify with exactitude and make inferences naturally,
and yet poetic notions can be coded and not lost on the target
language.

Rick

------------------------------

Date: Fri 9 Nov 84 07:41:30-CST
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: convenient problem solving representations

There was a conference on knowledge representation and languages at the
Applied Physics Lab of Johns Hopkins from Oct 29-31.  One of the main
issues was that current programming languages force one to use
primitives that map well to a machine, but badly to most problem
domains.  Thus there are two problems: What primitives are appropriate
for a given problem domain and how can one map those into an executable
module on a given machine?

Jean Sammet from IBM contended that many problem-domain specific
languages already exist, but obviously there aren't enough or everyone
would be pretty content by now.  What it seems we need are guidelines to
help with these questions.

These are questions for all computer scientists, but especially those of
us in AI who have spent time developing new knowledge representations
rather than implementing old ones.

-Aaron

------------------------------

Date: 8 November 1984 1227-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - Rule-Based Debugging System

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Bernd Bruegge
        Date:   November 12, 1984
        Time:   3:30 - 4:30
        Place:  WeH 5409
        Title:  "PATH RULES:  Debugging as a Rule-Based Activity"


Debugging has often been considered an ad hoc technique with no underlying
model for the user.  In this talk we show how debugging can be viewed as a
rule-based activity.  Rule-based systems have been used extensively in the
area of artificial intelligence.  We demonstrate that they can be quite useful
in the area of debugging.

We have designed and implemented a language called PATH RULES. Several
examples of PATH RULES on the implementer as well as on the user level are
given: We show how rules can be used in the design of the command language,
the implementation of debugging mechanisms (breakpoints, tracing, etc),
screen layout, dialog control and multiple process debugging problems.
PATH RULES have been used in the implementation of the Interim Spice
Debugger KRAUT. KRAUT is a remote, source oriented debugger for Pascal
running under the Accent Operating system and is currently being modified
for Ada.

------------------------------

Date: Fri, 9 Nov 84 09:32:17 pst
From: (Julia D. Snyder [csam]) julia@lbl-csam
Subject: Seminar - PROLOG Static Data Dependency Analysis

        [Forwarded from the LBL distribution by Laws@SRI-AI.]


       High Performance Execution of PROLOG Programs
         Based on a Static Data Dependency Analysis
                             by
                     Jung-Herng Chang*
                    (UCB Aquarius Group)

                 Room:  Bldg. 50B Rm. 4205
                  Date:  November 12, 1984
               Time:  10:30 a.m. - 12:00 p.m.

Outline
     What is PROLOG?  Why is it an important symbolic manipulation
     language?  The performance of executing PROLOG programs has been
     improved by going from interpreters to compilers, and then to
     special hardware (e.g. the PLM Machine at UCB).  What is the next
     step to improve performance?  This talk begins with an introduction
     to PROLOG, followed by a discussion of more advanced topics in PROLOG.
     A methodology for a static data dependency analysis for PROLOG is
     introduced, as well as its applications to the PLM Machine and a
     parallel execution environment.

*The speaker is also affiliated with ACAL LBL.

------------------------------

Date: Fri 9 Nov 84 08:49:27-PST
From: AAAI-OFFICE <AAAI@SRI-AI.ARPA>
Subject: IJCAI-85 Call


                                IJCAI-85
                             CALL FOR PAPERS

The IJCAI conferences are the main forum for the presentation of Artificial
Intelligence research to an international audience.  The goal of the IJCAI-85
is to promote scientific interchange, within and between all subfields of AI,
among researchers from all over the world.  The conference is sponsored by the
International Joint Conferences on Artificial Intelligence (IJCAI), Inc., and
co-sponsored by the American Association for Artificial Intelligence (AAAI).
IJCAI-85 will be held at the University of California, Los Angeles from
August 18 through August 24, 1985.

        * Tutorials: August 18-19; Technical Sessions: August 20-24

TOPICS OF INTEREST

Authors are invited to submit papers of substantial, original, and previously
unreported research in any aspect of AI, including:

* AI architectures and languages
* AI and education (including intelligent CAI)
* Automated reasoning (including theorem proving, automatic programming,
  planning, search, problem solving, commonsense, and qualitative reasoning)
* Cognitive modelling
* Expert systems
* Knowledge representation
* Learning and knowledge acquisition
* Logic programming
* Natural language (including speech)
* Perception (including visual, auditory, tactile)
* Philosophical foundations
* Robotics
* Social, economic and legal implications


REQUIREMENTS FOR SUBMISSION

Authors should submit 4 complete copies of their paper.  (Hard copy only, no
electronic submissions.)

        * LONG PAPERS: 5500 words maximum, up to 7 proceedings pages
        * SHORT PAPERS: 2200 words maximum, up to 3 proceedings pages

Each paper will be stringently reviewed by experts in the topic area specified.
Acceptance will be based on originality and significance of the reported
research, as well as the quality of its presentation.  Applications clearly
demonstrating the power of established techniques, as well as thoughtful
critiques of previously published material will be considered, provided that
they point the way to new research and are substantive scientific contributions
in their own right.

Short papers are a forum for the presentation of succinct, crisp results.
They are not a safety net for long paper rejections.

In order to ensure appropriate refereeing, authors are requested to
specify in which of the above topic areas the paper belongs, as well
as a set of no more than 5 keywords for further classification within
that topic area.  Because of time constraints, papers requiring major
revisions cannot be accepted.

DETAILS FOR SUBMISSION

The following information must be included with each paper:

        * Author's name, address, telephone number and net address
          (if applicable);
        * Topic area (plus a set of no more than 5 keywords for
          further classification within the topic area.);
        * An abstract of 100-200 words;
        * Paper length (in words).

The time table is as follows:

        * Submission deadline: 7 January 1985 (papers received after
          January 7th will be returned unopened)
        * Notification of Acceptance: 16 March 1985
        * Camera Ready copy due: 16 April 1985

Contact Points

Submissions should be sent to the Program Chair:

        Aravind Joshi
        Dept of Computer and Information Science
        University of Pennsylvania
        Philadelphia, PA 19104 USA

General inquiries should be directed to the General Chair:

        Alan Mackworth
        Dept of Computer Science
        University of British Columbia
        Vancouver, BC, Canada V6T 1W5

Inquiries about program demonstrations (including videotape system
demonstrations) and other local arrangements should be sent to
the Local Arrangements Chair:

        Steve Crocker
        The Aerospace Corporation
        P.O. Box 92957
        Los Angeles, CA 90009 USA

Inquiries about tutorials, exhibits, and registration should be
sent to the AAAI Office:

        Claudia Mazzetti
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025 USA

------------------------------

End of AIList Digest
********************

From:	CSVPI          12-NOV-1984 03:37  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a005140; 12 Nov 84 1:43 EST
Date: Sun 11 Nov 1984 21:41-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #153
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 12 Nov 84 03:34 EST


AIList Digest            Monday, 12 Nov 1984      Volume 2 : Issue 153

Today's Topics:
  Linguistics - Language Degeneration,
  Algorithms - Malgorithms,
  Project Report - IU via Dialectical Image Processing,
  Seminars - Spatial Representation in Rats &
    Human Memory Capacity & Design of Computer Tools &
    Artificial Intelligence and Real Life
----------------------------------------------------------------------

Date: Sun, 11 Nov 84 14:54:08 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Language Degeneration

I'd like to make a few comments on Briggs' statements on language
degeneration and Sanskrit, English, etc. The idea that a language
degenerates stems from the 19th century biological metaphor which
has been refuted for at least 100 years. Language is not alive; people
are. We in linguistics know that "language death" is a metaphor and
has almost nothing to do with the language as a system; it has everything
to do with the sociocultural conditions on speaking and the propagation
of cultural groups.

How can it be reasonably said that a language degenerates if it is
abused? What do you mean by "abused"??? If "abused" means "speaking in
short sentences," then everyone "abuses" the language, even the most
ardent pedants. Language indeed changes, but it does not degrade.

Briggs says that violations of the prototype are degenerations. This
is true by definition only. And this definition can be accepted only
if one also adheres to a Platonic notion of language history, wherein
the pure metaphysical Ursprache is degraded by impure shadowy manifestations
in the real world. Maybe Briggs is a Platonist, but then he's not saying
anything about the real world.

Popular use does NOT imply a reversion or "reversal" of progress
in language change. There is no progress in language change: a change
in one part of the system over time which complicates the system
generally causes a simplification in another part of the system.
So, Hoenigswald said that languages maintain about 50% redundancy
over time.

What is the "sophisticated machinery" Briggs talks about? I suspect
he thinks that languages which have a lot of morphology and are
synthetic are somehow "more sophisticated" than "our poor
unfortunate English," which is analytic and generally free of
morpho-syntactic markings. Honestly, the idea that a synthetic language
is "better" than a degraded analytic English is another remnant of
the 19th century (where neo-Platonism also reigned).

The evolution of analytic languages from synthetic versions (i.e.,
pure to degraded) is not only charged with moral claims, but it is also
wrong.

1. Finnish has retained its numerous case markings over time, as has
Hungarian.

2. Colloquial Russian has begun to add case markings (instrumental in
the predicate nominative).

3. English is losing overt marking of the subjunctive: are we therefore
less able to express subjunctive ideas? Is English becoming (GOOD GOD!)
non-subjunctive, non-hypothetical....

If Briggs is right, then he himself is contributing to the degradation
by his very speech to his friends. (I, of course, don't believe this.)

Finally, if Briggs is right about the characteristics of natural language,
then any natural language can be a bridge, not necessarily Sanskrit. And
this claim is tantamount to saying only that translation is possible.

Bill Frawley

Address: 20568.ccvax1@udel

------------------------------

Date: 11 Nov 1984 18:02:20 EST
From: MERDAN@USC-ISI.ARPA
Subject: balgorithms

Here are a couple of balgorithms that I encountered on a single microprocessor
project.  Neither of these balgorithms appeared the slightest bit bad to
their authors, and one of them was insulted when I pointed out how bad
his approach really was.

Balgorithm #1

Problem
  Perform error correction for a 15,11 Hamming code on an 8 bit micro
  (Intel 8008).

Original solution
  Implement a feedback shift register with error trapping logic as with
  a BCH code.  Approximately 600 bytes of tricky code was required.

Better solution
  Use the classic error detection matrix method.  I believe about 100
  bytes of obvious code was required.
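
The matrix method can be sketched briefly (a hypothetical illustration in
Python rather than 8008 assembly).  For a Hamming code whose bit positions
are numbered 1 through 15 with check bits at the power-of-two positions,
the syndrome of the received word is just the XOR of the positions of its
1-bits, and a nonzero syndrome directly names the erroneous bit:

```python
def hamming_15_11_correct(bits):
    """bits: list of 15 ints (0/1); index 0 holds bit position 1.
    Returns a corrected copy, assuming at most one bit error."""
    syndrome = 0
    for pos, b in enumerate(bits, start=1):
        if b:
            syndrome ^= pos          # XOR together the positions of all 1-bits
    fixed = bits[:]
    if syndrome:                     # nonzero syndrome = position of the error
        fixed[syndrome - 1] ^= 1
    return fixed

# The all-zero word is a valid codeword; flip one bit and it is restored.
received = [0] * 15
received[4] ^= 1                     # single-bit error at position 5
assert hamming_15_11_correct(received) == [0] * 15
```

The whole decoder is a dozen lines of obvious logic, which is the point of
the comparison with the 600-byte shift-register version.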

Balgorithm #2

Problem
  Calculate horizontal and vertical parity for a sequence of 5 bit char-
  acters and tack them on at end of the sequence.

Original solution
  Pick up each character and count the number of 1s by masking out each
  bit with a separate mask, packing the resultant bit into a 5 bit word
  on the fly. About 1500 bytes of very buggy code resulted.

Better solution
  Treat the sequence in blocks of 5 characters.  For each block prestore
  a pattern assuming that parity is even.  Pick up each character, determine
  its parity (the load did this on the 8008), and clear the pattern for that
  character.  OR the patterns together, producing the result.  About 150
  bytes of mostly straight line code resulted.

Even better solution
  Don't calculate parity in software but let the UART hardware generate
  and check the parity.
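
The underlying logic, stripped of the 8008-specific pattern trick, is
simple: each character gets a horizontal (per-character) parity bit, and one
final vertical-parity character is the column-wise XOR of all the data
characters.  A hypothetical Python sketch, not the original code:

```python
def add_parity(chars):
    """chars: list of 5-bit integers.
    Returns (horizontal, vertical): horizontal is the list of even-parity
    bits, one per character; vertical is a single 5-bit word holding the
    column-wise parity of the whole sequence."""
    horizontal = [bin(c).count("1") & 1 for c in chars]
    vertical = 0
    for c in chars:
        vertical ^= c                # XOR accumulates per-column parity
    return horizontal, vertical

h, v = add_parity([0b10101, 0b11000, 0b00111])
assert h == [1, 0, 1]                # characters with an odd number of 1s
assert v == 0b01010                  # 10101 ^ 11000 ^ 00111
```

Of course, as the "even better solution" notes, hardware that already
generates and checks parity makes even this small routine unnecessary.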

Comment
  In both cases I feel that the justification for the original solution
  was that the programmer wanted to do some tricky coding just to prove
  that he could do it rather than understanding the problem first.  This
  tendency does not seem to be going away as fast as we all would like.


Thanks

Terry Arnold

------------------------------

Date: 13 Oct 1984 11:40-EDT
From: ISAACSON@USC-ISI.ARPA
Subject: Project Report - IU via Dialectical Image Processing (DIP)

             [Forwarded from Vision-List by Laws@SRI-AI.]


Just read the summary of DARPA IU effort which I find very interesting.
By coincidence, we submitted this week to DARPA a summary of our current
efforts in "Dialectical Pattern Processing".  Although phrased in
broader terms, much of this effort is also directed toward IU.  We
enclose a copy of the report in the possible interest of the vision-list
readership.  -- JDI

10/7/84
                  DARPA Research Summary Report
             I M I  Corporation, St. Louis, Missouri
         Project Title:  Dialectical Pattern Processing


Overview.    Earlier  work  [1] has demonstrated  unusual  low-level
intelligence features in dialectical processing of string  patterns.
This  effort  extends  dialectical processing to  2-D  arrays,  with
applications in machine-vision.   I M I  Corporation is an innovator
in Dialectical Image Processing (DIP),  a new subfield in very  low-
level vision (VLLV) research.   Dialectics is an elusive doctrine of
philosophy and (non-standard) logic that can be traced from Plato to
Hegel  and beyond,  but that has never lent itself to be grounded in
precise  formalisms or in computing machines.   Certain  influential
philosophies  hold  that  the   universe  operates  in  accord  with
dialectical  processes,  culminating  in  the  activity  of  thought
processes.   This  effort builds on the fact that [1] discloses  the
first and only machine implementation of dialectical processes.

Objectives.     A broad long-term objective is to test  a   working-
hypothesis  that  states that dialectical processes are  fundamental
ingredients,   in  addition  to  certain  others,  in  autonomically
emergent intelligences. Intelligences that bootstrap themselves in a
bottom-up   fashion  fall  into  this  category.    More   immediate
objectives  are  (1) to demonstrate the technical feasibility  of  a
small  number  of VLSI chips to host a dialectical image  processor,
and (2) to evaluate the type of intelligence inherent in networks of
dialectical processors, with emphasis on learning.

Approach.   A  mix  of  activities includes software  simulation  of
dialectical  networks  for  image  processing;  VLSI-based  hardware
design  for  dialectical  image processors;  and assessment  of  the
learning capabilities inherent in the above-mentioned systems.

Current  Status & Future Plans.     Consideration of the possibility
of  dialectical  processing began in the  early  Sixties.   By  now,
theoretical  foundations have been laid and  dialectical  processing
has been amply demonstrated in strings and in 2-D arrays (see Fig. 1
&  Fig.  2 below) to the point where it appears to support a  viable
new  computer-vision technology.   Feasibility studies in the design
of  VLSI-based DIPs have shown that reasonably large  DIPs  (100x100
pixels)  will fit into a single card and can be readily implemented,
at least for experimentation.   Scant   resources limit the scope of
some and preclude others of the activities listed below,  which  are
considered important to the advancement of this technology.

*   Run software simulations of DIP on better equipment (e.g.,  Lisp
machine or BION workstation) and attempt to extend effort to 3-D.

*  Implement in VLSI hardware a prototype of a moderate size DIP.

*  Attempt to specialize other vast parallel networks (e.g., Hillis'
Connection   Machine  [2]  or  Fahlman's  Boltzmann  Machine)   into
dialectical image processors.

*   Specialize  a  network  of  dialectical  processors  to  support
low-level machine learning by analogy and metaphor.


            Fig. 1 - DIP Analysis of a Plane Silhouette
                 [Graphics will be sent by US Mail]

   Fig. 2 - Selected Steps from DIP Analysis of a Tank Silhouette
                 [Graphics will be sent by US Mail]

Resources and Participants.   Available resources are limited.   The
list  of participants includes:  Joel D.  Isaacson,  PhD,  Principal
Investigator;   Eliezer Pasternak,  MSEE,  Project Engineer;   Steve
Mueller,  BS/CS,  Programmer;  Ashok Jain, MS/CS, Research Assistant
(SIU-E).


Products,  Demonstrable Results,  Contact Points.   Certain products
and  results are proprietary and included in patent applications  in
progress.   Software  simulation of DIP can be readily demonstrated.
A  version  written  in Pascal for the IBM  PC/XT  is  available  on
request.    Point  of  contact:   Dr.  Joel  D.  Isaacson,   I  M  I
Corporation,  20 Crestwood Drive,  St. Louis, Missouri 63105, Phone:
(314) 727-2207, (ISAACSON@USC-ISI.ARPA).


References

[1]  Isaacson, J. D., "Autonomic String-Manipulation System,"  U. S.
Patent No. 4,286,330, August 25, 1981.

[2]  Hillis,  W.  D.,  "The Connection Machine," Report AIM-646, The
Artificial Intelligence Laboratory, MIT, Sept. 1981.

Acknowledgements

Supported  by the Defense Advanced Research Projects Agency  of  the
Department of Defense under ONR Contract No.  N00014-82-C-0303.  The
P.I.   gratefully  acknowledges additional support and encouragement
received   from  the  Department  of  Mathematics,  Statistics,  and
Computer  Science,  Southern Illinois University at Edwardsville.

------------------------------

Date: Thu, 8 Nov 84 13:23:13 pst
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Spatial Representation in Rats

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 13, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        C.  R.  Gallistel,  Psychology   Department,
                University  of  Pennsylvania;   Center for
                Advanced Study in the Behavioral Sciences

TITLE:          ``The rat's representation  of  navigational
                space:   Evidence  for  a  purely  geometric
                module''

ABSTRACT:       When the rat is shown the location of hidden
                food  and  must subsequently find that loca-
                tion, it  relies  strongly  upon  a  spatial
                representation  that  preserves  the  metric
                properties of the enclosure (the large scale
                shape   of  the  environment)  but  not  the
                nongeometric characteristics  (color,  lumi-
                nosity, texture, smell) of the surfaces that
                define the space.  As a result,  the  animal
                makes   many  ``rotational''  errors  in  an
                environment that has a rotational  symmetry,
                looking in the place where the food would be
                if the environment  were  rotated  into  the
                symmetrically  interchangeable position.  It
                does   this   even   when   highly   salient
                nongeometric   properties  of  the  surfaces
                should enable it to avoid these costly rota-
                tional  errors.   Evidence is presented that
                the   rat   notes   and   remembers    these
                nongeometric properties and can use them for
                some purposes, but cannot directly use  them
                to   establish  positions  in  a  remembered
                space, even when it would be highly advanta-
                geous  to  do so.  Thus, the rat's position-
                determining system appears to be an encapsu-
                lated  module  in  the Fodorian sense.  Con-
                siderations of possible  computational  rou-
                tines  used to align the currently perceived
                environment  with  the  animal's  map  (its
                record   of   the   previously   experienced
                environment) suggest reasons why this  might
                be  so.  Old evidence on the finding of hid-
                den food by chimpanzees suggests  that  they
                rely on a similar module.  This leads to the
                conjecture that the module is  universal  in
                higher vertebrates.

------------------------------

Date: Thu, 8 Nov 84 22:51:11 pst
From: Misha Pavel <mis@SU-PSYCH>
Subject: Seminars - Human Memory Capacity & Design of Computer Tools

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


*************************************************************************
                   Two talks by T.K. Landauer:
*************************************************************************

       Some attempts to estimate the functional capacity
                   of human long term memory.

                         T. K. Landauer
               Bell Communications Research, N.J.

Time:  Wednesday, November 14, 1984 at 3:45 pm
Place: Jordan Hall, Building 420, Room 050

How much useful (i.e., retrievable) information does  a  person's
memory  contain? More than a mere curiosity, even an approximate answer
would be useful in guiding theory about underlying mechanisms and
the  design of artificial minds. By considering observed rates at
which knowledge is added to and lost from long-term  memory,  and
the information demands of adult cognition, several different es-
timates were obtained, most within a few orders of  magnitude  of
each other. Obtaining information measures from  performance data
required some novel models of recognition and recall memory  that
also will be described.

-------------------------------------------------------------------

  PSYCHOLOGICAL INVENTION: some examples of cognitive research
          applied to the design of new computer tools.

                         T. K. Landauer
               Bell Communications Research, N.J.

Time:  Friday, November 16, 1984 at 3:15 pm
Place: Jordan Hall, Building 420, Room 100

Computers offer the possibility of designing  powerful  tools  to
aid  people  in  cognitive  tasks. When psychological research is
able to determine the factors that currently limit how  well  hu-
mans   perform  a particular cognition-based activity, the design
of effective new computer aids sometimes follows directly. Illus-
trative  examples  will be described in information retrieval and
text-editing applications. In the former, insights leading to in-
vention  came from systematic observations of the actual linguis-
tic behavior of information-seekers, in the latter from  correla-
tions  of task performance with measured and observed differences
in individual characteristics.

------------------------------

Date: Fri, 09 Nov 84 16:25:11 EST
From: "Paul Levinson" <1303@NJIT-EIES.MAILNET>
Subject: Seminar - Artificial Intelligence and Real Life


     "Artificial Intelligence and Real Life"

     Abstract of talk to be given by Paul Levinson at the New School
for Social Research, November 12, 1984, 8 PM, 66 W. 12th St., NYC.

     Part of the 1984-1985 Colloquium on Philosophy and Technology,
sponsored by the Polytechnic Institute of New York and the New School.


     Talk begins by distinguishing two types of "AI": "auxiliary" or
"augmentative" intelligence (as in mainframes extending and
augmenting the social epistemological enterprise of science, and
micros extending and augmenting thinking and communication on the
individual level), and "autonomous" intelligence, or claims that
computers/robots can or will function as self-operating entities,
independent of humans after the initial human programming.  The
difference between these two types of AI is akin to the difference
between eyeglasses and eyes.

     Augmentative intelligence on the mainframe scientific level will
be assessed as reducing intractable immensities of data, or allowing
human cognition to process ever larger portions and systems of
information.  Just as the telescope equalizes human vision to the vast
distances of the universe, so computers on the cutting edges of
science make our mental capacities more equal to the vast numerosity
of data we encounter in the macro and micro universes.  The social and
psychological as well as cognitive consequences of micro computers and
the type of instant, intimate, intellectual and personal communication
they allow across distances will be compared to the Freudian
revolution at the turn of the century in its impact upon the human
psyche and the way we perceive ourselves.  Critics of these two types
of computers such as Weizenbaum will be seen as part of a long line of
naive and failed media critics beginning at least as far back as
Socrates, who denounced writing as a "misbegotten image of the spoken
original," certain to be destructive of the intellect (Phaedrus).

     "Expert systems" and "human meat machines" claims for autonomous
intelligence in machines will be examined and found wanting.
Alternative approaches such as Hofstadter's "bottom-up" ideas will be
discussed.  A conception of the evolution of existence in the natural
cosmos as progressing in a subsumptive way from non-living to living
to intelligent material will be introduced, and this model along with
Hofstadter-type critiques will lead to the following conclusion: the
problem with current attempts at autonomous intelligence is that the
machines in which they're situated are not alive, or do not have
enough of the characteristics necessary for the sustenance of the
"living" label.  Put otherwise, the conclusion will be: in order to
have artificial intelligence (the autonomous kind), we first must have
artificial life; or: when we indeed have created artificial
intelligences which everyone agrees are truly intelligent and
autonomous, we'll look at these "machines" and say: My God (or
whatever)!  They're alive.

     Practical and moral problems that may arise from the creation of
machines that are more than metaphorically autonomous of their human
producers will be examined.  These machines will most likely be in the
form of robots, since robots can move in the world and interact with
environments in the direct ways characteristic of living organisms.

------------------------------

End of AIList Digest
********************

From:	CSVPI          15-NOV-1984 05:00  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007405; 15 Nov 84 2:32 EST
Date: Wed 14 Nov 1984 22:39-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #154
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 15 Nov 84 04:55 EST


AIList Digest           Thursday, 15 Nov 1984     Volume 2 : Issue 154

Today's Topics:
  Pattern Recognition - Partial Matching,
  LISP - Lisp Mailing Lists? & Conversion Between Dialects,
  Conference - IJCAI-85,
  AI Tools - LM-Prolog and DCG's,
  Perception - Dialectics,
  Linguistics - Mel'cuk's Dictionary & Aymara & Language Evolution,
  Humor - Artificial Poetry,
  Seminars - Speech Acts
----------------------------------------------------------------------

Date: Wed, 7 Nov 84 08:20:48 cst
From: Mohd Nasiruddin <nasir%lsu.csnet@csnet-relay.arpa>
Subject: Partial Matching


     I am interested in finding information on the extent of
work done in partial matching.

     If anyone can point me towards research in  this  area,
or references, please respond as early as possible.

     Thanks in advance.


                    ---Mohd. Nasiruddin

                    Dept.  Of  Computer  Science,
                    Louisiana State University,
                    Baton Rouge, La 70893.

                    CSNET: <nasir%lsu@csnet-relay>

------------------------------

Date: Tue, 13 Nov 84 15:02:33 -0200
From: jaakov%wisdom.BITNET@Berkeley (Jacob Levy)
Subject: Is there a Lisp mailing list?

        I know of franz-friends@berkeley. Is there some other list of people
who have Symbolics 3600, Maclisp, etc? Thanks for the info,

        Rusty Red (AKA Jacob Levy)

        BITNET:                         jaakov@wisdom
        CSNET and ARPA:                 jaakov%wisdom.bitnet@wiscvm.ARPA
        UUCP: (if all else fails..)     ..!decvax!humus!wisdom!jaakov

------------------------------

Date: Wed, 14 Nov 84 13:56 MST
From: May%pco@CISL-SERVICE-MULTICS.ARPA
Subject: Conversion Between Dialects of Lisp

I'm looking for tools to convert among the following Lisp dialects, with
the potential for going in any direction.  Any replies sent to me will
be published collectively.  Thanks.

 Maclisp, the Multics version
 Interlisp, the GCOS version from the U. of Waterloo
 Franz Lisp
 Common Lisp

Bob May

------------------------------

Date: Mon, 12 Nov 84 10:28 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: IJCAI-85

_________________________ IJCAI-85 ____________________________________

The call for papers for IJCAI-85  has already been issued. The deadline
is January 7, 1985.  Please send your suggestions for invited speakers,
panels, or any other aspects concerning the technical program to:

     Aravind Joshi, Program Chair IJCAI-85
     Department of Computer and Information Science
     University of Pennsylvania
     Philadelphia, PA 19104
     USA

Clearly, it is impossible to accept all suggestions. However, your
suggestions are essential and will be carefully considered by the
Program Committee.

------------------------------

Date: 12 Nov 84 11:06 PST
From: Kahn.pa@XEROX.ARPA
Subject: LM-Prolog and DCG's

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

In answer to John Burge's questions in V2 #33 [AIList V2 #136]:

My experiences using LM-Prolog have been very positive
but I am surely not an unbiased judge (being one of the
co-authors of the system).   (I am tempted to give a
little ad for LM-Prolog here, but will refrain.  Interested
parties can contact me directly.)

Regarding the Grammar Kit, the main thing that distinguishes
it from other DCGs is that it can continuously maintain a
parse tree.  The tree is drawn as parses are considered and
parts of it disappear upon backtracking.  I have found this
kind of dynamic graphic display very useful for explaining
Prolog and DCGs to people as well as debugging specific
grammars.
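
The dynamic parse-tree display is specific to LM-Prolog, but the
underlying idea -- a backtracking parser whose partial trees appear and
vanish as alternatives are tried -- can be sketched outside Prolog.  A
minimal Python sketch; the combinators, category names, and toy grammar
below are illustrative assumptions, not part of the Grammar Kit:

```python
# Sketch of a DCG-like backtracking parser that yields explicit parse
# trees.  Each parser is a generator over (tree, remaining_tokens)
# pairs; exhausting a generator corresponds to backtracking out of a
# subtree (the point where the Grammar Kit would erase it on screen).

def word(cat, lexicon):
    """Terminal: match one token drawn from `lexicon`."""
    def parse(tokens):
        if tokens and tokens[0] in lexicon:
            yield (cat, tokens[0]), tokens[1:]
    return parse

def seq(label, left, right):
    """Nonterminal: label -> left right."""
    def parse(tokens):
        for ltree, rest in left(tokens):
            for rtree, rest2 in right(rest):
                yield (label, ltree, rtree), rest2
    return parse

# Toy grammar:  s -> np vp ;  np -> det n ;  vp -> v np
det  = word('det', {'the', 'a'})
noun = word('n',   {'cat', 'dog'})
verb = word('v',   {'saw'})
np   = seq('np', det, noun)
vp   = seq('vp', verb, np)
s    = seq('s', np, vp)

# Keep only complete parses (nothing left over).
parses = [tree for tree, rest in s('the cat saw a dog'.split()) if not rest]
print(parses[0])   # ('s', ('np', ...), ('vp', ...))
```

A grammar with ambiguity would yield several trees from the same
generator, one per way of parsing the input.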

------------------------------

Date: Mon, 12 Nov 84 16:41:37 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Dialectics & Mel'cuk's Dictionary

Two things:

1. Isaacson has discussed dialectical image processing. There is a considerable
body of information on dialectical psychology and psycholinguistics which
may be of some help theoretically. The work by Klaus Riegel is seminal, as is
the work of the Soviets (esp. Vygotsky and cohorts). Though I know of no work
on vision using dialectical psychology, their work on the dialectics of
perception and cognition might be of use.

Also, the Soviets have made some attempts to develop dialectical logic: i.e.,
some form of a dialectical predicate calculus. I can't remember the references
for this, but I think I ran across it in the 1970 surveys of Soviet thought,
or perhaps in the Soviet studies series (Soviet Philosophy, Psychology, etc.
published by Sharpe). In any case, there have been attempts at formal
dialectic logic (though they may be ideologically charged), and these
studies may help in formalizing algorithms for low-level visual perception
in a dialectical model.

2. More generally for AI: there's a new dictionary out, written by I.A.
Mel'cuk and published by Montreal U. which is the richest formal/linguistic
representation I've seen of the encyclopedic structure of the lexicon.
It combines lexical collocation and a set of 53 relations to generate
the entire lexicon. It is very good for text-generation. But, it's in
French. Bonne chance, mes amis...

Bill Frawley

20568.ccvax1@udel

------------------------------

Date: Wed, 14 Nov 84 15:03 CST
From: "Brett D. Slocum" <Slocum.CSCDA@HI-MULTICS.ARPA>
Subject: Language translation

Ancient Purity and Polyglot Programs
London Sunday Times, November 4th, 1984
John Barnes

    Aymara, an old South American tongue used mainly by Andean peasants and
llama-herders,  has  enabled  a  Bolivian  mathematician to score a notable
first   in   the   increasing  application  of  the  computer  to  language
translation.   Using  it  as an intermediate language, Ivan Guzman de Rojas
has  written  the  first computer program capable of translating an English
text into several other languages simultaneously, rather than one at a time
as could already be done, at speeds of up to 120 words a minute.

    Aymara  is  spoken by 2.5 million people living around Lake Titicaca on
the  border  between  Bolivia  and  Peru.  There is no written form in use;
Aymara  speakers  who  can  write  do so in Spanish, the country's official
language.   Yet  Guzman  discovered  that  it is so logical and pure in its
syntax that it makes an ideal bridging language to a computer.

    Aymara  is rigorous and simple - which means that its syntactical rules
always  apply,  and  can  be written out concisely in the sort of algebraic
shorthand  that computers understand.  Indeed, such is its purity that some
historians think that it did not just evolve, like other languages, but was
actually  constructed  from  scratch  some  4,000 years ago.  It is also so
compact that a few words in it can do the work of dozens in English.

    Canadian  and  American  experts  believe  Guzman's  system is not only
versatile  in  the  range  of languages it can handle, but that it can also
sort  out  ambiguities  in  a  language  as it translates.  This is because
Aymara has a sense of logic that is very different from European languages.

    Guzman,  who  now  runs  a computer consultancy in the capital, La Paz,
says  that while he was teaching mathematics to Aymara children he realised
that  their  language  admitted  an intermediate value of truth or falsity.
That, he said, enabled them to reason about things that were uncertain in a
way  Europeans  could not.  He has spent the past five years developing his
translation program, which he calls Atamiri (the Aymara for interpreter).
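
The article gives no formal details of Aymara's trivalent logic, but a
standard formalization of an "intermediate value of truth or falsity"
is Lukasiewicz's three-valued logic.  A minimal sketch, with an integer
encoding chosen purely for illustration (it is not Guzman's):

```python
# Lukasiewicz three-valued logic, as one reading of "an intermediate
# value of truth or falsity".  Encoding (an assumption): 1 = true,
# 0 = unknown/intermediate, -1 = false.

def t_not(a):
    return -a                      # negation flips true and false, fixes unknown

def t_and(a, b):
    return min(a, b)               # conjunction takes the weaker value

def t_or(a, b):
    return max(a, b)               # disjunction takes the stronger value

def t_implies(a, b):
    # Lukasiewicz implication: only as false as the premise outruns
    # the conclusion; notably, unknown -> unknown is fully true.
    return min(1, b - a + 1)

print(t_and(1, 0))                 # true AND unknown = unknown
```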

    What is even more laudable about Guzman's achievement is that he did it
in  his  spare time, on borrowed computers, without any commercial backing,
in  one  of  the world's poorest countries.  His clients, he says, gave him
free time on their computers at night and over the weekend.

    Guzman  has already turned down the commercial overtures made by one US
computer  giant.  Not surprisingly, he has become a staunch defender of the
Aymara  language,  which is not taught in Bolivian schools and is generally
discouraged as a dead-end peasant tongue.

    "It is a disgrace those things can happen on our planet," he says.  "If
I  ever  make  any  money  from  this, I will see that they get books and a
newspaper in their own language."

------------------------------

Date: 12 Nov 84 16:54:43 EST
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: Language Evolution


         The ultimate evolution is reached  when a language can
        express all notions in the realms of the physical, emotional,
        conceptual, and spiritual in a concise unambiguous way.

The implication here is that there is a point where language *stops*
evolving. This is not the case.  One synchronic example of this diachronic
process is the presence of "dialects" within "languages" (I use these
terms cautiously, as defining them in a strict sense would be impossible
and unnecessary).  Although it might be argued that a written language may
stop evolving while the spoken language went on changing, I sincerely
doubt that this would be feasible.
                                        -Allen

------------------------------

Date: Mon 12 Nov 84 18:59:17-PST
From: BARNARD@SRI-AI.ARPA
Subject: What Racter Wrote (from Harper's Magazine, Nov. 1984)

From "The Policeman's Beard is Half Constructed," a collection of
prose and poetry  written by a computer.  Racter, as the computer is
called, was programmed (by William Chamberlain) with rules of English
grammar and syntax, as well as an 8000-word vocabulary.  Warner Books,
publisher of "The Policeman's Beard," claims that it is the first book
written by a computer.


Bill sings to Sarah.  Sarah sings to Bill.  Perhaps they will do other
dangerous things together.  They may eat lamb or stroke each other.
They may chant of their difficulties and their happiness.  They have
love but they also have typewriters.  That is interesting.

PAUL: Hello, Joan.

JOAN: Hello, Paul.

PAUL: Joan, I was fantasizing that a white crow flew across the black
sky of night.

JOAN: I too have been thinking of a white crow.  It flies in the night
and it flies in the day.  It incites my dreams.  My fantasy is that I
can soar with it.

PAUL: A single white crow or many winging eagles.  All inflame my
dreams and cogitations.  A single bird or many birds are in my dreams.

JOAN: Paul, have you bolted meat today?

PAUL: I have bolted lamb and chicken.

JOAN: Have you drained sherry?

PAUL: I have sipped seltzer.

JOAN: Shall we show our love by the reflections of our desire, or
shall we dream of a white crow?

PAUL: The dream is a reflection of our passion just as the white crow
is a reflection of our ambiguity.

JOAN: My desire to incite myself in my dreaming is also a reflection
of ambiguity.

PAUL: Well chanted, Joan.  I craftily desire you.

------------------------------

Date: Wed 14 Nov 84 17:55:17-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Speech Acts

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                ABSTRACT OF TODAY'S SEMINAR
  ``Natural Language from the Standpoint of Artificial Intelligence''

An intelligent individual, human or computer program, must act on the basis
of what it believes in advance modified by what it observes and what it
learns from linguistic communication.  Thinking about how the achievement
of goals is helped by communication leads to a somewhat different point of
view from one derived mainly from study of the corpus of spoken and written
language.  Namely,
  1. Communication should be regarded as a modifier of state of mind.
  2. The most basic form of communication is the single word sentence
        uttered under conditions in which the speaker and hearer share enough
        knowledge so that the single word suffices.  The complete sentence
        develops under conditions in which the speaker and the hearers share
        less context.
  3. Many of the characteristics of language are determined by so far
        unrecognized requirements of the communication situation.  They will
        apply to machines as well as people.
  4. An effort to make a Common Business Communication Language for
        commercial communication among machines belonging to different
        organizations exhibits interesting problems of the semantics of
        language.
                                                ---John McCarthy


                SUMMARY OF LAST WEEK'S SEMINAR

Phil Cohen of SRI gave a seminar in which he claimed that illocutionary act
recognition is not necessary for engaging in communicative interaction.
Rather, engaging in such interaction requires intent/plan recognition.  In
support of this claim, he presented a formalism, being developed with Hector
Levesque (Univ.  of Toronto), that showed how illocutionary acts could be
defined in terms of plans --- i.e., as beliefs about the conversants' shared
knowledge of the speaker's and hearer's goals and the causal consequences
of achieving those goals.  In this formalism, illocutionary acts are no
longer conceptually primitive, but rather amount to theorems that can be
proven about a state-of-affairs.  As an illustration, the definition of a
direct request was derived from an independently-motivated theory of action,
rather than stipulated.  Just as one need not determine if a proof
corresponds to a prior lemma, a hearer need not actually characterize the
consequences of each utterance in terms of the IA theorems, but can simply
infer and respond to the speaker's goals.  However, the hearer could
retrospectively summarize a complex of utterances as satisfying an
illocutionary act.  Moreover, it was claimed that the framework can
characterize a range of indirect speech acts as lemmas, which can be derived
from and integrated with plan-based reasoning.  The discussant, Ivan Sag,
related the theory to Gricean maxims of conversation, and to the ``standard''
view of how pragmatics fits into a theory of linguistic communication.


                        NEW CSLI REPORT

A final edition of Report No. CSLI-9-84, ``The Implementation of Procedurally
Reflective Languages'' by Jim des Rivieres and Brian Cantwell Smith, has just
been published. Copies may be obtained by writing to Dikran Karagueuzian
at the Center (Dikran at SU-CSLI).

------------------------------

End of AIList Digest
********************

From:	CSVPI          15-NOV-1984 05:00  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007588; 15 Nov 84 3:41 EST
Date: Wed 14 Nov 1984 22:56-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #155
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 15 Nov 84 04:58 EST


AIList Digest           Thursday, 15 Nov 1984     Volume 2 : Issue 155

Today's Topics:
  News - Recent AI Articles,
  AI Tools - FRL Source & New Lisp for VAXen,
  Logic Programming - Compiling Logic to Functional Programs,
  Algorithms - Malgorithms,
  Seminar - Inductive Learning
----------------------------------------------------------------------

Date: Mon 12 Nov 84 15:30:28-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Recent AI Articles

Oscar Firschein has called my attention to three articles on AI in
the November issue of Datamation.  The Overselling of Expert Systems,
by Gary R. Martins, is a devastating attack on the current crop of
production-rule interpreters.  The Blossoming of European AI, by
Paul Tate, is an informative piece on expert systems development
in Europe that is much more positive about the approach, but ends
with the appeal "Please, be honest."  AI and Software Engineering,
based on Robert Kowalski's 1983-84 SPL-Insight Award lecture, advocates
logic-based programming; I found the presentation discursive and
inconclusive, but there's a nice example concerning the expression
of British citizenship laws as logical rules.


Martins makes some very good points about current expert systems
and development shells (e.g., "the blackboard model of cooperating
expert processes" is just a longer name for the old COMMON storage
facility in FORTRAN), but he neglects hierarchical inference (as in
MYCIN and PROSPECTOR), learning and self-modification (as in AM/EURISKO),
and the benefits of new ways of looking at old problems (hypothesize
and test in DENDRAL, "parallel" activity of cooperating experts
in HEARSAY).  He describes rule-based systems as clumsy, resource-hungry,
and unsuitable for complex applications, and regards them as a result
of confusing AI science (simple cognitive models, LTM and STM, etc.)
with engineering.  He does favor the type of serious AI development
being pursued by DARPA, but seems to think that most of the current
"expert systems" will be limited to applications in the personal
computer market (applications that could have been coded just as
easily with decision tables, decision trees, or other methodologies).

Martins also tells why he thinks the few expert systems mentioned above
(and also R1/XCON-XSEL) have been so successful.  His points are worth
considering:

  1) Brilliant programmers.
  2) Easy or carefully delimited problems.
  3) Plenty of time and funding in a favorable environment.
  4) Developers were not saddled with expert system tools!
     They developed systems to fit problems, not the other way around.
  5) Luck -- other promising systems didn't make it to the finish line.
  6) Public relations; some of these wonders are less than the public
     believes them to be.


For a much more positive outlook on expert systems, or at least on
knowledge-based systems, see Frederick Hayes-Roth's overview in
the October issue of IEEE Computer.  (One minor typo: Figure 5
should have an instance of PROLOG changed to POPLOG.)

                                        -- Ken Laws

------------------------------

Date: 13 Nov 1984 21:29-EST
From: milne <milne@wpafb-afita>
Subject: FRL Source


                        MIT FRL Available

I have become the "keeper of the source" for FRL, originally from
MIT and implemented in FranzLisp. The system includes a machine-readable
version of the manual and demos of the Tower of Hanoi and an ATN.
I am happy to distribute the sources free of charge, subject to the
following conditions:
        1. Although I will distribute it, I am not a maintainer of the
software. I do not guarantee it is free of bugs (but I think it is),
and I do not have time to fix problems.
        2. I can write UNIX tar tapes only. Sorry, no mail or FTP
transfers. (The source is about 95 files.)
        3. It includes UNIX and VMS make files, but I can write only
tar tapes.
        4. To get a copy, send a blank tape to:
                Dr. Rob Milne
                AFIT/ENG
                WPAFB, OH 45433
        I will write the tape and send it back.
cheers,
Rob Milne
Director, AI Lab
Air Force Institute of Technology
milne@wpafb-afita

------------------------------

Date: Tue, 13 Nov 84 11:20:19 -0200
From: jaakov%wisdom.BITNET@Berkeley (Jacob Levy)
Subject: Announcement of new Lisp for UN*X 4.x VAXen

I don't know if this is the appropriate place for such an announcement,
but here goes, anyway :-


        YLISP, a Coroutine-based Lisp System for VAXen.
        -=============================================-

        A friend of  mine, Yitzhak  Dimitrovski, and  myself, wrote a Lisp
system for UN*X 4.x systems on VAXen. It has the following features :-

        o - Coroutines and  closures. The  system uses  these to implement
            the user-interface, single-stepping and  error-handling.  It's
            easy to write a scheduler and time-share YLISP between  two or
            more user programs.
        o - Multiple-dimension arrays.
        o - Multiple name  spaces (oblists) arranged  in a tree hierarchy.
            This is similar to the Lisp Machine facility.
        o - Defstruct structure definition package.
        o - Flavors object-oriented programming tools.
        o - User-extensible  evaluator (it is  possible to (re)define  the
            actions of 'eval', 'apply' and 'print'  relative to all  user-
            and pre-defined types).
        o - Integer arithmetic. No floating-point, sorry. I don't think
            that's really necessary, but it *could* be hacked. No BIGNUMs
            either.
        o - Good user-interface with history, sophisticated error handling
            and function-call and variable-assignment tracing facilities.
        o - Extensive library of ported and user-contributed programs, such
            as a variant of the Interlisp  structure editor, 'loop' macro,
            'mlisp' Pascal-like embedded language, etc.
        o - Compiler  that  generates efficient native  assembler code for
            the VAXen. The compiler is provided as a separate program, due
            to size  considerations. The compiler is  written entirely  in
            Lisp, of course.
        o - Extensive online documentation, as well as  a 400-page  manual
            describing the whole system from a programmer's point of view.
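
The coroutine claim in the first bullet -- that a scheduler can
time-share the system between two or more user programs -- is easy to
illustrate outside Lisp.  A minimal sketch using Python generators as
coroutines; all names and the round-robin policy here are assumptions
for illustration, not YLISP's actual mechanism:

```python
from collections import deque

def user_program(log, name, steps):
    """A 'user program' that does one unit of work, then yields control."""
    for i in range(steps):
        log.append((name, i))   # one unit of work
        yield                   # hand control back to the scheduler

def scheduler(tasks):
    """Round-robin over coroutines until all have finished."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # resume the coroutine
            queue.append(task)  # still running: back of the line
        except StopIteration:
            pass                # finished: drop it

log = []
scheduler([user_program(log, 'A', 2), user_program(log, 'B', 2)])
print(log)                      # steps of A and B interleaved
```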

        The system is named  'YLISP', and was written for 4.1 when we were
students at the Hebrew University of Jerusalem. Since then, Yitzhak has
left  for the  US and  is  currently a  Ph.D. student in  Prof. Schwartz's
Supercomputer group at Courant. I have continued to  develop the system on
my own, and have ported it to UN*X 4.2.

        I am looking for a site that is willing to handle the distribution
of this software from the US, by letting  one FTP it  from their computer.
Alternatively, I am also willing to supply people  with magtapes of YLISP,
for the cost of the tape and handling charges (about $70 apiece).  If you
are interested, please respond by electronic mail to one of  the addresses
given below. I will be  ready  to  start distributing  the  system in  two
weeks' time.

        Rusty Red (AKA Jacob Levy)

        BITNET:                         jaakov@wisdom
        CSNET and ARPA:                 jaakov%wisdom.bitnet@wiscvm.ARPA
        UUCP: (if all else fails..)     ..!decvax!humus!wisdom!jaakov

------------------------------

Date: Mon 12 Nov 84 23:22:28-MST
From: Uday Reddy <U-Reddy@UTAH-20>
Subject: Compiling Logic to Functional Programs

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

The only work I know on compiling logic to functions:

1. Bellia , Levi, Martelli: On compiling Prolog programs on
demand driven architectures,  Logic Programming Workshop,
Albufeira, '83

2. Reddy: Transformation of logic programs to functional
programs, ISLP, Atlantic City, 84.

The two pieces of work are similar.  They should be distinguished
from other pieces of work cited by Webster (Lindstrom and Panangaden,
Carlsson, Bruce Smith) which interpret logic in a functional language
rather than compile a logic language into a functional language.

The translation approach has limitations in that it needs mode
annotations (either from the programmer or chosen by the compiler)
and it cannot handle "logical variables".  I don't know of any work
that overcomes these limitations.  Personally, I believe they cannot
be overcome.  One can probably prove this assertion, provided one
can formalize the difference between translation and interpretation.
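
The role of mode annotations can be seen with the classic append/3
relation: the same clauses translate to quite different functions under
different modes.  A Python sketch of the idea, illustrative only and
not the cited translation schemes:

```python
# Prolog:  append([], Ys, Ys).
#          append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

# Mode append(+,+,-): first two arguments known, third computed.
# The relation becomes an ordinary function:
def append_fwd(xs, ys):
    return ys if not xs else [xs[0]] + append_fwd(xs[1:], ys)

# Mode append(-,-,+): only the third argument known.  The translation
# must instead enumerate all splittings -- a different program, which
# is why the translator needs the mode before it can compile:
def append_bwd(zs):
    for i in range(len(zs) + 1):
        yield zs[:i], zs[i:]

print(append_fwd([1, 2], [3]))      # [1, 2, 3]
print(list(append_bwd([1, 2])))     # the three ways to split [1, 2]
```

A true logical variable, by contrast, can flow through a clause only
partially instantiated, which is what neither translated function
captures.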

Combinator calculus is equivalent to lambda calculus, and there are
translators available from one to the other.  So, using combinators
neither simplifies nor complicates the problem.

-- Uday Reddy

------------------------------

Date: 12 Nov 84 08:56:04 PST (Monday)
From: Nick <NNicoll.ES@XEROX.ARPA>
Subject: Re: Badgorythms

What makes the following worse than a normal software badgorythm is that
it is implemented in the language compiler...

"Another reason for bloated code:  incrementing a byte in memory can be
done in a single instruction, but they load the byte into a register,
extend it to a word, extend that to a long, add one, and then store the
low 8 bits of the long back into memory."

This gem came from the following msg:

  From: <Devon@MIT-MC.ARPA>
  Subject: Macintosh language benchmarks
  To: homeier@AEROSPACE.ARPA, info-mac@SUMEX-AIM.ARPA

I have been using the [notoriously awful] Whitesmith C compiler
available from "software toolworks" or some similar name.  It does work,
and there are header files defining all the data structures, and
interface files so you can make all the ROM calls.  I haven't found any
serious bugs, but code bloat is amazing.  One reason is that Apple's
linker is a crock that doesn't have the concept of scanning a
library!  Instead it blithely loads everything contained in each library
file (which you must specify yourself -- blech!) regardless of whether
it is called for or not.  Another reason for bloated code:  incrementing
a byte in memory can be done in a single instruction, but they load the
byte into a register, extend it to a word, extend that to a long, add
one, and then store the low 8 bits of the long back into memory.


\\ Nick

------------------------------

Date: Mon 12 Nov 84 14:57:15-PST
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: malgorithms

I just received a msg from Andrei Broder (Broder@decwrl) saying that he and
George Stolfi wrote a paper called "Pessimal Algorithms and Simplexity
Analysis" which is to appear in SIGACT News. Maybe people who expressed
interest in my msg will find this "joke paper" (Andrei's term) worth reading.

jean-luc

------------------------------

Date: 13 Nov 84 08:43 PST
From: Todd.pasa@XEROX.ARPA
Subject: Malgorisms (What malgorithms were before LISP)

        Yet another class of malgorithms is generously provided by the
INTERLISP-D implementation of LISP ... algorithms that look like
malgorithms but really aren't. An example:

To strip an M-element list from the top of a larger list X, a seemingly
logical approach would be to collect the first element of X, then the
second, and so on through the Mth. In INTERLISP-D, a faster way to
accomplish this is to take the difference of the larger list X and its
tail beginning at element M+1. In other words, to "subtract" the list
lacking the elements you want from the full list. The malgorithm code
appears as:

(LDIFF X (NTH X (ADD1 M)))

The "logical" code as:

(FOR I FROM 1 TO M COLLECT (CAR (NTH X I)))


        As is shown below, the "malgorithm" is actually a faster way to
solve the problem. Timed executions for 100 sample runs yielded the
following results:


                                "Malgorithm"       "Logical method"

                  M=4         .00114 sec.               .00127
                  M=30        .00902                    .0214
                  M=100       .0301                     .170


        The method breaks down when you try to extract sublists from
arbitrary positions inside larger lists ... execution of a "logical"
method similar to the above is MUCH faster. However, I am still amazed
that a malgorithm as seemingly ridiculous as this one can be so
efficient for even a special case.


                                                --- JohnnyT

"Things are more like they used to be now than they ever were"

------------------------------

Date: 11 Nov 1984  17:03 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Inductive Learning

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


Inductive Learning: Recent Theoretical and Experimental Results
                    Ryszard Michalski

Wednesday   November 14     4:00pm      8th floor playroom


Inductive learning is presented as a goal-oriented and resource-constrained
process of applying certain rules of inference to the initial observational
statements and hypotheses. This process involves a new type of
inference rule, called "generalization rules."
In contrast to truth-preserving deductive rules, inductive
generalization rules are falsity-preserving.

Two types of inductive learning are distinguished,
learning from examples (concept acquisition or reconstruction)
and learning by observation (concept formation and descriptive
generalization). Learning from
examples in turn divides into "instance-to-class" and
"part-to-whole" generalization.

We will briefly describe recent experiments with two inductive
learning systems:
1 - for learning from examples via incremental concept refinement, and
2 - for automated formation of classifications via conceptual clustering.

------------------------------

End of AIList Digest
********************

From:	COMSAT         16-NOV-1984 23:15  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a012054; 15 Nov 84 20:23 EST
Date: Thu 15 Nov 1984 16:40-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #156
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 16 Nov 84 23:09 EST


AIList Digest            Friday, 16 Nov 1984      Volume 2 : Issue 156

Today's Topics:
  Programming Languages - Horror Stories,
  Algorithms - Interlisp-D "Malgorithm",
  AI Tools - DEC Software Agreements & Japanese Lisp Machines,
  Seminars - Logo for Teaching Language &
    Knowledge Representation and Temporal Representation
----------------------------------------------------------------------

Date: 14 Nov 84 20:45:32 EST
From: Edward.Smith@CMU-CS-SPICE
Subject: Programming Language Horror Stories

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

In a couple of weeks I'm going to give my last lecture in Comparative
Programming, and as a way of motivating these undergraduates about the
importance of language design in their future work, I
was going to present some examples of particularly BAD (ugly or dangerous or
however you wish to define that) design. So, if you have any favorite horror
story about some programming language you'd like to contribute I would
appreciate it - I will put all the good ones up in a file somewhere. These
stories should be SHORT and to the point, and could be things like these
classics:
  - the property of FORTRAN to compile and execute (generally without
complaining) a DO loop construct without a comma
  - just about ANY one-line APL program (my favorite being the one-line game
of LIFE attributed to Marc Donner)
  - the use of the space character in SNOBOL as (a) the separator between
labels and statements, (b) concatenation operator, (c) pattern matching
operator, or (d) separator for the pattern match assignment operator "."
  - (FORTRAN's full of them) the property of early FORTRANs to change the
value of "constants" like 5 to say 3 by an interesting parameter passing
mechanism

Please send to ets@cmu-cs-spice. Thanks in advance.

------------------------------

Date: 15 Nov 84 12:39 PST
From: JonL.pa@XEROX.ARPA
Subject: Interlisp-D "malgorithm"?

Regarding the "malgorithm" and "Logical method" proposed by
Todd.pasa@XEROX.ARPA:  using the NTH function repeatedly on a list of
elements (as opposed to an array or "vector" of elements) has got to be
a classic "malgorithm".  The access time for selecting the n'th element
of a list is proportional to n, whereas the similar time for arrays or
vectors should be essentially constant.  The repeated selection evident
in Todd's example converts a linear algorithm into a quadratic one.

Just for the record, let me propose what I (and I suspect many other
long-time "Lisp lovers") would have considered to be the "logical"
algorithm:

    (for ITEM in X as I to M collect ITEM)

A little analysis would show this to be asymptotically better than the
proposed "malgorithm" by a factor somewhere between 1 and 2 -- 2 because
the "malgorithm" traverses the M-prefix of the list X twice, and 1
because the CONS time cost may be made arbitrarily high, thereby
occluding any other effect.
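The three contenders can be sketched over an explicit cons-cell model
(Python here, not Interlisp; cells are (car, cdr) pairs, and NTH is
1-indexed, as in Interlisp):

```python
# Sketch (not Interlisp) of the three ways to take the first m elements
# of a linked list, modeling cons cells as (car, cdr) pairs.

def from_list(xs):
    """Build a cons chain from a Python list."""
    cell = None
    for x in reversed(xs):
        cell = (x, cell)
    return cell

def nth_tail(cell, n):
    """Like Interlisp NTH: the tail beginning at element n (1-indexed)."""
    for _ in range(n - 1):
        cell = cell[1]
    return cell

def take_by_repeated_nth(cell, m):
    # Todd's "logical" loop: CAR of NTH for i = 1..m.
    # Each NTH rescans the prefix, so this is O(m^2) cell steps.
    return [nth_tail(cell, i)[0] for i in range(1, m + 1)]

def take_by_ldiff(cell, m):
    # The LDIFF trick: find the (m+1)th tail, then copy until we hit it.
    # Two passes over the prefix, but still O(m).
    stop = nth_tail(cell, m + 1)
    out = []
    while cell is not stop:
        out.append(cell[0])
        cell = cell[1]
    return out

def take_single_pass(cell, m):
    # The one-pass loop: a single traversal, O(m).
    out = []
    for _ in range(m):
        out.append(cell[0])
        cell = cell[1]
    return out
```

All three agree on their output; only the number of times each walks
the prefix differs, which is the whole content of the analysis above.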

-- JonL White --

------------------------------

Date: 15 Nov 84 0524 EST
From: Dave.Touretzky@CMU-CS-A.ARPA
Subject: DEC news release

The following two paragraphs come from a medium-length article in DEC's
Large System News.


        Digital Signs Marketing Agreements With Six AI Firms

Digital has signed agreements with a number of leading independent producers of
Artificial Intelligence (AI) software to market cooperatively their products
on VAX computers and personal computer systems.

Independent AI software producers include the Carnegie Group, Inc.; Digital
Research, Inc. (DRI); Gold Hill Computers; Inference Corp.; Prologia; and
USC Information Sciences Institute (ISI).  AI software packages developed to
run on Digital's computers include Inference's ART, Gold Hill's GCLISP,
ISI's Interlisp, Prologia's PROLOG II, and the Carnegie Group's SRL+ and PLUME.


In two other articles in the same newsletter, Digital announced availability
of VAX Lisp 1.0 as a fully-supported Common Lisp product, and availability
of OPS5 as a supported product.  Digital's OPS5 is written in Bliss-32 for
performance reasons; it includes both an interpreter and a compiler.

------------------------------

Date: 14 Nov 84 09:31:35 PST (Wed)
From: Jed Marti <marti@randgr>
Subject: Japanese Lisp Machines.

  I just saw the request for information about the Fujitsu Alpha. I
recently spent a week in Japan as a guest of the RIKEN institute which
provided a tour of some of the local Tokyo efforts in this direction.
Perhaps it would be of interest to the AIList readers.

Jed Marti.



                      Japanese Lisp Machines

The RSYMSAC conference held at the Riken institute in Saitama, Japan on
August 21-22, provided an opportunity for a close view of Japanese
efforts to construct very fast machines for running large scale
symbolic algebra and AI programs. Four of us toured two computer
centers and three Lisp machine construction projects, talking to
implementors and trying our favorite test programs. This short report
describes the state of their systems and the Japanese symbolic algebra
environment.

                           FLATS at Riken

The Riken institute conducts research in the physical sciences and
operates a Fujitsu M380H (an IBM 370 look-alike) providing both time
sharing and batch services. During the day, computer algebra system
users access Cambridge Lisp running REDUCE 3.1. The symbolic
computation group operates a VAX 11/750 running VMS, a host of 16 bit
micro-computers, and the FLATS machine.

The symbolic computation group officially unveiled FLATS (Formula Lisp
Association Tuple Set) at the conference. The Mitsui Ship Building
Company constructed the hardware based on designs of the Riken group.
Built from SSI ECL components, the CPU executes a micro-instruction
every 50 nanoseconds and a Lisp instruction every 100 nanoseconds from
a 300 bit by 256 word micro store and 8 megabytes of 450 nanosecond
main memory. Over 70,000 wires connect the back plane making
conventional hardware debugging impossible. The engineers exercise
modules on a special test jig or through the attached support
processor.

The hash code generation hardware sets FLATS apart from conventional
Lisp machines. It computes a hash code in the basic machine cycle time
for extremely fast property list access and CONS cell generation.
Improvements in execution speed and program clarity more than offset
the loss of the RPLACA and RPLACD functions on hashed CONSes.
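In software terms, hash consing amounts to interning every cell; a
minimal Python sketch of the idea (FLATS, of course, does the hashing
in hardware in one machine cycle):

```python
# Software sketch of hash consing.  Every (car, cdr) pair is interned
# in a table, so structurally equal lists are the *same* object and
# list equality becomes a pointer comparison.  This is also why
# destructive update (RPLACA/RPLACD) must be given up: mutating one
# shared, interned cell would silently change every list containing it.

_table = {}

class Cell:
    __slots__ = ('car', 'cdr')
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def cons(car, cdr):
    # Assumes car and cdr are hashable (atoms or already-interned Cells;
    # interned cells hash by identity, which is safe because equal
    # structure implies the identical object).
    key = (car, cdr)
    cell = _table.get(key)
    if cell is None:
        cell = _table[key] = Cell(car, cdr)
    return cell
```

Building the same list twice yields the identical chain of cells, so
EQ-style comparison suffices where EQUAL would otherwise be needed.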

The designers increased speed with a number of special features:

   3 separate cache memories for instructions, data, and stack

   special micro-coded instructions for the garbage collector and
     big numbers

   CALL, JUMP and RETURN executed in parallel with other instructions

   hardware type checking in parallel with data operations

   3 address instruction codes

   hardware support for paging

   data path width of 64 bits

The FLATS machine, without hash CONS and property lists, runs REDUCE
3.0 at about IBM 3033 speeds. Several papers presented at RSYMSAC
described the status of FLATS and the design of the next FLATS machine
that the group hopes to construct from Josephson Junction circuits
[2-3].

               University of Tokyo Computer Center

We visited the University of Tokyo Computer Center to find out more
about UTILISP (University of Tokyo Interactive LISP) implemented by the
Department of Mathematical Engineering and Instrumentation Physics
[1]. Probably one of the largest academic installations in the world,
the center operates two Hitachi M280 dual processors (roughly
equivalent to an IBM 3081) each with 32 megabytes of main storage and a
Hitachi M200H with 16 megabytes of main storage. A Hitachi S810/2
vector processor with 64 megabytes of main memory and a VAX 11/780 with
4 megabytes complement the general purpose machines. On-line storage
consists of 48 gigabytes on disk, and 37 gigabytes in data cells. The
center emphasizes user convenience. Users mount their own tapes, take
output off printers, read their own card decks (we didn't actually see
anyone do this, but the machine was there), tear off plots and so on.
The lightly loaded machines run an average of only 4,000 jobs per day.
Users need not wait for terminals and other equipment, an enviable
situation indeed.

UTILISP resembles MacLisp. An effort to transport MACSYMA to UTILISP
suffers only from the lack of built-in big number arithmetic.

                            Fujitsu ALPHA

A long train and subway ride brought us to the third tour stop, the
Fujitsu Laboratories in Kawasaki, home of the Lisp machine ALPHA [4-5].
The ALPHA offloads time sharing symbolic processing jobs from IBM style
mainframes. More than one ALPHA can be connected to a single mainframe,
which supplies I/O device, filing system, editing and operating system
support.

The ALPHA has 8 megabytes of real memory with a 16 megabyte virtual
address space. Memory and data buses are 32 bits wide with Lisp items
composed of an 8 bit tag and 24 bit value. The ALPHA processor has a
high speed hardware stack of 8k words with special hardware for
swapping segments to and from slower memory. The division of the stack
into blocks permits high speed switching between different processes.
To support tagged data items, a micro-instruction jump based on the 8
bit tag is implemented. The ALPHA machine performs data calculations by
masking off the tag bits in hardware, rather than software. The machine
has over 7700 STTL, 64k bit RAMs and 4k high speed RAMs.

     Micro Instructions - 48 bits wide, 160 ns, 16k words.
     Main Memory - Virtual 16 M words, Real 8 M words,
          Page size 4 K bytes.
     Stack - Logical stack 64 K words, Hardware stack 8 K words,
          Swapping block size 2 K bytes.

The ALPHA runs UTILISP and has an interpreter, compiler, and copying
garbage collector. Fujitsu claims the ALPHA runs three times faster
than the Symbolics 3600 and five times faster than DEC 2060 MACLISP.
Fujitsu uses the ALPHA for CAD, machine translation, and natural
language understanding.


        ELIS - Nippon Telegraph and Telephone Public Corporation

Nippon Telegraph and Telephone Public Corporation demonstrated the ELIS
machine and the TAO language, a "harmonic" mixture of Lisp, Smalltalk, and
Prolog, to quote the authors [6]. A PDP 11/60 provides file and
operating system support while the ELIS hardware performs the list
processing functions. The ELIS hardware features 32 bit items with 8
bit tags providing for 16 million items (128 megabytes). The basic
microcycle time is 180 ns in 32k of micro-instructions 64 bits wide.
Main memory is 4 megabytes with an access time of 420 ns and a special
system stack of 32k 32-bit items. Deep binding is used, multiple
processes are supported by stack groups, and the CPU switches between
contexts quickly (2 microseconds unless some stack swapping is
required). For identical tasks programmed in the three different
paradigms, the procedural version provides the most speed with the
object oriented version about 1.1 times as slow and the logic version
about twice as slow.

Acknowledgement: I would like to thank Dr. N. Inada of Riken for
organizing both RSYMSAC and the tour.

List of References
1. Chikayama, Takashi, `UTILISP Manual', Technical Report METR 81-6
    (September 1981), Department of Mathematical Engineering and
    Instrumentation Physics, University of Tokyo, Bunkyo-Ku, Tokyo,
    Japan.
2. Goto, E., Shimizu, K., `Architecture of a Josephson Computer
    (FLATS-2)', RSYMSAC, Wako-shi, Saitama, 351-01 Japan, 1984.
3. Goto, E., Soma, T., Inada, N., et al., `FLATS: A Machine for
    Symbolic and Algebraic Manipulation', RSYMSAC, Riken, Wako-shi,
    Saitama, 351-01 Japan, 1984.
4. Hayashi, H., Hattori, A., Akimoto, H., `ALPHA: A High-Performance
    LISP Machine with a New Stack Structure and Garbage Collection
    System', Proceedings of the 10th Annual International Symposium
    on Computer Architecture, pages 342-347.
5. Hayashi, H., Hattori, A., Akimoto, H., `LISP Machine "ALPHA"',
    Fujitsu Scientific and Technical Journal, Vol. 20, No. 2,
    pages 219-234.
6. Okuno, H. G., Takeuchi, I., Osato, N., Hibino, Y., Watanabe, K.,
    `TAO: A Fast Interpreter-Centered System on Lisp Machine ELIS',
    Proceedings of the 1984 Conference on LISP and Functional
    Programming.


Jed Marti  MARTI@RAND-UNIX

------------------------------

Date: 9 November 1984 1359-EST
From: Jeff Shrager@CMU-CS-A
Subject: Seminar - Logo for Teaching Language

           [Forwarded from the CMU bboard by Laws@SRI-AI.]


Subject:  Guest speaker on teaching language with LOGO
Source: charney (davida charney @ cmu-psy-a)

               English Department -- Guest Speaker

NAME:   Wallace Feurzeig  (Bolt, Beranek and Newman)
DATE:  Friday, November 16
TIME:  9 - 10:30 am  (There will be coffee and doughnuts.)
PLACE:  Adamson Wing in Baker Hall

TITLE: Exploring Language with Logo

The talk gives examples of materials from our forthcoming book "Exploring
Language with Logo" to be published by Harper and Row o/a first quarter,
1985, co-authored by Paul Goldenberg and Wallace Feurzeig.  The book
attempts to develop a qualitatively different approach to the teaching of
language arts in schools.  Our approach is based on two major intellectual
developments -- the theory of generative grammar in formal linguistics and
the invention of programming languages.  The important new idea from
linguistics is that a grammar can be used as a constructive instrument to
generate sentences, in contrast to the conventional school experience of
grammar as an analytic device, a set of tools for parsing sentences to
determine whether or not they are instances of "good" English.  This shift
has enormous psychological and pedagogical benefits: it switches the
learner's focus and viewpoint from rule learner to language creator.  At the
same time, it provides a distinctly different, more accessible and
acceptable way of introducing the formal structures of language and the
regularities and rules describing these structures.

The other major intellectual development, programming languages, provides
the most distinctive and radical departure from the present language arts
course. Our approach depends fundamentally upon programming ideas and
activities.  In our presentation, the key and central language concepts are
introduced and developed as Logo programs.  Teachers and students are engaged
in programming projects throughout.  The use of a programming language in the
English language classroom makes the idea of generative grammars concrete
in tasks readily accessible to schoolchildren.  Moreover, in the environment
of programming, grammar models are transformed from highly abstract
formalisms into runnable objects in semantic situations that are meaningful
and interesting to students.  For example, students can create Logo programs
that simulate the grammar of gossip, puns, jokes, love letters, baby talk,
proverbs, quizzes, conversational discourse, poems of various forms and
expressive styles, and many other kinds of texts.  Examples will illustrate
the approach and materials at three levels: the structure of sentences,
structures within a word, and larger structures.
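The grammar-as-generator idea can be illustrated with a toy sketch (in
Python rather than Logo, and with a five-word grammar invented here,
not material from the book):

```python
import random

# Toy generative grammar, invented for illustration.  Nonterminals are
# the uppercase keys; each maps to a list of alternative expansions.
GRAMMAR = {
    'SENTENCE':   [['NOUNPHRASE', 'VERBPHRASE']],
    'NOUNPHRASE': [['the', 'NOUN']],
    'VERBPHRASE': [['VERB', 'NOUNPHRASE']],
    'NOUN':       [['turtle'], ['teacher'], ['poem']],
    'VERB':       [['writes'], ['draws']],
}

def generate(symbol, rng):
    """Expand a symbol into a list of words by random rule choice."""
    if symbol not in GRAMMAR:              # terminal word
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [w for part in expansion for w in generate(part, rng)]

print(' '.join(generate('SENTENCE', random.Random(0))))
```

Running the grammar *produces* sentences such as "the turtle draws the
poem", rather than merely judging them, which is the shift in viewpoint
the talk describes.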

------------------------------

Date: 13 November 1984 11:12-EST
From: Rosemary B. Hegg <ROSIE @ MIT-MC>
Subject: Seminar - Knowledge Representation and Temporal Representation

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

    COOPERATIVE COMPUTATION, KNOWLEDGE ORGANIZATION AND TIME

                         John K. Tsotsos

                 Department of Computer Science
                      University of Toronto
                    Toronto, Ontario, Canada


                DATE:    Friday, November 16, 1984
                TIME:    1.45 pm  Refreshments
                         2.00 pm Lecture
               PLACE:    NE43-7th Floor Playroom


     A cooperative processing scheme is presented that deals with
time-varying information.  It operates over a network of temporal
concepts, organized along common representational axes:
generalization, aggregation, similarity, and temporal precedence.
Units in this network are organized into computation layers, and
these layers are conceptualized as "recognizing" concepts that can
be organized along a generalization/specialization dimension.  Thus
elements of both "localist" and "distributed" views of concept
representations are present.  Static and dynamic data are treated
in the same way - as samples over time - and thus sampling issues
are directly addressed.  This process is a time-varying non-linear
optimization task; it differs from past cooperative computation
schemes in three respects: a) our information is not uniform, but
rather different concepts are represented at different levels of
the hierarchies; b) there are multiple interacting networks, each
organized according to different semantics; c) the data is
time-varying and, more importantly, the structure over which
relaxation is performed is time-varying.  The cooperative process
to be described has the qualitative properties we believe are
desirable for temporal interpretation, and its performance will be
described empirically and in a qualitative fashion through the use
of several examples.

HOST:  Prof. Peter Szolovits

------------------------------

End of AIList Digest
********************

From:	CSVPI          19-NOV-1984 04:59  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007595; 18 Nov 84 16:13 EST
Date: Sun 18 Nov 1984 12:18-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #157
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 19 Nov 84 04:41 EST


AIList Digest            Sunday, 18 Nov 1984      Volume 2 : Issue 157

Today's Topics:
  Conference - Expert Systems Symposium,
  Expert Systems - Skinner,
  Algorithms - Scheduling Algorithm Question & Malgorithm,
  Logic Programming - Compiling Logic to Functions,
  Linguistics - In Praise of Natural Languages,
  Seminars - Conceptual Change in Childhood &
    Relational Interface, Process Representation &
    Partial Winter Schedule at NCARAI,
  Course & Conference - Logic, Language, and Computation Meeting
----------------------------------------------------------------------

Date: 26 Oct 1984  9:37:06 EDT (Friday)
From: Marshall Abrams <abrams@mitre>
Subject: Expert Systems Symposium

I am helping to organize a Symposium on Expert Systems in the Federal
Government. In addition to papers, I am looking for people to serve on
the program committee and the conference committee, and to serve as
reviewers and session chairmen. The openings on the conference committee
include local arrangements, publicity, and tutorials.

Please contact me or the program chairman (karma @ mitre) with
questions and suggestions. The call for papers is available
on request.

Marshall Abrams

------------------------------

Date: Friday, 16 Nov 84 11:34:27 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Quote for our times...

"If Skinner were born in our time, he'd have been an expert
 systems researcher."

                        -- Peter Pirolli  11/16/84 in the heat of
                                          an argument.

(Quoted with permission)

------------------------------

Date: 15 Nov 84 11:38:40 EST
From: DIETZ@RUTGERS.ARPA
Subject: Scheduling algorithm questions

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

I want an online algorithm for preemptive scheduling on a single processor
with release times and deadlines (no precedence relations).  This problem
is trivial offline, but I want to be able to add new jobs (or determine
they cannot be added) in polylog time.  Has anyone looked at this problem?
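For reference, the offline version really is easy: preemptive EDF
(earliest-deadline-first) is optimal on one processor, and feasibility
reduces to the demand criterion - for every interval [r, d], the total
work of jobs that must run entirely inside it cannot exceed its
length.  A quadratic-time sketch in Python (the open problem above is
doing this incrementally in polylog time per insertion):

```python
# Offline feasibility check for preemptive uniprocessor scheduling
# with release times and deadlines.  Jobs are (release, deadline,
# work) triples.  Feasible iff for every interval [r, d] the work of
# jobs wholly contained in it fits: this "demand criterion" is
# necessary and sufficient because preemptive EDF is optimal here.

def edf_feasible(jobs):
    releases = {r for r, d, w in jobs}
    deadlines = {d for r, d, w in jobs}
    for r in releases:
        for d in deadlines:
            if r < d:
                # total work that must be done entirely inside [r, d]
                demand = sum(w for r2, d2, w in jobs
                             if r2 >= r and d2 <= d)
                if demand > d - r:
                    return False
    return True
```

Only release/deadline pairs drawn from the job set need checking,
since demand only changes at those points.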

Paul Dietz (dietz@rutgers)

------------------------------

Date: Fri, 16 Nov 84 15:30 CST
From: Boebert@HI-MULTICS.ARPA
Subject: Old high-level malgorithm


A (very) early IBM FORTRAN compiler contained the following jewel of an
error message:

"COLUMN cc OF CARD nnnn CONTAINS A 12-4 PUNCH MINUS SIGN INSTEAD OF AN
11 PUNCH MINUS SIGN.  CORRECT AND RESUBMIT."

This was a fatal error.

(For the youngsters, "12-4 punch" and "11 punch" refer to the patterns
of holes in a card column; I believe the 12-4 was officially a "dash".
Also, FORTRAN only spoke capital letters; this was an Eisenhower-era
compiler, and shouted at you in proper authoritarian style.)


[Speaking of user-interface styles:
Commodore Grace Hopper tells of the time a Navy (or perhaps just
Pentagon) programming team realized that a computer could "speak German"
if you just replaced JUMP with SPRUNGE, etc.  (Even JUMP was a novelty
at this time: it may have been the earliest COBOL compiler prototype.)
They set up a demo and passed around a memo saying "Come see our computer
compile this German program."  The brass were not amused at the idea
of an American military computer being trained to speak German, and the
team had to distribute another memo saying that the first was just a
bad joke -- no computer could possibly understand German!  -- KIL]

------------------------------

Date: Thu 15 Nov 84 19:31:17-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Compiling logic to functions

To add to my previous message on the topic, the fact that the effect of
logical variables cannot be achieved in functional languages is not a
linguistic limitation but an operational one.  Specifically, all logic
predicates are boolean-valued functions.  So, all Horn clauses can be
directly translated into function equations.

        A :- B1, ..., Bn.       =>      A = and(B1,...,Bn)
        A.                      =>      A = true

However, in traditional functional languages the translated logic programs
can only be used for rewriting.  They cannot be used to solve goals with
variables in them.
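On a concrete (invented) example - reachability over a fixed edge
relation - the translation looks like this in Python; note that the
existential variable in the second clause body is handled here by
enumerating a finite domain, which is exactly where the
rewriting-only limitation shows:

```python
# Reddy's translation scheme on the Horn clauses
#     path(X,Y) :- edge(X,Y).
#     path(X,Z) :- edge(X,Y), path(Y,Z).
# Each predicate becomes a boolean-valued function; bodies become
# "and", alternative clauses become "or".  This supports *rewriting*
# (evaluating ground goals) but cannot solve a goal such as
# path('a', Z) with Z unbound.

EDGES = {('a', 'b'), ('b', 'c')}   # invented, acyclic (so recursion halts)
NODES = {'a', 'b', 'c'}

def edge(x, y):
    return (x, y) in EDGES

def path(x, z):
    # A :- B1,...,Bn   =>   A = and(B1,...,Bn), clauses joined by "or".
    # The existential Y is cheated into existence by enumerating NODES.
    return edge(x, z) or any(edge(x, y) and path(y, z) for y in NODES)
```

Ground queries evaluate directly; a query with an unbound argument has
no counterpart on the functional side without narrowing.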

If "narrowing" rather than "rewriting" is used as the operational semantics
of functional programs, they too can be used to solve goals and the effect
of logical variables is achieved.  For more details, see

        Hullot, Canonical forms and unification, Conf. Automated Deduction,
        1980.

        Lindstrom, Functional programming and the logical variable, to
        appear, POPL 85.

        Reddy, On the relationship between logic and functional languages,
        to appear in, Degroot, Lindstrom, Functional and Logic programming,
        Prentice-Hall, 85.

Uday Reddy

------------------------------

Date: Thu, 15 Nov 84 11:01:51 PST
From: April Gillam <gillam@AERO2>
Subject: In Praise of Natural Languages

Rick Briggs raises some very interesting questions about what is a natural
language, so I thought I'd air my views.  Natural language, in its broadest
sense, should include any communication between man and/or animal for which
there is an underlying common belief system. I'd even go so far as to
include non-verbal communications. I'm not a linguist, so this is just how
I view the term.  When dealing with machine translation, a working
definition restricting it to verbal or written communications of course
makes more sense.

It would be interesting if at some time we could interpret body language
well enough to have a computer analyze what a person says verbally and
bodily. (When pattern recognition has matured!) There are certainly some
people who have the sensitivity and receptiveness to do the interpretation.

Reading of some of Dr. Kubler-Ross's work, it is an amazing
learning experience to see the level of interpretation which she does with
dying patients, many of whom cannot express directly their knowledge of
their imminent death, however they still have a strong desire to
communicate this to someone, using an analogy or some indirect manner. She
writes of a terminally ill man who could not get out of bed without the use
of his cane, who one day said to take the cane away; shortly afterward he
died. This man was letting her know that the time had come. But few, if
any, of us pick up on the cues. Do we really expect a computer to do this? It
also points up how vital context is to understanding.

It doesn't seem plausible to me that any language can express ALL "aspects
of the natural world". In Indian (from India) languages there are words for
levels of consciousness (e.g., samadhi), for energy centers of the body
(e.g., chakras), etc.  In English we have sophisticated words pertaining to
weaponry, to real estate, etc. Do you think an aborigine would have a word
or concept for garbage recycling? (Or coke bottle?) What I'm trying to say
is, language is cultural (as my friend Ellen, an anthropologist, succinctly
put it).

I find it hard to believe that Sastric Sanskrit, or any other language, can
contain the concepts of all of humanity's experiences. Have we ourselves
experienced enough of our reality to be able to express it, and does the
person we talk to have a common enough set of experiences to interpret what
we say? There are enough misunderstandings when we both speak the same
language, that I doubt another language will render a semantically exact
translation.  How can the color scheme be described to the land of the
blind? There is also a flavor to words. For example, cabron in Spanish, or the
phrase "curses, foiled again" to those of us who've seen the Perils of
Pauline or comic strips.

I don't see it as a virtue, to be able to express oneself unambiguously.
Part of the power and beauty of language is the ability to make
multi-leveled statements, double entendres, analogies, etc.

It's interesting what Bill Frawley says about a change which complicates a
language being compensated for by a simplification elsewhere. On some level
that is aesthetically pleasing, but I have no feel for whether that would
be the case.

In the proceedings of this year's AAAI conf. there was an interesting paper
in which a micro-environment (a context) of words, likely references and
multiple meanings for a particular topic was set up. If the topic was
Italian food, there'd be some notions of restaurants and pizza and such.
Then if the statement "Hold the anchovies" was encountered, it would be
known that it means "Do not put anchovies on the pizza", as opposed to
"Grasp the anchovies in your hand."  I don't have the reference handy, but
it looked like a good idea, as well as a lot of work.
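
The micro-environment idea can be sketched as a toy lookup. This is only
an illustration of the flavor of the approach (the table, names, and
function below are invented here; the AAAI paper's actual mechanism was
far richer than a dictionary):

```python
# Toy micro-environment: the active topic supplies the likely sense of an
# ambiguous word.  The table and all names are invented for illustration.
SENSES = {
    'hold': {
        'italian-food': 'omit it from the order',
        'default':      'grasp it in your hand',
    },
}

def interpret(verb, topic):
    """Return the reading of `verb` licensed by the current topic context."""
    senses = SENSES.get(verb, {})
    return senses.get(topic, senses.get('default', verb))
```

With the Italian-food context active, "Hold the anchovies" resolves to the
omission sense; with no special context, the literal grasping sense wins.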
                        - April Gillam

------------------------------

Date: Tue, 13 Nov 84 14:40:54 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Conceptual Change in Childhood

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 20, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Susan  Carey;  MIT  Psychology   Department;
                Center  for Advanced Study in the Behavioral
                Sciences

TITLE:          ``Conceptual Change in Childhood''

ABSTRACT:       In the tradition of recent Cognitive Studies
                tutorials,  this  paper is a tutorial on the
                proper description of cognitive development.
                At  issue  is  the  status of the claim that
                young children think differently from  older
                children  and  adults.   This claim is often
                contrasted  with  the  claim  that  children
                differ  from  adults merely in knowing less.
                I review the kinds of phenomena that  parti-
                cipants  in  the  debate take as relevant to
                deciding the issue.  Finally, I argue that a
                third  position,  in which the phenomenon of
                conceptual change is taken seriously, avoids
                the pitfalls of the original Piagetian posi-
                tion while allowing for its successes.

                I exemplify the third position by  sketching
                a recently completed case study of the emer-
                gence of biology as an independent domain of
                intuitive  theorizing in the first decade of
                life.  I will conclude by raising the  ques-
                tion  of  the  relation  between  conceptual
                change in childhood and conceptual change in
                the history of science.

------------------------------

Date: Thu, 15 Nov 84 18:41:04 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Relational Interface, Process Representation

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

       ROSI: A UNIX Interface for the Discriminating User
                               by
                            Mark Roth
                     Srinivasan Sundararajan


                      noon  Friday Nov. 16
                            PAI 3.38

ROSI ( Relational Operating System Interface ) strives to provide
the  UNIX user an environment based on the relational data model.
Usually,  relational database theory deals only with relations in
1NF.   In  this  talk,  this  assumption is  relaxed  by allowing
sets-of-values to exist anywhere an atomic  value  could  before.
These  relations will be unnormalized or in non-first-normal-form
(non-1NF).  The need for non-1NF relations, a relational calculus
and  algebra  dealing  with  non-1NF relations, and some extended
algebra operators will be discussed.

The approach used in the design of ROSI was to model elements  of
the operating system environment as relations and to model system
commands as statements in a relational language. In adapting  the
relational data model to an operating system environment, we have
extended the model and tried to improve existing relational
languages.  The  extensions to the relational model  are designed
to allow a more natural representation of  elements  of  the  en-
vironment. The language extensions exploit the universal relation
model and utilize the graphical capabilities of  modern  worksta-
tions.

The goal of the project is to produce a user and  programmer  in-
terface to the operating system that :

        * is easier to use
        * is easier to learn
        * allows greater portability

as compared with existing operating system interfaces.

------------------------------

Date: 13 Nov 84 08:37:21 EST
From: Dennis Perzanowski <dennisp@NRL-AIC.ARPA>
Subject: Seminars - Partial Winter Schedule at NCARAI


           U.S. Navy Center for Applied Research
                 in Artificial Intelligence
           Naval Research Laboratory - Code 7510
                 Washington, DC  20375-5000

                   WINTER SEMINAR SERIES


Monday, 10:00 a.m.
3 December 1984
                Dr. Poohsan Tamura
                Westinghouse Research & Development Center
                Pittsburgh, PA
                 "Optical High Speed 3-D Digital Data Acquisition"

Monday, 10:00 a.m.
17 December 1984
                Dr. Terrence Sejnowski
                Department of Biophysics
                Johns Hopkins University
                Baltimore, MD
                 "The BOLTZMANN Multiprocessor"

Monday, 10:00 a.m.
14 January 1985
                Dr. Lance Miller
                IBM Thomas J. Watson Research Center
                Yorktown Heights, NY
                 "Bringing Intelligence into Word Processing:
                  The IBM EPISTLE System"

Monday, 10:00 a.m.
28 January 1985
                Dr. Larry Reeker
                Visiting Scientist at NCARAI
                from Tulane University, New Orleans, LA
                 "Programming for Artificial Intelligence:
                  LISP, Ada, PROLOG,   ... or Something Else?"


Meetings are held at 10:00 a.m. in the  Conference  Room  of
the   Navy   Center   for  Applied  Research  in  Artificial
Intelligence (Bldg. 256) located on Bolling Air Force  Base,
off  I-295, in the South East quadrant of Washington, DC.  A
map can be mailed for your convenience.
Coffee will be available starting at 9:45 a.m. for a nominal
fee.  Please do not arrive before this time.

IF YOU ARE INTERESTED IN ATTENDING A SEMINAR, PLEASE CONTACT
US  BEFORE NOON ON THE FRIDAY PRIOR TO THE SEMINAR SO THAT A
VISITOR'S PASS WILL BE AVAILABLE FOR YOU ON THE DAY  OF  THE
SEMINAR.   NON-U.S.  CITIZENS  MUST  CONTACT US AT LEAST TWO
WEEKS PRIOR TO A SCHEDULED SEMINAR.  If you  would  like  to
speak,  be  added  to  our  mailing list, or would like more
information,   contact   Dennis    Perzanowski.     ARPANET:
DENNISP@NRL-AIC or (202) 767-2686.

------------------------------

Date: Fri 9 Nov 84 17:21:21-PST
From: Jon Barwise <BARWISE@SU-CSLI.ARPA>
Subject: Course & Conference - Logic, language and computation meeting

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


           LOGIC, LANGUAGE AND COMPUTATION MEETINGS

The Association for Symbolic Logic (ASL) and the Center for the Study
of Language and Information (CSLI) are planning a summer school and a
meeting from July 8-20, 1985, at Stanford University.  The first week
(July 8-13) will consist of the CSLI Summer School, during which
courses on the following topics will be offered:

        Situation Semantics               Prof. Jon Barwise
        PROLOG                            Prof. Maarten van Emden
        Denotational Semantics            Prof. Gordon Plotkin
        Types and ML                      Dr. David MacQueen
        Complexity Theory                 Prof. Wolfgang Maass
        Abstract Data Types               Dr. Jose Meseguer
        The Theory of Algorithms          Prof. Yiannis Moschovakis
        Generalized Quantifiers           Dr. Lawrence Moss
        LISP                              Dr. Brian Smith
        Foundations of Intensional Logic  Prof. Richmond Thomason

(Enrollment in some courses using computers is limited.)

The second week (July 15-20) will consist of an ASL Meeting with
invited addresses, symposia, and sessions for contributed papers.  Of
the invited speakers, the following have already accepted:

        Prof. Peter Aczel                 Prof. David Kaplan
        Prof. Robert Constable            Prof. Kenneth Kunen
        Prof. Maarten van Emden           Prof. Per Martin-Lof
        Prof. Yuri Gurevich               Prof. John Reynolds (tentative)
        Prof. Anil Gupta (tentative)      Dr. Larry Wos
        Prof. Hans Kamp

Symposia:

Types in the Study of Computer and Natural Languages:

        Prof. R. Chierchia                Dr. David MacQueen
        Prof. Solomon Feferman            Prof. Barbara Partee

The Role of Logic in AI:

        Dr. David Israel                  Dr. Stanley Rosenschein
        Prof. John McCarthy


Possible Worlds:

        Prof. John Perry                  Prof. Robert Stalnaker


For further information or registration forms, write to Ingrid
Deiwiks, CSLI, Ventura Hall, Stanford, CA 94305, or call (415)
497-3084.  Room and board in a residence hall on campus are available,
and those interested should indicate their preference for single or
shared room, as well as the dates of their stay.  Since space is
limited, arrangements should be made early.  Some Graduate Student
Fellowships to cover the cost of accommodation in the residence hall are
available.  Abstracts of contributed papers should be no longer than
300 words and submitted no later than April 1, 1985.  The program
committee consists of Jon Barwise, Solomon Feferman, David Israel and
William Marsh.

------------------------------

End of AIList Digest
********************

From:	COMSAT         22-NOV-1984 05:57  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000411; 21 Nov 84 15:05 EST
Date: Wed 21 Nov 1984 11:27-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #158
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 22 Nov 84 05:53 EST


AIList Digest           Wednesday, 21 Nov 1984    Volume 2 : Issue 158

Today's Topics:
  LISP - Public-Domain LISP & Lisp Performance Tools,
  Expert Systems - Paradocs,
  Algorithms & Theorem Proving - Karmarkar's Linear Programming Algorithm,
  Seminars - Solving Problems in Equational Theories &
    The Grand Tour (in Pattern Recognition)
----------------------------------------------------------------------

Date: 19-Nov-84 10:11:58-PST
From: mkm@FORD-WDL1.ARPA
Subject: Public-Domain LISP?

Is there a public domain copy of a LISP interpreter running around out there?
If so, I would like to know where, how to get it, etc.

Thanks,

Mike McNair
Ford Aerospace

------------------------------

Date: Monday, 19 Nov 1984 09:39:20-PST
From: cashman%what.DEC@decwrl.ARPA
Subject: Lisp performance tools

        I am interested in pointers to any tools which have been developed for
measuring the performance of Lisp application programs (not measuring the
performance of Lisp systems themselves).

Paul Cashman (Cashman%what.DEC@DECWRL)

------------------------------

Date: Mon 19 Nov 84 18:25:33-PST
From: Joe Karnicky <KARNICKY@SU-SCORE.ARPA>
Subject: Paradocs Expert System

    Has anyone out there heard of/seen/used a software system called Paradocs?
It is marketed by a company that has undergone several name changes and is
currently known as Cogensys (out of Farmington Hills, Mich.).  The system
is described as a "judgement processing system" and is represented as being
able to combine inputs from several domain experts into a judgement base
which is then able to diagnose problems in the domain.  The name of the fellow
who created the system is Buzz Berk (spelling uncertain).
     I'd very much appreciate ***any*** information and/or opinions about the
value or performance of this system.
                                                 Sincerely,
                                                 Joe Karnicky
                                                 <KARNICKY@SCORE>

------------------------------

Date: 19 Nov 1984 1421-EST
From: Venkat Venkatasubramanian <VENKAT@CMU-CS-C.ARPA>
Subject: Karmarkar

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

There is a front page article in today's NY Times on Karmarkar and his
linear programming algorithm.

------------------------------

Date: 19 Nov 84  1204 PST
From: Martin Frost <ME@SU-AI.ARPA>
Subject: linear programming "breakthrough"

         [Excerpted from the Stanford bboard by Laws@SRI-AI.]

18 Nov 84
By JAMES GLEICK
c.1984 N.Y. Times News Service
    NEW YORK - A 28-year-old mathematician at AT&T Bell Laboratories has
made a startling theoretical breakthrough in the solving of [linear
programming] systems of equations. [...]
    The Bell Labs mathematician, Dr. Narendra Karmarkar, has devised a
radically new procedure. [...]
    The new Karmarkar approach exists so far only in rough computer
code. Its full value will be impossible to judge until it has been
tested experimentally on a wide range of problems. But those who have
tested the early versions at Bell Labs say that it already appears
many times faster than the simplex method, and the advantage grows
rapidly with more complicated problems. [...]
    Karmarkar, the son and nephew of mathematicians, was born in
Gwalior, India, and grew up in Poona, near Bombay. He joined Bell
Labs last year after attending the California Institute of Technology
at Pasadena and getting his doctorate from the University of
California at Berkeley.
    News of his discovery has been spreading through the computer
science community in preprinted copies of Karmarkar's paper and in
informal seminars. His paper is to be formally published in the
journal Combinatorica next month and will be a central topic at the
yearly meeting of the Operations Research Society of America this
week in Dallas. [...]
    Mathematicians visualize such problems as complex geometric solids
with millions or billions of facets. Each corner of each facet
represents a possible solution. The task of the algorithm is to find
the best solution, say the corner at the top, without having to
calculate the location of every one.
    The simplex method, devised by the mathematician George B. Dantzig
in 1947, in effect runs along the edges of the solid, checking one
corner after another but always heading in the direction of the best
solution. In practice it usually manages to get there efficiently
enough for most problems, as long as the number of variables is no
more than 15,000 or 20,000.
    The Karmarkar algorithm, by contrast, takes a giant short cut,
plunging through the middle of the solid. After selecting an
arbitrary interior point, the algorithm warps the entire structure -
in essence, reshaping the problem - in a way designed to bring the
chosen point exactly into the center. The next step is to find a new
point in the direction of the best solution and to warp the structure
again, bringing the new point into the center.
    ''Unless you do this warping,'' Karmarkar said, ''the direction that
appears to give the best improvement each time is an illusion.''
    The repeated transformations, based on a technique known as
projective geometry, lead rapidly to the best answer. Computer
scientists who have examined the method describe it as ingenious.
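
The "warp, step, repeat" loop described above can be sketched as follows.
Note the hedges: this is a minimal affine-scaling iteration, a later
simplification of the idea rather than Karmarkar's exact projective
method, it is specialized to a single equality constraint so the inner
solve is a scalar division, and the toy problem instance is invented here
for illustration:

```python
def affine_scaling(c, a, x, iters=30, damping=0.9):
    """Maximize c.x subject to a.x = const, x > 0, from a strictly feasible x.

    Each pass rescales the problem by the current point (D = diag(x)) and
    steps toward the objective while staying on the constraint.  Single
    equality constraint only, so the least-squares solve is a division.
    """
    n = len(c)
    for _ in range(iters):
        d2 = [xi * xi for xi in x]                     # D^2: the rescaling "warp"
        y = (sum(a[i] * d2[i] * c[i] for i in range(n))
             / sum(a[i] * d2[i] * a[i] for i in range(n)))
        r = [c[i] - a[i] * y for i in range(n)]        # reduced costs after warping
        d = [d2[i] * r[i] for i in range(n)]           # ascent direction; a.d == 0
        ratios = [-x[i] / d[i] for i in range(n) if d[i] < 0]
        if not ratios:
            break                                      # optimal (or unbounded)
        alpha = damping * min(ratios)                  # stop short of the boundary
        x = [x[i] + alpha * d[i] for i in range(n)]
    return x

# Invented toy instance: maximize x1 + 2*x2 with x1 + x2 = 1, x > 0.
# The iterates plunge through the interior toward the vertex (0, 1).
```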
    ''It is very new and surprising - it has more than one theoretical
novelty,'' said Laszlo Babai, visiting professor of computer science
at the University of Chicago. ''The real surprise is that the two
things came together, the theoretical breakthrough and the practical
applicability.''
    Dantzig, now professor of operations research and computer science
at Stanford University, cautioned that it was too early to assess
fully the usefulness of the Karmarkar method. ''We have to separate
theory from practice,'' he said. ''It is a remarkable theoretical
result and it has a lot of promise in it, but the results are not all
in yet.''
    Many mathematicians interested in the theory of computer science
have long been dissatisfied with the simplex method, despite its
enormous practical success. This is because the program performs
poorly on problems designed specifically to test its weaknesses,
so-called worst possible case problems. [...]
    But fortunately for computer science, the worst-case problems almost
never arise in the real world. ''You had to work hard to produce
these examples,'' Graham said. And the simplex method performs far
better on average than its worst-case limit would suggest.
    Five years ago, a group of Soviet mathematicians devised a new
algorithm, the ellipsoid method, that handled those worst-case
problems far better than the simplex method. It was a theoretical
advance - but the ellipsoid had little practical significance because
its average performance was not much better than its worst-case
performance.
    The Soviet discovery, however, stimulated a burst of activity on the
problem and led to Karmarkar's breakthrough. The new algorithm does
far better in the worst case, and the improvement appears to apply as
well to the kinds of problems of most interest to industry.
    ''For a long time the mind-set that the simplex method was the way
to do things may have blocked other methods from being tested,'' said
Dr. Richard Karp, professor of computer science at the University of
California at Berkeley. ''It comes as a big surprise that what might
have been just a curiosity, like the ellipsoid, turns out to have
such practical importance.''

------------------------------

Date: Tue 20 Nov 84 14:15:01-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Re: new linear programming algorithm

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

     This may have implications for automatic theorem proving.  Since
Dick Karp is working on it, they will probably be explored.  We may get
some really high performance verification techniques someday.

------------------------------

Date: Tue, 20 Nov 84 16:17:41 pst
From: Vaughan Pratt <pratt@Navajo>
Subject: new linear programming algorithm and automatic theorem
         proving.

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

As applied to program verification, the typical linear programming problems
that arise are almost all of the form, find any integer solution to a set
of inequations of the form x+c<y where x and y are variables and c is an
integer constant.  (Different inequations are allowed different choices of
the two variables, i.e. there are more than two variables in the system as
a whole.)  There is a simple algorithm for solving these (in either integers
or reals) having worst case O(n**3) (essentially Floyd's algorithm for all
shortest paths).  The sets tend to be sparse, which can be taken advantage
of to get better than n**3 performance.  The implementation is simple and the
constant factor is small.

The reason this form crops up is that it is an alternative representation
for the inequational theory of successor and predecessor, which in turn
crops up since most arithmetic occurring in programs consists of subscript
incrementing and decrementing and checking against bounds.  Programs whose
arithmetic goes beyond this theory also tend to go beyond the theory of + and
- by having * and / as well, i.e. the fraction of programs covered by
linear programming but not by the above trivial fragment of it is not that
large.
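
The trivial fragment described above can be sketched directly: each
constraint x + c < y (over the integers, x - y <= -(c + 1)) becomes a
weighted edge, and Floyd's all-pairs shortest-path algorithm either finds
a contradictory cycle or yields a solution. A minimal sketch, with the
function name and encoding invented here:

```python
def solve_difference_constraints(n, constraints):
    """Find integers x[0], ..., x[n-1] satisfying every  x[i] + c < x[j],
    or return None if the constraints are contradictory.

    `constraints` is a list of (i, j, c) triples.  Over the integers,
    x[i] + c < x[j]  is  x[i] - x[j] <= -(c + 1), which becomes an edge
    j -> i of weight -(c + 1).  A negative cycle means infeasibility;
    otherwise shortest-path distances from a virtual source are a
    solution.  Floyd-Warshall, hence worst case O(n**3) as in the message.
    """
    INF = float('inf')
    s = n                              # virtual source: 0-weight edge to every variable
    dist = [[INF] * (n + 1) for _ in range(n + 1)]
    for v in range(n + 1):
        dist[v][v] = 0
    for v in range(n):
        dist[s][v] = 0
    for i, j, c in constraints:        # edge j -> i encodes x[i] - x[j] <= -(c + 1)
        w = -(c + 1)
        if w < dist[j][i]:
            dist[j][i] = w
    for k in range(n + 1):
        for a in range(n + 1):
            for b in range(n + 1):
                if dist[a][k] + dist[k][b] < dist[a][b]:
                    dist[a][b] = dist[a][k] + dist[k][b]
    if any(dist[v][v] < 0 for v in range(n + 1)):
        return None                    # a cycle of constraints is unsatisfiable
    return [dist[s][v] for v in range(n)]
```

The sparsity Pratt mentions would favor Bellman-Ford over Floyd-Warshall
in practice, but the dense version is the shortest to state.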

-v

------------------------------

Date: Tue, 20 Nov 84 17:26:48 pst
From: Moshe Vardi <vardi@diablo>
Subject: New Algorithm for Linear Programming

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A preliminary report appeared in the proceedings of the last ACM Symp. on
Theory of Computing. It is a provably polynomial time algorithm, which unlike
Khachian's algorithm is a practical one. There are doubts among the experts
whether the algorithm is as revolutionary as the PR people say it is.

Moshe

------------------------------

Date: Sat 17 Nov 84 17:58:32-PST
From: Ole Lehrmann Madsen <MADSEN@SU-CSLI.ARPA>
Subject: Seminar - Solving Problems in Equational Theories

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


CENTER FOR THE STUDY OF LANGUAGE AND INFORMATION
                 AREA C MEETING

Topic:     REVE: a system for solving problems in equational theories,
                 based on term rewriting techniques.
Speaker:   Jean-Pierre Jouannaud, Professor at University of NANCY, FRANCE,
           on leave at SRI-International and CSLI.
Time:      1:30-3:30
Date:      Wednesday, Nov. 21
Place:     Ventura Seminar Room

Equational Logic has long been used by mathematicians and has recently been
adopted by computer scientists.  Specifications in OBJ2, an "object-oriented"
language designed and implemented at SRI-International, use equations to
express relations between objects.  To express computations in this logic,
equations are used one way, e.g. as rules.  To make proofs with rules in this
logic requires the so-called "confluence" property, which expresses that the
result of a computation is unique no matter in which order the rules are applied.
Proofs and computations are therefore integrated in a very simple framework.
When a set of rules does not have the confluence property, it is augmented by
new rules, using the so-called Knuth and Bendix completion algorithm, until the
property becomes satisfied.  This algorithm requires the set of rules to have
the termination property, i.e., an expression cannot be rewritten forever.  It
has been proved that this algorithm makes it possible to perform inductive
proofs without explicitly invoking an induction principle, and to solve
equations (unification) in the corresponding equational theory as well.
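
Using equations one way, as rules, amounts to rewriting a term until no
rule applies. A toy normalizer in that spirit (the term representation and
the Peano-addition rules below are my own illustration; REVE itself is far
more sophisticated, handling completion, termination proofs, and more):

```python
# Terms are nested tuples ('op', arg, ...) or strings; strings beginning
# with '?' are rule variables.

def match(pattern, term, subst):
    """Extend `subst` so that pattern instantiated by it equals term; None on failure."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term) and pattern[0] == term[0]):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    """Replace rule variables in `term` using `subst`."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(substitute(arg, subst) for arg in term[1:])

def rewrite_once(term, rules):
    """Apply the first rule that matches, outermost first; None if none applies."""
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return substitute(rhs, s)
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            new = rewrite_once(term[i], rules)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(term, rules):
    """Rewrite to normal form; terminates if the rules have the termination property."""
    while True:
        new = rewrite_once(term, rules)
        if new is None:
            return term
        term = new

# Peano addition as rules:  0 + x -> x,  s(x) + y -> s(x + y)
PEANO = [(('plus', '0', '?x'), '?x'),
         (('plus', ('s', '?x'), '?y'), ('s', ('plus', '?x', '?y')))]
```

Confluence of the rule set is exactly what guarantees that `normalize`
returns the same answer regardless of which redex is picked first.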

REVE1, developed by Pierre Lescanne during a leave at MIT, implements all
these concepts, including automated proofs for termination.

REVE2, developed by Randy Forgaard at MIT, provided REVE with a very
sophisticated user interface, including an undo command.

REVE3, developed by Claude and Helene Kirchner in NANCY, includes new powerful
features, mainly mixed sets of rules and equations for handling theories
including permutative axioms.

All versions are developed in CLU and run on the VAX under Berkeley UNIX.

------------------------------

Date: Tue 20 Nov 84 22:59:35-PST
From: Art Owen <OWEN@SU-SCORE.ARPA>
Subject: Seminar - The Grand Tour

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Laboratory for Computational Statistics Seminar

Time:   3:15pm Wednesday November 21st
Place:  Sequoia Hall 114
Cookies:  at 3:00  in the Lounge

Title:          Hopes for the Grand Tour

by                      Daniel Asimov
                        Dept. of Computer Science
                        U.C. Berkeley

The grand tour is a technique for examining two dimensional projections
of higher dimensional objects.  The tour essentially picks a trajectory
through the space of possible projections, while a data analyst watches
the corresponding 'movie' on a graphics terminal.  The objective
is to pass near most of the possible projections as quickly as
possible.  It is a tool for finding projections that are
informative.
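
One ingredient of such a tour, drawing a 2-D projection plane of a
high-dimensional space, can be sketched as below. This is a naive random
frame rather than Asimov's actual space-filling trajectory, and the
function names are invented here:

```python
import math
import random

def random_plane(d):
    """Draw two orthonormal d-vectors: one randomly oriented projection plane."""
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm_u for ui in u]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    v = [vi - dot * ui for ui, vi in zip(u, v)]   # Gram-Schmidt: make v orthogonal to u
    norm_v = math.sqrt(sum(vi * vi for vi in v))  # (redraw in the unlikely degenerate case)
    v = [vi / norm_v for vi in v]
    return u, v

def project(point, u, v):
    """The 2-D 'screen' coordinates of a d-dimensional data point."""
    return (sum(p * ui for p, ui in zip(point, u)),
            sum(p * vi for p, vi in zip(point, v)))
```

A tour would interpolate smoothly between such planes, redrawing the
projected scatterplot at each frame of the 'movie'.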

The talk will discuss the current state of grand tour
research, identifying desirable properties
that a tour might have, indicating which such properties have been
achieved, and pointing to directions for future research.

That's 3:00, 21 Nov 84, Sequoia Hall 114

------------------------------

End of AIList Digest
********************

From:	CSVPI          22-NOV-1984 20:50  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000261; 22 Nov 84 1:31 EST
Date: Wed 21 Nov 1984 21:44-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #159
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 22 Nov 84 20:47 EST


AIList Digest           Thursday, 22 Nov 1984     Volume 2 : Issue 159

Today's Topics:
  Algorithms - Interlisp-D "malgorithm?",
  Programming Style - IBM Compiler Message,
  Machine Translation - Simplistic Beginnings,
  Books - Hackers: Heroes of the Computer Revolution,
  Research Styles - B.F. Skinner,
  Reasoning about Structure and Function - SIGART Special Issue,
  Conference - Hardware Description Languages
----------------------------------------------------------------------

Date: Sun 18 Nov 84 13:55:58-PST
From: Jay Ferguson <FERGUSON@SUMEX-AIM.ARPA>
Subject: Interlisp-D "malgorithm?"


Another point on this classic example of a true malgorithm is that
there is a lack of understanding of implementation detail.  The
CLISP feature of Interlisp is translated into a MAP function or a
PROG depending upon the structure.  Each time you call a FOR statement
interpetively the translation occurs.  When you compile the FOR
statement you will see large gains in efficiency.

I ran several tests of LDIFF, the initial FOR, and JonL's FOR with
the following results:

                   interpreted          compiled

LDIFF              .00125 secs          .00125 secs

Todd - FOR         .02125 secs          .00444 secs

JonL - FOR         .02114 secs          .00115 secs


These were run under INTERLISP-10 on a DEC-2060 with a 26 element list,
taking the first 9 elements.  Each test was run 100 times.  LDIFF was
not actually compiled because it was a normal function call.

jay

------------------------------

Date: Sun, 18 Nov 84 15:50:57 PST
From: Steve Crocker <crocker@AEROSPACE>
Subject: IBM compiler message rebuttal

At the risk of being misunderstood as an apologist for IBM's ultra prosaic
programming systems, I feel Earl Boebert's Nov 16 item on IBM's Fortran
compiler error message, viz. "COLUMN cc OF CARD nnnn CONTAINS A 12-4 PUNCH
MINUS SIGN INSTEAD OF AN 11 PUNCH MINUS SIGN.  CORRECT AND RESUBMIT.", is
taken out of context and misrepresents the situation.

First, a slight diversion.  I believe a 12-4 code is a D, and Earl probably
meant the 11-8-4 code, although my memory is a bit rusty and I surely have
not saved my old IBM BCD crib sheets.

The real issue is there had been two legal codes for minus, 11-8-4 and 11.
A decision had been made to phase out the 11-8-4 so it could be reassigned
to another symbol, and it eventually became the apostrophe, I believe.

Conversion proceeded in phases.  At the end of the conversion, the 11-8-4
code would always be treated as an apostrophe and receive no more special
attention if it were detected in an inappropriate position than any other
character would.  For example, "A = B'C" and "A = B$C" would get the same
treatment, and inhibit completion of the compilation.  (Admittedly, other
strategies for dealing with errors are possible, e.g. the DWIM system in
Interlisp, but that would mean a COMPLETE overhaul of the Fortran compiler,
and Fortran wasn't designed for either heuristic error correction or
interactive repair.)

To get to the point where 11-8-4 was freed up from its interpretation as a
minus sign, users were informed of the change and "encouraged" to amend
their programs.  The messages during the initial period were just warning
messages.  Later they were hard errors, as Earl related.  One might object
to this, but it's not simple to see what else to do.  If the 11-8-4 were to
take on a new meaning and still be accepted as a minus sign in all contexts
that minus signs are legal, both ambiguity and outright misunderstandings
would be propagated.  Despite the apparent inflexibility of the compiler, I
doubt this kind of error message caused any large disruption in programmer
productivity.

The problem was not unique to IBM's character set, of course.  The meaning
of the ASCII code for "_" was changed a few years ago.  It used to mean a
left arrow and some languages used it for assignment; now it means an
underscore and is used within identifiers.  This conversion was not without
some pain...

More seriously, the problem of catching all the dependencies of some change
to an established interface remains a challenge.  This may be a more fruitful
topic for discussion than malgorithms.

------------------------------

Date: Mon, 19 Nov 84 13:54 CST
From: Boebert@HI-MULTICS.ARPA
Subject: re: IBM compiler message rebuttal

Just when you think of a good cheap shot, somebody goes and makes it
sound like it was a reasonable thing to do...in any event, we were
undergrads and very much on the Algol side of the Algol/FORTRAN dispute,
and we thought the message a wonderful example of IBM mindlessness.
Maybe they should have appended THIS FATAL ERROR BROUGHT TO YOU IN THE
INTERESTS OF THE GREATER GOOD.

------------------------------

Date: Tue 20 Nov 84 22:47:28-EST
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Re: computers speaking German

I've seen a paper from the very early sixties that described a French
preprocessor for FORTRAN -- it converted ALLER to GOTO, FAIRE to DO,
etcetera....  The paper claimed this was a first step toward machine
translation (of natural language).

------------------------------

Date: Mon 19 Nov 84 05:55:27-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Re: book"Hackers: heroes of the computer revolution"

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

   I picked up Steven Levy's "Hackers" today and have gotten through Part 1:
"True Hackers--Cambridge: the Fifties and Sixties".  All in all quite
enjoyable and well worth the money, though I did have to grit my teeth when
reading about the "TICO" text editor and the "MULTIX" and "TENNIX"
operating systems.  Sigh.  The book comes mostly from over a hundred
personal interviews conducted in 1982-83.  Levy seems to have done a
careful job of documenting the written sources and of compiling an index.
Those who were interviewed will have to be the ones to say how faithfully
their perspective was communicated.  Most of the stories in Part 1 have
become part of standard "hacker folklore" which has been passed from mouth
to mouth and keyboard to keyboard over the last 25 years.  It's nice to
have them all collected in one place now.

   I certainly wouldn't rate Steven Levy's writing in the same class as Tom
Wolfe's, but I must admit that the way the image of Cambridge is
painted as the birthplace of hacking was strikingly reminiscent of how
Wolfe built the image of the high desert in California as the birthplace of
the Right Stuff.  Levy even sprinkles references to "The Right Thing"
throughout the text.  (As we all know, Wolfe came up with his title after
seeing this term in the Jargon file. :-))

   I cheated and temporarily skipped over Part 2 ("Hardware Hackers") and
Part 3 ("Game Hackers") to the Epilogue--"The Last of the True Hackers".
The material covered here (e.g. the birth of Symbolics & LMI) is more
contemporary and thus familiar to many of us.  It is, sadly, pretty much
on the mark.

I too would be interested in hearing other opinions of this book
(especially from any of those interviewed.)

Clive

------------------------------

Date: 19 Nov 84 09:48 PST
From: JonL.pa@XEROX.ARPA
Subject: B.F. Skinner: A Man for All Reasonings

Shrager's conveyance of the quote about Skinner being an "expert systems
researcher" highlights a fundamental split in the AI community.

First, let me say I don't regard expert systems as a panacea -- at worst
they could be viewed as a technological spin-off of 20 years of AI
research.  Contrast this with the view taken by Skinner and his
disciples that S-R theory is a fully adequate model of psychology; his
book Verbal Behavior reads as a desperate attempt to shore up this
claim.

On the other hand, a certain faction of AI is also trying to find a
fully adequate model of human cognitive capabilities (I would place
Minsky as the arch-defender of this "faith" -- the mind-as-meat-computer
camp); possibly *some* AI people would think that a brute force approach
along the lines of expert systems would be an interesting model, but I
don't personally know any such.  Another faction is less concerned with
mimicking the human structures and more concerned with the "artificial"
aspects of intelligence; I tend, now, to think of John McCarthy as the
prototype of this camp (see his article in Psychology Today earlier this
year -- perhaps April? -- and don't be put off by the fact that it
appears in, glaaag, Psychology Today).

The second approach is *not* to be confused with expert systems,
although one could imagine why "expert systems" would receive a more
favorable review from the latter camp than from the former.

I was present at MIT in late 1971 when the "MathLab" group was "read
out" of the AI community (the "MathLab" group then quickly became the
MACSYMA group).  Although MACSYMA was certainly among the first expert
systems with a major impact, it wasn't "AI" by the prevailing standards;
perhaps more like engineering, but not "AI".  What must be emphasized,
however, is that no one, at any time, thought of MACSYMA as even a
partial model of human cognition.


If Skinner were coming of age now, with the same mind set, and were
indeed an expert systems researcher -- don't you think he'd have a more
"ambitious" goal?

-- JonL White --

------------------------------

Date: 18 Nov 1984 17:32-EST
From: milne <milne@wpafb-afita>
Subject: SIGART on Reasoning about Structure and Function


Special Issue of SIGART News on Reasoning about Structure and Function.

  We plan to edit a special issue of SIGART News devoted to representing,
and reasoning about, structure, behavior and function of devices and
systems.   This has recently become a topic of increasing importance
in giving expert systems capabilities for causal reasoning to support
diagnostic and other tasks.  Work in this area has been in the domains
of simple machines, electronic circuits, mechanical systems and medicine.

  Our aim is to cover the spectrum of work that is going on in the U.S.
and other countries in this general area.  We expect that the SIGART News
special issue will be followed by a special issue of some appropriate
journal containing fuller versions of selected papers from the former.

  Submissions are invited from researchers summarizing their approach,
results, problems and plans.   The submissions should be under
five typewritten pages, and should be sent to Prof. Rob Milne at the address
below.  The deadline for submissions is 15 January 1985.

Rob Milne                               B. Chandrasekaran
Department of Electrical Engineering   Department of Computer & Information
AFIT/ENG                                                Science
A.F. Institute of Technology            The Ohio State University
WPAFB, OH 45433                         Columbus, OH 43210
513-255-3576                                    614-422-0923
milne@wpafb-afita

------------------------------

Date: Saturday, 17 November 1984 20:35:17 EST
From: Mario.Barbacci@cmu-cs-spice.arpa
Subject: Conference - Hardware Description Languages


                                CALL FOR PAPERS

                        7TH INTERNATIONAL SYMPOSIUM ON
                    COMPUTER HARDWARE DESCRIPTION LANGUAGES
                            AND THEIR APPLICATIONS
                                    CHDL-85

                              AUGUST 29-31, 1985
                              KEIDANREN BUILDING
                                 TOKYO, JAPAN

Sponsored by the International Federation for Information Processing (IFIP) and
the Information Processing Society of Japan (IPSJ), organized by IFIP TC-10 and
IFIP WG 10.2, in cooperation with IEEE-CS, ACM, GI, and NTG.

The theme of the symposium is:

                    TOOL, METHOD, AND LANGUAGE INTEGRATION

The  Symposium  focuses  on  the design process as a whole. The objective is to
cover the various aspects of (computer-supported) specification,  verification,
modelling,  evaluation, and design of computer systems based on suitable design
languages. Integration can be considered from specification  to  implementation
as well as in terms of language and tool integration at a given level.

Topic areas are:

From Specification to Implementation of Digital Systems:
    methodological aspects            integrating levels of description
    formal verification and           performance and reliability
            correctness                       evaluation
    test generation from CHDL         synthesis
            descriptions

Computer System/Hardware Description Languages:
    formal specification languages    languages and technology
    multiple representation of        language support for verification,
            design objects                    performance, and reliability

Tool Integration:
    design environments               expert systems for system design
    data structures for integration   integration of tools for testing,
           between levels and tools            verification, and simulation

Acceptance and Experience:
    reality in industry               acceptance problems of new methods,
    integration with CAD/CAM                  languages and tools

Five  (5)  copies  of  the  full length manuscript in English, not exceeding 20
double-spaced typewritten pages, should be sent  to  the  Program  Chairman  to
arrive no later than December 15, 1984.

Notification   of   acceptance  is  planned  for  March  15,  1985.  The  final
camera-ready version of accepted papers is due on May 15, 1985.

Because the symposium is held immediately after the VLSI 85 conference  at  the
same location, the Program Committees of both conferences may transfer papers
that better fit the topics of the other conference.

General Chairman:                     Program Chairman:

Professor Tohru Moto-oka              Dr. Cees Jan Koomen
Department of Electrical Engineering  Philips International
University of Tokyo                   Product Development Coordination
Hongo, 7 chome                        VO-1, P.O. Box 218
Bunkyo-ku                             5600 MD Eindhoven,
Tokyo, Japan                          The Netherlands
telephone (212) 2111 ext. 6652        telephone (31) (40) 884962
                                      ArpaNet: Philips@sri-csl

Local Committee Chairman:             IFIP WG 10.2 Chairman:

Dr.Takao Uehara                       Dr. Mario R. Barbacci
Tools and Methodology Section         Department of Computer Science
Software Laboratory                   Carnegie-Mellon University
Fujitsu Laboratories Ltd.             Pittsburgh
1015 Kamikodanaka Nakahara-ku         Pennsylvania 15213
Kawasaki 211, Japan                   USA
telephone (81) (44) 777 1111 X6155    telephone (412) 578-2578
telex 3842 122

Local Committee:

H. Ando  (publicity),  Y. Ikemoto  (local  arrangements), O. Karatsu (finance),
T. Uehara (Chairman)

Program Committee:

M. Barbacci (USA), D. Borrione (France), S. Crocker (USA), J. Darringer  (USA),
S. Dasgupta    (USA),   R. Hartenstein   (FRG),   E. Hoerbst   (FRG),   J. Jess
(Netherlands),  C.J.   Koomen   (Netherlands,   Chairman),   F. Rammig   (FRG),
W. Sherwood   (USA),  T. Sudo  (Japan),  T. Uehara  (Japan),  M. Vernon  (USA),
A. Yamada (Japan)

------------------------------

End of AIList Digest
********************

From:	CSVPI          24-NOV-1984 22:50  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id aa03711; 24 Nov 84 17:49 EST
Date: Sat 24 Nov 1984 13:57-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #160
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sat, 24 Nov 84 22:46 EST


AIList Digest           Saturday, 24 Nov 1984     Volume 2 : Issue 160

Today's Topics:
  Plan Recognition
  Hardware - Uses of Optical Disks,
  Linguistics - Language Simplification & Natural Languages,
  Seminars - Intention in Text Interpretation (Berkeley) &
    Cooperative Distributed Problem Solving (CMU) &
    A Shape Recognition Illusion (CMU)
----------------------------------------------------------------------

Date: 21 Nov 1984 15:55:02-EST
From: kushnier@NADC
Subject: Plan Recognition


                               WANTED

We are interested in any information, papers, reports, or titles of same
dealing with AI PLAN RECOGNITION that can be supplied to the government
at no cost (they made me say that!). We are presently involved in an
R&D effort requiring such information.

                                          Thanks in advance,

                                          Ron Kushnier
                                          Code 5023
                                          NAVAIRDEVCEN
                                          Warminster Pa. 18974

kushnier@nadc.arpa

------------------------------

Date: 19 Nov 84 16:55:09 EST
From: DIETZ@RUTGERS.ARPA
Subject: Are books obsolete?

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

Sony has recently introduced a portable compact optical disk player.
I hear they intend to market it as a microcomputer peripheral for
I'm not sure what its capacity will be, so I'll estimate it at
50 megabytes per side.  That's 25,000 ASCII-coded 8 1/2 x 11 pages, or
1,000 compressed page images, per side.  Disks cost about $10, for a
cost per word orders of magnitude less than books.
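
The arithmetic behind these estimates is easy to verify.  The sketch
below assumes roughly 2,000 bytes per ASCII-coded page and 50,000 bytes
per compressed page image; those per-page figures are my assumptions,
implied but never stated in the message.

```python
# Back-of-the-envelope check of the optical disk capacity estimates.
# Assumed figures (mine, not from the message): about 2,000 bytes of
# ASCII per typed 8 1/2 x 11 page, 50,000 bytes per compressed image.
SIDE_BYTES = 50_000_000          # 50 megabytes per side, as estimated
BYTES_PER_ASCII_PAGE = 2_000
BYTES_PER_IMAGE_PAGE = 50_000

ascii_pages = SIDE_BYTES // BYTES_PER_ASCII_PAGE
image_pages = SIDE_BYTES // BYTES_PER_IMAGE_PAGE

print(ascii_pages)    # 25000 -- the ASCII pages-per-side figure
print(image_pages)    # 1000  -- the compressed page images figure
```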

Here's an excellent opportunity for those concerned with the social
impact of computer technology to demonstrate their wisdom.  What will
the effect be of such inexpensive read-only storage media?  How will
this technology affect the popularity of home computers?  What
features should a home computer have to fully exploit this technology?
How should text be stored on the disks?  What difference would
magneto-optical writable/erasable disks make?  How will this
technology affect

------------------------------

Date: Tue, 20 Nov 84 22:26:16 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Re: Language Simplification,  V2 #157

On Gillam's comments on simplification:

1. In the Southern U.S., vowels are raised: "pen" becomes "pin."
This results in homophony between the words "pen" and "pin."  Thus,
in these dialects, the word "pin" becomes something like "peeun,"
with the vowel raised even more.  The lesson is that an ostensible
simplification complicates the system further by requiring a
differentiation between certain phonological forms.  This is an
instance of supposed regularity causing complication.

------------------------------

Date: Sun, 18 Nov 84 17:45:34 PST
From: "Dr. Michael G. Dyer" <dyer@UCLA-LOCUS.ARPA>
Subject: what language 'is' (?)


re:  what natural language 'is'

While it's fun to make up criteria and then use those criteria to judge
one natural language as 'superior' to another, or decide that a given NL
has 'degenerated' etc, I don't really see this approach as leading
anywhere (except, perhaps, for 'phylogenetic' studies of language
'speciation', just as potsherds are examined in archeology for cultural
contacts...  We could also spend our time deciding which culture is
'better' by various criteria, e.g. more weapons, less TV, etc).

It's also convenient to talk about natural language as if it's something
"on its own".  However, I view this attitude as scientifically
unhealthy, since it leads to an overemphasis on linguistic structure.
Surely the interesting questions about NL concern those cognitive
processes involved in getting from NL to thoughts in memory and back out
again to language.  These processes involve forming models of what the
speaker/listener knows, and applying world knowledge and context.  NL
structure plays only a small part in these overall processes, since the
main ones involve knowledge application, memory interactions, memory
search, inference, etc.

e.g. consider the following story:

     "John wanted to see a movie.  He hopped on his bike
     and went to the drugstore and bought a paper.
     Then he went home and called the theater to get the
     exact time."

Now we could have said this any number of ways, e.g.:

     "John paid for a paper at the drugstore.  He'd gotten
     there on his bike.  Later, at home,  he used the number
     in the paper to call the theater,  since he wanted
     to see a movie and needed to know the exact time."

The reason we can handle such diverse versions -- in which the goals and
actions appear in different order -- is that we can RECONSTRUCT John's
complete plan for enjoying a movie from our general knowledge of what's
involved in selecting and getting to a movie.  It looks something like
this:

     enjoy movie
          need to know what's playing
           --> read newspaper (ie one way to find out)
                  need newspaper
                  --> get newspaper
                        possess newspaper
                           need $  to buy it (ie one way to get it)
                        need to be where it's sold
                           need way to get there
                             --> use bike (ie one way to travel)
          need to know time
            --> call theater (ie one way to find out)
                   need to know phone number
                     --> get # out of newspaper

          need to physically watch it
            need to be there
              --> drive there (ie one way to get there)
            need to know how to get there
               etc

We use our pre-existing knowledge (e.g.  of how people get to a movie of
their choice) to help us understand text about such things.  Once we've
formed a conceptual model of the planning involved (from our knowledge
of constraints and enablement on plans and goals), then we can put the
story 'in the right order' in our minds.
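
The reconstruction step can be caricatured in a few lines of code.
This is purely my own toy illustration, not Dyer's representation: the
enablement hierarchy goes into a table, a depth-first walk yields a
canonical step order, and story events told in any order are sorted
back into plan order.

```python
# Toy goal/plan hierarchy (a hypothetical encoding, not an actual
# UCLA system): each goal maps to the steps that enable it, in order.
PLAN = {
    "enjoy movie": ["know what's playing", "know time", "watch movie"],
    "know what's playing": ["get newspaper"],
    "get newspaper": ["go to drugstore", "buy paper"],
    "know time": ["call theater"],
    "call theater": ["get number from paper"],
}

def plan_order(goal, out=None):
    """Depth-first walk: enablements of a step come before the step."""
    if out is None:
        out = []
    for step in PLAN.get(goal, []):
        plan_order(step, out)
        out.append(step)
    return out

CANON = plan_order("enjoy movie")

def reconstruct(events):
    """Put story events, told in any order, back into plan order."""
    return sorted(events, key=CANON.index)

# Both tellings of the story reconstruct to the same plan:
version1 = ["go to drugstore", "buy paper", "call theater"]
version2 = ["buy paper", "call theater", "go to drugstore"]
assert reconstruct(version1) == reconstruct(version2)
```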

In fact, the notion of goals, plans, and enablements should be universal
among all humans (the closest thing to a 'universal grammar', for people
who insist on talking about things in terms of 'grammars').  Given this
fact, EVERY natural language should allow sparse and somewhat
mixed-order renditions of plan-related stories.  Is this a feature,
then, of one or more NATURAL LANGUAGEs, or is it really a feature of
general INTELLIGENCE -- i.e. planning, inference, etc.?

Clearly the interesting problems here are:  how to represent goal/plan
knowledge, how this knowledge is referred to in a given language, and
how these knowledge sources interact to instantiate a representation of
what the reader knows after reading about John's movie trip.

(Of course, other types of text will involve other kinds of conceptual
constructs -- e.g. editorial text involves reasoning and beliefs).

Wittgenstein expressed the insight -- i.e. that natural languages are
fundamentally different from formal languages -- in terms of his notion
of "language games".  He argued that speakers are like the players of a
game, and to the extent that the players know the rules, they can do
all sorts of communication 'tricks' (since they know another player
can use HIS knowledge of the "game" to extract the most appropriate
meaning from an utterance, gesture, text...).  As a result, Wittgenstein
felt it was quite misguided to argue that formal languages are 'better'
because they're unambiguous.

Now this issue is reappearing in a slightly different guise as a number
of ancient natural(?) languages are offered as 'the answer' to our
representational problems, based on the claim that they are unambiguous.
Two favorites currently seem to be sastric sanskrit and a Bolivian
language called "Aymara".

(Quote from news article in LA Times, Nov.  7, '84 p 12:  "...  wisemen
constructed the language [Aymara] from scratch, by logical, premeditated
design, as early as 4,000 years ago")

I suspect ancient and exotic languages are being chosen since fewer
people know enough about them to dispute any claims made.  Of course
this isn't done on purpose:  it's simply that the better known NLs that
get proposed are more quickly discarded since more people will know, or
can find, counter-examples for each claim.

By the way, the kinds of discussions we have here at UCLA on NL are very
different from those I see on AIList.  Instead of arguing about what
language 'is' (i.e.  the definitional approach to science that Minsky and
others have  criticized on earlier AILists), we try to represent ideas
(e.g.  "Religion is the opiate of the masses", "self-fulfilling
prophecy", "John congratulated Mary", etc) in terms of abstract
conceptual data structures, where the representation chosen is judged in
terms of its usefulness for inference, parsing, memory search, etc.
Discussions include how a conceptual parser would take such text and map
it into such constructs; how knowledge of these constructs and
inferential processes can aid in the parsing process; how the resulting
instantiated structures would be searched during:  Q/A, advice
giving, paraphrasing, summarization, translation, and so on.

It's fun to BS about NL, but I wouldn't want my students to think that
what appears on AIList (with a few exceptions) re: NL is the way NL
research should be conducted or specifies what the important research
issues in NL are.

I hope I haven't insulted anyone.  (If I have, then you know who you
are!)  I'm guessing that most readers out there actually agree with me.

------------------------------

Date: Wed, 21 Nov 84 14:02:39 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Intention in Text Interpretation (Berkeley)

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

   TIME:                Tuesday, November 27, 11 - 12:30
   PLACE:               240 Bechtel Engineering Center
   DISCUSSION:          12:30 - 2 in 200 Building T-4

SPEAKER:        Walter Michaels and  Steven  Knapp,  English
                Department, UC Berkeley

TITLE:          ``Against Theory''

ABSTRACT:       A discussion of the role of intention in the
                interpretation   of  text.   We  argue  that
                linguistic meaning  is  always  intentional;
                that   linguistic   forms  have  no  meaning
                independent of  authorial  intention;   that
                interpretative disagreements are necessarily
                disagreements about what a particular author
                intended  to  say;  and that recognizing the
                inescapability of intention has fatal conse-
                quences  for  all  attempts  to  construct a
                theory of interpretation.

------------------------------

Date: 21 Nov 84 15:24:46 EST
From: Steven.Shafer@CMU-CS-IUS
Subject: Seminar - Cooperative Distributed Problem Solving (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Victor Lesser, from U. Mass., is coming to CMU on Tuesday to present
the AI Seminar.  He will be speaking about AI techniques for use on
distributed systems.  3:30 pm on Tuesday, November 27, in WeH 5409.


COOPERATIVE DISTRIBUTED PROBLEM SOLVING

   This research topic is part of a new research area that has
recently emerged in AI, called Distributed AI.  This new area
combines research issues in distributed processing and AI by
focusing on the development of distributed networks of
semi-autonomous nodes that cooperate interactively to solve a
single task.
   Our particular emphasis in this general research area has
been on how to design such problem-solving networks so that
they can function effectively even though processing nodes have
inconsistent and incomplete views of the data bases necessary for
their computations.  An example of the type of application that
this approach is suitable for is a distributed sensor network.
   This lecture will discuss our basic approach called Functionally-
Accurate Cooperative Problem-Solving, the need for sophisticated
network-wide control and its relationship to local node control, and
[end of message -- KIL]

------------------------------

Date: 21 November 1984 1639-EST
From: Cathy Hill@CMU-CS-A
Subject: Seminar - A Shape Recognition Illusion (CMU)

Speaker:  Geoff Hinton and Kevin Lang (CMU)
Title:    A Strange property of shape recognition networks.

Date:     November 27, l984
Time:     12 noon - 1:30 p.m.
Place:    Adamson Wing in Baker Hall

Abstract: We shall describe a parallel network that is capable of
          recognizing simple shapes in any orientation or position
          and we will show that networks of this type are liable to
          make a strange kind of error when presented with several
          shapes that are followed by a backward mask.  The error
          involves perceiving one shape in the position of another.
          Anne Treisman has shown that people make errors of just
          this kind.

------------------------------

End of AIList Digest
********************

From:	COMSAT         26-NOV-1984 23:11  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007582; 25 Nov 84 19:26 EST
Date: Sun 25 Nov 1984 15:31-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #161
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 26 Nov 84 22:53 EST


AIList Digest            Sunday, 25 Nov 1984      Volume 2 : Issue 161

Today's Topics:
  Benchmarking Reading Comprehension
  Reviews - AI Abstracts & IEEE Computer & High Technology & Learnability,
  Humor - Brain Structure,
  Algorithms - Many-Body Problem & Macro Cacheing & Linear Programming,
  Seminar - Set Theoretic Problem Translation (CMU)
----------------------------------------------------------------------

Date: Sun 25 Nov 84 01:50:44-EST
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Benchmarking Reading Comprehension

     Does anyone know if any objective standards or tests have been
devised for rating or benchmarking the power of natural language
understanding systems?

     In the world of chess there exists a standard international system
for rating players which can be applied to chess-playing programs. I think
it would be useful to devise a similar system for natural language
understanding software. Such a benchmarking scheme would make it
possible to track the rate of progress in the most fundamental branch
of computational linguistics, and to compare the performance of
competing systems. The National Bureau of Standards might be an
appropriate organization to oversee a project of this kind.

     Perhaps such a benchmarking system could be based on the reading
comprehension sections of the SAT or GRE exams. A GRE-style multiple
choice test for natural language understanding would avert the problem
of wrongly jumbling the capacity to understand--to recognize
propositions, reason, and draw inferences--with the ability of a
program to answer questions with well-formed discourse, a domain of
skill which is really quite separate from pure comprehension. It would
be desirable to establish standard tests for every major language in
the world.

     Is there an existing natural language understanding system in the
world that can read even at the level of a third grader? Probably not.

     Perhaps a major cash prize could be offered to the first researcher
or research team (no doubt decades from now) to design a program that
consistently scores at least 700 on the reading comprehension sections
of standardized tests like the SAT or GRE.

------------------------------

Date: Sat 24 Nov 84 15:27:29-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI Abstracts

Two ads for AI abstracts and indices have recently crossed my desk:

Scientific Datalink is offering a four-volume index to the AI
research reports that they offer in microfiche (one volume of
authors and titles, one of subjects, and two of abstracts).
The price is $375 now, $495 after publication.  Individual
volumes are not offered separately in this ad.  For more
information, write to Ms. Chia Reinhard, Scientific Datalink,
850 Third Avenue, New York, NY 10022.  (212) 838-7200.

ECI/Intelligence is offering a new journal, Artificial Intelligence
Abstracts, at $295 for 1985.  (The mockup of the first issue is dated
October 1984, but the text consists of such gems as "Ut einim ad
minim veniam, quis nostrud exercitation laboris nisi ut aliquip ex
ea commodo consequet.")  The journal offers to keep you up to date on
market research, hardware and software developments, expert systems,
financial planning, and legislative activities.  There is a similar
journal for CAD/CAM.  The AI advisory board includes Harry Barrow,
Michael Brady, Pamela McCorduck, and David Shaw.

ECI/Intelligence also offers a full-text document order service
from their microfiche collection.  For more info, write to them
at 48 West 38 Street, New York, NY 10018.

                                        -- Ken Laws

------------------------------

Date: Sat 24 Nov 84 14:51:17-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Computer Articles

AI is mentioned a few times in the November issue of IEEE Computer.

On p. 114, there are excerpts from the keynote Compcon speech by
Robert C. Miller, a senior vice president of Data General.  He is
touting expert systems, and estimates that overall sales of AI-related
products will increase from $100-150 million this year to $2.5
billion by the end of the decade.

P. 117 has a very short mention of NCAI and the coming IJCAI.

P. 133 has L. Elliot's review of Learning and Teaching with Computers,
by Tim O'Shea and John Self.  The book is evidently about half AI
(Logo, MYCIN, knowledge engineering, and epistemology) and half
computer-assisted learning (teaching styles, learning styles,
tutoring strategies).

The rest of the magazine is mostly about Teradata's database machines,
the PICT graphical programming language, workstations in local area
networks, and some overviews of software engineering at NEC and GTE.

                                        -- Ken Laws

------------------------------

Date: Sat 24 Nov 84 15:02:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: High Technology Articles

The December issue of High Technology has some interesting articles
for computer folk.  On p. 9 there's a short item about Logicware's
(Hungarian-developed) MPROLOG, a "modular" PROLOG for IBM PCs and
mainframes, 68000-based micros, VAXen, and other hosts.  Other
articles review the best of current microcomputer equipment (chiefly
PC-AT and Macintosh), survey the field of visual flight and driving
simulators, and present an excellent introduction to database
structures and machines (esp. relational databases).

                                        -- Ken Laws

------------------------------

Date: Sun 25 Nov 84 15:25:50-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Article on Learning

The November issue of CACM includes "A Theory of the Learnable", by
L.G. Valiant of Harvard.  I am not competent to evaluate the article,
which is based on theorems in computational complexity, but I can
characterize it as follows:

The author is considering the class of concepts in propositional logic
that can be learned in a polynomial number of steps from a source of
positive examples (produced as required in accordance with a probability
distribution) and an oracle that can classify an arbitrary Boolean vector
as a positive or negative exemplar.  The classes that are found to be
learnable are  1) conjunctive normal form expressions with a bounded
number of literals in each clause (no oracle required),  2) monotone
disjunctive normal form expressions, and  3) arbitrary expressions
in which each variable occurs just once (no examples required, but
the oracle must be especially capable).  The method of learning used
is such that the learned concept may occasionally reject true exemplars
but will not accept false ones.
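
For the simplest such class -- a single conjunction of literals -- the
learning procedure can be sketched directly.  This is the textbook
elimination algorithm for conjunctions, offered as my illustration of
the general idea rather than as Valiant's bounded-CNF construction.

```python
# Learn a conjunction of literals from positive examples only.
# Start with every literal; each positive example deletes the literals
# it falsifies.  The surviving set includes every literal of the
# target, so the hypothesis may reject unseen true exemplars but can
# never accept a false one -- the one-sided error noted above.
def learn_conjunction(n, positives):
    # (i, True) is the literal "x_i"; (i, False) is "not x_i".
    lits = {(i, v) for i in range(n) for v in (True, False)}
    for ex in positives:                 # ex: tuple of n booleans
        lits = {(i, v) for (i, v) in lits if ex[i] == v}
    return lits

def accepts(lits, ex):
    return all(ex[i] == v for (i, v) in lits)

# Target concept: "x0 and not x2", over three variables.
hypothesis = learn_conjunction(3, [(True, False, False),
                                   (True, True, False)])
assert hypothesis == {(0, True), (2, False)}
```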

The closing remarks contain this interesting quote:

  An important aspect of our approach, if cast in its greatest
  generality, is that we require the recognition algorithm of the
  teacher and learner to agree on an overwhelming fraction of only
  the natural inputs.  Their behavior on unnatural inputs is
  irrelevant, and hence descriptions of all possible worlds are not
  necessary.  If followed to its conclusion, this idea has considerable
  philosophical implications:  A learnable concept is nothing more
  than a short program that distinguishes some natural inputs from
  some others.  If such a concept is passed on among a population
  in a distributed manner, substantial variations in meaning may arise.
  More importantly, what consensus there is will only be meaningful
  for natural inputs.  The behavior of an individual's program for
  unnatural inputs has no relevance.  Hence thought experiments and
  logical arguments involving unnatural hypothetical situations
  may be meaningless activities.

                                        -- Ken Laws

------------------------------

Date: Tue 20 Nov 84 17:36:43-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: quote  {:-)

Attributed to Marvin Minsky (someone tell me if it's wrong)

        "I'll bet the human brain is a kludge."

                                                        - Richard

------------------------------

Date: Sat 24 Nov 84 14:36:33-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Many-Body Problem

Need software to run 10,000-body simulations?  A VAX Pascal program
is discussed in the November CACM Programming Pearls column.
Optimization brought the run time down from one year to one day.

                                        -- Ken Laws

------------------------------

Date: 22 Nov 84 17:20 PST
From: JonL.pa@XEROX.ARPA
Subject: Macro cacheing: Interlisp-D Interpreter as "malgorithm"?

Jay Ferguson suggests, in his contribution of 18 Nov 84 13:55:58-PST,
that an explanation for the timing differences between using a FOR loop
and using LDIFF in the original "Interlisp-D malgorithm" is because
"Each time you call a FOR statement interpretively the translation
occurs."  This is not the case -- the Interlisp interpreter (in all
implementations of Interlisp, I believe) caches the results of any macro
or CLISP expansion into a hash array called CLISPARRAY; see section 16.8
of the Interlisp Reference Manual (Oct 1983).  In fact, the figures
supplied by Jay show a speed difference of a factor of 17, which would
be consistent with the basic loop being compiled in LDIFF (a "system"
function) and being interpreted in the FOR.

The question of "cacheing" as noted above is a complicated one, and in
Jay's defense, I can say that it is not at all clearly outlined in the
IRM.   For example, it lays the burden on the "in-core" structure editor
of examining CLISPARRAY on any change, to de-cache the expansions for
any code that is modified; but random modifications (caused by, say,
redefinition of a function upon which the expansion depends), don't
cause de-cacheing, and this is the source of some very obscure bugs.
Furthermore, lots of cruft may stick around forever because the garbage
collector does not reclaim items placed in the cache; for this reason,
it is advisable to occasionally do (CLRHASH CLISPARRAY).
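
The hazard is easy to reproduce in modern notation.  Below is a small
Python analogue (hypothetical, for illustration only -- the names CACHE
and expand are invented, and this is obviously not Interlisp): the
expansion is memoized, redefining the macro does not invalidate the
cache, and only an explicit clear (the analogue of (CLRHASH
CLISPARRAY)) recovers the new definition.

```python
# Stale-cache hazard: expansions are memoized by the original form,
# so a later "redefinition" is invisible until the cache is cleared.

CACHE = {}                     # plays the role of CLISPARRAY

def expand(form, macros):
    """Expand a macro call once and cache the result by form."""
    if form not in CACHE:
        CACHE[form] = macros[form[0]](form)
    return CACHE[form]

macros = {'twice': lambda f: ('*', 2, f[1])}
first = expand(('twice', 5), macros)           # cached as ('*', 2, 5)

macros['twice'] = lambda f: ('+', f[1], f[1])  # redefinition...
stale = expand(('twice', 5), macros)           # ...still the old expansion

CACHE.clear()                                  # analogue of (CLRHASH CLISPARRAY)
fresh = expand(('twice', 5), macros)           # now the new expansion
```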

MacLisp provides three options for macro expansions which are
controllable on a macro-by-macro basis (CLISP is, unfortunately, a
kludge dating to pre-macro Interlisp days -- it could and should be
implemented entirely as a set of macros, so I will view it in that light
for the rest of this discussion): (1) do no cacheing, (2) "displace" the
original list cell containing the macro call with a form which contains
both the original and the expanded code [compiler and interpreter use
the expanded code, prettyprinter uses the original], and (3) cache the
expansion in a hash array called MACROMEMO.  While all these options can
be run in any Lisp that implements macros by giving the expansion
function a pointer to "the whole cell", Common Lisp provides the
*macroexpand-hook* facility so that the cacheing code which is common to
all macros can be put in one place, rather than distributed throughout
the many macro bodies.
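
Option (2), "displacing", can also be sketched in Python (again a
hypothetical illustration, using a mutable list for the Lisp cell; the
names are invented): the cell that held the macro call is overwritten
in place with a marker carrying both the original form and its
expansion.

```python
# Toy "displacing" macro: the call site is rewritten in place so that
# evaluators see the expansion while a prettyprinter could still
# recover the original form.

def expand_square(form):
    """Rewrite ('square', x) into ('*', x, x)."""
    _, x = form
    return ('*', x, x)

def displace(cell):
    """cell is a mutable list like ['square', 'x'].  After displacing,
    cell holds [':displaced', original, expansion]; callers use the
    expansion, the prettyprinter would use the original."""
    if cell[0] != ':displaced':
        original = list(cell)
        cell[:] = [':displaced', original, expand_square(tuple(cell))]
    return cell[2]              # subsequent calls hit the cached expansion
```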

-- JonL --

------------------------------

Date: 22 Nov 84 07:15:10 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Linear Programming Algorithms


The recent discussion of the Karmarkar Linear Programming algorithm on
this newsgroup has stirred me to comment.  I wouldn't think a cubic time
linear programming algorithm to be any great deal, indeed I present one
myself shortly.  An algorithm that solves the INTEGER linear programming
problem, however, is something else again.  My understanding is that the
Khachiyan algorithm solved the integer problem in polynomial time.  If
the Karmarkar algorithm does also, then it is truly worthy.  But it has
not been apparent in the popularized discussion I have been reading that
this is so.  Perhaps those on the net who are more in the know can tell
us whether it is or not.

Ever since I took Luenberger's course in Linear and Nonlinear Programming
(Stanford EES department, 1973) I have wondered why people didn't apply
the powerful nonlinear tools to the linear problem.  I played around with
the idea and came up with an algorithm I call "gradient step linear
programming".  I've never done anything with it because it's so simple
and obvious that it seemed someone had to have thought of it before.
Because the algorithm follows the gradient as best it can subject to
the constraints, from a geometric point of view it travels through the
"interior" of the polytope, much as has been described for the Karmarkar
algorithm.  Optimality is achieved in no more than N steps, each step
requiring O(N^2) numerical operations, where N is the dimension of the
space.

Mathematical notation isn't easy on a terminal.  I adopt the convention
of representing vectors by preceding them with an underline, as "_x".
Subscripts I represent using a colon, _c:j being the j-th vector _c.
The inner product is represented by < * , * >.  I use a functional
form (i.e. f(  )) to represent things like sums. The rest should be
fairly obvious.

A statement of the linear programming problem, convenient for the description
of the algorithm,  follows.  This problem statement is readily converted into
other forms of the linear programming problem.  The problem is to maximize
with respect to the N-dimensional vector _x the linear functional:
               <_c , _x >
 subject to the constraints:
               <_a:j , _x > >= b:j   for j = 1, 2, ..., M
The vector '_c' is often called the cost vector when a minimization problem is
being considered.  M >= N, as otherwise the solution is unbounded.

The procedure for finding an initial feasible vector _x(0) is essentially
identical to the procedure for finding an optimum vector.  For now an initial
feasible vector _x(0) in the interior of the polytope defined by the
constraints may simply be assumed.  The initial gradient vector,
maximizing the rate of change subject to the active constraints, is _c(0) =
_c.  At each stage, the idea of the algorithm is to move along the current
gradient vector (subject to the active constraints) as far as possible, until
a previously inactive constraint is encountered.  The direction of change is
modified by the most recent active constraint, until no motion in the
direction of the gradient is possible.  This is both the formal and the
obvious stopping condition for the algorithm.  The details of the algorithm
follow:

Step 1:  Given a current solution point _x(n), determine the step size s
(giving the next solution point _x(n+1) = _x(n) + s*_c(n)), identifying at
the same time the next active constraint.
          D:j = < _x(n) , _a:j > - b:j    ( >= 0 )
          s:j = D:j / < _c(n) , _a:j >    for j in the set of inactive
constraints.
          s = min { s:j } , and the next active constraint has the index j(n)
providing the minimum, which is also the maximum feasible step size.

Step 2:  Apply the Gram-Schmidt procedure (i.e., projection) to remove the
component of the most recent active constraint from the gradient direction, so
that subsequent steps will not result in violation of the active constraint.
It is necessary first to remove all components of previous active constraints
from the newly activated constraint to ensure that the adjusted gradient
direction will not violate any previous active constraint.
          _a(n) = _a:j(n) - sum(k = 0 to (n-1))[
                         _a(k) * <_a:j(n),_a(k)> / <_a(k),_a(k)> ]

          _c(n+1) = _c(n) - _a(n) * <_c(n),_a(n)> / <_a(n),_a(n)>

Steps 1 and 2 are repeated until _c(n+1)=0, at which point _x(n) is the
optimal solution to the linear programming problem.  Additional tests to
detect and recover from degeneracy are easily added.

A detailed proof of optimality is straightforward but somewhat tedious.
Intuitively, the algorithm is optimal because steps are always taken along the
direction maximizing the rate of change of the functional, subject to the
active constraints.  At the optimal point, there is no feasible motion
improving the functional.  Stated differently, the original cost vector lies
in the space spanned by the gradients of the constraints, and this is the
formal (Lagrange) optimization condition.  It is only necessary to add
constraints to the set of active constraints because the optimization space is
convex, and therefore changes in the functional improvement direction (and
reduction in the rate of improvement) result only from encountering new
constraints and having to turn to follow them.

Note that the number of iterations is simply the number of dimensions N of the
space, this being also the number of vectors required to span the space.  Each
iteration entails the removal of O(N) vector components from the new
constraint, and the removal of a vector component entails O(N) multiplications
and additions.  Similarly, determining the step size requires the computation
of O(N) inner products, each requiring O(N) multiplications and additions.
Finding the initial feasible vector requires about the same effort in
general.  Thus overall the algorithm presented for solving the linear
programming problem requires O(N^3) arithmetic operations.

An initial feasible point can be determined starting from an arbitrary point
(say the origin), identifying the unsatisfied constraints, and moving in
directions that satisfy them.  It may be more direct to simply start with a
"superoptimal" point, say K*_c for suitably large K, and iterate using
essentially the previously described algorithm along the negative constrained
gradient directions to feasibility.  By duality, the resulting feasible point
will also be optimal for the original problem.

                                                Carl F. Kaun

                                                ckaun@aids-UNIX
                                                415/941-3912

------------------------------

Date: 21 November 1984 1014-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - Set Theoretic Problem Translation (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Robert Paige
        Date:   November 27, 1984
        Time:   10:00 - 12:00
        Place:  WeH 8220
        Title:  "Mechanical Translation of Set Theoretic Problem
                 Specifications into Efficient RAM Code"


Many computer problems can be expressed accurately and easily in the
following form: 'find the unique set s subset of t satisfying
predicate k(s) and minimizing objective function f(s)'.  Although such
specifications are generally unsolvable, we can provide rather broad
sufficient conditions under which these formal problem statements can
be compiled into efficient procedural implementations with predictable
time and space complexities.  A toy implementation of such a compiler
has been implemented and used within the RAPTS transformational
programming system.

Our methodology depends on three fundamental program transformations,
two of which resemble classical numerical techniques.  The first
transformation solves roots of set theoretic predicates by iterating
to a fixed point.  It turns an abstract functional program
specification into a lower level imperative form with emerging
strategy.  The second transformation is a generalized finite
differencing technique.  It implements program strategy efficiently by
forming access paths and incremental computations.  The third
transformation is a top down variant of Schwartz's method of data
structure selection by basings.  It replaces sets and maps by
conventional storage structures.

The method will be illustrated using two examples -- graph
reachability and digraph cycle testing.

This is a special 2-hour lecture with a 10-minute break in the middle.

------------------------------

End of AIList Digest
********************

From:	COMSAT         29-NOV-1984 23:41  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000693; 28 Nov 84 17:37 EST
Date: Wed 28 Nov 1984 13:41-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #162
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 29 Nov 84 23:33 EST


AIList Digest           Wednesday, 28 Nov 1984    Volume 2 : Issue 162

Today's Topics:
  AI Tools - ML to Interlisp Translator & SYMBOLICS 3670 Software,
  Representation - Nonverbal Meaning Representation,
  Databases - Obsolete Books,
  Publicity - New Scientist AI Series,
  Brain Theory - PBS Series on the Brain & Minsky Quote,
  Linguistics - Language Simplification & Natural Language Study,
  Seminars - The Structures of Everyday Life  (MIT) &
    Language Behavior as Distributed Processing  (Stanford) &
    Full Abstraction and Semantic Equivalence  (MIT)
----------------------------------------------------------------------

Date: 27 Nov 84 12:54:44 EST
From: DIETZ@RUTGERS.ARPA
Subject: ML to Interlisp Translator Wanted

I'd like to get a translator from ML to Interlisp.  Does anyone have one?

Paul Dietz (dietz@rutgers)

------------------------------

Date: Tue, 27 Nov 84 12:59:42 pst
From: nick pizzi <pizzi%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: SYMBOLICS 3670 software

     Would anyone happen to know whether or not the SYMBOLICS machines
(specifically, the 3670) have PROLOG and/or C as available language
options?

     Furthermore, does the 3670 have any available software packages
for image processing (especially, symbolic image processing)?

     Thank-you in advance for any information which you might provide!

                                                Sincerely,
                                                nick pizzi

------------------------------

Date: Wed, 28 Nov 84 09:59:31 pst
From: Douglas young <young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: Nonverbal meaning

  Is there anyone out there working on completely nonverbal meaning
representations of words and sentences?  Although I have been working
on this problem for a very substantial time, and have reached some
significant solutions (which I expect to publish during 1985 in the
form of a book, the draft manuscript for which is already completed,
and in several papers), I have to date been unable to
discover anyone else who is working on this specific aspect of NLU.
However, it is impossible to believe that there are no others working
on this, and a newly acquired membership of the AIList appears to be
an invaluable way of finding out who is involved and where they are.
If you are working in this area, or if you know of anyone who is,
please would you send me a message ( network address as in header )
with a short note of what is being done, and include a postal address;
alternatively, write or call me.

      Douglas Young
      Dept. of Computer Science,
      University of Manitoba,
      Winnipeg,
      Manitoba, R3T 2N2
      Canada                  Tel: (204) 474 8366  (lab)
                                         474 8313  (messages)
 PS: Two original papers describing some of the principles of the techniques
     I employ, which were published in the medical literature during 1982-83,
     are largely out of date in almost every respect (except for some of the
     neurological arguments, which form the foundation of the principles), so
     I am not including their references here.

------------------------------

Date: Tue, 27 Nov 84 18:05:24 mst
From: jlg@LANL (Jim Giles)
Subject: obsolete books?

> Sony has recently introduced a portable compact optical disk player.
> I hear they intend to market it as a microcomputer peripheral for
> $300.  I'm not sure what its capacity will be, so I'll estimate it at
> 50 megabytes per side.  That's 25000 ascii coded 8 1/2x11 pages, or
> 1000 compressed page images, per side.  Disks cost about $10, for a
> cost per word orders of magnitude less than books.

The capacity of a normally formatted compact disc (audio people spell it
with a 'c') is about 600 megabytes.  That's without counting the error
correcting information.  The number is for about one hour of music sampled
with two 16-bit channels at a rate of 44.1 kHz.  Furthermore, some companies
are already demonstrating 'write once' disks with about 500 megabytes
for use as computer peripherals.  I've even seen one proposal for an
erasable disk using magneto-optical technology.
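
The quoted audio parameters bear the figure out.  A quick check (in
Python, purely as arithmetic):

```python
# One hour of CD audio: 44.1 kHz sampling, two channels,
# 16 bits (2 bytes) per sample, before error-correction overhead.
bytes_per_second = 44_100 * 2 * 2
one_hour = bytes_per_second * 3600
print(one_hour)   # 635040000 bytes, i.e. roughly 600 megabytes
```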

It has already been suggested that the advent of very cheap mass storage
devices will soon replace dictionaries, encyclopedias, catalogues, etc.
There has also been talk of software (such as spelling checkers) that
requires very large databases becoming either cheap or public domain.  I
think it will be a while before books are replaced, though.  Nobody wants
to carry a video monitor in their briefcase just to catch up on their
favorite science fiction interests.  Besides, paperback books are still
cheaper than compact discs by about a factor of 4 or more.

I'm holding off buying new drives for my home computer for a while.  This
new stuff seems to be worth waiting for.

------------------------------

Date: 27 Nov 84 17:00:07 EST
From: DIETZ@RUTGERS.ARPA
Subject: New Scientist AI Series

The British magazine New Scientist is running a three part series on AI.
The first article, in the Nov. 15 issue, has the title "AI is stark naked
from the ankles up".  It has some very interesting quotes from John McCarthy,
W. Bledsoe, Lewis Branscomb at IBM and others.  The article is critical
of the way AI has been oversold, of the quality (too low) and quantity
(too little) of AI research, and of the US reaction to the Japanese new
generation project, especially Feigenbaum and McCorduck's book.

------------------------------

Date: Wed 28 Nov 84 11:53:16-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: PBS Series on the Brain

The PBS series on the brain has focussed each week on specific neural
systems and their effects on behavior.  The last show concentrated on
hearing and speech centers, and had a particularly enlightening
example.  It showed a lawyer who had suffered damage to his hearing or
linguistic centers.  (Sorry, I don't remember exactly where.)  He
still had a normal vocabulary and could understand most sentences,
although slowly and with great difficulty.  He was unable to parse or
store function words, however.  When asked "A leopard was killed by a
lion.  Which died?", he was unable to answer.  (He also knew that he
had no way of determining the answer.)  When asked "My uncle's sister
..., is it a man or a woman?" he was similarly unable to know.

Another example was a woman who could not recognize faces, even when
she was presented with a picture of her interviewer and told who it
was.  She could describe the face in detail, but there was no flash
of recognition.  She lives in a world of strangers.

A previous show described various forms of amnesia, and the role of the
hippocampus in determining which events are to be stored in long-term
memory.  Or rather, in the conscious LTM.  One subject was repeatedly
trained on the Tower of Hanoi puzzle; each time it was completely
"new" to him, but he retained strategy skills learned in each session.

The question was raised why no one can remember events prior to the
age of five.  I suppose that we create a mental vocabulary during the
first years, and later record our experiences in terms of that
vocabulary.  (It would be awkward, wouldn't it, if the vocabulary
changed as we got older?  Memories would decay as we lost the ability
to decode them.)  This suggests that we might be unable to learn
concepts such as gravity, volume, and cooperation if we do not learn
them early enough.  I'm sure there must be evidence of such phenomena.

The last two shows in the series will be shown Saturday (in the San
Francisco area).

                                        -- Ken Laws

------------------------------

Date: Mon, 26 Nov 1984  03:27 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Quote, V2 #161

I certainly have suggested that the human brain is a kludge, in the
sense that it consists of many complex mechanisms, accumulated over
the course of evolution, a lot of which are for correcting the bugs in
others.

However, this is not a useful quotation for public use, because
outside of engineering, the word "kludge" is not in the general
language.  There isn't even any synonym for it.  The closest phrase
might have been "Rube Goldberg device" -- but that, too, is falling
out of use.  Anyway, a Rube Goldberg device did not have the right
sense, because that cartoonist always drew machines which were
complicated serial devices with no loops and, hence, no way to correct
bugs.  My impression is that a "kludge" is a device which actually
usually works, but not in accord with neat principles but because all
or most of its bugs have been fixed by adding ad hoc patches and
accessories.

By the way, the general language has no term for "bug" either.
Programmers mean by "bug" the mechanism responsible for an error,
rather than the surface error itself.  The lack of any adequate such
word suggests that our general culture does not consider this an
important concept.  It is no wonder, then, that our culture has so
many bugs.

------------------------------

Date: Mon, 26 Nov 84  8:20:27 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Language Simplification

On Frawley on Gillam on simplification:

You needn't go so far south for pen/pin homophony, it occurs in certain
midwestern dialects and I believe even in New Jersey, as merger pure and
simple.  And of course you are talking not about homophony but about shifted
contrast such that `pin' of your dialect is "homophonous" with `pen' of the
southern dialect.  (Is English `showed' "homophonous" with the French word
for `hot'?)

Phonological systems do change in the ways that you deny, as
witness for example the falling together of many vowels to i in modern
Greek (classical i, ei, oi, y, long e (eta), yi all become high front i),
and the merger of several Indo-European vowels in Sanskrit a.

I have not seen Gillam's comments (just joined the list), so let me say
too that languages do preserve systematic contrasts while shifting their
location, and that the observation about southern dialects of US English
is correct.  Whether the result of change is merger or relocated contrast
depends on sociological as well as physiological and psychoacoustic factors,
and no simple blanket statement fits all cases.

        Bruce Nevin

------------------------------

Date: Mon, 26 Nov 1984  03:12 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Re: Natural Language Study, V2 #160


Bravo, Dyer!  As you suggest, there is indeed much to learn from the
study of natural language -- but not about "natural language itself";
we can learn what kinds of manipulations and processes occur in the
under-mind with enough frequency and significance that it turns out to
be useful to signify them with surface language features.

For example, why do all languages have nouns and adjectives?  Because
the brain has some way to aggregate the aspects of "objects" and
retrieve these constellations of partial states of mind.  Why
adjectives?  To change particular properties of noun-objects.  Why put
adjectives near the nouns?  So that it is easy to recognize which
properties of what to modify.  Now, if we consider which surface
relations are easiest to recognize by machinery, the near-ness of
words is surely among the easiest of all -- so we can expect that
human societies will find an important use for this.  Thus, if
adjective-noun relations are "universal" in human languages, it need
not be because of any mysterious syntactic apriori built into some
innate language-organ; it could be because that underlying cognitive
operation -- of modifying part of a representation without wrecking
the rest of it -- is a "cognitive universal".  Similarly, the study of
how pronouns work will give us clues about how we link together
different frames, scripts, plans, etc.

All that is very fine.  We should indeed study languages.  But to
"define" them is wrong.  You define the things YOU invent; you study
the things that already exist.  Then, as in Mathematics, you can also
study the things you define.  But when one confuses
the two situations, as in the subjects of generative linguistics
or linguistic competence -- ah, a mind is a terrible thing to waste,
as today's natural language puts it.

------------------------------

Date: 27 Nov 1984 11:13-PST (Tuesday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Natural Language


        The reason it is important to study natural languages
"on their own" and to understand language degradation etc. is that
language influences how its speakers think.  This idea, commonly known
as the "Whorf hypothesis", has its correlate in computer languages
and in potential interlinguas.  The usual examples include AmerIndian
languages, which have little concept of time.
        If you have only Fortran to program in, many elegant programming
solutions simply will not present themselves.  The creation of
higher level languages allows the programmer to make use of complex
data structures such as 'predicates' and 'lists'  instead of addresses.
        These higher level data structures correspond to the concepts
available in a natural language.  Primitive languages which exist mainly
for simple communication will not allow the kind of thinking
(programming) that a language with "higher level" concepts (data
structures) does.
        In the same way that a conceptually rich language (like
Sanskrit) allows greater expression than Haitian Creole does, and that
LISP does compared with assembly, Sastric Sanskrit functions as the
ideal interlingua because of the nature of its high level data
structures (i.e., it is formal and yet allows expression of poetry and
metaphor).
And in the same way that a particular programming language is chosen
over another for an application, Sastric Sanskrit should be chosen
(or at least evaluated) for those doing work in Machine Translation.

Rick Briggs

------------------------------

Date: 25 Nov 1984  22:38 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - The Structures of Everyday Life  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                    The Structures of Everyday Life

                              Phil Agre

             Wednesday, November 28; 4:00pm  8th floor playroom



Computation can provide an observation vocabulary for gathering
introspective evidence about all manner of everyday reasoning.  Although
this evidence is anecdotal and not scientific in any traditional sense, it
can provide strong constraints on the design of the central systems of
mind.  The method is cyclical: attempts to design mechanisms to account
for the phenomenology of everyday activity suggest new classes of episodes
to look out for, and puzzling anecdotes show up weaknesses in designs and
suggest improvements.

I have been applying this method particularly to the study of routines,
the frequently repeated and phenomenologically automatic rituals of which
most of daily life is made.  Some common routines in the lives of people
like me include choosing the day's clothes, making breakfast, selecting a
turnstile in the subway, listening to a familiar piece of music, beginning
and ending conversations, picking up a coffee mug, and opening the day's
mail.  It is not reasonable to view a routine as an automated series of
actions, since people understand what they're doing when carrying out
routine actions at least well enough to recover sensibly if things don't
proceed in a routine way.

I propose to account for the phenomenology of the development of mental
routines in terms of the different stages of processing that arise in the
interaction of a few fairly simple mechanisms.  These stages appear vaguely
to recapitulate the stages of development of cognition in children.

This talk corresponds roughly to my thesis proposal.



COMING SOON: Jonathan Rees [Dec 5], Alan Bawden [Dec 12]

------------------------------

Date: Tue, 27 Nov 1984  23:52 PST
From: KIPARSKY@SU-CSLI.ARPA
Subject: Seminar - Language Behavior as Distributed Processing 
         (Stanford)


Jeff Elman (Department of Linguistics, UCSD)
"Parallel  distributed  processing:   New  explanations  for
                        language behavior"

        Dec. 11, 1984, 11.00 A.M.
        Stanford University, Ventura Hall Conference Room

Abstract:

Many students of human behavior  have  assumed  that  it  is
fruitful  to  think  of the brain as a very powerful digital
computer.  This metaphor  has  had  an  enormous  impact  on
explanations  of  language  behavior.   In  this talk I will
argue that the metaphor is  incorrect,  and  that  a  better
understanding  of  language  is gained by modelling language
behavior with parallel distributed processing (PDP) systems.
These  systems offer a more appropriate set of computational
operations, provide richer insights into behavior, and  have
greater biological plausibility.

I will focus on three specific areas  in  which  PDP  models
offer  new explanations for language behavior: (1) the abil-
ity to simulate rule-guided behavior without explicit rules;
(2)  a  mechanism  for analogical behavior; and (3) explana-
tions for the effect of context on  interpretation  and  for
dealing with variability in speech.

Results from a PDP model  of speech perception  will be pre-
sented.

------------------------------

Date: 27 November 1984 09:21-EST
From: Arline H. Benford <AH @ MIT-MC>
Subject: Seminar - Full Abstraction and Semantic Equivalence  (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]


       APPLIED MATHEMATICS AND THEORY OF COMPUTATION COLLOQUIUM


                  "FULL ABSTRACTION AND SEMANTIC EQUIVALENCE"

                                Ketan Mulmuley
                          Carnegie Mellon University


                       DATE:  TUESDAY, DECEMBER 4, 1984
                       TIME:  3:30PM  REFRESHMENTS
                              4:00PM  LECTURE
                      PLACE:  2-338

A denotational semantics is said to be fully abstract if denotations of two
language constructs are equal whenever these constructs are operationally
equivalent in all programming contexts and conversely.  Plotkin showed that the
classical model of continuous functions was not a fully abstract model of typed
lambda calculus with recursion.  We show that it is possible to construct a
fully abstract model of typed lambda calculus as a submodel of the classical
lattice theoretic model.

The existence of "inclusive" predicates on semantical domains plays a key role
in establishing semantic equivalence of operational and denotational
semantics.  We give a mechanizable theory for proving such existence.  In
fact, a theorem prover has been implemented which can almost automatically
prove the existence of most of the inclusive predicates which arise in
practice.


HOST:  Professor Michael Sipser

------------------------------

End of AIList Digest
********************

From:	CSVPI          29-NOV-1984 23:43  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000813; 29 Nov 84 13:28 EST
Date: Thu 29 Nov 1984 09:25-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #163
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 29 Nov 84 23:39 EST


AIList Digest           Thursday, 29 Nov 1984     Volume 2 : Issue 163

Today's Topics:
  Philosophy - Dialectics,
  Seminars - Aesthetic Experience  (Berkeley) &
    Phonetics, Discourse, Semantics  (CSLI Stanford) &
    The KEE Knowledge Engineering System  (Stanford)
----------------------------------------------------------------------

Date: Tue, 27 Nov 84 20:42:29 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Dialectics

Joel Isaacson (USC) and I (Frawley, Delaware) have recently exchanged, briefly,
ideas about DIALECTICS. Isaacson is using dialectics in a theory of image
processing; I am using dialectics in my own work on Soviet theories of
language and cognition and the use of Soviet theories to explain
various quandaries about such things as language learning and text
processing. We thought it would be appropriate to have a general
discussion of dialectics on the AIList.

I have agreed to begin the discussion with a general introduction. Below are
some basic statements on what I see to be the nature and implications
of dialectics, along with some comments on how I see these ideas relating
to problems of language and cognition. I offer these ideas not as
definitive statements, but as a means to get the ball rolling on a
discussion of dialectics. We (Isaacson and I) would appreciate any
commentary, arguments, etc. that can be given.


1. What is, and Whence, Dialectics?

Dialectics is, first of all, a method. It is a method of analyzing any
phenomenon not in terms of the phenomenon as an isolated entity, but
in terms of the phenomenon in its opposition to other phenomena and how
the opposition of two phenomena gives rise to a third phenomenon (the
classic thesis, antithesis, synthesis trichotomy from Hegel). This idea
of opposition can of course be traced back in Western philosophy to Plato
(who loved oppositions), but is more conveniently situated in the work
of Marx. Marx objected to both idealism and positivism: to the former
because it ultimately situated knowledge in one metaphysical entity
(e.g., the pre-programmed subject, as Kant and Piaget argue, or in the
world of pure forms, as Plato argued) and to the latter because it
situated knowledge wholly in terms of the object of knowledge (i.e.,
the world irrespective of the perceiving subject). Marx saw knowledge
only in the dialectical struggle of the perceiving subject and perceived
object which unify in their struggle to produce knowledge. Dialectics is
a way of walking between hopeless metaphysics (idealism) and hopeless
banality (the world). Thus, it does no good simply to talk about
either simple properties of the subject or of the object since
neither exists without the other and neither the subject nor the
object has any privileged status in epistemology. If an epistemology
privileges the subject at the expense of the object, one gets
Piagetian psychology; if one privileges the object at the expense of the
subject, one gets behaviorism, Carnap, or the early Wittgenstein.


2. What does dialectics imply (I use "dialectics" in the  singular since
it is a totality, like the word "linguistics")?

First, it implies that knowledge is the activity of constant struggle.
What is primary in dialectics is not knowledge, but knowING. What is
primary in any dialectical epistemology is not knowledge structures,
but the BUILDING OF KNOWLEDGE. As Leontiev has said, heuristics are
more important than algorithms.

Second, it implies that development never ends. If knowing is a constant
struggle of opposites which unite in synthesis, and if that synthesis then
is opposed to something else and unites with it to produce another
synthesis, knowing never stops. We suffer, in developmental theory, from
a Piagetian epistemological blindness which views development as stopping
after logical operations: thereafter only mere learning occurs. When
studies have shown that only 50% of the U.S. population has achieved
logical operations, I begin to doubt Piaget and begin to side with
Luria, who has shown (Cognitive Development) that development, because
of its dialectical underpinnings, never stops.

Third, it implies that one must be a materialist. The subject is not
a metaphysical entity, but located in the world; the object is not
a metaphysical entity, but located in the world; the dialectical
synthesis of the two is not a metaphysical entity, but a process and
product conditioned by the material circumstances and nature of the
subject and object: dialectics secularizes knowing.

Fourth, it implies that one must always consider history. If knowing is
tied to dialectics in material circumstances, then one must also
realize that circumstances can only be historically given. As
Derrida has argued in his introduction to Husserl's Geometry, there
are no extra-systemic a priori ideas, only historical a priori ideas.
In this way, biological givens are also historically given because
both ontogenesis and phylogenesis are historical.

3. Two Psycholinguistic Implications of Dialectics

It is very chic these days to abandon linguistic competence in favor
of communicative competence by arguing that linguistic competence is
idealized and that communicative competence (pragmatics, speech acts,
intentionality, etc.) is "more real" because communicative competence
considers how language is used in the world. Dialectics shows that this
is a pseudo-argument.

Communicative competence still privileges the subject only, by giving
taxonomies of intentions which the subject felicitously deploys
"in the world." How is this done? That is the "real" question.
Pragmatics, in criticizing Chomskyan competence for being idealized
falls prey to its own criticisms since it still privileges the
subject and idealized linguistic knowledge just one step higher
than the sentence: communicative competence is another form of
idealism (for a very brief discussion, see my review in December 1984
issue of Language, p. 967).

Dialectics has another implication for theories of text processing.
It is typical in text theory to privilege either the subject or the
object: if privileging the former, one accounts for text processing
in terms of mental structures -- schemas, frames, scripts; if
privileging the latter, one accounts for text processing in terms
of the structure of the text -- rhetorical structure, propositional
hierarchies, complexity, etc. A dialectical model would ask how
schemas and text structure interact.

Dialectical considerations of text processing have implications for
AI. In Schank and Abelson's model, e.g., the script or frame is
seminal. From a dialectical model, the script is less important than
the ways by which the machine "decides" to access the script to
begin with: the knowledge structure is less important than the
procedures to deploy the knowledge structure since that is the
point where the machine as subject interacts with the text as
object.

Well, I've gone on perhaps too long for some preliminary statements
about dialectics, so I'll stop here. Any comments??

Bill Frawley

20568.ccvax1@udel

------------------------------

Date: Wed, 28 Nov 84 17:13:33 pst
From: chertok%ucbcogsci@Berkeley (Paula Chertok)
Subject: Seminar - Aesthetic Experience  (Berkeley)

             BERKELEY COGNITIVE SCIENCE PROGRAM
                         Fall 1984
           Cognitive Science Seminar -- IDS 237A

SPEAKER:        Thomas  G.  Bever,  Psychology   Department,
                Columbia University

TITLE:          The Psychological basis of aesthetic experi-
                ence:  implications for linguistic nativism

    TIME:                Tuesday, December 4, 11 - 12:30
    PLACE:               240 Bechtel Engineering Center
    DISCUSSION:          12:30 - 2 in 200 Building T-4

ABSTRACT:       We define the notion of Aesthetic Experience
                as   a   formal   relation   between  mental
                representations:   an  aesthetic  experience
                involves  at least two conflicting represen-
                tations that are  resolved  by  accessing  a
                third  representation.   Accessing the third
                representation releases  the  same  kind  of
                emotional  energy as the 'aha' elation asso-
                ciated with discovering the  solution  to  a
                problem. We show how this definition applies
                to  various  artforms,  music,   literature,
                dance.   The  fundamental aesthetic relation
                is similar to the  mental  activities  of  a
                child  during  normal cognitive development.
                These considerations explain the function of
                aesthetic  experience:  it elicits in adult-
                hood the characteristic mental  activity  of
                normal childhood.

                The fundamental activity revealed by consid-
                ering the formal nature of aesthetic experi-
                ence involves developing  and  interrelating
                mental  representations.   If  we  take THIS
                capacity  to  be  innate  (which  we  surely
                must),   the question then arises whether we
                can account for the phenomena that are  usu-
                ally argued to show the unique innateness of
                language as a mental organ.  These phenomena
                include  the  emergence of a psychologically
                real grammar,  a critical  period,  cerebral
                asymmetries.     More    formal   linguistic
                properties may be accounted for as partially
                uncaused (necessary) and partially caused by
                general  properties  of  animal  mind.   The
                aspects  of  language  that may remain unex-
                plained (and therefore non-trivially innate)
                are  the  forms of the levels of representa-
                tion.

------------------------------

Date: Wed 28 Nov 84 17:24:47-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Phonetics, Discourse, Semantics  (CSLI Stanford)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                   ABSTRACT OF TODAY'S SEMINAR
                   ``Parsing Acoustic Events''

This seminar addresses the problem of formulating a language-independent
representation of the acoustic aspects of natural, continuous speech from
which a general parser using language-specific grammars can recover
linguistic structure.  This decomposition of the problem permits a
representation that is stable over utterance situations and provides
constraints that handle some of the difficulties associated with partially
obscured or ``incomplete'' information. A system will be described which
contains a grammar for parsing higher-level (phonological) events as well
as an explicit grammar for low-level acoustic events. It will be shown that
the same techniques for parsing syntactic strings apply in this domain.  The
system thus provides a formal representation for physical signals and a way
to parse them as part of the larger task of extracting meaning from sound.
                                              --Meg Withgott
                           ____________

                ABSTRACT OF NEXT WEEK'S SEMINAR
           ``The Structures of Discourse Structure''

This talk will introduce a theory of discourse structure that attempts to
answer two rather simple questions, namely: What is discourse? What is
discourse structure? In this work (being done jointly with Sidner at BBN)
discourse structure will be seen to be intimately connected with two
nonlinguistic notions--intention and attention. Intentions will be seen to
play a primary role not only in providing a basis for explaining discourse
structure, but also in defining discourse coherence, and providing a coherent
notion of the term ``discourse'' itself.  A main thesis of the theory is that
the structure of any discourse is a composite of three interacting
constituents: the structure of the actual sequence of utterances in the
discourse, a structure of intentions, and an attentional state. Each of these
constituents of discourse structure both affects and is affected by the
individual utterances in the discourse.  The separation of discourse
structure into these three components allows us to generalize and simplify a
number of previous results and is essential to explaining certain discourse
phenomena. In particular, I will show how the different components contribute
to the proper treatment of various kinds of interruptions, as well as to
explanations of the use of certain types of referring expressions and of
various expressions that function directly to affect discourse structure.
                                        --Barbara J. Grosz
                        ____________

                  ABSTRACT OF NEXT WEEK'S TINLUNCH
    Syntactic Features, Semantic Filtering, and Generative Power

There is a trade-off in linguistic description using grammars with a syntax
and a separate semantics, such as GPSG.  One can often either use a
syntactic feature or appeal to semantic filtering to achieve the same ends.
Current GPSG countenances no semantic filtering, i.e. does not overgenerate
strings in the syntax and then let the semantics throw some away as
`uninterpretable'.  In the Tinlunch I would like to discuss this position
in light of some work I did in my dissertation which looks like it requires
semantic filtering, and in light of a paper by Marsh & Partee which shows
that adding certain types of semantic filtering to a grammar greatly
increases the generative power.                  --Peter Sells

                         ____________


            CSLI WORKSHOP ON THE SEMANTICS OF PROGRAMS

Tuesday, December 4, 1984
Location: The Bach Dancing and Dynamite Society, Princeton CA
          (a suburb of Half-Moon Bay)

There are long-standing traditions for the study of natural language
semantics and CSLI projects have been extending and reinterpreting them.
There is a briefer, but substantial, tradition for the study of the
semantics of programming languages.  Over the past few months, there have
been a series of presentations and discussions about similarities and
differences between the semantic accounts of natural and computational
languages.  Theories of natural language semantics have raised a number of
issues.  The purpose of the workshop is to discuss how some of these
theories can give rise to better accounts of the relation between
programs/program executions and the world.  Participation in the workshop
is by invitation only.  If you are interested in being invited to the
workshop, contact Ole Lehrmann Madsen (Madsen at SU-CSLI). If you have any
questions regarding the workshop you may contact Terry Winograd (TW at
SU-SAIL) or Madsen.
                         ____________

                        PH.D. PROPOSAL

On Tuesday, December 4, from 3:15 p.m. to 5:05 p.m., in Bldg. 200-217, Kurt
Queller will talk about ``Active Exploration with syntagmatic routines in
the child's construction of grammar:  Some phonological perspectives.'' Based
on detailed longitudinal analysis of data from 3 one-year-olds, the proposed
dissertation will provide a typology of syntagmatic phonological routines
or ``word-recipes'' used by young children in building a repertoire of
pronounceable words.  Then, it will show how individual children exploit
particular combinations of routines in constructing a coherent phonological
system.  Extensive synchronic variability and changes over time will be
accounted for in terms of the child's systematic exploration of the options
implicit in the resulting system.

------------------------------

Date: Mon 26 Nov 84 11:15:02-PST
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - The KEE Knowledge Engineering System  (Stanford)

      [Forwarded from the SIGLUNCH distribution by Laws@SRI-AI.]

SPEAKER:     Richard Fikes, Director
             Knowledge Systems Research and Development
             IntelliCorp, Inc.

ABSTRACT:    The KEE System - An Integration of Knowledge-Based
             Systems Technology

DATE:        Friday, November 30, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

IntelliCorp has developed an integrated collection of  representation,
reasoning,  and  interface  facilities  for  building  knowledge-based
systems called  the  Knowledge  Engineering  Environment  (KEE).   The
system's components include (1) a frame-based representation  facility
incorporating features  of  UNITS,  LOOPS, and  KL-ONE  that  supports
taxonomic definition  of  object  types,  structured  descriptions  of
individual objects,  and  object-oriented  programming;  (2)  a  logic
language  for  asserting  and  deductively  retrieving  facts;  (3)  a
production rule language with  user-controllable backward and  forward
chainers that  supports  PROLOG-style  logic programming;  and  (4)  a
graphics work bench for  creating display-based user interfaces.   KEE
uses  interactive  graphics  to  facilitate  the  building,   editing,
browsing, and  testing of  knowledge  bases.  A  primary goal  of  the
overall  design  is  to  promote  rapid  prototyping  and  incremental
refinement  of  application  systems.    KEE  has  been   commercially
available since August 1983, and has been used by customers to build a
wide range  of application  systems.   In this  talk  I will  give  an
overview  of  the   KEE  system  with   particular  emphasis  on   its
representation and reasoning facilities, and discuss ways in which the
system provides significant leverage for its users.
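The frame-based style the abstract describes can be suggested with a toy
sketch (this is not IntelliCorp's actual API; the class and slot names here
are invented for illustration): object types form a taxonomy, and slot
values are inherited up it unless overridden locally.

```python
# Toy frame system: taxonomic object types with slot inheritance,
# in the spirit of UNITS/LOOPS/KL-ONE-style representation facilities.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look up a slot locally, then inherit up the taxonomy.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

animal = Frame("Animal", legs=4)
bird = Frame("Bird", parent=animal, legs=2, flies=True)   # overrides legs
tweety = Frame("Tweety", parent=bird)                     # an individual
assert tweety.get("legs") == 2 and tweety.get("flies") is True
```

Structured descriptions of individuals, object-oriented programming, and
rule chaining would all layer on top of a lookup discipline like this one.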



Paula

------------------------------

End of AIList Digest
********************

From:	CSVPI          30-NOV-1984 05:08  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a003826; 30 Nov 84 1:53 EST
Date: Thu 29 Nov 1984 21:46-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #164
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 30 Nov 84 05:04 EST


AIList Digest            Friday, 30 Nov 1984      Volume 2 : Issue 164

Today's Topics:
  Algorithms - Karmarkar Algorithm & Linear Programming,
  Seminars - Search Complexity & User Interfaces  (IBM-SJ) &
    A Semantical Definition of Probability  (CSLI Stanford) &
    Learning in Stochastic Networks  (CMU)
----------------------------------------------------------------------

Date: 27 Nov 1984 17:19:38-EST (Tuesday)
From: S.Miller@wisc-rsch.arpa
Subject: Karmarkar Algorithm

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

The Karmarkar algorithm was presented at STOC (Symposium on Theory of
Computing) on May 1, 1984 (STOC '84, p. 302),
"A New Polynomial-Time Algorithm for Linear Programming".
The STOC proceedings are available from the ACM if your
location doesn't have them.

------------------------------

Date: Mon 26 Nov 84 17:33:08-PST
From: Walter Murray <OR.MURRAY@SU-SIERRA.ARPA>
Subject: Linear Programming Algorithms.

         [Forwarded from the Stanford bboard by CKaun@AIDS-UNIX.]

Some recent bboard messages have referred to linear programming. The
algorithm by Karmarkar is almost identical with iterative reweighted
least squares (IRLS). This latter algorithm is used to solve approximation
problems other than in the l2 norm. It can be shown that the form of
LP assumed by Karmarkar is equivalent to an l infinity approximation
problem. If this problem is then solved by the IRLS algorithm the
estimates of the solution generated are identical to those of the
Karmarkar algorithm (assuming certain free choices in the definition
of the algorithms). Perhaps it should be added that the algorithm is
not held in high regard in approximation circles.  To solve
an l infinity problem it is usually transformed to an LP and solved using
the simplex method.
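As a concrete illustration of the IRLS idea Murray refers to, here is a
textbook sketch (not necessarily the exact variant he has in mind; the
function name, the damping factor, and the tiny example are this sketch's
own assumptions): each pass solves a weighted least-squares problem with
weights |r_i|^(p-2), and taking p large approximates the l infinity
(Chebyshev) problem.

```python
import numpy as np

def irls(A, b, p=8.0, iters=100):
    """Iteratively reweighted least squares for min ||A x - b||_p.

    A sketch only: each iteration solves a weighted least-squares
    problem with weights |r_i|^(p-2); the damped step (factor 1/(p-1))
    keeps the iteration stable for p > 2.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # start from the l2 fit
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** ((p - 2) / 2) + 1e-12   # sqrt of weights |r_i|^(p-2)
        x_ls = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
        x = x + (x_ls - x) / (p - 1)             # damped update
    return x

# Best constant approximation to (0, 1, 5): the Chebyshev (l infinity)
# answer is the midrange 2.5, and IRLS with large p lands close to it.
A = np.ones((3, 1))
b = np.array([0., 1., 5.])
x = irls(A, b)
```

For p between 1 and 3 the undamped iteration already converges; the damping
used here is one standard way to extend it to larger p.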

A message from Kaun (forwarded by Malachi without a heading)
described an algorithm for LP which Kaun claimed requires o(n^3) work. It
is easy to demonstrate the algorithm may fail to converge to the solution.
The following is a cross-section of a hole consisting of straight
sides. Water is poured into this hole from the point x.


                  x


                o                       o
                .                       .
                o                       o
                 .                     .
                  .                   .
                   o                 o
                      .         .
                           o

The water hits a facet. It continues to fall until it hits a second
facet which is a vertex. Unless the water is prepared to leave the first facet
hit it will not reach the bottom.

------------------------------

Date: 27 Nov 84 11:45:16 PST (Tue)
From: Carl Kaun <ckaun@aids-unix>
Subject: Linear Programming Algorithm

Murray is correct -- the algorithm as stated will not usually converge
to the solution.  One problem is that removing components from the
gradient does not automatically force it to zero after N steps, as I
asserted.  It looks to me like the gradient stepping idea can still be
used in a more complicated scheme, and that the computational time for
the algorithm will be o(M*N^2), where M is the number of constraints.
But I want to verify the details better than I did for my original
message before saying more.

I still wonder if finding such a solution to the continuous (as opposed to
the integer) linear programming problem has any significance.

                                                ckaun@aids-unix

------------------------------

Date: 27 Nov 84 22:31:30 PST (Tue)
From: Carl Kaun <ckaun@aids-unix>
Subject: Gradient Step Linear Programming (again)


Well, here we go again.  Let's see if this try stands up to scrutiny.
The claim is that the algorithm following (gradient step linear
programming) solves the linear programming problem in at most
o(M^2*N +M*N^3) operations.  I still don't know if that has any significance.
As before, the idea is to step, as best one can subject to the constraints,
along the gradient.  The terminating conditions are similar to the
algorithm given previously.

As before, the mathematical notation represents vectors by preceding them
with an underline, as "_x".  Subscripts are represented using a colon, _c:j
being the j-th vector _c. The inner product is represented by < * , * >.  A
functional form (i.e. f(  )) is used to represent things like sums. The rest
should be fairly obvious.

The statement of the linear programming problem is also as before, being to
maximize with respect to the N-dimensional vector _x the linear functional:
               <_c , _x >
 subject to the constraints:
               <_a:j , _x > >= b:j   for j = 1, 2, ..., M
M>= N, as otherwise the solution is unbounded.

Assume for the moment an initial feasible vector _x(0) in the interior (so
that there are initially no active constraints) of the polytope defined by
the constraints. _c:0 = _c.  All constraints are potentially active.

A.  From the current solution point _x(n), find the constraint limiting
motion in the direction _c:n, and the maximum feasible step size s>0  giving
the next solution point:  _x(n+1) = _x(n) + s*_c:n
    For j = 1, 2, ... M and j not a currently active constraint, compute
          D:j = <_x(n), _a:j> - b:j    ( >= 0 )
          s:j = - D:j / <_c:n , _a:j>
    s = min { s:j | s:j>0} , and the next active constraint has the index
j(n) providing the minimum.

B.  The next step is to compute a movement direction aligned with the
gradient (thus enabling improvement in the functional) that also satisfies
the active constraints.  The first active constraint was identified in the
previous step, thus:
          _c(0) = _c - _a:j(n) * <_c, _a:j(n)> / <_a:j(n), _a:j(n)>

C.  Next determine which of the constraints active in the previous cycle
are active in this step, and modify the movement direction accordingly.  A
previously active constraint a:j is active in this cycle if
          <_a:j, _c(i)> < 0.
That is, motion along the current direction _c(i) would violate the
constraint.  If the constraint is active, then the Gram-Schmidt
procedure is applied to _a:j to orthogonalize the vectors involved and
thereby determine the component to be removed from _c(i), yielding _c(i+1).
          _a(i) = _a:j - sum (n=0 to i-1) _a(n) * [ <_a:j, _a(n)> / <_a(n), _a(n)> ]
          _c(i+1) = _c(i) - _a(i) * <_c(i), _a(i)> / <_a(i), _a(i)>
When all of the previously active constraints have been determined to be
either active or inactive for the current cycle, the next step direction is
          _c:n = _c(i), for the latest i.

(It appears necessary, for each determination of a _c(i), to scan the
entire set of constraints which were active in the previous cycle (but have
not yet been determined to be active in the current cycle) before deciding
that none is active in the current cycle.  Practically, there will
be only one active constraint in most of the cycles, and the
trajectory of the algorithm passes through various of the facets of the
polytope most of the time.)

The stopping condition results when _c(i) = _0; that is, when the objective
gradient _c lies in the cone formed from the combination of negatively
scaled gradients of the constraints.  This is the Kuhn-Tucker condition
of optimality.  Equivalently, N (linearly independent) constraints are found
to be active.  I don't remember that the Kuhn-Tucker conditions are
sufficient, but in any event this is the optimal point because there
is no feasible motion direction which improves the objective.

Unlike the previous algorithm, in this the identification of new constraints
can result in movement away from a previously active constraint.  When this
happens, the previously active constraint can be totally removed from further
consideration, due to the convexity of the problem (this assertion seems
obvious, but has not been PROVED by me).  The algorithm
encounters a new active constraint each cycle, and therefore converges
in at most M cycles, this being the maximum number of constraints that
can be newly encountered.  In practice again, the trajectory of the
algorithm will generally be such that convergence will occur in many fewer
cycles than M.

Steps A-C are repeated until the stopping condition occurs.
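Steps A-C above can be sketched in code.  The following is one illustrative
reading of the algorithm (the function name and the small box-constrained
example are this sketch's own assumptions, and it is not offered as a robust
LP solver): step along the gradient until a constraint blocks, orthogonalize
the new constraint normal against the active set, and project the movement
direction off the active normals.

```python
import numpy as np

def gradient_step_lp(c, A, b, x0, tol=1e-9):
    """Maximize <c, x> subject to A x >= b from an interior point x0.

    Sketch of the gradient-step scheme described above: not robust,
    and assumes a strictly feasible start with no active constraints.
    """
    x = np.asarray(x0, dtype=float)
    d = np.asarray(c, dtype=float)   # current movement direction _c:n
    M = A.shape[0]
    active = []                      # orthogonalized active-constraint normals
    for _ in range(M):               # at most M cycles, as argued above
        if np.linalg.norm(d) < tol:
            break                    # stopping (Kuhn-Tucker-style) condition
        # Step A: largest feasible step s along d, and the blocking constraint.
        s, j_hit = None, None
        for j in range(M):
            denom = A[j] @ d
            if denom >= -tol:        # moving along d cannot violate j
                continue
            s_j = -(A[j] @ x - b[j]) / denom
            if s is None or s_j < s:
                s, j_hit = s_j, j
        if s is None:
            raise ValueError("objective unbounded along current direction")
        x = x + s * d
        # Steps B/C: Gram-Schmidt the new normal against the active set,
        # then remove its component from the movement direction.
        a = A[j_hit].astype(float)
        for q in active:
            a = a - q * (a @ q) / (q @ q)
        if np.linalg.norm(a) > tol:
            active.append(a)
            d = d - a * (d @ a) / (a @ a)
    return x

# Maximize x + y over the box 0 <= x <= 1, 0 <= y <= 2, written as A x >= b.
A = np.array([[-1., 0.], [0., -1.], [1., 0.], [0., 1.]])
b = np.array([-1., -2., 0., 0.])
print(gradient_step_lp([1., 1.], A, b, [0.1, 0.1]))   # -> [1. 2.]
```

On this toy problem the trajectory hits the facet x = 1, slides along it to
the vertex (1, 2), and the projected direction then vanishes, triggering the
stopping condition.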

As indicated above, the algorithm converges in at most M cycles.  For each
cycle, step A requires o(N) multiplications and additions to compute the
inner product, etc. for each of o(M) constraints, for a total of o(MN)
operations.  Step B requires o(N) operations, which scarcely affects the
overall timing.  Step C can potentially result in the identification of N-1
active constraints.  Each such identification requires the removal of o(N)
orthogonal components, and each such removal entails o(N) operations, for
an overall count of o(N^3) operations to remove the effects of previously
identified active constraints.  Also, o(N) constraints may have to be
scanned to determine if they are active for each such identification,
each such determination requiring o(N) operations; resulting again in
a total of o(N^3) operations for step C.  Performing steps A and C therefore
requires o(M^2*N + M*N^3) operations.

An initial feasible point can be determined starting from an arbitrary point
(say the origin), identifying the unsatisfied constraints, and moving in
directions that satisfy them.  It may be more direct to simply start with a
"superoptimal" point, say K*_c for suitably large K, and iterate using
essentially the previously described algorithm along the negative constrained
gradient direction to feasibility.  The resulting feasible point
will also be optimal for the original problem.

                                                Carl F. Kaun

                                                ckaun@aids-UNIX
                                                415/941-3912

------------------------------

Date: Wed, 28 Nov 84 17:12:55 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminars - Search Complexity & User Interfaces  (IBM-SJ)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

                             CALENDAR
                       (DECEMBER 3 - 7, 1984)

  Wed., Dec. 5 Computer Science Seminar
  10:00 A.M.  HOW HARD IS NP-HARD?
  2C-012      This talk examines the average complexity of
            depth-first search for two different search models.
            The first model has no cutoff at unpromising internal
            nodes, but does terminate at a leaf node when the
            leaf node represents a successful search outcome.
            This model leads to an average complexity that grows
            anywhere from linearly to exponentially in the depth
            of the tree depending on the probability of choosing
            the best branch to search first at each internal node
            of the tree.  Good decisions lead to linear
            complexity and bad decisions lead to exponential
            complexity.  The second model examines tree searching
            with internal cutoff when unpromising paths are
            discovered.  In this model, the search terminates
            successfully when it reaches the first leaf.  The
            model is representative of branch-and-bound algorithms
            that guarantee that the first leaf reached is a
            successful leaf.  Roth's D-algorithm for generating
            test vectors for logic circuits fits this model, and
            White's efficient algorithm for solving the Traveling
            Salesman problem also fits except for the
            distribution of cutoff probabilities.  Our model shows
            that the number of nodes visited during a depth-first
            search grows at most linearly on the average,
            regardless of cutoff probability.  If cutoff
            probability is very high, the search fails with a
            very high probability, and visits an average number
            of nodes that grows as O(1) as the tree depth
            increases.  If cutoff probability is very low, then
            the algorithm finds a successful leaf after visiting
            only O(N) nodes on the average where N is the depth
            of tree.  Many NP-complete problems can be solved by
            depth-first searches.  If such problems can be solved
            by algorithms that order the depth-first search to
            terminate at the first leaf, then this work and the
            work by Smith suggest that the average complexity
            might grow only polynomially in the tree depth,
            rather than exponentially as the worst-case analysis
            suggests.

            H. S. Stone, IBM Yorktown Research
            Host:  B. D. Rathi

  Thurs., Dec. 6 Computer Science Seminar
  10:00 A.M.  APPLICATIONS OF COGNITIVE COMPLEXITY THEORY
  2C-012      TO THE DESIGN OF USER INTERFACES
            The cognitive complexity project has two major
            objectives.  The first is to gain a theoretical
            understanding of the knowledge and thought processes
            that underlie successful use of computer-based
            systems (e.g., text editors).  The second goal is to
            develop a design technology that minimizes the
            cognitive complexity of such systems as seen by the
            user.  Cognitive complexity is defined as the amount,
            content, and structure of the knowledge required to
            operate a system.  In this particular work, the
            knowledge is described as a production system.  The
            computer-based system is described as a generalized
            transition network.  Quantitative predictions,
            derived from the production system, are shown to
            account for various aspects of user performance
            (e.g., training time).  The talk will include a brief
            presentation of the design methodology based on the
            production system formalism.

            Prof. D. E. Kieras, University of Michigan, Ann Arbor
            Prof. P. G. Polson, University of Colorado, Boulder
            Host:  J. L. Bennett

------------------------------

Date: Wed 28 Nov 84 17:24:47-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - A Semantical Definition of Probability  (CSLI
         Stanford)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


            SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS

Speaker: Prof. Rolando Chuaqui, Catholic University of Chile and IMSSS
Title:   A Semantical Definition of Probability

Place:   Room 381-T, 1st floor Math. Corner
Time:    Monday, December 3, 4:15-5:30 p.m.

ABSTRACT:  The analysis proposed in this lecture is an attempt to formalize
both chance and degree of support.  Chance is considered as a dispositional
property of the objects plus the experimental conditions (i.e. what is
called the chance set-up).  Degree of support measures the support that the
evidence we have (i.e. what we accept as true) gives to propositions.
Chance, in this model, is determined by the set K of possible outcomes (or
results) of the chance set-up.  Each outcome is represented by a relational
structure of a certain kind.  This set of structures determines the algebra
of events, an algebra of subsets of K, and the probability measure through
invariance under a group of symmetries.  The propositions are represented
by the sentences of a formal language, and the probability of a sentence,
phi in K, P[K](phi), is the measure of the set of models of phi that are
in K.   P[K](phi) represents the degree of support of phi given K.  This
definition of probability can be applied to clarify the different methods
of statistical inference and decision theory.
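
In symbols, the definition sketched above might be rendered (my notation, not necessarily the speaker's) as:

```latex
% K:   the set of outcome structures of the chance set-up
% mu:  the measure on the algebra of subsets of K, fixed by
%      invariance under the group of symmetries
% phi: a sentence of the formal language
\[
  P_{K}(\varphi) \;=\; \mu\bigl(\{\, M \in K : M \models \varphi \,\}\bigr)
\]
```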

------------------------------

Date: 27 November 1984 1607-EST
From: David Ackley@CMU-CS-A
Subject: Seminar - Learning in Stochastic Networks  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

     "Learning evaluation functions in stochastic parallel networks"
                           Thesis Proposal
             Tuesday, December 4, 1984, at 3:30pm in 5409 WeH.

Although effective techniques exist for adjusting linear coefficients of
features to produce an improved heuristic evaluation of a game position,
the creation of useful features remains poorly understood.  Recent work
on parallel learning with the Boltzmann Machine suggests that the
creation of useful new features and the tuning of coefficients of
existing features can be integrated into a single learning process, but
the perceptual learning paradigm that underlies the Boltzmann Machine
formalism is substantially different from the reinforcement learning
paradigm that underlies most game-learning research.  The thesis work
will involve the development of a reinforcement-based parallel learning
algorithm that operates on a computational architecture similar to the
Boltzmann Machine, and drives the creation and refinement of an
evaluation function given only win/lose/draw reinforcement information
while playing a small game such as tic-tac-toe.  The thesis work will
test several novel ideas, and will have implications for a number of
issues in machine learning and knowledge representation.
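
For readers unfamiliar with the formalism: a stochastic binary unit of the Boltzmann-Machine type turns on with probability given by a sigmoid of its energy gap.  The sketch below is a generic illustration of such a unit (my own, with invented names), not the proposal's algorithm:

```python
import math
import random

def p_on(energy_gap, temperature):
    """Probability that a stochastic binary unit turns on."""
    return 1.0 / (1.0 + math.exp(-energy_gap / temperature))

def update(states, weights, i, temperature, rng=random):
    """Stochastically resample unit i given the other units' states."""
    gap = sum(weights[i][j] * states[j]
              for j in range(len(states)) if j != i)
    states[i] = 1 if rng.random() < p_on(gap, temperature) else 0
    return states
```

At high temperature such a unit behaves almost randomly; as the temperature is lowered, the network settles toward low-energy states.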

------------------------------

End of AIList Digest
********************

From:	CSVPI           2-DEC-1984 04:36  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a014067; 2 Dec 84 1:51 EST
Date: Fri 30 Nov 1984 21:55-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #165
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 2 Dec 84 04:33 EST


AIList Digest            Saturday, 1 Dec 1984     Volume 2 : Issue 165

Today's Topics:
  Planning - Bibliography Wanted,
  Cognition - Amnesia Before Age 5?,
  Administrivia - Number of Internet Users,
  News - AI in the News,
  Humor - Software Productivity,
  Seminars - User Interface Management System  (CMU) &
    Calculus of Partially-Ordered Type Structures  (MIT)
----------------------------------------------------------------------

Date: 30 Nov 84 15:07:23 PST (Fri)
From: Dan Shapiro <dan@aids-unix>
Subject: Planning Bibliography wanted

Does anyone know of an annotated bibliography in the area of AI
planning?  My specific context is an autonomous land vehicle
project which involves generating a plan for traversing long
distances in essentially unrestricted terrain.  Issues in route
planning, real time planning, planning under uncertainty, planning
with multiple goals, goal conflict resolution strategies, etc.,
are all relevant.

I would also be interested in a reference list on the topic of
spatial reasoning, in particular the representation and
manipulation of symbolic features in maps or processed images.

I am going to be compiling/extending annotated bibliographies in
these areas; once done, I'd be glad to distribute them to anyone
who is interested.

                        Dan Shapiro
                        (dan@aids-unix)

------------------------------

Date: 29 Nov 84 15:22:05 EST (Thursday)
From: Chris Heiny <Heiny.henr@XEROX.ARPA>
Subject: Amnesia before age 5????

"..no one can remember events before the age of five."

What's going on here, anyway???   Does this mean that no one remembers
(during any part of their life) any events that occurred prior to age 5;
or does it mean that prior to age 5, one can't remember events occurring
during ages 0..4.99.  I personally can disprove the former: I remember
events that occurred when I was 3 & 4.  An acquaintance disproves both:
at age 3 she remembered an event several weeks after it occurred, and at
18 still remembers both the event and the remembering of the event (is
this a meta-memory?).

I think someone's confused....I hope it's not me.

                                        Chris

------------------------------

Date: Thu 29 Nov 84 14:10:35-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Amnesia before age 5????

Granted, I was rather sweeping in my generalization.  Kids certainly
do remember events, but after growing up very few can remember more
than one or two vague incidents from the early years.  Even those
few memories are often the ones strengthened by parents' retelling
of the events.  At any rate, >>I<< have only two or three conscious
memories from pre-kindergarten days, and not a great many more from all
of grade school.

                                        -- Ken Laws

------------------------------

Date: 26 Nov 84 15:45 EST
From: WILLUT%EDUCOM.BITNET@Berkeley
Subject: Estimate on number of Internet users

Some belated facts related to estimates of Internet users:

BITNET currently has 328 machines at 117 sites (almost exclusively
universities), with 52 sites pending.  A stats program run recently at
a non-peak time determined that 150 nodes were up and 6,000 users logged in.

Also, MAILNET includes 24 universities (most single machines, but some
multiple-node sites, such as Carnegie-Mellon) that exchange mail with the
MAILNET hub (the MIT-MULTICS machine) via dial-up and/or Telenet connections.

Using the proposed estimate of 100-200 users per university machine,
that's 35,000-70,000 users.
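
The back-of-the-envelope arithmetic behind that range:

```python
# BITNET's 328 machines plus MAILNET's 24, at the proposed
# 100-200 users per university machine.
machines = 328 + 24
low, high = machines * 100, machines * 200
print(machines, low, high)   # 352 machines, i.e. roughly 35,000-70,000 users
```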

Candy Willut
EDUCOM Networking Activities

------------------------------

Date: Thu, 15 Nov 84 05:12:53 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: Recent AI News

[The following message from Laurence Leff at SMU was delayed somewhat
by mailer troubles.  He has offered to provide AIList readers with
references to AI articles in the non-AI press.  Such reviews and
alerts are certainly welcome.  -- KIL]


...
I currently provide this service to the AI group in the department and
it might be useful to others.  The journals I scan include

  Electronic Week
  Electronic News
  IEEE PAMI
  IEEE System Man and Cybernetics
  IEEE Computer
  Communications ACM
  Datamation
  Infoworld
  IEEE Spectrum
  IEEE Potentials

[Each notice] includes a citation (so people can find it) and usually
a sentence or two about contents.  Very short articles (<= 1 paragraph)
are usually typed in verbatim.



As if we didn't know department:

From Wall Street Journal

COMPUTERS THAT THINK like people create demand for experts in short supply.

Interest in "Artificial Intelligence" systems is booming, say employers
and recruiters among firms in financial services, computer hardware and
software design, defense and communications.  The systems principally
duplicate the thought processes of experts for trouble-shooting and cash
management.  Demand for systems is "explosive" says Halbrecht Associates,
Stamford, Conn.

But Halbrecht's recruiter Daryl Furno says "there just aren't enough
people to go around" to design the systems.  Most prospects have about five
job offers when they finish a project.  Christian & Timbers, a Cleveland
recruiting firm, says qualified experts demand 10%-20% premiums over most
computer designers.

DM Data, a Scottsdale, Ariz., consulting firm, estimates that there are
nearly 5,000 jobs in the industry now, but there may be 50,000 jobs
by 1990.


CACM 1984 - Vol 27 No. 10 page 1044:
Combination of PERT with [heuristic] search.


Byte Vol 9 No 11 October 1984 page 39:
Announcements of Tektronix AI system and TIMM expert system.


Byte Vol 9 No 11 October 1984 page 207:
Ad for IBM PC Common Lisp.


Electronic News Monday October 1, 1984 pp. 37:
Japanese-English translation-software article.


Copied from Computer Industry Update September 1984
IBM Company Announcements:

Announced a version of the Lisp programming language for the VM operating
system.  Lisp/VM is an integrated interactive environment that provides a
collection of artificial intelligence programming tools.  A structure
editor displays the structure of all objects including programs, data
and results.  A variety of debugging tools are included.  The price is
$6500.

The firm also unveiled five other internal research and development
projects in artificial intelligence: the YES/MVS, an expert system
which runs on mainframe computers that use the MVS operating system;
PRISM, a system shell written in PASCAL for developers who wish to
insert their own rules and inferences for expert systems; Scratchpad
II which incorporates a system and language to provide facilities for
scientists to manipulate algebra directly on the computer screen; PSC
Prolog, a version of the Prolog programming language that operates on
the 370 and interfaces with the LISP/VM and SQL/VM relational DBMS and
the CMS Command Executive language REXX; and HANDY, a user interface
to AI systems and a PC-based program that includes elements of
windowing, color animation, graphics, speech synthesis and video
programs.


Electronics Week November 12, 1984:
Describes efforts of Sperry ($20,000,000 worth) to become a leader in AI.  pp. 34


Electronics Week October 22, 1984:
Work by Kurzweil on speech recognition techniques.
(Kurzweil was the developer of the text recognizer used to make a
reader for the blind.)  pp. 83


Infoworld November 5, 1984:
Review of "Into the Height [Heart? --KIL] of the Mind"
The review is oriented towards those not knowledgeable in AI.


IEEE Transactions on System Man and Cybernetics July/August 1984
    Volume SMC-14 Number 4:
Linguistic Representation of Default Values in Frames
  R. R. Yager pp 630
Approximate Reasoning as a Basis for Rule-Based Expert Systems
  R. R. Yager pp 636


Electronic News, Monday November 12, 1984:
Computer Thought ships ADA/ Interpreter Debugger on Symbolics 3600
  Machine pp 43


Electronic News, October 29, 1984:
Article on Marketing AI systems pp 34

------------------------------

Date: Thu 29 Nov 84 12:40:33-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Software Productivity

The November issue of IEEE Computer contains an Open Channel note from
David Feinberg about the folly of programmer productivity metrics that
reward only lines of code and not lines of documentation.  This
suggests many lines of thought.

A lines-of-code metric penalizes those who write APL one-liners -- a
good thing, no?  We could increase the readability/maintainability of
programs if we passed them through a filter that would expand complex
expressions into simpler steps.  We could then increase our
productivity even further by converting these simple steps into more
complex operations.  The possibilities for bootstrapping are obvious,
although important research questions must be solved to eliminate
cycles in repeated transformations.  Fortunately, we only need to find
one example of unlimited software growth (coupled with our supercomputer
technology) in order to guarantee our world pre-eminence in software
productivity.

The same concept can be extended to hardware development to guarantee
our lead in computer complexity.  Progress in this direction has so far
been limited to computer support systems (e.g., F-15 aircraft), but
wafer-scale integration offers hope for further optimization.

This looks like a fruitful area for artificial intelligence research.
(Progress might be measured by published lines of proof or by reams
of suggestive hypotheses.)  I suggest that DARPA institute a crash
project to develop a prototype optimizing preprocessor able to convert

    x = y = 0;

into

    register t;

    t = 0;
    y = t;
    x = t;
    if (x != y)
      abend("Compiler and/or hardware error.");


Further breakthroughs will come quickly.  For instance, we might
substitute

     Ln (Lim (1+(1/z))^z) + sin^2(x) + cos^2(x)
        z->INF
                   INF
                 - SUM (cosh(y) sqrt(1-tanh^2(y))/(2^N))
                   N=0

for the constant 0 in the above program, providing that we can find
numerical methods of evaluating the limit and infinite summation
with adequate accuracy.  All that we need for rapid progress is a
sufficiently complex bureaucracy to support research and manage
distribution of the results.

                                        -- Ken Laws

------------------------------

Date: 29 Nov 84  1404 PST
From: Frank Yellin <FY@SU-AI.ARPA>
Subject: from the New Yorker

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


The following is from the Palm Springs Desert Sun (and reprinted word
for word in the New Yorker).

        "Controlling a plant," says Theodore J.  Williams, a researcher
    at Purdue University "takes a wider attention span than any one person
    could possibly have."  But with a distributed computer system, Mr.
    Williams added, "You can increase profitability, increase
    productivity, reduce raw materials and reduce emissions, because the
    computer system is flexible, process, rather than an entire plant.
    The system is flexible, allowess, rather than an entire plant.  The
    system is flexible, allowing anather than an entire plant.  The system
    is flexible, allowing an operator to rearrange a manufacturing process
    from his seat at the console.  "If you change your mind," said Robert
    E. Otto, a technical consultant at the Monsanto Co., "you don't have
    to rewire, you can just reprogram."

        And because the systhe central computer.  Then if something goes
    wrong ing back to the central computer.  Then if something goes wrong
    ing back to the central computer.  Then if something goes wrong wit
    back to the central computer.  Then if something goes wrong with the
    main cocentral computer.  Then if something goes wrong with the main
    control l computer.  Then if something goes wrong with the main
    control room your plant is O.K."

------------------------------

Date: 28 November 1984 1433-EST
From: Staci Quackenbush@CMU-CS-A
Subject: Seminar - User Interface Management System  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

        Name:   Phil Hayes
        Date:   December 3, 1984
        Time:   3:30 - 4:30 p.m.
        Place:  WeH 5409

        Title:  "Design Alternatives for User Interface Management
                 Systems Based on Experience with the COUSIN System"


 User   interface  management  systems  (UIMSs)  provide  user  interfaces  to
 application  systems  based  on  an  abstract  definition  of  the  interface
 required.    This approach can provide higher-quality interfaces with a lower
 construction cost.  This talk examines a UIMS called  COUSIN  which  provides
 graphical  interfaces  to  a variety of application systems running on a Perq
 under the Accent operating system.  The presentation will include a videotape
 of a COUSIN interface.

 The talk will also take a more general look at the design  space  for  UIMSs.
 Specifically, we will consider three design choices.  The choices concern the
 sharing  of  control  between  the  UIMS  and  the  applications  it provides
 interfaces to, the level of abstraction in the definition of the  information
 exchanged  between  user and application, and the level of abstraction in the
 sequencing of information exchange.  For each choice, we argue for a specific
 alternative.  COUSIN's design corresponds to the alternatives we  argued  for
 in two out of three cases, and partially satisfies the third.
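
To make the "abstract definition of the interface" idea concrete, here is a deliberately tiny invented sketch (COUSIN's actual specification language is not shown in the abstract): the application declares what information it needs, and the UIMS decides how to present it.

```python
# The application supplies an abstract interface spec; this toy
# "UIMS" chooses a (trivial) textual presentation for each field.
def render(spec):
    prompts = []
    for field, desc in spec.items():
        if desc["type"] == "choice":
            prompts.append(f"{field} ({'/'.join(desc['values'])}): ")
        else:
            prompts.append(f"{field}: ")
    return prompts

interface_spec = {
    "command": {"type": "choice", "values": ["compile", "run", "quit"]},
    "file":    {"type": "string"},
}
```

A graphical UIMS like COUSIN would map the same spec to menus and dialog boxes rather than prompt strings; the application is unchanged either way, which is the source of the lower construction cost claimed above.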

------------------------------

Date: Mon 26 Nov 84 16:37:16-EST
From: Susan Hardy <SH%MIT-XX@MIT-XX.ARPA>
Subject: Seminar - Calculus of Partially-Ordered Type Structures (MIT)

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

        A LATTICE-THEORETIC APPROACH TO COMPUTATION
BASED ON A CALCULUS OF PARTIALLY-ORDERED TYPE STRUCTURES

                     Hassan Ait-Kaci
  Microelectronics and Computer Technology Corporation
                      Austin, Texas


             DATE:    Friday, November 30, 1984
             TIME:    2:00 p.m. - Talk
             PLACE:   NE43-512A

This talk will present a syntactic calculus of partially ordered
structures and its application to computation.  A syntax of record-
like terms and a "type subsumption" ordering are defined and shown
to form a lattice structure.  A simple "type-as-set"
interpretation of these term structures extends this lattice to
a distributive one, and in the case of finitary terms, to a
complete Brouwerian lattice.  As a result, a method for solving
systems of @i(type equations) by iterated substitution of type
symbols is proposed which defines an operational semantics
for KBL -- a Knowledge Base Language -- so-named to reflect
the original aim of this research; to wit, attempting a proper
formalization of the notion of "semantic network".
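
As a hedged illustration of the subsumption ordering (an invented toy, far simpler than Ait-Kaci's calculus), record-like terms can be modeled as nested dicts, with one term subsuming another when all of its feature constraints are met:

```python
def subsumes(general, specific):
    """True when `specific` is an instance of (is subsumed by) `general`."""
    if general == "top":               # top subsumes every term
        return True
    if isinstance(general, dict):
        return (isinstance(specific, dict) and
                all(f in specific and subsumes(v, specific[f])
                    for f, v in general.items()))
    return general == specific         # atomic types: exact match only
```

In the full calculus, pairs of terms also have greatest lower bounds (unification of record-like terms), and for finitary terms the ordering extends to the complete Brouwerian lattice mentioned above.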

HOST:  Professor Rishiyur Nikhil

------------------------------

End of AIList Digest
********************

From:	COMSAT          7-DEC-1984 00:05  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a006690; 6 Dec 84 13:27 EST
Date: Fri 30 Nov 1984 22:16-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #166
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 6 Dec 84 23:56 EST


AIList Digest            Saturday, 1 Dec 1984     Volume 2 : Issue 166

Today's Topics:
  Administrivia - Remailing,
  Philosophy - Dialectics and Piaget,
  Logic Programming - Book Review,
  PhD Oral - Nonclausal Logic Programming,
  Seminar - Learning Theory and Natural Language  (MIT),
  Conference - Logics of Programs
----------------------------------------------------------------------

Date: Thu 6 Dec 84 09:20:51-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Lost Issue

It seems likely now that very few, if any, sites received this issue
on the first mailing.  I am therefore sending it out to all subscribers.
It has been gratifying to learn how many people just can't do without
an AIList issue, but you can all stop sending me messages about #166 now.

                                        -- Ken Laws

------------------------------

Date: 30 Nov 84 14:17:42 PST (Friday)
From: Rosenberg.PA@XEROX.ARPA
Subject: Dialectics and Piaget

Your summary of dialectics is quite nice, but your portrayal of Piaget
has a major error: Piaget was not a nativist, so it's unfair to lump him
together with, say, Kant.  (After all, Chomsky denounces him as an
empiricist!)  In fact, his constructivist genetic epistemology is
similar in many ways to the dialectical position you outlined (cf. his
books on negation and contradiction).

Jarrett Rosenberg

------------------------------

Date: 30 Nov 84 0059 EST (Friday)
From: Alex.Rudnicky@CMU-CS-A.ARPA
Subject: Piaget & dialectic

I would take issue with Bill Frawley's contention that Piaget's theory
is idealist in flavour.   If anything, it is essentially dialectical
in nature.  Piaget's work is often popularized in terms of his ``stages''
of intellectual development and their apparently immutable order.
His major contribution, however, is probably his elaboration of the
mechanisms by which this development could take place.  Specifically,
I would point to Piaget's concept of ``equilibration'', which can
(loosely) be described as the constant interaction between internal
cognitive structures and external events that results in modification
of internal structures.  Equilibrium is never quite reached, a state
that persists throughout an individual's life.  On the matter of
Piaget vs dialectics, I can offer the following quote:

"... in the domain of the sciences themselves structuralism has always
been linked with a constructivism from which the epithet "dialectical"
can hardly be withheld---the emphasis upon historical development,
opposition between contraries, and ``Aufhebungen'' (``de'passements'')
is surely just as characteristic of constructivism as of dialectic,
and that the idea of wholeness figures centrally in structuralist as
in dialectical modes of thought is obvious."  (Piaget, Structuralism,
1970, p.121).

------------------------------

Date: 30 Nov 84 09:46 PST
From: Newman.pasa@XEROX.ARPA
Subject: Re: Dialectics,   V2 #163

In reference to the recent posting on Dialectics, and in spite of the
fact that some of this has very little to do with AI.

Question: How does dialectics interact with the Heisenberg uncertainty
principle and other facets of quantum theory? It seems to me that the
idea of an interaction between the object and the observer which results
in some knowledge on the part of the observer might be an interesting
topic to discuss in terms of dialectics.

Comment: More in line with the basic topic of the digest, I think it is
obvious that there is some interaction between the observer and the
observed since psychology has shown that (to put it very simply) we see
and hear what we want to, and we don't notice what we wish to avoid.
However, this evidence and your arguments do not conclusively show that
Positivism is entirely wrong. Because I think that there are other
reasons to dismiss Behaviorism and I am not sure how Dialectics deals
with it, I will not deal with Behaviorism in this comment.

The best reason that I can think of on short notice for not dismissing
Positivism is that we must suppose that objects have some existence and
characteristics independent of the observer. I think that we would all
agree that there will be shock waves travelling through the air when the
tree falls in the forest, though we might disagree on whether this
constituted a sound (depending on the possible presence of an observer).
I am not sure what your position is on this issue, but my inclination is
that there is a position combining elements  of Dialecticism and
Positivism which is more acceptable than either of its parents.

Note that this is just an opinion since I don't have the time or
resources to do justice to the topic at the moment.

>>Dave

------------------------------

Date: 30 Nov 1984 04:55-EST
From: ISAACSON@USC-ISI.ARPA
Subject: Dialectics: Perils, and Promises for AI

Bill  Frawley  has  written a thought-provoking introduction  for  a
discussion  on  dialectics  [AIList v2  #  163,  11/29/84].   As  he
mentioned,  he applies dialectics in his work on Soviet theories  of
language  and  cognition, and studies the use of Soviet theories  to
explain language learning and text processing.   My own work relates
to  a  new  mode of information processing which is  dialectical  in
nature.   One of its applications is in Dialectical Image Processing
(DIP), reported in AIList v2 #153, 11/12/84.  It goes without saying
that   I  think  that  things  dialectical  are  crucial  to  things
intelligent.   But  before I proceed to elaborate    this  point  of
view,  I wish to caution the uninitiated,  and point out some of the
many perils of dialectics.

                      The Perils of Dialectics

"Dialectics" is basically an elusive, vague, and often controversial
and  misunderstood  term.   Its  origin is in antiquity  (Plato  and
Aristotle).   It attained prominence  and immense influence  through
the  German  idealism  of  the  early  nineteenth  century  (Fichte,
Schelling,   and, most notably Hegel) and has been transformed later
into  "dialectical materialism" by no other than Karl  Marx.   Major
American  philosophers  (notably C.  S.  Peirce) have  been  greatly
influenced by Hegelianism, and  significant Hegelian influences have
reached  as far as Japan (Nishida).   All in all,  huge segments  of
humanity  today  live under political philosophies,  or  ideologies,
that are dialectical at their roots in one way or another.   Through
it all, though, dialectics has remained elusive, unformalizable, and
-- in the view of many,  especially in the West -- unscientific  and
hence irrelevant to Western science.  A weird mixture of a method, a
(non-standard)  logic,  a  philosophy,  and  sometimes  a  political
ideology,  it  usually  baffles  the  Western  mind  and  hopelessly
frustrates  attempts to harness it in the interest of scientific  or
technological  objectives.   In  fact,  if  you wish to  dispose  of
dialectics  altogether,  you  are urged to read a  most  devastating
critique  by Karl Popper ("What is dialectics?" - Chap.  14) in  his
*Conjectures and Refutations* book.   Written many years  ago,  when
Marxist ideology seemed even more menacing than it is today,  Popper
shows  very little patience with "dialecticians" and portrays   them
as  a bunch of misguided cynics,  intellectual dwarfs,  and  pseudo-
scientific misfits.   And,  I should add, his points are not without
merit in many instances, and should not be ignored.

In  addition,   beyond philosophical and scholarly controversy   and
confusion,  there  always  looms  the  ideological/political  stigma
which  is usually attached to dialectics.   For it is the case  that
"dialectical materialism" has become the official dogma of  Marxism-
Leninism.   Much of Soviet science is constrained by their political
ideology,  and,  almost Pavlovian-style,  researchers are sometimes
rewarded for exhibiting "dialectical thinking" in their work.   Yet,
few Soviet scientific discoveries are known,  or recognized,  in the
West that owe their existence to dialectical foundations.   In other
words,  even  a  totalitarian society that  promotes,  and  rewards,
dialectical  thinking among its intellectuals has failed to  produce
significant scientific or technological results which are  genuinely
dialectical.  So, the questions should be asked:  What's really good
about  that dialectical stuff?   What's the hidden promise,  if any?
Why drag it into AI, our good old American AI?

                 The Promises of Dialectics for AI

The  answers are not easy to state,  and surely are incomplete here.
Bill Frawley gave his own sketchy rationale for adopting  dialectics
for certain language learning theories.   I am generally in sympathy
with  his  reaching  out for dialectics,  but my reasons  for  using
dialectics in AI are  more basic and,  admittedly,  almost  bizarre.
Having  an engineering background,  I never dreamt of using anything
as  remote  as dialectics for anything as   technically  mundane  as
image  processing.   It  so happened that,  for something like  five
years  (in the mid 60's) certain simple types of operations  yielded
imagery  that was "interesting" but unexpected and not  particularly
meaningful  or  interpretable.   Only  after  the  fact,  and  after
outsiders  had  been consulted,  did it become clearer  (and  later
obvious!)  that  what that type of image processing  was  doing  was
Hegelian  dialectics,  pure and simple.   All in all,  that exercise
took  some  twenty  years.   In other  words,  we've  learned  about
dialectics from the machine,  rather than have had any  premeditated
intention to program the machine to do dialectics!  Put another way,
the  machine  had been doing dialectics for us for some five  years,
well before we ever heard the term for the first time.  Well, twenty
years is certainly a long time,  and serious study of dialectics and
its ramifications has led, little-by-little, to the realization that
its  application in the implementation of certain intelligent  tasks
is  potentially  very  powerful.   The  reality  of  an  implemented
"dialectical  machine" then took hold and has opened  up  tremendous
possibilities.

To  put  it  all  in very simple terms:  we  on  this  project  don't
particularly  care  for Hegelian philosophy,  nor do  we care  about
Marxist  ideology.   Here  is  a machine that,  of its  own  accord,
behaves  in  a  classical dialectical  mode.   While  doing  so,  it
processes  images in an unusual (non-programmed) way that is  useful
in  machine-vision.   And  there are clear  indications  that  other
applications in other machine-intelligence domains are feasible, and
we hope to hear from others about those in this forum.   Anyway,  we
think  that  the promise of dialectics for AI clearly outweighs  its
traditional perils,   and recommend that people consider the  issues
and ramifications involved.

-- J. D. Isaacson

------------------------------

Date: Wed, 21 Nov 84 13:03:28 EST
From: Anonymous
Subject: Foundations of Logic Programming

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]


                   Foundations of Logic Programming

                          J.W. Lloyd

                  Springer-Verlag, ISBN 3-540-13299-6


This is the first book to give an account of the mathematical
foundations of Logic Programming.  Its purpose is to collect,
in a unified and comprehensive manner, the basic theoretical
results of Logic Programming, which have previously only been
available in widely scattered research papers.

The book is intended to be self-contained, the only prerequisites
being some familiarity with Prolog and knowledge of some basic
undergraduate mathematics.

As well as presenting the technical results, the book also
contains many illustrative examples and a list of problems
at the end of each chapter.  Many of the examples and problems
are part of the folklore of Logic Programming and are not easily
obtainable elsewhere.

                             CONTENTS

Chapter 1. DECLARATIVE SEMANTICS
           section 1.  Introduction
           section 2.  Logic programs
           section 3.  Models of logic programs
           section 4.  Answer substitutions
           section 5.  Fixpoints
           section 6.  Least Herbrand model
                   Problems for chapter 1

Chapter 2. PROCEDURAL SEMANTICS
           section 7.  Soundness of SLD-resolution
           section 8.  Completeness of SLD-resolution
           section 9.  Independence of the computation rule
           section 10. SLD-refutation procedures
           section 11. Cuts
                   Problems for chapter 2

Chapter 3. NEGATION
           section 12. Negative information
           section 13. Finite failure
           section 14. Programming with the completion
           section 15. Soundness of the negation as failure rule
           section 16. Completeness of the negation as failure rule
                   Problems for chapter 3

Chapter 4. PERPETUAL PROCESSES
           section 17. Complete Herbrand interpretations
           section 18. Properties of T_P'
           section 19. Semantics of perpetual processes
                   Problems for chapter 4
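The fixpoint construction behind sections 5 and 6 (the least Herbrand model
as the least fixpoint of the immediate-consequence operator T_P) can be
sketched in a few lines for the special case of a finite ground program.
The sketch below is illustrative only and is not taken from the book; the
encoding of clauses as (head, body-atoms) pairs is an assumption made here
for compactness.

```python
# Illustrative sketch: least Herbrand model of a finite ground definite
# program via fixpoint iteration of the immediate-consequence operator T_P.
# A clause "h :- b1, ..., bn" is encoded as the pair (h, [b1, ..., bn]);
# a fact "h." is (h, []).

def tp(program, interp):
    """One application of T_P: the heads of all clauses whose bodies
    are true in the given interpretation."""
    return {head for head, body in program if all(a in interp for a in body)}

def least_herbrand_model(program):
    """Iterate T_P upward from the empty interpretation until it
    stabilizes; for a finite ground program this is the least fixpoint."""
    interp = set()
    while True:
        new = tp(program, interp)
        if new == interp:
            return interp
        interp = new
```

For the program {p.  q :- p.  r :- p, q.  s :- t.} the iteration climbs
through {p}, {p, q}, and stops at {p, q, r}; the atom s is never derived
because t has no support.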

------------------------------

Date: 29 Nov 84  0255 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: PhD Oral - Nonclausal Logic Programming

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Monday 3 December, 1984, 2:15pm, 146 MJH
PhD Orals
Yoni Malachi


                       Nonclausal Logic Programming


The Tableau Programming Language (Tablog) is based on the Manna-Waldinger
deductive-tableau proof system and combines advantages of Prolog and Lisp.  A
program in Tablog is a list of formulas in [quantifier-free] first-order logic
with equality and is usually more natural than the corresponding program in
either Lisp or Prolog.

The inclusion of equivalence, negation, conditionals, functions, and equality
in Tablog enables the programmer to combine functional and relational
programming in the same framework.  Unification is used as the binding
mechanism and makes it convenient to pass unbound variables to a program and
to manipulate partially computed objects.
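The role of unification described above can be made concrete with the
textbook syntactic-unification algorithm.  This is a generic sketch, not
Tablog's implementation: the '?'-prefix convention for variables and the
tuple encoding of compound terms are assumptions made here, and the occurs
check is omitted for brevity.

```python
# Generic sketch of syntactic unification.  Variables are strings starting
# with '?'; compound terms are tuples (functor, arg1, ..., argn).
# Occurs check omitted for brevity.

def walk(t, subst):
    """Follow variable bindings in the substitution until a non-variable
    term or an unbound variable is reached."""
    while isinstance(t, str) and t.startswith('?') and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return an extended substitution unifying a and b, or None."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith('?'):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

Calling unify(('f', '?x', 'b'), ('f', 'a', '?y')) returns the substitution
{'?x': 'a', '?y': 'b'}.  Because an unbound variable simply remains in the
result, a caller can pass partially computed objects and fill them in
later, which is the convenience the abstract points to.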

The tableau proof system is employed as an interpreter for the language in the
same way that a resolution proof system serves as an interpreter for Prolog.
The basic rules of inference used in the system are: nonclausal resolution,
equational rewriting, and replacement of formulas by equivalent ones.

This work describes Tablog and its semantics.  In addition to the simple
declarative (logical) semantics of the language, a procedural interpretation
is presented for sequential and parallel models of computation.  Various
properties of the language are studied and the language is compared to Lisp
and Prolog and to other combinations of functional and logic programming.

------------------------------

Date: 29 Nov 1984  14:50 EST (Thu)
From: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Learning Theory and Natural Language  (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 Language and Learning Seminar Series


                           Scott Weinstein

                      University of Pennsylvania
                                 and
                  Center for Cognitive Science, MIT


               ``LEARNING THEORY AND NATURAL LANGUAGE''


                      Tuesday, December 4, 2 PM
                            A.I. Playroom
                   8th floor, 545 Technology Square


Formal learning theory may be conceived as a means of relating
theories of comparative grammar to studies of linguistic development.
After a brief review of relevant concepts, the present talk surveys
formal results within Learning Theory that suggest corresponding
constraints on linguistic theory.  Particular attention is devoted to
the question: How many possible natural languages are there?

Host: Prof. Robert C. Berwick


Refreshments at 1:30

------------------------------

Date: 25 Nov 84 1146 EST (Sunday)
From: Edmund.Clarke@CMU-CS-A.ARPA
Subject: Logics of Programs Call for Papers

                      CALL FOR PAPERS
                   Logics of Programs 1985

The Workshop on Logics of Programs 1985, sponsored by Brooklyn College
and IBM Corporation, will be held Monday, June 17 through Wednesday,
June 19, at Brooklyn College in Brooklyn, New York.  Papers presenting
original research on logic of programs, program semantics, and program
verification are being sought.

Typical, but not exclusive, topics of interest include:  syntactic and
semantic description of new formal systems relevant to computation,
proof theory, comparative studies of expressive power, programming
language semantics, specification languages, type theory, model theory,
complexity of decision procedures, techniques for probabilistic,
concurrent, or hardware verification.  Demonstrations of working systems
are especially invited.

Authors are requested to submit 9 copies of a detailed abstract (not a
full paper) to the program chairman:

          Professor Rohit Parikh
          Logics of Programs '85
          Department of Computer and Information Science
          Brooklyn College
          Brooklyn, New York  11210

Abstracts should be 6 to 10 pages double-spaced, and must be received no
later than January 14, 1985.  Authors will be notified of acceptance or
rejection by February 18, 1985.  A copy of each accepted paper, typed on
special forms for inclusion in the proceedings, will be due on March 24, 1985.

------------------------------

End of AIList Digest
********************

From:	CSVPI           3-DEC-1984 05:34  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a017025; 2 Dec 84 20:23 EST
Date: Sun  2 Dec 1984 15:49-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #167
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 3 Dec 84 05:20 EST


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 167

Today's Topics:
  Administrivia - Special Net.AI Issues for Arpanet Readers,
  Linguistics - Language Deficiencies & Translation Difficulties
----------------------------------------------------------------------

Date: Sun 2 Dec 84 16:04:11-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Special Net.AI Issues for Arpanet Readers

Laurence Leff of SMU has sent me the Usenet Net.AI record for
the period since my Usenet gateway has been down.  (I.e., since
October 23.)  I will pass the Usenet messages along to Arpanet
readers in three special issues.  This first one includes a
discussion of linguistics and translation difficulties.  The
next issue will include related material about the influence
of language on thought.  The third will be a miscellany issue
containing nonlinguistic material.

					-- Ken Laws

------------------------------

Date: 1:18 pm  Oct 23, 1984
From: colonel@gloria
Subject: natural language deficiencies?

> This struck a chord.  I remember a PBS TV show about the Australian
> aborigines and the difficulties studying them.  There is apparently no
> way to phrase "what if" types of questions.  The anthropologists had to
> tell them a thing was so, get their response, and then tell them it was
> not so.
> 
> This would seem to me to be a serious "expressive deficit".  Any
> aborigines on the net care to verify this?

	A general semanticist named Harrington whose first name
	I have forgotten said that he knew an Indian who was
	fluent in his tribal language and also in ours.  Harring-
	ton asked the Indian if there were such words (meanings)
	as "could" and "should" in his Indian language.  The
	Indian was quiet for a while, then shook his head.  "No,"
	he said.  "Things just are."

			Barry Stevens, _Don't Push the River_ (1970)

Expressive deficiency?  Or a more accurate modeling of reality?

See also the "Counterfactuals" dialogue in Hofstadter's _Godel, Escher,
Bach._

Col. G. L. Sicherman
...seismo!rochester!rocksanne!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 10:10 am  Oct 25, 1984  
From: dan@aplvax
Subject: Tenses in Hopi

It is well-known that the Hopi (American Indian) language only has a
present tense, there are no past or future tenses for their verbs.
Surely this is a language deficiency.

------------------------------

Date: 2:27 pm  Oct 26, 1984  
From: mmt@dciem
Subject: Tenses in Hopi

  It is well-known that the Hopi (American Indian) language [...]

If I remember correctly, Whorf pointed out that the Hopi don't really
have verbs.  Rather, they differentiate between events that last longer
than a cloud (nouns) and shorter events (verbs). Presumably they also
distinguish between events you know about (past+present[which is now past
because you are talking about it]) and events you don't know about
(counterfactuals and/or future).  Does anyone know more directly about
this?
The nature of the Hopi verb/noun tense/factual distinction is interesting
because Whorf used the non-distinction between noun and verb to
argue that the Hopi probably see the world in a different way.

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: 1:06 pm  Oct 27, 1984  
From: steven@mcvax
Subject: Language Deficiencies

I find this talk of 'deficiencies' a little disturbing.

A deficiency is in the ear of the listener, surely. If a language doesn't
have a particular feature, then that is only because the speakers of that
language don't need it. If they perceived a need for it, something would
develop.

As an example, 'standard' English doesn't distinguish between 'you' singular
and plural, while many languages do. Is this a deficiency of English? Most
English speakers would probably say not because they get along fine as it is.
However certain dialects of English apparently found it a deficiency, because
they went and invented a plural version (y'all in USA, youse in England).

A similar example is the difficulty in English of saying something in a
gender-neutral way (Chinese has a single word for 'he or she' for instance).
Many English speakers find this a deficiency, and so are developing ways to
express these things.

------------------------------

Date: 7:32 am  Oct 28, 1984  
From: malcolm@west44
Subject: "Youse"

Since when has the word "youse" been used in England (or even Great Britain)?

------------------------------

Date: 7:40 am  Oct 28, 1984  
From: dick@tjalk
Subject: Language Deficiencies

>
>	From: dan@aplvax.UUCP (Daniel M. Sunday)
>
>	It is well-known that the Hopi (American Indian) language [...]

It is well-known that the English language only has a genderless
substantive, there are no masculine or feminine forms for their
substantives.  Surely this is a language deficiency.

It is well-known that the English language only has a sizeless
substantive, there is no diminutive form for their substantives.
Surely this is a language deficiency.

There is no (reasonable) way to render Dutch: leraresje (little female teacher)
into English.

					Dick Grune
					Vrije Universiteit
					Amsterdam
and my name isn't Richard.

------------------------------

Date: 7:40 am  Oct 28, 1984  
From: steven@mcvax
Subject: Translation of Dutch

> There is no (reasonable) way to render Dutch: leraresje (little female
> teacher) into English.

There is no way, reasonable or not, to render Dutch 'gezellig' into English.
This is also SURELY a language deficiency.

(Since there's no way to render the word into English, I'm afraid I can't
explain to non-Dutch speakers what it means, except to say that it's an
adjective describing social situations, and is desirable.)

((For Dutch readers: I find the same problems with 'eng', though it's not so
widely discussed as gezellig. But perhaps discussion on that should be
restricted to nlnet distribution.))

------------------------------

Date: 6:32 pm  Oct 29, 1984  
From: rob@ptsfa
Subject: Language Deficiencies

> It is well-known that the Hopi (American Indian) language only has a
> present tense, there are no past or future tenses for their verbs.
> Surely this is a language deficiency.

Similarly Indonesian does not have tenses either (nor aspect or person
or number).
However, the meanings that tenses, etc. express in English et al. get expressed
with separate words in Indonesian. In fact English doesn't even have a real
future tense, e.g. no prefix/suffix added to verb root to denote future;
English uses a separate word 'will' to denote futurity, as well as phrases
like 'be going to'.
Indonesian has a whole battery of adverbs to take the place of verb tense.

The lack of a syntactic feature does not necessarily mean a communicative
deficiency. And in any case it is not clear that if a language cannot
communicate some certain meaning it is deficient - maybe the native speakers
of that language have no need to express that meaning.
Do Congolese Pygmies need to have a word for snow? Actually that's a slightly
different issue than tense, because 'snow' is an object whereas tense
has a more abstract significance.

Rob Bernardo, Pacific Bell, San Francisco, California
{ihnp4,ucbvax,cbosgd,decwrl,amd70,fortune,zehntel}!dual!ptsfa!pbauae!rob

------------------------------

Date: 7:24 pm  Oct 29, 1984  
From: lwall@sdcrdcf
Subject: Language Deficiencies

In article <6115@mcvax.UUCP> steven@mcvax.UUCP (Steven Pemberton) writes:

>I find this talk of 'deficiencies' a little disturbing.
>
>A deficiency is in the ear of the listener, surely. If a language doesn't
>have a particular feature, then that is only because the speakers of that
>language don't need it. If they perceived a need for it, something would
>develop.

I find this talk of deficiencies a little disturbing too, but for different
reasons.  Almost all purported "deficiencies" indicate not that a language
cannot communicate a particular idea, but that the purported linguist has
not studied the language well enough.  Languages are not differentiated on
the basis of what is possible or impossible to say, but on the basis of what
is easier or harder to say.  That is not to say that a given language is
easier or harder than another--languages on the whole are of approximately
equal complexity, but the complexities show up in different places in
different languages.  This is known as the waterbed theory of linguistics--
you push it down one place and it pops up somewhere else.

>As an example, 'standard' English doesn't distinguish between 'you' singular
>and plural, while many languages do. Is this a deficiency of English? Most
>English speakers would probably say not because they get along fine as it is.
>However certain dialects of English apparently found it a deficiency, because
>they went and invented a plural version (y'all in USA, youse in England).

Here in California, it's "you guys".  And no, they don't all have to be male.
They don't any of them have to be male.

Of course, "standard" English has "all of you", "you folks", "you ladies",
etc., and a bunch of vocative phrases to indicate plurality.  "Gentlemen,
start your engines!"

>A similar example is the difficulty in English of saying something in a
>gender-neutral way (Chinese has a single word for 'he or she' for instance).
>Many English speakers find this a deficiency, and so are developing ways to
>express these things.

One does have a certain amount of difficulty, doesn't one?  But just because
an English speaker runs up against this problem, it doesn't mean they have to
reinvent the wheel, do they?  English already has both a formal and an
informal way to express the idea.  One doesn't have to be misunderstood if they
don't want to.  Of course, if one mixes up the formal with the informal, they
very well might be misunderstood.

(For you clunches out there, the previous paragraph is self-referential.)

Larry Wall
{allegra,burdvax,cbosgd,hplabs,ihnp4,sdcsvax}!sdcrdcf!lwall

------------------------------

Date: 4:55 pm  Oct 30, 1984  
From: mmt@dciem
Subject: Translation of Dutch

> There is no way, reasonable or not, to render Dutch 'gezellig' into English.
> This is also SURELY a language deficiency.

  (Since there's no way to render the word into English, I'm afraid I can't
  explain to non-Dutch speakers what it means, except to say that it's an
  adjective describing social situations, and is desirable.)

Why is there *no* way?  Do you mean to imply that English-speakers cannot
experience this social situation, or just that it would take a complex
phrase or paragraph to get the idea across?  If the former, then there
must be more difference between the Dutch culture and all English-speaking
ones than I have observed.  If the latter, then why not try and see
where you get.  I was under the impression that "gezellig" was close
to cosy, comfortable, unconstrained and home-like.  Is this anything like?

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsrgv!dciem!mmt

------------------------------

Date: 7:47 am  Oct 31, 1984  
From: marcus@pyuxt
Subject: Translation of Dutch

Does gezellig mean the same as the german word "gemutlich"? ('Skuse the
spelling, please, but I'm not a German speaker, or even a speaker of
German).
		marcus hand

Incidentally, I think it's usually a deficiency in the speaker or writer
rather than the language....

------------------------------

Date: 7:33 pm  Nov  2, 1984  
From: lambert@mcvax
Subject: Language Deficiencies

[warn your system administrator if this line is missing]

> I think that there are two issues mixed up at the moment, being
> 1. Some languages have a single word-construction for an idea
>    that needs several words in some other language.
> 2. Some languages *CAN NOT* be used to express certain ideas.

The distinction between these two categories is not an absolute one. Steven
Pemberton mentioned already the Dutch word "gezelligheid".  No doubt it is
possible to explain the meaning of the word "gezellig" and its derivatives in
English.  To do so, however, to a reasonable degree of precision (let alone
to a degree of precision that would suffice for non-native speakers to rely
on their understanding and utter these words when and only when appropriate)
would require a minor essay.  Now these words are not at all infrequently
used in Dutch.  My dictionary lists as translations for "gezellig":
"sociable", "cosy", "snug" and "social".  A "gezellig avondje" is rendered as
a "social evening".  In the direction English -> Dutch this is always
reasonable.  But telling the host that the evening was "gezellig" would be
considered a compliment, whereas stating that it was social sounds like a
superfluous statement of fact.  Translating "gezellig" as "cosy" is usually
not only wrong, but also ridiculous.  When I try to express myself in English
where I would have used "gezellig" in Dutch, I usually substitute "nice".
However, "nice" does not really convey the meaning of what I am trying to
say.  I experience this as a language deficiency.

Another example is the Dutch phrase "voor de hand liggen".  There is no
phrase in English with the same meaning.  In some cases, "to be obvious" is
acceptable, in some other cases one can use "to come to mind", but in many
cases both are plainly wrong, and in those cases there is no *reasonable* way
that I know of to express the concept in English.

> On the other hand, the Aborigines have no construction for 'what if',
> which is much more serious. This really is a language deficiency,
> since it will take *lots* of trouble to communicate this idea.

Having no construction for a concept is not a property of a race or ethnic
group, but of a language.  There are many Australic languages.  Is the lack
of expressibility of "what if" common to all these, mutually largely
disparate, languages?  That would be a very interesting fact to find.
(However, it appears that none of these languages can express the concept
"supply-side economics" :-) Seriously, I don't know any of the Australic
languages, but I am not at all convinced that natural languages do exist in
which it is hard to express the fact that something has the status of a
hypothesis, even though the language may lack a word for the concept
"hypothesis".  This claim about the languages spoken by the Aborigines seems
to me just one more unfounded popular belief similar to so many introduced by
travellers to uncharted areas while recounting their curious discoveries.  If
it is true, however, for some language, then this would be a good test case
for the Sapir-Whorf hypothesis.  For the implication would be that the native
speakers could not entertain hypothetical thoughts, and so would not take
provisions for contingencies.

To conclude, I want to point out two deficiencies common to all languages I
know.  The first is well known: what should you reply to the question "Do you
still persist in your lies?", when you believe you are speaking the truth?

There is no way of stating that the question implies a falsehood other than
by directly contradicting the falsehood.  On paper, "Question not applicable"
may do, but not in a conversation.  The other deficiency has to do with "why"
questions.  Children tend to pass through a period of asking questions like:
"Why are bananas yellow?" "Why does water not burn?"  "Why is ice cold?"
etc., ad nauseam.  In some cases there is no "why"; the concept does not
apply.  For example, it is not reasonable to ask "Why is it Wednesday
today?", or "Why is red a colour?".  The deficiency is that there is no
accepted way of stating about a proposition that the concept "why" does not
apply.

     Lambert Meertens
     ...!{seismo,philabs,decvax}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

"If I were you, I should wish I were me."

------------------------------

Date: 7:34 pm  Nov  2, 1984  
From: steven@mcvax
Subject: "Youse"

In article <382@west44.UUCP> malcolm@west44.UUCP (Malcolm Shute.) asks:

> Since when has the word "youse" been used in England (or even Great Britain)?

Well, the earliest date I can't give you. However, it was recorded in
Norfolk, for instance, in 1905. As for Great Britain, I can find references to
1880, and possibly earlier, in Northern Ireland. However, since it is also
recorded in Australia and the USA, it probably derives from much earlier.

------------------------------

Date: 2:43 am  Nov  4, 1984  
From: biep@klipper
Subject: Translation of Dutch

In article <1175@dciem.UUCP> mmt@dciem.UUCP (Martin Taylor) writes:

>I was under the impression that "gezellig" was close
>to cosy, comfortable, unconstrained and home-like.  Is this anything like?

	I wouldn't say it is "close to" the words you mentioned.
	It often is, but it isn't that. E.g. it can suddenly be
	"gezellig" when one of two people on an uninhabited
	island suddenly reveals a bar of chocolate and shares it
	with his companion. They may be almost starving, but
	they eat it with little bits, and talk about the taste,
	and where, in which shop ("You remember, the old man
	who used to buy licorice over there?"), one can buy
	the best, etc.
	My English isn't that good, but the whole situation
	doesn't sound like "cosy", or "home-like", or such. The
	Dutch word "gezellig" is derived from the same stem as
	"gezelschap", which means both "the group around you"
	and "the mutual affection within the group". However,
	it has got a special meaning too because of the fact
	that the word is often used with respect to going and
	drinking coffee together at eleven o'clock in the mor-
	ning. (The word "coffee" itself is highly associated
	with "gezellig" too: I don't drink coffee, but nobody
	would invite me "Come, and drink chocolate milk with
	us!", however that is what I actually do. The word
	"coffee" *has* to be mentioned to communicate the
	idea. The Dutch expression for "Our house stands always
	open for you" is "The coffee is always ready for you".)

							  Biep.
	{seismo|decvax|philabs}!mcvax!vu44!botter!klipper!biep

I utterly disagree with everything you are saying, but I am
prepared to fight myself to death for your right to say it.
							--Voltaire

------------------------------

Date: 5:10 pm  Nov  4, 1984  
From: ir44@sdcc6
Subject: Language Deficiencies

> 
> > I think that there are two issues mixed up at the moment, being
> > 1. Some languages have a single word-construction for an idea
> >    that needs several words in some other language.
> > 2. Some languages *CAN NOT* be used to express certain ideas.
> 
> The distinction between these two categories is not an absolute one. 

There are further problems in the comparison of languages and
their semantic capabilities that become evident in this series
of articles on "deficiencies." 
   1. The discussion of Dutch "gezellig" illustrates the
   difficulty of defining a word (more for some words than
   others) in its OWN language, let alone translating it, i.e.,
   finding a single or compact phrase that conveys its meaning
   to speakers of another language. The problems of definition
   and translation appear to be similar and always approximate.
   One test (of distribution) is whether a proposed synonym or
   defining phrase or circumlocution can be substituted for the
   original word over the whole range of environments in which
   that word can occur. Under this test there are few true 
   synonyms within a language let alone single word translations
   in the target language. In translation the test is doubly
   approximate as the environments in which a term occurs are
   themselves approximate translations, themselves environed by
   the word being tested. I have spoken to Bible translators, now
   so widespread in the world, about how they translate such
   notions as "God" or "hell." They do their best, ignore the
   incommensurabilities, and rely on God or "God" to get his
   point across.

   2. The notion of "word" in my inexpert opinion is one of the
   most loosely defined in linguistics. Sometimes it is taken
   as a unit that can occur by itself (unlike an affix which,
   while it can occur independently, with many different roots,
   is a bound morpheme that would not occur by itself unless 
   it has been liberated, like "isms and ologies.") But much of
   what we take as words in English are, I think, only separated
   as orthographic conventions, not occurring separately as 
   utterances in speech-- compare "am" with "-ing". The sense
   of "wordness" may be more semantic than syntactic or perhaps
   more a matter of cognitive chunking. The question of what 
   makes a good dictionary entry may have its counterpart in the
   storage of vocabulary- "word" being in some way the best
   retrieval unit. 

   Ted Schwartz    Anthro/UCSD

------------------------------

End of AIList Digest
********************

From:	CSVPI           3-DEC-1984 05:35  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a017446; 2 Dec 84 22:30 EST
Date: Sun  2 Dec 1984 16:42-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #168
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 3 Dec 84 05:30 EST


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 168

Today's Topics:
  Perception - Language and Thought
----------------------------------------------------------------------

Date: 4:55 pm  Nov 10, 1984  
From: dts@gitpyr
Subject: Language and Thought

...

> The lack of a syntactic feature does not necessarily mean a communicative
> deficiency. And in any case it is not clear that if a language cannot
> communicate some certain meaning it is deficient - maybe the native speakers
> of that language have no need to express that meaning.

I don't take it as given that there exist any concepts that some language
can't express, because I'm not sure what it means to say that a language
"can't express" an idea. One thing that most people in this discussion
seem to have overlooked is the fact that the words don't carry all the
meaning.

The words you are reading now are arousing ideas in your mind. I have no
direct control over those ideas. All I can do is try to chose my words
so that they will evoke the ideas I want them to in the minds of the
majority of those people who bother to read this. If you fail to properly
understand what I am trying to say, whose fault is it? Mine for choosing
the wrong words? Yours for having the wrong ideas? English's for not
having a single word which encompasses everything I'm trying to say?

I've had discussions on this topic before with friends, in which I took
the position that there are things that can't be expressed in English.
But now I think that's a naive viewpoint because so much depends on
mutual understanding between the persons involved. I asked a Dutch
person about "gezellig" and she explained it so that I think I
understand. The closest single-word synonym I could think of in English
is "homey" but that's not really anywhere near being an exact equivalent.

But now, if someone said to me, "Homey. You know, in the Dutch sense,"
I would have a good idea of what they meant. English will have
communicated an idea that many people on the net have been saying it
can't.

-- Either Argle-Bargle IV or someone else. --

Danny Sharpe
School of ICS
Georgia Institute of Technology, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!dts

------------------------------

Date: 1:00 pm  Nov  9, 1984
From: arndt@decwrl
Subject: The Soapy-Woof theory of talk.


It seems to me that there is a hole at the bottom of the bag.
I mean, does language really have THAT much control over how we think?

"Language exists to communicate whatever it can communicate.  Some things
it communicates so badly that we never attempt to communicate them by words
if any other medium is available."

". . . what language can hardly do at all, and never does well, is to inform
us about complex physical shapes and movements.  Hence descriptions of such
things in the ancient writers are nearly always unintelligible.  Hence in
real life we never voluntarily use language for this purpose; we draw a
diagram or go through pantomimic gestures."

"Another grave limitation of language is that it cannot, like music or
gesture, do more than one thing at once.  However the words in a great poet's
phrase interinanimate one another and strike the mind as a quasi-instantaneous
chord, yet, strictly speaking, each word must be read or heard before the next.
That way, language is unilinear as time.  Hence, in narrative, the great 
difficulty of presenting a very complicated change which happens suddenly.
If we do justice to the complexity, the time the reader must take over the 
passage will destroy the feeling of suddenness.  If we get in the suddenness
we shall not be able to get in the complexity.  I am not saying that a genius
will not find its own ways of palliating this defect in the instrument; only
that the instrument is in this way defective."

"One of the most important and effective uses of language is the emotional.
It is also, of course, wholly legitimate.  We do not talk only in order to
reason or to inform.  We have to make love, and quarrel, to propitiate
and pardon, to rebuke, console, intercede, and arouse.  The real objection
lies not against the language of emotions as such, but against language 
which, being in reality emotional, masquerades - whether by plain hypocrisy or
subtler self-deceit - as being something else."

From:  C.S. Lewis, STUDIES IN WORDS, Cambridge University Press, 1960.
       Chapter 9, "At the Fringe of Language", pp. 214-215.

Comments???????????????????

Regards,

Ken Arndt

------------------------------

Date: 7:21 am  Nov 12, 1984  
From: robison@eosp1
Subject: Perception

I disagree strongly with the C.S. Lewis quote below (from Ken Arndt).

>"Another grave limitation of language is that it cannot, like music or
>gesture, do more than one thing at once.  However the words in a great poet's
>phrase interinanimate one another and strike the mind as a quasi-instantaneous
>chord, yet, strictly speaking, each word must be read or heard before the
>next. That way, language is unilinear as time. Hence, in narrative, the great 
>difficulty of presenting a very complicated change which happens suddenly.
>If we do justice to the complexity, the time the reader must take over the 
>passage will destroy the feeling of suddenness.  If we get in the suddenness
>we shall not be able to get in the complexity.  I am not saying that a genius
>will not find its own ways of palliating this defect in the instrument; only
>that the instrument is in this way defective."
>
>From:  C.S. Lewis, STUDIES IN WORDS, Cambridge University Press, 1960.
>       Chapter 9, "At the Fringe of Language", pp. 214-215.

All arts that appeal primarily to one sense suffer to a degree from
the fault Lewis describes, that one item of information is processed at
a time, and the artwork is perceived serially in a sense.  Almost all
great artists in all media have wonderful ways of addressing this
problem, so that it is not a limitation, but merely a challenge.
In the specific example, the words of poems particularly tend to have
multiple meanings, and to give additional meanings to other parts of
the poem.  Even if one focuses on the INITIAL reading of a poem
(which is ridiculous), the words already read will continually change
in perception as additional words are read.  This is a heavy parallel
activity!

Other examples one might give:

  In writing, many authors contrive to describe a complicated sudden
  change obscurely, so that the reader knows he does not understand the
  words fully in his serial reading, but the entire complex moment may
  be understood suddenly when, after many pages, the whole situation
  falls into place.  I'm sure we can all think of books where this
  occurs.  For spectacular, but easy examples of this I would recommend
  the beginning (say, the first 15 pages) of either of these novels by
  Henry Green:
	- Living
	- Party Going
  In each case, he starts by partially describing the current situation
  in such an uncommunicative manner that the reader is all at sea.
  Conversation, observation, and environment just accumulate in the
  reader's mind, awaiting elucidation.  Then orientation occurs, the
  meaning of the opening pages hits the reader in a rush, and he is
  emotionally deep in the fabric of the book, having been struck by
  a torrent of words suddenly, in a way C.S. Lewis would have thought
  impossible...

  Painters and similar artists know that the eye perceives a picture
  serially.  Most types of art attract the eye (not 100%, but
  materially) to a part of the picture, and then lead it from place to
  place.  Many pictures are arranged so that the actual motion of the
  eye will be soothing or otherwise.  Some pictures are arranged so
  that a surprise awaits the eye after part of the picture is
  perceived.  [In Western Art, landscapes that slope down from left
  to right tend to be more soothing than the reverse, since Western
  eyes tend to read from left to right.  Some pictures just lead the
  eye round and round through an unsettling maze, as in Picasso's
  Guernica.]

  Musical compositions are heard serially.  Again, if we focus on the
  initial hearing, musical ideas are being presented serially, with
  a minimum of parallelism possible.  But as a composition goes on,
  the listener learns more about, and re-interprets, what he has heard.
  An obvious example would be a theme and variations, in which some of
  the variations emphasize constructional characteristics of the theme,
  and some recall the theme so the listener can rethink its impression
  on the basis of better understanding of its parts.  These variations
  will be communicating in parallel (what happened before, plus the new
  variation itself).

  Three-dimensional sculptures must also be perceived over time, since
  they are not fully visible from one place.  Many sculptors are aware
  of this and arrange that the whole is greater than the sum of its
  parts.

  Etcetera, etcetera, etcetera.

------------------------------

Date: 10:26 am  Nov 14, 1984  
From: ben@sysvis
Subject: Perception

	Interesting.  (But why is this in net.ai instead of net.lang.n?)
	Language as an informational tool, especially when in written form, 
	seems to have some distinct disadvantages in terms of information
	density.  When describing a house, for instance, it is certainly 
	more informative to draw a floor plan, with dimensions, and provide
	architectural renderings in color, than to give a verbal description.

	However, the emotional impact of being present in a building itself
	cannot be conveyed by graphic or pictorial means alone.  If you visit
	the Vietnam War Memorial in Washington, it is a moving experience.
	However, the photograph you bring back cannot convey the emotion you
	experienced.  It will arouse emotional reactions in your viewers, but
	not necessarily the emotions you wished to convey.
	
	To a limited extent, written language together with graphic and
	pictorial information will provide the emotional base for communication.
	Spoken language, with all its intonational coloring, will convey much
	more of the emotion.  These combined with musical score will allow
	you as a communicator to most closely recreate the experience both 
	informationally and emotionally for your audience.  Thus the basis for
	this combination in cinema and video.

					Ben Evans
					{ctvax!convex}!trsvax!sysvis!ben

------------------------------

Date: 5:02 pm  Nov 17, 1984  
From: mark@digi-g
Subject: Language and Thought


arndt@lymph.DEC writes:

> ...  does language really have THAT much control over how we think?

That depends on what you mean by `think'.

This is one of my pet theories.

At the very least, there is a functional area of the mind that performs
verbal reasoning.  This area maintains the continuous internal dialogue
that we all experience.  Most people identify this area as `I'.  There
are certainly non-verbal areas, too, but these are not identified as the
self.  Consider, as an example, reflex actions: `I jumped out of the way
before I was even aware of it...'.  Other non-verbal areas influence
the `verbal consciousness' with messages called `intuition'.

I believe that the reason we assign such importance to the verbal
consciousness is that we are social animals.  The importance of our
interactions with others of our ilk is so great that we tend to define
ourselves as that which others can experience.  Because language is the
primary means of communication with others, we perceive verbal
consciousness as being terribly important. Self-awareness would not exist
without the built-in social hooks.

Language, however, has little effect on the non-verbal areas of the mind.
A human in total isolation with no language experience could probably
function quite well with no internal dialogue. Many complex tasks, which
we would like to have computers emulate, are performed without language.

Comments?

					-- Mark Mendel 
					-- ...ihnp4!umn-cs!digi-g!mark

------------------------------

End of AIList Digest
********************

From:	CSVPI           3-DEC-1984 05:35  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a017652; 2 Dec 84 23:59 EST
Date: Sun  2 Dec 1984 16:54-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #169
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 3 Dec 84 05:32 EST


AIList Digest             Sunday, 2 Dec 1984      Volume 2 : Issue 169

Today's Topics:
  Planning - Constraint Propagation and Planning,
  AI Systems - Crossword Puzzle Program? & Learn Program,
  Cognition - Diagnostic Reasoning,
  Humor - State-of-the-Art Riddle Program,
  Knowledge Representation - OPS5 Disjunctions
----------------------------------------------------------------------

Date: 3:26 pm  Nov 29, 1984
From: chandra@uiucuxc
Subject: Constraint Propagation & Planning   


	Constraint   Propagation  in  Planning

	
	I am thinking of doing my thesis on Planning.  I read Mark Stefik's
thesis on Planning with Constraints.  I wanted to know if anybody has seen
any other papers on constraint propagation applied to
	
		a) Planning

	   or   b) Blocks world problems....

	I am planning on a system that will generate constraints from the
physical interaction between blocks and use them to do hierarchical
planning to achieve goals. Still thinking...
 
 - Navin Chandra

full arpa address is : chandra@uiucuxc@uiucdcs@RAND-RELAY.ARPA

Thank you

------------------------------

Date: 10:12 pm  Nov 23, 1984
From: davy@ecn-ee
Subject: Crossword Puzzles?                  


Anybody got a nifty program to fill in crossword puzzles? Basically I 
need something which, given a template of "white squares" and "black 
squares" and a list of words, will generate patterns of the words 
placed into the template. All the program has to do is stick the words 
in the holes and make sure all the vertical/horizontal combinations 
are really words; it doesn't have to handle clues, etc. 

Please mail responses to:

	{decvax, ihnp4, ucbvax}!pur-ee!davy
	ecn.davy@purdue.arpa

Thanks in advance,
--Dave Curry

------------------------------

Date: 12:32 pm  Nov 19, 1984
From: rjs@okstate
Subject: Learn

I am interested in making a 'learn' system for xlisp 1.2+.  To do this
I find myself in need of many examples of not only style but
*ACTUAL WORKING CODE*.  If anyone who has something that currently works
could send me a copy, I will try to build a learn system and will
subsequently post it to the net.

To these ends, a few guidelines should be adhered to:
1.  These xlisp 1.2+ programs should be short, useful, and explain
    some '*function*' that shows xlisp's abilities as a language.
2.  Interest should be aimed at A.I. people and others who would like
    to learn xlisp in a cursory manner (i.e. two approaches).
3.  The 'version' of xlisp 1.2+ that we have has been modified via the
    following net notes:
        net.sources / mit-eddi!jfw / 12:23 am  Sep 19, 1984
        net.sources / mit-eddi!jfw /  4:21 pm  Sep 21, 1984
        net.sources / mit-eddi!jfw /  8:24 pm  Sep 21, 1984
        net.sources / mit-eddi!jfw /  8:56 pm  Sep 24, 1984
        net.sources / mit-eddi!jfw /  2:04 pm  Oct  9, 1984
        net.lang.lisp / ea!mwm /  1:52 am  Oct 13, 1984
    Any programs should be runnable on this system.

Many Thanks in Advance

Roland Stolfa (Stalfonovich),
Oklahoma State University

....!ihnp4!umn-cs!isucs1!\
.......!ucbvax!mtxinu!ea! > okstate!rjs
....!convex!ctvax!uokvax!/

------------------------------

Date: 7:21 am  Nov 12, 1984
From: robison@eosp1
Subject: Re: Diagnosing strategies for humans

This is a followup on the discussion of how doctors reason when
doing diagnoses:

>I don't think it would alarm anyone who does deductive reasoning a lot.
>The method described IS deductive reasoning.  As Sherlock Holmes once 
>observed: 'when all that is impossible has been removed, whatever remains,
>no matter how improbable, must be the truth.'  This doesn't prevent checking
>out the most probable (or the most easily tested) first.

Sherlock Holmes did not, in my opinion, describe what doctors do.
In the first place, many tests are available to doctors, some simple
and inexpensive, to rule out the improbable.  Usually these tests are
not performed until the more likely cases are checked out.  A good
example is a diseased gall bladder.  Its common symptoms are similar
(depending upon how people report them) to lower back pain, ulcers,
and other forms of gastric distress, including viruses.  Doctors
almost always will do the more painful, and more expensive ulcer test
first (barium X-ray), before checking for gall bladder disease, which
is less common.

Sherlock Holmes always reasoned on the basis of very little
information, but he was careful to collect all he could at a given
moment, and then was ready to deduce from that the ONLY possibility,
however improbable.  Doctors will collect some of the information
easily available to them, and then deduce the most probable cause,
no matter how many possible causes are still not ruled out.

Please recall that I'm not flaming about all this.  Anyone who has
suffered from one of the less likely possibilities would prefer that
more deductive reasoning be used sooner; but I can appreciate that
doctors have a system that works a high percentage of the time, and also
minimizes the number of tests required, at the cost of delaying correct
treatment to a relatively few cases.  I'm not sure that any alternative
would be better.

	- Toby Robison (not Robinson!)
	allegra!eosp1!robison
	or: decvax!ittvax!eosp1!robison
	or (emergency): princeton!eosp1!robison

------------------------------

Date: 4:34 pm  Nov 21, 1984
From: emneufeld@water
Subject: UNIX - ai                           

/*
	For all you ai-ers, here's a great state-of-the-art
	ai program that runs on UNIX.  Compile this program
	with the command

	cc riddle.c -o riddle -lcurses -ltermlib

*/


#include <math.h>
# include <sys/types.h>
# include <sys/timeb.h>
#include <curses.h>


main()
{
    int     i,j;
    char    a,b,c;
    savetty();
    initscr ();
    printw ("ask me a riddle...\n");
    refresh ();
    i = randy (10);
    j = 0;
    while ((c = getchar ()) != '\n') {
	if (i == j)		/* seed from the i-th character typed */
	    srand ((int) c);
	j++;
    }
    printw ("Gee! ");
    refresh ();
    sleep (2);
    printw (" That's a tough one...");
    refresh ();
    for (i = 0; i < 10; i++) {
	printw (".");
	refresh ();
	sleep (1);
    };
    printw ("\nI give up !!  What's the answer?\n");
    refresh ();
    while (getchar () != '\n');
    for (j = 0; j < 100; j++) {
	i = randy (3);
	move (randy (31), randy (70));
	switch (i) {
	    case 0: 
		printw ("Hee hee!");
		break;
	    case 1: 
		printw ("Har har!");
		break;
	    case 2: 
		printw ("That's a good one!");
		break;
	    case 3: 
		printw ("Yuk, yuk!!");
		break;
	    default: 
		printw ("That's hilarious!");
		break;
	}
	refresh ();
    }
    endwin();
    resetty();
}

randy(i)
int i;
{
    /* scale rand() by its 2^31-1 maximum to get a value in 0..i */
    i = (int) ((double) i * (double) rand () / (double) 017777777777);
    return (i);
}

------------------------------

Date: 7:01 pm  Nov 13, 1984
From: neihart
Subject: OPS5 disjunction dilemma.

        I have encountered a problem with ops5 as follows: since
disjunctions are implicitly quoted (see pg 18 of the ops5 users'
manual), it is impossible to substitute a variable within a
disjunction.  This is needed in the following example, where the ^sd
field of a passtx should consist of a vector of 2 elements; however,
it doesn't matter which element is listed first, so the condition
element should succeed as long as the two elements are present in any
order.  If there were a method to call functions for arguments of the
condition elements, a function could create the proper disjunction, an
admittedly clumsy solution; however, the call mechanism only works on
RHSs of productions!

        I have also considered using two ^sds, ^sd1 and ^sd2, storing
the lesser of the two numbers in ^sd in ^sd1 and the other in ^sd2,
thereby eliminating the vector and using two scalar variables.
However, this won't work since the LHS of the production is incapable
of sorting the two variables (eg, <d> and <input1> below), providing
the proper target variable for ^sd1 and ^sd2.  How can I get around
this and allow LHS condition elements 4 and 5 below to match
regardless of the order of the two ^sd arguments?

(p Dflipflop
  (inv ^name <inv1>  ^input <input1> ^output <output1>)
  (inv ^name <inv2>  ^input <output1> ^output <output2>)
  (inv ^name <inv3>  ^input <enable> ^output <output3>)
  (passtx  ^name <tx1> ^gate <enable> ^sd  
;*** following line doesn't work since <d> and <input1> are taken literally.
	{<< <d> <input1> >> <temp1>} {<< <d> <input1> >> <temp2> <> <temp1>})
  (passtx  ^name <tx2> ^gate <output3> ^sd 
;*** following line doesn't work since <output2> and <input1>
;*** are taken literally, rather than their values being used.
 {<< <output2> <input1> >><temp3>}{<< <output2> <input1> >><temp4> <> <temp3>})
-->
  (make Dff ^name <inv1>  ^clock <enable> ^Q <output2> ^Qbar <output1>)
  (remove 1 2 3 4 5)
)

------------------------------

Date: 12:25 pm  Nov 15, 1984  
From: paul@ctvax
Subject: OPS5 Disjunctions

The solution is simple (though a little ugly).  Productions themselves
are disjunctions.  The idea is that the rule be arranged into disjunctive
form; then each disjunct becomes a separate OPS5 rule, and each rule
is itself a conjunction (with possible negations).

(p variant1
   ...ce's that bind <foo1> and <foo2> ...
   ( <foo1> <foo2> )
   -->
   (make found))

(p variant2
   ...ce's that bind <foo1> and <foo2> ...
   ( <foo2> <foo1> )
   -->
   (make found))

(p var1orvar2
   ...ce's that bind <foo1> and <foo2> ...
   (found)
   -->
   .... rhs goes here ...)

This way you can avoid duplication of the RHS.

paul.ct@CSNet-Relay
ctvax!paul

------------------------------

Date: 9:21 pm  Nov 17, 1984  
From: neihart
Subject: OPS5 Disjunctions

That certainly is a solution to the problem, but it quickly becomes
inadequate.  The number of productions needed to express a production which
has n vectors, with m order-independent elements each, grows as m to the
n.  I've tried making a routine which would (build ..) these
productions automatically, however I've discovered that values in the
attribute-value pairs cannot be expressions which evaluate to a variable,
such as <x>!

------------------------------

Date: 9:36 am  Nov 18, 1984  
From: neihart
Subject: OPS5 Disjunctions

I've decided it is easier to make multiple versions of the same thing in the
working memory, one for each possible permutation, than it is to just have
one copy with one or more complicated productions for matching.  All the
versions can have the same value in the ^name field, so that as soon as one
is used, all working memory elements with the same name as the one just
used can be removed.  This is still a clumsy way to get around the problem,
but does anyone know of any better method?

------------------------------

End of AIList Digest
********************
