
From:	CSVPI           4-DEC-1984 04:39  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a023431; 4 Dec 84 3:11 EST
Date: Mon  3 Dec 1984 21:17-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #170
To: AIList@SRI-AI
Received: from rand-relay by vpi; Tue, 4 Dec 84 04:35 EST


AIList Digest            Tuesday, 4 Dec 1984      Volume 2 : Issue 170

Today's Topics:
  Administrivia - Missing Digest #166 & Digest Sequence,
  AI Tools - Franz Lisp -> Common Lisp & Languages for AI,
  Knowledge Representation - OPS5 Disjunctions,
  Cognition - A Calculus of Elegance,
  Seminars - AI Architectures at TI  (SMU) &
    Karmarkar's Algorithm  (SU)
----------------------------------------------------------------------

Date: Mon 3 Dec 84 21:05:48-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Missing Digest #166

I have reason to believe that digest V2 #166 did not make it
out to several (many?) sites before it was mysteriously deleted
from my system.  Let me know if you need a remailing.  The digest
included

  Philosophy - Dialectics and Piaget,
  Logic Programming - Book Review,
  PhD Oral - Nonclausal Logic Programming,
  Seminar - Learning Theory and Natural Language  (MIT),
  Conference - Logics of Programs

It should have gone out Friday or Saturday (Nov. 30 or Dec. 1).

                                        -- Ken Laws

------------------------------

Date: Mon 3 Dec 84 20:49:45-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Digest Sequence

Usenet readers may have noticed that digest issues 167-169 are missing.
These issues were sent only to Arpanet readers because they contained
only messages from the net.ai discussion -- the ones since Oct. 23,
when our gateway host went down.

                                        -- Ken Laws

------------------------------

Date: 30 Nov 1984 1219-EST
From: Scott Fahlman <FAHLMAN@CMU-CS-C.ARPA>
Subject: Franz Lisp -> Common Lisp

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

A number of people around CMU and CGI have now successfully translated
Franz Lisp programs into Common Lisp.  In general, people seem to have
little trouble moving big programs over, but people who have not yet
done this are understandably apprehensive.  If the people who have
experience in this area want to send me a brief description of the
things to look out for or that caused them trouble, I will try to merge
these experiences into a short guide for people faced with this kind of
task in the future.

-- Scott

------------------------------

Date: Mon, 3 Dec 84 08:15 EST
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Languages for A. I.

I would like to compile a list of all language systems which have been
implemented / proposed for artificial intelligence purposes.  I would
appreciate lists, pointers, and vague recollections from anyone within
the community.  I will be glad to forward any resulting document.

steve
(803) 656-3444

------------------------------

Date: 2 Dec 1984 2124-PST (Sunday)
From: ricks%ucbic@Berkeley (Rick L Spickelmier)
Subject: OPS5 Disjunctions

I have found the following to be a solution to the kind of disjunction
problem you (neihart) have come up against.

If you are trying to represent a pass-transistor with permutable terminals,
just remove the permutable terminals from the pass-transistor working
memory element and make separate working memory elements for each permutable
terminal, such as the following:

(passtx ^name <tr1> ^gate <gate>)
(terminal ^parent <tr1> ^name <tag1> ^type sd ^node <sd1>)
(terminal ^parent <tr1> ^name <tag2> ^type sd ^node <sd2>)

Then your rule can be:

(p Dflipflop
  {(inv ^name <inv1>  ^input <input1>  ^output <output1>) <inv1>}
  {(inv ^name <inv2>  ^input <output1> ^output <output2>) <inv2>}
  {(inv ^name <inv3>  ^input <enable>  ^output <output3>) <inv3>}
  {(passtx ^name <tx1> ^gate <enable>)  <passtx1>}
  {(terminal ^parent <tx1> ^name <tag1>    ^type sd ^node <input1>) <term1>}
  {(terminal ^parent <tx1> ^name <> <tag1> ^type sd ^node <d>)      <term2>}
  {(passtx ^name <tx2> ^gate <output3>) <passtx2>}
  {(terminal ^parent <tx2> ^name <tag2>    ^type sd ^node <output2>) <term3>}
  {(terminal ^parent <tx2> ^name <> <tag2> ^type sd ^node <input1>)  <term4>}
  -->
  (make Dff ^name <inv1>  ^clock <enable> ^Q <output2> ^Qbar <output1>)
  (remove <inv1> <inv2> <inv3>)
  (remove <passtx1> <passtx2> <term1> <term2> <term3> <term4>))

Note:  I have run examples using the above and using a single working memory
element for a pass-transistor with separate rules for each allowable
permutation - and in all cases I have tried, adding extra rules gives
better performance than adding more working memory elements
(but if you are interested in readability, the extra working memory
element representation is better).

            Rick L Spickelmier (ricks@berkeley)
            Electronics Research Laboratory, UC Berkeley

------------------------------

Date: 3 Dec 84 1531 EST (Monday)
From: Lee.Brownston@CMU-CS-A.ARPA
Subject: OPS5 disjunction problem

A better solution is to remove the sd values from the passtx working memory
element.  If the name fields of the passtx wme's contain unique values, then
two sd children can be created which point to their common parent.

(literalize passtx
  name              ; a unique value
  gate)

(literalize sd
  passtx            ; the same value as the "name" field of the passtx parent
  value
)

When the passtx is made, the two sd children are linked to it.

-->
...
(make passtx ^name   <passtxname>
             ^gate   <passtxgate> )
(make sd     ^passtx <passtxname>
             ^value  <sd-value-1> )
(make sd     ^passtx <passtxname>
             ^value  <sd-value-2> )
...

Then it is easy to test the disjunction because the sd elements are unordered
(except by time tag, which is not used in matching).

(p Dflipflop
  (inv     ^name   <inv1>
           ^input  <input1>
           ^output <output1> )
  (inv     ^name   <inv2>
           ^input  <output1>
           ^output <output2> )
  (inv     ^name   <inv3>
           ^input  <enable>
           ^output <output3> )
  (passtx  ^name   <tx1>
           ^gate   <enable>  )
  (sd      ^passtx <tx1>
           ^value  <d>       )        ; is this where <d> is to be bound?
  (sd      ^passtx <tx1>
           ^value  { <input1> <> <d> } )
  (passtx  ^name   <tx2>
           ^gate   <output3> )
  (sd      ^passtx <tx2>
           ^value  <input1>  )
  (sd      ^passtx <tx2>
           ^value  { <output2> <> <input1> } )
-->
  (make Dff ^name  <inv1>
            ^clock <enable>
            ^Q     <output2>
            ^Qbar  <output1> )
  (remove 1 2 3 4 5 6 7 8 9)
)

Might I take this opportunity to make a plug for a forthcoming book on OPS5?
It is called "Programming Expert Systems in OPS5," and is to be published in
mid-April by Addison-Wesley.  The authors are Lee Brownston (CMU), Robert
Farrell (Yale), Elaine Kant (CMU), and Nancy Martin (Wang Institute).

------------------------------

Date: Thu, 29 Nov 84 11:37 EST
From: Steven Gutfreund <gutfreund%umass-cs.csnet@csnet-relay.arpa>
Subject: A calculus of elegance (re: kludge v2 #162)

I found your definition of Kludge very interesting.  In the sense you
use it, it seems to be the antonym of Elegance (a term frequently heard
in mathematical circles).  The problem is that I have never seen a
precise definition of elegance.  Would you like to try to produce one?

1. What is it about representation schemas (symbolic/analogic) that
   leads mathematicians or programmers to consider them elegant
   representations of a problem?

2. Does elegance extend beyond the domain of symbolic representations
   to what David Smith (Pygmalion) called non-Fregean systems such
   as art (paintings)? Do we call this esthetics?

3. Is there a calculus of esthetics? Can we capture its properties
   in a formal system (axiomatic) or does it correspond to
   reasoning structures inside the brain (a dual of K-lines)?

                                        - Steven Gutfreund

------------------------------

Date: Sat, 1 Dec 1984  01:06 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: A calculus of elegance (re: kludge v2 #162)

  Dear Steven,

Mathematical elegance must be a lot of things; generally it includes
economy and surprise: the sense of getting more from something than
one expected.  I don't believe it very often pays to "define" a
commonsense word, because it includes too many unrelated things -- or
ones which cannot be related except within some larger psychological
theory.  A sounder approach would be to define half a dozen influences
which might contribute, and make separate theories of them.  In
Poincare's famous essay on unconscious mathematical creativity, he
leaves open the question of how the unconscious mind decides when its
mathematical efforts have produced a structure which might be worthy
of the conscious mind's attention.

In fact, I would say that, rather than try to define mathematical
elegance, one would better spend the time refining a system which uses
criteria of possible mathematical value -- e.g., Lenat's AM and
Eurisko systems.  Then, when we better understand a way to make a
system make mathematical discoveries, we can return to speculate about
how human minds do such things.

As for a calculus of esthetics, that probably reflects even more
varied cultural acquisitions.  There was a whole book about this, in
the 1920's I think, called "Esthetic Measure", by George D. Birkhoff,
a great mathematician.  Here are some of my views, from a page in my
not-quite-finished new book.

PAGE 47:  STYLE

Why do we like so many things which seem to have no earthly use?
We often speak of this with mixtures of defensiveness and pride.

     "Art for Art's sake."
     "I find it aesthetically pleasing."
     "I just like it."
     "There's no accounting for it."

"There's no accounting for" sounds like a guilty child who's been told
to keep accounts.  "I just like it" sounds like one is hiding reasons
too unworthy to admit.  Why do we take our refuge in such vague,
defiant principles? Indeed, we @B{ought} to feel ashamed of doing
things that have no use -- if it is written in our self-ideals that it
is bad to squander time.

However, there are practical reasons to maintain stylistic
preferences.  Here are some reasons why it makes sense to make
choices which are empty in themselves, so long as they are based on
predictable, coherent uniformities.

SIMPLICITY: The legs of a chair work equally well if made square or
round.  Then, why do we tend to choose our furniture according to
systematic style or fashions?  Because they make it easier to
understand whole scenes; you can more quickly see which things are
similar.

DISTRACTION:  The purpose of a picture's frame is normally to
circumscribe its boundary.  Too much variety might distract viewers
from the pictures themselves.  Thus, the more easily the style of the
frames can be  identified -- even if by encrusting them with
ornaments --  the frames themselves can be more easily ignored.

CONVENTION: It makes no difference whether a single car drives on
the left or on the right. But when there are many cars, they must do
the same, one way or the other, or they'll crash.  Societies need
rules which make no sense at all for single individuals.

It saves a lot of mental work to make each choice the way you did
before.  To find out what to do, just find the rule in memory.
Strangely enough, this principle can be the most valuable, when the
situation it applies to is the least critical, because of the
following principle:

FREDKIN'S PARADOX: The more equal seem the two alternatives, the
harder it will be to choose -- yet the more equal they are, the less
the choice matters.  Then, the more time spent, the more time lost.

It is no wonder that we find it hard to account for "taste" -- since,
often, it depends on all the rules we use when ordinary reasons
cancel out!  Does this mean that Fashion, Style, and Art are all the
same? No, only that they have the common quality that their diverse
forms of sense and reason are further than usual from the surface of
thought.  This is why, when we use stylish ways to make decisions, we
often feel a sense of being "free" from practicalities.  Those
decisions would seem more constrained, were we aware of how they're
made.

When should one give up reasoning and take resort to rules of style?
Only when we're fairly sure that further thought will just waste
time.  What are those fleeting hints of guilt we feel for liking
works of art? Perhaps they're how our minds remind themselves  to not
use rules, which we don't understand, too recklessly.

------------------------------

Date: Sat, 1 Dec 84 07:36:16 cst
From: leff@smu (Laurence Leff)
Subject: Seminar - AI Architectures at TI  (SMU)

Department of Computer Science and Engineering Seminar
    Southern Methodist University

SPEAKER: Dr. Satish Thatte
         Computer Science Laboratory Group
         Texas Instruments Incorporated
         Dallas, Texas

TOPIC: Computer Architectures for Artificial Intelligence

TIME: 3:00-4:00 p.m., Wednesday, December 5, 1984
PLACE: 315 Science Information Center, SMU

ABSTRACT: The seminar will cover the research on computer architecture for
symbolic processing and artificial intelligence at Texas Instruments.  Our
work is concentrated on three major areas: memory system architecture,
language and compiler technology, and symbolic processor design.  The memory
system research is aimed at developing a "uniform memory abstraction" that
comprehends a very large, recoverable, garbage-collected, virtual memory
system to support short-lived, as well as persistent objects.  Such a memory
system is expected to play a crucial role in supporting large,
knowledge-intensive artificial intelligence applications.  The language and
compiler technology is based on the language SCHEME, a powerful and elegant
dialect of LISP.  The processor design effort is based on using the Reduced
Instruction Set Computer (RISC) philosophy to implement a virtual machine
that supports the SCHEME language, as well as the uniform memory abstraction.

------------------------------

Date: Sun 2 Dec 84 17:10:28-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Seminar - Karmarkar's Algorithm  (SU)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

12/6/84 - Irvin Lustig (OR Dept. - Stanford)

   "Karmarkar's Algorithm:  Theory, Practice, and Unfinished Business"

Recent articles in Science Magazine and the New York Times have
brought to light a new algorithm for Linear Programming by N.
Karmarkar.  The excitement created by this discovery in the Operations
Research and Computer Science communities is understandable,
considering the spectacular nature of the reported results.  In my
talk, I will discuss the theoretical result of Karmarkar, some of the
practical considerations of the algorithm, and how this algorithm is
leading to new heuristics for Linear Programming.  I will also explain
how the result has not yet been shown to be practically efficient,
even though fairly good results have been reported in the news media.

Time and place: December 6, 12:30 pm in MJ352 (Bldg. 460)

                                                - Andrei Broder

------------------------------

End of AIList Digest
********************

From:	COMSAT          7-DEC-1984 00:05  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007386; 6 Dec 84 15:35 EST
Date: Thu  6 Dec 1984 09:40-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #171
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 6 Dec 84 23:59 EST


AIList Digest            Thursday, 6 Dec 1984     Volume 2 : Issue 171

Today's Topics:
  Applications - MACSYMA,
  AI Tools - XLISP Source & Franz Lisp -> Common Lisp,
  Humor - Typagrophical Erorrs,
  AI News - Recent Articles,
  Algorithms - Sorting Malgorithm,
  Knowledge Representation - OPS5 Disjunctions,
  Seminars - Scheme Overview  (Yale) &
    Principles of OBJ2  (MIT) &
    QUTE Functional Unification Language  (IBM-SJ)
----------------------------------------------------------------------

Date: Tue, 4 Dec 84 13:52 CDT
From: Joyce_Graham <jgraham%ti-eg.csnet@csnet-relay.arpa>
Subject: References to MACSYMA applications

I am putting together a little pitch for the TI Journal on the usefulness
of MACSYMA.  What I would like are references to articles about projects
that made use of MACSYMA.  I would also welcome any folklore that may be
floating around.  Can anyone help me?

Joyce Graham
Texas Instruments Incorporated
Post Box 801
M/S 8007
McKinney, TX  75069

from Arpanet - jgraham%ti-eg@csnet-relay
from Csnet   - jgraham@ti-eg

------------------------------

Date: Wed, 5 Dec 84 14:46:02 PST
From: Randy Schulz <lcc.randy@UCLA-LOCUS.ARPA>
Subject: Wanted: xlisp source

I'd like to find out how to get the source for version 1.2 of xlisp.
I'll be using it on a Macintosh, and compiling it with the Manx C
compiler.  If there are multiple versions of the source, I'd like to
get the one most appropriate to that environment.  Thanx in advance.

                                                Randy Schulz
                                                Locus Computing Corp.

                                                lcc!randy@ucla-cs
                                          trwrb!lcc!randy
                {trwspp,ucivax}!ucla-va!ucla-cs!lcc!randy
         {ihnpr,randvax,sdcrdcf,ucbvax}!ucla-cs!lcc!randy

------------------------------

Date: Wed, 5 Dec 1984  13:22 EST
From: "Scott E. Fahlman" <Fahlman@CMU-CS-C.ARPA>
Subject: Franz Lisp -> Common Lisp


Since my post appeared on this list (and thus received wider circulation
than I had really intended), I've had a number of requests for the Franz
Lisp to Common Lisp Conversion Guide.  When and if this document (or any
other conversion guide) is available, I'll put it in some place easily
accessible via arpanet and will send a pointer to AIList.  Don't
hold your breath, however.  So far, the response from people who have
done conversions is underwhelming, and while I would like to see this
document come into being, I do not have the time to go re-learn Franz
and gain the relevant conversion experience myself.  All I can say at
present is that the people who have done Franz to Common Lisp
conversions have reported very little trouble.

------------------------------

Date: Mon, 3 Dec 84 9:42:45 EST
From: Pete Bradford (CSD UK) <bradford@Amsaa.ARPA>
Subject: Typagrophical Erorrs.


        Those who, like me, enjoyed the Palm Springs Desert Sun paragraph
which was reprinted in the New Yorker would enjoy an article in the just
published Winter edition of the British periodical, Punch. The article is
entitled 'Wernit'.
        I cannot possibly describe it (is this a deficiency of the English
language?!), but Punch is widely available over here, at better bookshops
and in most college libraries.  Bear in mind when reading it that the
computer referred to belongs to the British newspaper 'The Guardian', and
that this paper is notorious for its typos.

                        Good reading,
                                PJB

------------------------------

Date: Sat, 1 Dec 84 06:03:54 cst
From: leff@smu (Laurence Leff)
Subject: AI News


Electronics Week, November 19, 1984
ICOT Details Its Progress.  Reports on work done on Prolog
machines and on a new logic language called Mandala.  page 20


IEEE Transactions on Software Engineering, Sept 1984, Volume SE-10, No 5
Reusability Through Program Transformations - discusses using a
transformation-based system to convert a lisp program to Fortran. page 589

Empirical Studies of Programming Knowledge. - this is a cognitive science
study on the use of plans by experts and novice programmers.  Should
be of interest to those following the Plan Calculus work from MIT. page 595


IEEE Computer October 1984,  Volume 17 1984
This is their Centennial Issue.  The articles here are summaries of
various divisions of computer research and practice.
Relevant articles are "Knowledge-Based Expert Systems" by Frederick
Hayes-Roth, "Robotics" by John F. Jarvis, "Computing in Medicine" by
K. Preston Jr. et al., and "Speech Processing" by Harold Andrews.


IEEE Spectrum December 1984,
A one column article on Do What I Mean facilities.  page 29


Electronics Week November 26, 1984, page 50:
Article on venture between Isis Systems Ltd and Imperial Chemical
Industries to market expert systems.


Infoworld November 12, 1984 page 36-41
Article on marketing natural language interfaces for microcomputers.


Datamation, November 1 1984
Page 10, the following sentence was found in their Look Ahead section:
"TRW, the big defense contractor, is looking for some 500 symbolic
processors (Lisp Machines, that is) for use in a global weather
mapping application."

"The Overselling of Expert Systems" by Gary R. Martins page 76
Rather scathing attack on AI.  If you enjoyed Drew McDermott's "Artificial
Intelligence Meets Natural Stupidity," you should read this one too.

"The Blossoming of European AI" by Paul Tate page 85
discusses work by Imperial Chemical Industries, Elf Aquitaine, Schlumberger,
and Framentec (set up by Teknowledge).  Sinclair has announced a Prolog
for one of its home machines and expects to have expert system products
out for it soon.  Also, Expert Systems International has announced
ES/P Advisor for $1300.00 (runs on 16-bit micros).  Also has discussions
of management reactions to AI and of work along the lines of R1.

"AI and Software Engineering" by Robert Kowalski page 92
Talks about using AI techniques to build a program implementing the
British Nationality Act.  Presents AI as a technique, like decision
tables and dataflow diagrams, for improving productivity in general
software development, e.g. business systems.

Page 163: review of about 10 books on AI.


Electronics Week, November 5, 1984 page 24
Discusses DARPA automated vehicle effort.


Electronics Week, December 3, 1984.
Cautiously Optimistic Tone Set for Fifth Generation.  Pages 57-63.  (Note
that this is a six-page article.)

Discusses the progress of the Japanese ICOT effort.  In the words of Susan Gerhart, who
was quoted in the article, "The single thing that impresses me the most did
not really come out clearly at the conference but did at the ICOT open-house
demonstration the next week; it was that so much new stuff was all working
together -- new hardware, basic software, and application demos--all of it
based on logic programming."  Note that the *operating system* for the new
system is written in a logic programming language called KL1.

------------------------------

Date: 5 Dec 84 1806 EST (Wednesday)
From: Lee.Brownston@CMU-CS-A.ARPA
Subject: A baaaad algorithm for sorting

One way to make a sort of n items very expensive is to compute the set of all
n! permutations of the n items and map each permutation onto its Godel number.
(One can find opportunities to dawdle in generating primes, too.)  Finding
the sorted permutation is equivalent to finding the minimum or maximum
Godel number, provided the Godelization preserves order.  This can be accomplished
by sorting the Godel numbers.  Thus, the problem of sorting n items has been
"reduced" to that of permuting, Godelizing, and sorting n! integers.  The
recursion cannot be infinite, of course, but may stop as soon as the use
of resources exceeds that of some turkey who thinks he has come up with a
slower sort.
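
[To make the malgorithm concrete, here is a small sketch in modern
Python; the function names are invented for this note.  With the primes
taken in increasing order, prod(p_i ** a_i) is maximized exactly when
the exponents are in ascending order (a rearrangement-inequality fact),
so the permutation with the largest Godel number is the sorted one.]

```python
from itertools import permutations
from math import prod

def primes(n):
    """First n primes by trial division (dawdling encouraged)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel(perm, ps):
    """Godel number of a permutation: prod(p_i ** a_i)."""
    return prod(p ** a for p, a in zip(ps, perm))

def malsort(items):
    """'Sort' distinct positive integers by generating all n!
    permutations and taking the one with the largest Godel number,
    which (by the rearrangement inequality) is the ascending one."""
    ps = primes(len(items))
    return max(permutations(items), key=lambda perm: godel(perm, ps))

print(malsort((3, 1, 2)))  # (1, 2, 3)
```

Sorting n items this way visits n! permutations and some very large
integers, which is precisely the point.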

------------------------------

Date: 4 Dec 1984 0942-PST (Tuesday)
From: ricks%ucbic@Berkeley (Rick L Spickelmier)
Subject: More on OPS5 Disjunctions


The idea of separating the 'sd' field from the 'passtx' element
and creating separate elements for each 'sd' was presented in
two submissions (ricks%ucbic@berkeley and Lee.Brownston@CMU-CS-A).
I would like to point out a difference that looks important
for the original application (neihart's).
in the original application (of neihart).

Lee's submission distinguished the two 'sd' elements by making sure
they were not connected to the same node (the 'value' attribute).
In this particular example it does not make sense to tie the two 'sd's
together, but in general you may want to connect two or more terminals
of this type (from a single element) to the same node (MOSFETs
used as capacitors have their source and drain connected together,
and in TTL design, NAND gates are occasionally used as inverters by
tying their inputs together).

The above argument is why I put unique tags on each 'sd' working memory
element: the tag distinguishes the terminals while still allowing them
to be tied to the same node.

            Rick Spickelmier (ricks@berkeley)
            Electronics Research Laboratory, UC Berkeley

------------------------------

Date: 3 Dec 1984  16:26 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Scheme Overview  (Yale)

        [Forwarded from the MIT bboard by SASW@MIT-MC.]

                    AI Revolving Seminar

                An Overview of Yale Scheme

                       Jonathan Rees


        Wednesday   12/5/84     4:00pm      8th floor playroom

Yale Scheme, also known as T, was developed over the past three years by
the Yale Computer Science Facility.  It is being used as a production
Lisp system at Yale, UCLA, and elsewhere.  It features a compiler which
generates native VAX and MC68000 code and compiles closure-intensive
code efficiently enough that closures may be used in preference to
record structures for many applications which are space- or
time-critical.  I will discuss how the language and implementation work
and how T is different from other Scheme and Lisp systems, and give a
list of what I consider to be unsolved problems in the design of
Scheme-like languages.

------------------------------

Date: 4 Dec 1984 1105-EST
From: ALR at MIT-XX
Subject: Seminar - Principles of OBJ2  (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"Principles of OBJ2"

Jean Pierre Jouannaud
University of Nancy (France) and SRI

Friday, December 7, 1984
Refreshments at 3:00 pm, talk at 3:15 pm
Room NE43-453


OBJ2 is an object-oriented language with an underlying formal
semantics based on equational logic and an operational semantics
based on rewrite rules.  Key OBJ2 principles are:

1.  Use of parameterized modules (Objects and Theories).  Objects
encapsulate executable code (e.g. rewrite rules), whereas Theories encapsulate
assertions that may be nonexecutable (e.g. first order formulae).

2.  Specification of interface requirements for parameters (Views).

3.  Use of Module Expressions for creating complex combinations of modules.

4.  Use of subsorts to support:

        a simple yet powerful form of polymorphism (overloading).

        partially defined operations (use of "sort-constraint").

        a simple yet powerful and automatic form of error-recovery.

5.  Use of user-defined "built-ins", i.e. low-level data types described in
the implementation language itself (e.g. MACLISP).  "Built-ins" are first-class
objects; e.g., all other constructs apply to them, including subsort definitions.


We will discuss these principles by means of examples of OBJ
specifications and point out the main implementation issues.


HOST:  Prof. Guttag

------------------------------

Date: Wed, 5 Dec 84 16:59:47 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - QUTE Functional Unification Language  (IBM-SJ)

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Mon., Dec. 10 Computer Science Seminar
  10:30 A.M.  QUTE:  A FUNCTIONAL LANGUAGE BASED ON UNIFICATION
  Aud. B      A new programming language called Qute is introduced.
            Qute is a functional programming language which
            permits parallel evaluation.  While most functional
            programming languages use pattern matching as basic
            variable-value binding mechanism, Qute uses
            unification as its binding mechanism.  Since
            unification is bidirectional, as opposed to pattern
            match which is unidirectional, Qute becomes a more
            powerful functional programming language than most of
            existing functional languages.  This approach enables
            the natural unification of logic programming language
            and functional programming language.  In Qute it is
            possible to write a program which is very much like
            one written in conventional logic programming
            language, say, Prolog.  At the same time, it is
            possible to write a Qute program which looks like an
            ML (which is a functional language) program.  A Qute
            program can be evaluated in parallel
            (and-parallelism) and the same result is obtained
            irrespective of the particular order of evaluation.
            This is guaranteed by the Church-Rosser property
            enjoyed by the evaluation algorithm.

            M. Sato, Kyoto University
            Host:  J. Halpern
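
[An illustrative aside, not from the abstract: the bidirectionality
described above can be demonstrated with a toy first-order unifier.
Qute's own syntax is not shown in the announcement, so the sketch below
is in modern Python; all names are invented for this note, and the
occurs check is omitted for brevity.]

```python
class Var:
    """A logic variable; distinct instances are distinct variables."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return "?" + self.name

def walk(term, subst):
    # Follow bindings until we reach a non-variable or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure.

    Terms are Vars, tuples (compound terms), or atoms compared with ==.
    Unlike one-way pattern matching, variables on EITHER side may be
    bound.  No occurs check.
    """
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a is b:
        return subst
    if isinstance(a, Var):
        subst[a] = b
        return subst
    if isinstance(b, Var):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

# Bidirectional: variables on *both* sides get bound.
X, Y = Var("x"), Var("y")
s = unify((X, 2), (1, Y))
print(s[X], s[Y])  # prints: 1 2
```

A pattern matcher would only bind variables appearing in the pattern;
here the binding for X comes from the right-hand term and the binding
for Y from the left-hand term, in a single call.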

------------------------------

End of AIList Digest
********************

From:	COMSAT          7-DEC-1984 00:21  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a008703; 6 Dec 84 20:31 EST
Date: Thu  6 Dec 1984 13:43-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #172
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 7 Dec 84 00:05 EST


AIList Digest             Friday, 7 Dec 1984      Volume 2 : Issue 172

Today's Topics:
  Linguistics - Indonesian & Aymara' & Translation & Deficiencies
  Conference - Theoretical Approaches to Natural Language Understanding
----------------------------------------------------------------------

Date: 3 Dec 84 08:48 PST
From: Newman.pasa@XEROX.ARPA
Subject: Indonesian


In reply to the note from rob@ptsfa about "Indonesian".


Just one question in regard to your note about "Indonesian". Do you mean
that all dialects spoken in Indonesia have the features that you
mention? Or do you mean that the official language of Indonesia (called
Bahasa Indonesia I believe) has these features? Could you be more
specific? It has been many years since I lived in Indonesia, and I never
really learned enough of the language to have an opinion about your
assertions, but I do know that there are many languages spoken in
Indonesia, and that what you say may be true of any number of these
languages.


>>Dave

------------------------------

Date: Sat, 1 Dec 84 20:20:49 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Andean interlingua monograph.

Some mention has been made on this list recently of Aymara'
(that is an accent mark), an Andean language purportedly used
successfully by Iva'n Guzma'n de Rojas of La Paz, Bolivia, as
an interlingua for machine translation.  An article appears
in today's New York Times (Saturday, December 1) on page 4.
Probably of most interest to those involved in the interlingua
debate will be a reference to a 150 page monograph by Mr. Guzma'n
(no title given) published by the International Development
Research Center in Ottawa.  The article also mentions that
Mr. Guzma'n uses ``three-valued formulas, following the Polish
scientist Jan L/ukasiewicz'' to represent the Aymara' logic.
The remainder of the article seems to largely repeat what has
previously been cited on this list from articles in the Los
Angeles Times.
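[For readers unfamiliar with L/ukasiewicz's system: the standard
three-valued connectives can be sketched in a few lines.  This is the
textbook L3 logic, not anything taken from Mr. Guzma'n's monograph, and
the value names are my own.

```python
# A minimal sketch of Lukasiewicz's three-valued logic (L3):
# truth values 0 (false), 1/2 (unknown), 1 (true).
from fractions import Fraction

F, U, T = Fraction(0), Fraction(1, 2), Fraction(1)

def neg(a):
    return 1 - a

def conj(a, b):         # "and" is the minimum of the two values
    return min(a, b)

def disj(a, b):         # "or" is the maximum
    return max(a, b)

def impl(a, b):         # Lukasiewicz implication: min(1, 1 - a + b)
    return min(T, 1 - a + b)

# Unlike two-valued logic, "unknown implies unknown" is fully true:
print(impl(U, U))       # -> 1
# ...but "unknown and not-unknown" is not fully false:
print(conj(U, neg(U)))  # -> 1/2
```

Note that `p or not p` takes the value 1/2 when p is unknown, which is
one way such a logic departs from the classical two-valued case. -- Ed.]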
                                                -- Harry

------------------------------

Date: Mon, 3 Dec 84 9:16:12 EST
From: Pete Bradford (CSD UK) <bradford@Amsaa.ARPA>
Subject: Translation.

        The October 1 Electronic News article on Japanese-English translation
reminds me.........

        A young guy in the Pentagon devised this remarkable program to
translate English into Russian, and vice-versa.
        The Secretary of State for Defense was to visit his office and be
given a demonstration of the system.  On the arrival of the 'big-wig', our
hero asked him if he had a phrase he would like translated.  "What about 'The
spirit is willing, but the flesh is weak'?" asked the top man.
        This phrase was duly typed in, and after much flashing of lights etc,
the Russian translation appeared on the screen.   This was smugly read out to
the Secretary of State who pointed out, rather sheepishly, that he did not
speak Russian and was in no position to judge the quality of the translation.
        Things were about to break up in a very unsatisfactory and embarrassing
manner when our hero yelled "I've got it!  I'll just reverse the polarity of
the program and feed back the phrase it just came up with!".   "Brilliant!"
gulped his Director, recovering just sufficiently from his recent apoplexy
to enable him to talk again, "Let's do that."
        The Russian phrase was then fed back into the machine which had now
been switched into the Russian-English mode, and the small crowd waited
expectantly while more lights flashed and blinked.
        They say that the Director is still recovering in George Washington
Hospital and our hero has, of course, given up all thoughts of a successful
career in the Defense Department.  How was he to know the program would play
such a dirty trick on him?  It certainly performed the translation it had
been asked to do, but it did seem too loose or colloquial a translation:
'The whisky's OK, but the meat's lousy!'

------------------------------

Date: Monday, 03 Dec 84 10:46:51 EST
From: thompson (ross thompson) @ cmu-psy-a
Subject: Language deficiencies (or wife beating)

There was a mention earlier on this bboard that it is often difficult
to answer questions, because there is an implication which is not
true contained in the question.  The example given was the question
"Do you persist in your lies?"  A better-known example of the same
phenomenon is the classic "Have you stopped beating your wife?"

I don't know a lot about eastern religions, and I am sure I will be
shot down in flames for going out on a limb, but I believe that Zen
provides us with at least one answer to this problem.  If, in response
to a question, you reply "Mu," then you have ``unasked'' the question.
The situation in which you do this is precisely what is described above.

The interesting thing about this (to me) is not what word they chose,
but the fact that there is an accepted linguistic practice among
these people for dealing with what many people around here
would call a ``deficiency.''
                                        Ross Thompson

------------------------------

Date: Mon, 3 Dec 84 10:00:59 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Communication


        A General Semanticist named Harrington
        Returned from his colloquy swearing ten
        Natives could count
        Only half the amount
        Any properly trained investigator could while ignoring the fact
          that it was not really counting that they were doing, but
          rhyming.

Ancient chestnut from anthropological linguistics:

        Anthropologist (pointing):  What's that?
        Native: <forefinger>
        Anthropologist (pointing again):  And what's that?
        Native: <forefinger>

        (This goes on for a while.)

        Anthropologist:  You see, they have only the word <forefinger>
        for all these things, and make up for their deficient vocabulary
        by grunts and gestures.

(In fairness, this combines the (true) `finger' story with the persistent
canard about a `primitive language with grunts and gestures'.)

Opinion:  language, properly speaking, is principally a means of transmitting
information.  It happens to be used together with representational and
gestural systems (including the gestural system we know as intonation and
inflection) as a means of communicating a great deal more than (and sometimes
contrary to) the bare-bones information that it transmits.  See e.g.
Z. S. Harris, Mathematical Structures of Language, esp. ch 2 `Properties
of language relevant to a mathematical formulation'.

(By `contrary to' I refer to irony and the like.  Though I know no
Dutch, I bet there are instances where speakers say something like `That
must have been a gezellig meeting!', referring to e.g. a collection of
`strange bedfellows' brought together by political expedience.)

Much of this discussion confuses linguistic competence with communicative
competence.  Communicative competence boils down mostly to skills in
engaging others in a willing desire to communicate and understand.

        I have seen an affable extrovert on a Greek train
        communicate quite well with speakers of at least three
        languages of which he knew perhaps two words each.  (My
        companion identified Hungarian and Slovenian, I recognized
        German.) A gezellig time was had by all, proof positive of
        satisfactory communication (whether or not much information
        is transmitted), and liquor played a minuscule role.

        I have seen fluent speakers of the same dialect of English
        unable even to transmit information to one another, because
        of their abject failure to communicate.  And so have you.

Stereotypically, right-limbic communicative skills are best developed and
exemplified by women in western cultures, and by Japanese and Chinese
cultures in our reluctantly waning ethnocentricity.  What we call `small
talk' (software aside).

(Is there any AI work miming right-cerebral and right-limbic functions,
other than visual pattern perception?)

An important part of `engaging others in a willing desire to communicate
and understand' is the range of what I call gestures of solidarity--
affirming that we are comembers of the same gezellig in-group.  Jargon
plays a central role, especially in an electronic-mail environment.
Denigration of outsiders is felt necessary when the boundaries have not
yet been clearly defined and the door so to speak is not yet shut or
when an unwelcome interloper is suspected.  There are many unconscious
and semiconscious identifiers of class, ethnos, region, and so on in the
range of vocabulary choice and pronunciation (dialect), application of
standard or nonstandard grammatical rules (to call them standard and
nonstandard of course begs the sociological question), shared references
(`remember the old man who bought licorice there') and so on.

(Fade to track of Frank Sinatra crooning `Gezelligheid is made of this'.
Bring up following quote from Harris op. cit. 216:

        . . . the very simplicity of this system, which
        surprisingly enough seems to suffice for language, makes
        it clear that no matter how interdependent language and
        thought may be, they cannot be identical. It is not
        reasonable to believe that thought has the structural
        simplicity and the recursive enumerability which we see
        in language.  So that language structure appears rather
        as a particular system, satisfying the conditions of
        [the chapter cited above], . . . which is undoubtedly
        necessary for any thoughts other than simple or
        impressionistic ones, but which may in part be a rather
        rigid channel for thought.)


Bruce Nevin, bn@bbncch

------------------------------

Date: Tue, 4 Dec 84 15:59:27 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: re: saying the unsayable


      > Languages are not differentiated on the basis of what is
      > possible or impossible to say, but on the basis of what
      > is easier or harder to say.

                                        --Larry Wall (V2 #167)

My understanding of tense morphemes is that they have the same semantic
relation to adverbs of time that pronouns and classifier nouns have to
nouns: having said the adverb, the tense morpheme is obligatory; having
said the tense morpheme, certain non-specific adverbs (`in the future',
`in the past') need no longer be said.  But whether a given language has
a particular tense morpheme or not, the equivalent information about
temporal relationships may be expressed with adverbs of time or
conjunctions (`before', `after').  (Cf.  Harris, A Grammar of English on
Mathematical Principles, 265-79.)

I don't think even Whorf claimed that Hopi lacked all adverbs of time
and temporal conjunctions.

Achumawi, a Hokan language of northern California on which I have done
some work, has a dual number, like Classical Greek.  The dual is
obligatory whenever referring to a pair of something.  Having said the
dual suffix, the actual noun pair is almost always tacit.  The dual is
also obligatory in direct address to one's mother-in-law (if a man) or
father-in-law (if a woman), and also in certain religious invocations
and prayers.  A whole range of nuance expressing social relationships
and attitudes is thereby easy to express in Achumawi and awkward in
English.  But the `objective information' is transmitted in a pretty
obvious way, even in English.  Sometimes, the `objective information'
transmitted is that the speaker is referring to a pair; sometimes, the
`objective information' is that the speaker affirms a certain special
deference with respect to the intended audience.  Irony and the like can
complicate this further.  In each case, the dual suffix is an obligatory
choice given presence of certain explicit constructions; and given the
presence of the dual suffix those constructions need no longer be
explicitly said and are instead tacitly understood.

Honorifics in Japanese present a rich field for issues of this sort.
Indeed, every language abounds with reductions of explicit constructions
to concise, nuance-laden forms.

Translating from a nuance-laden reduced form to an explicit,
spelled-out, fully explanatory form always loses the impact that the
reduced form has on a native speaker.  Closely analogous to translating
a joke.  Which is why `getting' native humor is such an excellent test
of fluency.  (For many years, anthropologists speculated whether or not
American Indians joked!  My experience suggests they probably were the
frequent butts of deadpan setups and put-ons.)

Ross recently proposed differentiating languages along a McLuhanesque
`hot/cool' spectrum according to how easy or difficult it is to recover
tacit information from under pronominal references; an article in the
last issue of Linguistic Inquiry reviews and extends this work.  (Sorry,
I don't have either reference at hand.)

        Bruce Nevin, bn@bbncch

------------------------------

Date: Tue 27 Nov 84 11:10:47-PST
From: ISRAEL@SRI-AI.ARPA
Subject: Conference - Theoretical Approaches to Natural Language Understanding

                       CALL FOR PAPERS

                        WORKSHOP ON

                   Theoretical Approaches to
                Natural Language Understanding

                    Dalhousie University
                    Halifax, Nova Scotia
                    28-30 May, 1985

General Chairperson: Richard Rosenberg, Mathematics Department,
Dalhousie University, Halifax, N.S. B3H 4H8

Program Chairperson: Nick Cercone, Computing Science Dept., Simon
Fraser University, Burnaby, B.C. V5A 1S6

Theoretical Approaches to Natural Language Understanding is intended
to bring together active researchers in Computational Linguistics,
Artificial Intelligence, Linguistics, Philosophy, and Cognitive
Science to discuss/hear invited talks, papers, and positions relating
to some of the 'hot' issues regarding the current state of natural
language understanding.  The three areas chosen for discussion are
aspects of grammars, aspects of semantics/pragmatics, and knowledge
representation.  In each of these, current methodologies will be
considered: for grammars - theoretical developments, especially
generalized phrase structure grammars and logic-based meta-grammars;
for semantics - situation semantics and Montague semantics; for
knowledge representation - logical systems and special purpose
inference systems.

Papers are solicited on topics in any of the areas mentioned above.
You are invited to submit four copies of a paper (double-spaced,
maximum 4000 words) to the program chairman: Nick Cercone, before 12
January, 1985.  Authors will be notified of acceptances by 27
February.  Accepted papers, typed on special forms, will be due 30
March 1985 and should be sent to the program chairman.  To make
refereeing possible it is important that the abstract summarize the
novel ideas, contain enough information about the scope of the work,
and include comparisons to the relevant literature.  Accepted papers
will appear in the Proceedings; those papers so recommended by the
reviewers will be considered for inclusion in a special issue of
Computational Intelligence, an international Artificial Intelligence
journal published by the National Research Council of Canada.
Presentation of papers at the Workshop will be at the discretion of
the program/organizing committee in order to maintain the focus and
workshop flavor of this meeting.  Information concerning local
arrangements will be available from the general chairman: Richard
Rosenberg.  Proceedings will be distributed at the workshop and
subsequently available for purchase.

------------------------------

End of AIList Digest
********************

From:	CSVPI           8-DEC-1984 04:31  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a016417; 8 Dec 84 2:34 EST
Date: Fri  7 Dec 1984 22:15-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #173
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sat, 8 Dec 84 04:28 EST


AIList Digest            Saturday, 8 Dec 1984     Volume 2 : Issue 173

Today's Topics:
  Journals - LISP Papers & Computational Intelligence,
  Brain Theory - Caenorhabditis Elegans,
  Cognition - Infant Amnesia & PBS and The Brain,
  Seminars - Speech & Language & Memory & Math Representation  (CSLI),
  Conference - Intelligent Systems and Machines
----------------------------------------------------------------------

Date: Fri 7 Dec 84 15:16:31-PST
From: Michael Georgeff <georgeff@SRI-AI.ARPA>
Subject: Journals for LISP papers


I wish to submit a paper on a new and efficient method for implementing
funargs in LISP (currently an SRI Tech Note) to an appropriate
journal.  Anyone know of any GOOD journal that publishes papers
on programming languages and implementations??

Michael Georgeff.

------------------------------

Date: Thu 6 Dec 84 16:58:08-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Computational Intelligence--New Journal

Computational Intelligence/Intelligence Informatique is a new journal which
will publish in English/French high-quality original theoretical or
experimental research in computational (artificial) intelligence.  The
editors are Nick Cercone/Simon Fraser University and Gordon McCalla/Univer.
of Saskatchewan. Editorial board includes L. Bolc, A. Mackworth, A. Ortony,
R. Perrault, E. Sandewall, A. Sloman, N. Sridharan, D. Wilkins, etc.
Subscription rates: U.S. $85 institutional, $47 personal.  It will be
a quarterly with the first issue to be available February 1985.
It will be published by the National Research Council of Canada and
sponsored by the Canadian Society for Computational Studies of Intelligence.
For more information: Distribution, R-88 (Computational Intelligence),
National Research Council of Canada, Ottawa, Ontario, Canada, K1A OR6.
Special rates for members of Canadian Societies.  Manuscripts should be
addressed to the editors, Computational Intelligence, Computing Science
Department, Simon Fraser University, Burnaby, British Columbia, Canada,
V5A 1S6.

I will be ordering this title for the Math/CS Library.  [...]

Harry Llull

------------------------------

Date: Thu 6 Dec 84 12:01:07-CST
From: ICS.DEKEN@UTEXAS-20.ARPA
Subject: brains, kludges, and elegance

The most substantial evidence of brain kludgery or lack thereof
curiously resides in the structure (completely mapped) of a nematode
named ... "elegant."

Caenorhabditis elegans has 302 neurons of 118 different types, which
make about 8000 synapses in total (each process synapses with about
50% of its neighbors).  The lineage of every one of these cells is
known, and the process by which neurological structures are formed may
well seem, to a computer scientist, a kludge.  Bilateral symmetry is
not produced, for example, in the "logical" mirror-image development
of a single precursor.  The word "kludge," though, carries a pejorative
connotation which seems inappropriate - there are multiple forces and
priorities at work. (One might similarly feel that "kludge" is not
the right word to describe democracy relative to totalitarianism.)

A better word, which may mean something to biologists and others
outside the hacker's ken, might be "fossiliferous," used to describe
any system (program or biological organism) which carries along the
baggage of its own trial-and-error evolution.  As Sulston, White,
Thomson, and Schierenberg put it:

        "... the perverse assignments, the cell deaths, the long-range
        migrations - all the features which could, it seems, be
        eliminated from a more efficient design - are so many
        developmental fossils."

(There is a three-part series on C. elegans in Science of 22 Jun,
6 Jul, and 13 Jul).

------------------------------

Date: Thu, 6 Dec 1984  01:59 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Infant Amnesia   V2 #165

The general evidence for infant memories is pretty poor.  For one
thing, as Ken Laws points out, there is something mighty suspicious
about those handfuls of memories each person claims.  In psychiatry
some of these are called "screen memories".  A very common feature is
to remember some scene, sort of "eidetically" -- but on questioning, the
subject very often sees itself right in the center of the stage!
Since this is impossible, obviously, the conclusion is that the memory
is a construct.

What's worse, with careful questioning, one usually finds that the
memory has indeed been rehearsed, as Ken remarks, perhaps
periodically.  Presumably it has been reconstructed in the process,
too -- and can hardly be called a memory, but rather, an elaborated
theory or fantasy.

Finally, even more careful questioning is revealing: how do you know
that this was when you were three years old?  Oh, I'm sure of it.  It
was the day my dog was run over.  An innocent clue like that most
likely points to an incident of Freudian magnitude; a loss or death,
itself rehearsed perhaps for months, and then, unconsciously, for all
the rest of one's life.

In any case it is silly to haggle over the sharpness of the cutoff of
infantile amnesia.  I like theories like this: our experience is first
encoded in rather stupid ways; a square is seen as a line attached to
another line attached to another line, etc.  Like an early
assembly-language.  Later, a square is represented as "closed path of
equal lines" and, later, orthogonal pairs of parallels, etc. -- going
from Fortrans to Pascals to LOGOs to Smalltalks to who-knows-what.  The
representations and their interpreters grow more sophisticated, and
those first machine-languages of infancy just can't be always
upwards-compatible.  So, even if those early memories were not, in
fact, ever entirely lost, they're doomed to become
unintelligible, eventually.

------------------------------

Date: 1 Dec 1984 20:03:35 EST
From: HARTUNG@USC-ISI.ARPA
Subject: PBS & The Brain

Hello,
   I too have been watching the PBS series on the brain.  And while I find
it to be remarkably up to date, I do have a concern about it.  This is a
concern not just for this series but for many physiological explanations
of experiential phenomena presented to lay audiences.  When statements are
made that such and such an area of the brain is responsible for some known
effect, or that damage to location X results in some new and peculiar
observed behavior, these statements are (I fear) taken in a way they are
not meant to be.
   The lay audience has a different frame of reference than a psychologist or
neurophysiologist.  Scientists studying brain functions view their subjects
as complex models involving the interaction of a variety of known components:
neurotransmitters; ganglia; axon projections; structures, etc.  The majority
of the audience has only limited exposure to these objects and concepts and
not enough time to really develop a similar framework to view all this new
knowledge in.  Instead I believe they do what people usually do when under-
standing new material and that is relate it to what they already know.  What
people already know is that their brain is responsible for their subjective
awareness of the world.  And as a result of the attempt to integrate knowledge
about the brain with the fact that it is the seat of subjective experience
there is a strong possibility that people will believe that these explanations
of brain functioning are in fact explanations of how it is that they have
an experiential component to their lives.
   Such physiological explanations will probably never supply the answer to
the question of how it is that we have the kind of experience of things that
we do.  For good argument on this point I refer you to Nagel's article in the
Oct. '74 Philosophical Review "What is it like to be a bat?".  But, lay
audiences are rarely if ever informed of this.
   Another point frequently skipped in presenting brain physiology to lay
audiences is the great importance of subjective experience in the functioning
of cognition.  (See Natsoulas, T.  Residual Subjectivity. American Psychologist
March 1978.)  Indeed subjectivity is so inseparable from cognition that it
raises serious questions about the capacity of digital machines to perform
the full range of human abilities, given that such digital machines may not
be able to achieve a subjective perspective (Searle, J. Minds, Brains, and
Programs.  Behavioral and Brain Sciences, Vol. 3, No. 3).  Arguments concerning
the mind brain problem have even come to doubt the capacity of present
scientific approaches to the study of mental phenomena and their relationship
to physical phenomena to have any success (Fodor, J.A. Methodological solipsism
considered as a research strategy in cognitive psychology.  Behavioral and
Brain Sciences Vol. 3 No. 3).
   I assume that the AI-LIST audience is aware of details of these arguments.
The television audience mostly is not.  I understand the reluctance of
television producers to include arguments as abstract and difficult as those
on the mind-brain problem.  Not to mention the fact that certain religious
groups find them upsetting.  However, I feel it is important for us who are
scientists, to encourage people consulting us about presentations they would
make to lay people, to provide the broadest possible context for our arguments,
and always to remember who our audience is and how different their perspective
may be.

                                Michael A. Moran
                                Lockheed Advanced Software Laboratory

address HARTUNG@USC-ISI

------------------------------

Date: Wed 5 Dec 84 21:28:17-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminars - Speech & Language & Memory & Math Representation 
         (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                ABSTRACT OF NEXT WEEK'S SEMINAR
      ``A Generalized Framework for Speech Recognition''

This talk will describe a framework for speaker-independent,
large-vocabulary and/or continuous speech recognition being developed at
Schlumberger (Fairchild).  The framework consists of three components:
  1) a finite-state pronunciation network which models relevant
     acoustic-phonetic events in the recognition vocabulary;
  2) a set of generalized acoustic pattern matchers; and
  3) an optimal search strategy based on a dynamic programming algorithm.
The framework is designed to accommodate a variety of (typically disparate)
approaches to the speech recognition problem, including spectral template
matching, acoustic-phonetic feature extraction and lexical pruning based
on broad-category segmentation.  A working system developed within this
framework and tailored to the digits vocabulary will also be described.  The
system achieves high recognition accuracy on a corpus spoken by
approximately 250 talkers from 22 ``dialect groups'' within the continental
United States.
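[As a sketch of how the three components might fit together, here is a
toy rendering: a hand-made finite-state pronunciation network, stand-in
acoustic scores in place of the pattern matchers, and a Viterbi-style
dynamic-programming search.  The network, labels, and probabilities are
all invented for illustration; none of this is the Schlumberger system
itself.

```python
# Toy speech-recognition skeleton: network + matcher scores + DP search.
import math

# (1) Pronunciation network: state -> list of (next_state, phonetic label).
network = {
    0: [(1, "t")],
    1: [(1, "uw"), (2, "uw")],   # self-loop lets "uw" span several frames
    2: [],                       # final state: the word "two"
}
FINAL = 2

# (2) Matcher scores: log P(frame | label); here the "frames" are just
# made-up dictionaries of per-label likelihoods.
def acoustic_score(frame, label):
    return math.log(frame.get(label, 1e-6))

# (3) DP search: best log-score of reaching each state after each frame.
def viterbi(frames):
    best = {0: 0.0}
    for frame in frames:
        nxt = {}
        for state, score in best.items():
            for succ, label in network[state]:
                s = score + acoustic_score(frame, label)
                if s > nxt.get(succ, -math.inf):
                    nxt[succ] = s
        best = nxt
    return best.get(FINAL, -math.inf)

# Three frames that look like "t", "uw", "uw":
frames = [{"t": 0.9, "uw": 0.05}, {"t": 0.1, "uw": 0.8}, {"uw": 0.7}]
print(viterbi(frames))   # roughly -0.685, i.e. log(0.9*0.8*0.7)
```

A real system would run such a search over a large network with scores
from trained matchers, but the recursion is the same. -- Ed.]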
                                                ---Marcia Bush
                        ____________

                ABSTRACT OF NEXT WEEK'S COLLOQUIUM
                      ``Data Semantics''

Abstract: There is a growing agreement of opinion that several semantic
phenomena can only be adequately dealt with in a theory which takes
partiality seriously, a theory of partial objects. There is no agreement
about what these partial objects are; for instance, whether they represent
``pieces of the world'' or ``states of partial information about the world.''
Yet, the choice of the perspective determines in large part the potential
of the theory.  I will discuss various aspects of Data Semantics, a theory
being developed by Frank Veltman and me, which takes the second
perspective as basic: the semantic behavior of several types of expressions
can best be understood if we take them to relate to our lack of information,
and regard them as patterns on how information can grow. I will argue that
problems concerning quantification and equality force us to distinguish
between different kinds of partial objects.
                                                        ---Fred Landman



                    F1 (AND F3) PROJECT MEETING

Title:     Self-propagating Search of Memory
Speaker:   Pentti Kanerva
Time/Date: Tuesday, December 11, 3:15 p.m.
Place:     Ventura Seminar Room

Abstract: Human memory has been compared to a film library that is indexed
by the contents of the film strips stored in it.  How might one construct
a computer memory that would allow the computer (a robot) to recognize
patterns and to recall sequences the way humans do?  The model presented
is a simple generalization of the conventional random-access memory of a
computer.  However, it differs from it in that (1) the address space is very
large (e.g., 1,000-bit addresses), (2) only a small number of physical
locations are needed to realize the memory, (3) a pattern is stored by
adding it into a SET of locations, and (4) a pattern is retrieved by POOLING
the contents of a set of locations.  Patterns (e.g., of 1,000 bits) are
stored in the memory (the memory locations are 1,000 bits wide) and they
are also used to address the memory.  From such a memory it is possible to
retrieve previously stored patterns by approximate retrieval cues--thus,
the memory is sensitive to similarities.  By storing a sequence of patterns
as a linked list, it is possible to index into any part of any "film strip"
and to follow the strip from that point on (recalling a sequence).
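[The model described is Kanerva's sparse distributed memory; a toy
rendering of the store-by-adding / retrieve-by-pooling idea follows.
The pattern width, location count, and activation radius are my own
small-scale choices (the talk speaks of ~1,000-bit patterns), not
values from the abstract.

```python
# Toy sparse distributed memory: a pattern is WRITTEN by adding it into
# every hard location near the write address, and READ by pooling
# (summing) the counters of every hard location near the read address.
import random

random.seed(0)
N_BITS = 256   # address/word width (the talk uses ~1,000)
N_LOCS = 500   # small number of physical "hard" locations
RADIUS = 120   # activate a location within this Hamming distance
               # (two random 256-bit words differ in ~128 bits)

def rand_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

hard_addresses = [rand_bits(N_BITS) for _ in range(N_LOCS)]
counters = [[0] * N_BITS for _ in range(N_LOCS)]

def store(address, pattern):
    # Writing adds the pattern into the SET of nearby locations.
    for i, loc in enumerate(hard_addresses):
        if hamming(address, loc) <= RADIUS:
            for j, bit in enumerate(pattern):
                counters[i][j] += 1 if bit else -1

def retrieve(address):
    # Reading POOLS the nearby locations and thresholds each bit at zero.
    sums = [0] * N_BITS
    for i, loc in enumerate(hard_addresses):
        if hamming(address, loc) <= RADIUS:
            for j in range(N_BITS):
                sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

# Autoassociative use: store a pattern at its own address, then recall
# it from a noisy cue -- the memory is sensitive to similarity.
pattern = rand_bits(N_BITS)
store(pattern, pattern)
cue = pattern[:]
for j in random.sample(range(N_BITS), 20):  # corrupt 20 of the 256 bits
    cue[j] ^= 1
recalled = retrieve(cue)
print(hamming(recalled, pattern))  # -> 0: the noisy cue is cleaned up
```

With many patterns stored the pooled sums interfere and recall becomes
approximate, which is exactly the similarity-sensitivity the abstract
mentions. -- Ed.]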
                         ____________

                       AREA C MEETING

Topic:     Theories of variable types for mathematical practice,
           with computational interpretations
Speaker:   Solomon Feferman, Depts. of Mathematics and Philosophy
Time/Date: 1:30-3:30 p.m., Wednesday, December 12
Place:     Conference Room, Ventura Hall

Abstract:  A new class of formal systems is set up with the following
characteristics:
   1) Significant portions of current mathematical practice (such as in
      algebra and analysis) can be formalized naturally within them.
   2) The systems have standard set-theoretical interpretations.
   3) They also have direct computational interpretations, in which all
      functions are partial recursive.
   4) The proof-theoretical strengths of these systems are surprisingly
      weak (e.g. one is of strength Peano arithmetic).
   Roughly speaking, these are axiomatic theories of partial functions and
classes.  The latter serve as types for elements and functions, but they
may be variable (or "abstract") as well as constant.  In addition, an element
may fall under many types ("polymorphism").  Nevertheless, a form of typed
lambda calculus can be set up to define functions.
   The result 3) gets around some of the problems that have been met with
the interpretation of the polymorphic lambda calculus in recent literature
on abstract data types.  Its proof requires a new generalization of the
First Recursion Theorem, which may have independent interest.
   The result 4) is of philosophical interest, since it undermines
arguments for impredicative principles on the grounds of necessity for
mathematics (and, in turn, for physics).
   There are simple extensions of these theories, not meeting condition 2),
in which there is a type of all types, so that operations on types appear
simply as special kinds of functions.



                           NL1 MEETING

Topic:      ``Association with Focus''
Speaker:    Mats Rooth
Time/Date:  2 p.m., Friday, December 7
Place:      Trailer Seminar Room
Note:       The content will overlap with but be non-identical to the
            presentation the speaker gave in the intonation seminar.

Abstract: In the context of adverbs of quantification, conditionals, and
``only,'' focus can have truth conditional significance.  Suppose Mary
introduced Bill and Tom to Sue and performed no other introductions.  Then
``Mary only introduced Bill to SUE'' is true, while ``Mary only introduced
BILL to Sue'' is false.  Similarly, ``MARY always takes Sue to the movies''
and ``Mary always takes SUE to the movies'' have different truth conditions.
My general claim is that focus influences truth conditions indirectly:  the
semantics of the constructions in question involve contextual parameters,
typically unspecified domains of quantification, which are fixed by a
focus-influenced component of meaning.  This idea is executed in a Montague
grammar framework.

------------------------------

Date: Fri, 7 Dec 84 10:36:07 EST
From: Morton A Hirschberg <mort@BRL-BMD.ARPA>
Subject: Conference - Intelligent Systems and Machines


                                CALL FOR PAPERS

                1985 Conference on Intelligent Systems and Machines

Dates:  April 23-24, 1985

Place:  Oakland University
        Rochester, Michigan

Technical papers reflecting both advances and applications in all aspects of
intelligent systems and machines will be considered.  Suggested topics include,
but are not restricted to:

     Intelligent Robotics, Machine Intelligence, C3I, Adaptive Control and
     Estimation, Visual Perception and Computer Vision, Pattern Recognition
     and Image Processing, Artificial Intelligence for Engineering Design,
     Intelligent Simulation Tools, Computer-Integrated Manufacturing Systems,
     Knowledge Representation, Expert Systems, Game Theory and Military
     Strategy, Interpretation of Multisensor Information, Automatic Message
     Understanding, Natural Language and Automatic Programming.

Authors are requested to submit a 300-500 word abstract by January 31, 1985 to:

     Professor Nan K. Loh, Conference Chairman,
     (313)377-2222

     Professor Christian Wagner, Technical Review Committee Chairman
     (313)377-2215

     Center for Robotics and Advanced Automation
     School for Engineering and Computer Science
     Oakland University
     Rochester, Michigan 48063

The conference will be preceded by tutorials on AI and Robotics held on 22 April.

------------------------------

End of AIList Digest
********************

From:	CSVPI           9-DEC-1984 03:50  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a020196; 9 Dec 84 1:26 EST
Date: Sat  8 Dec 1984 17:25-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #174
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 9 Dec 84 03:46 EST


AIList Digest             Sunday, 9 Dec 1984      Volume 2 : Issue 174

Today's Topics:
  AI Tools - UNSW Prolog,
  Books - Pitman AI Series,
  Cognition - Childhood Memories,
  Expert Systems - Optical Disk Memories,
  Machine Translation - Folklore,
  Knowledge Representation - Nonverbal Meaning,
  Seminar - Reinforcement Learning  (CMU)
----------------------------------------------------------------------

Date: Thu, 6 Dec 84 13:08:58 PST
From: Adolfo Di-Mare <dimare@UCLA-LOCUS.ARPA>
Subject: UNSW Prolog

    Date: Mon 26 Nov 84 23:21:34-PST
    From: Michael A. Haberler <HABERLER@SU-SIERRA.ARPA>
    Subject: UNSW Prolog interpreter
    To: info-ibmpc@USC-ISIB.ARPA

I have ported the University of New South Wales Prolog interpreter to an
IBM PC running MS-DOS 2.0. It implements all built-in predicates of the
Unix version and can call your favorite editor or the command line inter-
preter.  UNSW Prolog is closely patterned after Prolog-10, but has no
compiler.

I got permission to redistribute the interpreter from the author of
the Unix version, Claude Sammut of UNSW. If you want to obtain a copy,
sign the license which can be FTP'ed from [SIERRA]<HABERLER>PROLOG.LICENSE,
and send the license with 2 DSDD diskettes to the address below. Neither
Claude nor I charge anything for it.

Michael Haberler
Computer Systems Laboratory ERL 403
Stanford University, Stanford CA 94305
(415) 497-9503


        Adolfo
              ///

------------------------------

Date: Fri 7 Dec 84 17:32:57-EST
From: SRIDHARAN@BBNG.ARPA
Subject: Pitman AI series now a concrete reality!


Many of you know that Derek Sleeman and I are the two Main
Editors for the Pitman AI research notes series.  The series was
conceived and developed over the past 18 months.  Some of you also saw
the Pitman booth at the AAAI-84 trade show.
Finally, the series has become a concrete reality.  I have
received the first "book" in the series.  Another six books will be
out within the month.  The first title is Perry Miller, A Critiquing
Approach to Expert Computer Advice: ATTENDING.
The other titles are listed below.
Paul Cohen, Heuristic Reasoning About Uncertainty.
A. Palay, Searching with Probabilities.
Y. Ohta, Knowledge-Based Interpretation of Outdoor Natural Color Scenes.
R. Korf, Learning to Solve Problems by Searching for Macro-Operators.
P. Politakis, Empirical Analysis for Expert Systems.
J. Kender, Shape from Texture.

The series covers the whole spectrum of AI and publishes research
materials suitable for use in graduate courses and seminars, and as
reference material for individuals working in this field.  The aims
of the series are (a) rapid publication in softback form;
(b) worldwide exposure for significant research results; and
(c) low cost - usually under $20.

Authors are encouraged to get in touch with one of the main editors
either Sridharan@BBNG or Sleeman@SUMEX.  [...]

------------------------------

Date: Fri, 7 Dec 84 21:30:31 est
From: utcsrgv!dciem!mmt@uw-beaver.arpa
Subject: Childhood memories

I have many memories dating back to as early as my 2nd birthday, and
can clearly remember large parts of the floor plan of the school I
attended from 3-5.  But ALL these memories are pictorial, not sequential.
I cannot remember happenings until about 5, when I remember my first
introduction to French verb conjugation.  Perhaps the truth is that
children are not capable of sequential logical operations until around 5,
and therefore cannot remember events of that kind, whereas pictures
are more readily preserved if you happen to grow up to be imagery-oriented.


Martin Taylor

------------------------------

Date: Thu 6 Dec 84 19:19:58-EST
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Personal Assistants & Optical Disks

     Re: Dietz's speculation about optical disks:

     Optical disks will clearly impact information technology in
general (microforms, magnetic tape, commercial databases, book
publishing, etc.) and microcomputers in particular in many
revolutionary ways.  One potential use would be to integrate the
optical disk with AI-based integrated software in a microcomputer
product which would be a powerful general purpose idea processor and
personal assistant.

     We already see a trend towards general purpose idea processors in
such micro products as Framework, Symphony, Thinktank, Clout, Dayflo,
Factfinder, and The Desk Organizer.  This trend is likely to continue
and to accelerate as new generations of microprocessors rapidly come
online and make available ever greater random access memory for
personal computer users.  Framework and Symphony are the crude
precursors of general purpose personal assistant programs of 1MB, 5MB,
and more of memory.

     A sign of the times: Mitch Kapor, the founder of Lotus, recently
commented in an MIS Week interview that the next key step for his
company would be to explore current AI research in depth, and to
develop new more powerful products that were capable of sophisticated
qualitative, not just quantitative, information processing.

     Optical disks would nicely interface with the next generation of
general purpose idea processors.  With them one could easily store,
retrieve, and manipulate all the vital information and minute details
in one's life: financial transactions, notes for miscellaneous
projects, diary entries, address books, medical records, rough drafts,
datebooks, electronic mail, shopping lists, statistics, papers,
bibliographies, administrivia, programs in progress, graphs, abstracts
and full-text documents downloaded from commercial databases, etc.
Every individual record or key chunk of information in one's personal
digital archive could be uniquely identified by a date and time stamp,
and every personal database, structured and/or free-form, could be
integrated into a single richly interconnected knowledgebase.  The set
of storage optical disks for a program of this kind would constitute
for anyone, in compact and efficient form, an extremely thorough
journal of his or her life.

     Write-once optical disks would actually be preferable for this
archival purpose to disks that could be erased and written over.
Subsets from the master archival disk(s), of any desired information
or complex combination of records, could be transferred at will to
working floppy or hard disks.  The technology for the greatest
revolution in the history of personal information management is
already solidly in place.

     It is not likely that the total information processed by a
personal assistant for an average person over a lifetime would occupy
more than one or two disks.  Even for someone whose personal
information needs were much greater than average--say, a Harvard
economics professor who is a dedicated teacher, a prolific scholar and
author, holds a cabinet-level post (not concurrently with his teaching
responsibilities, of course), and has an active globe-trotting social
life--under 100 disks would probably neatly archive a lifetime of rich
intellectual, professional, and social activity.  Our professor would
be able to pinpoint in a few minutes those two sentences in which x
remarked about y in a private communication twenty years ago, or that
small note of last year which captured a flash of insight about how to
improve a formula in an econometric model of the Venezuelan oil
industry.  (Literary scholars analyzing the biodisks of future Walt
Whitmans or Virginia Woolfs would be able to reconstruct in
microscopic detail the evolution of their subjects' works and themes,
and the interaction of quotidian life events with their imaginative
creations.)

     AI-based personal assistants and optical disks seem to be made
for one another.  I wouldn't be surprised to see prototype products on
the market within the next two years.  By the '90s we may well wonder
how we ever got by without them.

-- Wayne McGuire (mdc.wayne@mit-oz)

------------------------------

Date: 7 Dec 84 16:22:46 EST
From: Allen <Lutins@RU-BLUE.ARPA>
Subject: more on translation...


I understand that a similar attempt with a Chinese/English translator
yielded the following results:

English input:  "Out of sight, out of mind"

Translated response: "Blind and Stupid"

I did have the occasion to "speak" with a Japanese student using a Sharp(?)
hand-held translator.  Surprisingly, general ideas were conveyed quite well.
However, I think we're still a long way off from getting a computer to
translate a language any better than an eight-year-old bilingual person can.

                                                -Allen

------------------------------

Date: Fri, 7 Dec 84 09:58:10 pst
From: Douglas young <young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa>
Subject: Nonverbal meaning

Following my enquiry in AIList 62, a few people have asked what I
mean by "nonverbal meaning". It seems appropriate to reply to them,
and to explain to any others who may not understand the significance
of the term, through the medium of AIList.
   While until quite recently Wittgenstein, Frege, Quine, and Chomsky
might have seemed nearer than any other philosophers of language (or
anyone else, for that matter) to providing a firm foundation from
which to represent meaning, none has been willing to go systematically
"deeper" than using words, ultimately, as that foundation. They have
written only of very unspecific and vague concepts and structures.
But Jerry Fodor's recent and exciting book, "The Modularity of Mind,"
made, in my view, a major leap ahead in at least recognising that meaning
is founded upon nonverbal, cognitive modules, although he did not suggest
either the exact form that such modules might take or just how they
could be applied to providing nonverbal meaning.
   We have been working here for several years on the theory and
foundations of a system by which word and sentence meaning could be
represented nonverbally in a natural language understanding system. The
principles of this system arose from clues derived from some
neurophysiological experiments I conducted during 1976-78 (in which
recordings were taken from the pulvinar complex, a part of the brain that in
man is involved in language but that also exists as far back in the
phylogenetic tree as the rabbit). During the following six years, further neurological
and psychological research provided the detailed foundations of a system
by which we could represent the meaning(s) of any word or sentence, in
English (but that is essentially transportable to any other major
natural language), wholly nonverbally.
   Some of the neurological and psychological grounds, for both the
semantic and the syntactic base of the system were described in two
papers published in Medical Hypotheses in 1982 and 1983, but, as I
mentioned in my previous communication, the original systems of modalities
and coding described in these papers have long since been superseded, so
that they (but not the grounds) are of little significance now. We are
currently in the early stages of designing the software for a prototype of
the modal system, and some of the results of this work should be published
during the latter part of 1985.
  In order to explain as concisely as possible the principles and some of
the techniques employed, it may be helpful to take people back to basics:
Try to explain by words alone the meaning of any one of a range of
different words (eg, MUG, DIFFERENTIATE, WALK, OR, INTERNATIONAL, QUICKLY).
You will succeed in providing several sets and trees of dictionary-type
definitions; but, in the end, if you continue to ask yourself the meaning
of each new word in each succeeding set of definitions, you will either
get into an endless cycle of using the same words with which you began
your definitions, or you will reach an impasse. If, however, you then
ask yourself, and consider carefully, the subjectively experienced
nonverbal significances of those same words, several ideas will come to
mind. For example, in respect of MUG, you will likely notice the fact
that it has both aspects of "appearance" (such as its visual shape, or
the interorientation of its parts to one another) and of "function" (such
as the motor and kinaesthetic sequences of events that enable you to
drink from a mug).  The same kind of thoughts may also occur to you when
you consider a word like QUICKLY or UP, for example. Abstract words, like
the verb MATCH, and "long" words, like INTERNATIONAL, will require some
or many levels of verbal "unfolding" of their meanings in order for you
to be able to reach any of their nonverbal foundations; but these words
also can be nonverbally represented, by means of the "mental modalities".
In fact, all of these nonverbal aspects of meaning can be represented
by means of a whole range of modalities.
    The system incorporates 32 different modalities, of which 27 are
neurologically based (such as visual detection of movement (VDM), verbal
expression (VXP), kinaesthetic (KIN), central autonomic proprioception
and control (CAP)), and 5 are the mental modalities, for which there are
no neurological, only cognitive, grounds (such as cognitive mental acts
(CMA), metaconceptual (MET), emotive mental states (EMS)). Codes within
these modalities, grouped together as a frame of generic parts of function
and/or appearance, and closely interrelated, can provide a nonverbal
meaning representation for any word. The meaning of a sentence is provided
through an interactive syntactic process that, both anteroactively and
retroactively, interrelates appropriate segments of those modal code
frames, so as to disambiguate both the individual word meanings and their
"use-categories" (i.e., "object", "activity", "characteristic", or "relation").
By this method, it is possible to represent the meaning(s) of any sentence
nonverbally, and at the same time provide access, up to any depth required
of a particular system application, to direct and associated knowledge re
that sentence.
   The modal system seems both versatile and quite powerful, and it has
the advantage over some other systems of NLU that it reduces memory and
storage requirements by taking advantage of the many cognitively equivalent
modal aspects in descriptions of similar objects, activities or
characteristics. One rather satisfying aspect of the mental modalities
is that the cognitive mental act modality not only provides for the
nonverbal meanings of such words as ASSOCIATE, NEGATE, SYMBOLIZE, MATCH
or CONJUNCT, but also provides the means of executing the relevant logical
activity. Incidentally, another feature of the system is that it can
provide for both metaphor and idiom; but work on this will almost
certainly be delayed until 1986 due to the need to complete the basic
system software for the prototype.
  It would be inappropriate in the AIList to do more than try to provide
with sufficient background an idea of the general characteristics of the
system. I hope, however, that what I have written will be sufficient to
explain at least what sort of thing I am referring to by "nonverbal meaning".
As mentioned in AIList 62, I would be most interested to hear about, and/or
to receive copies of any papers from other projects  in this or any allied
area of natural language understanding.

      Douglas A. Young
      Dept of Computer Science
      University of Manitoba
      Winnipeg
      Manitoba, R3T 2N2
      CANADA

------------------------------

Date: 7 Dec 84 11:48:09 EST
From: Steven.Shafer@CMU-CS-IUS
Subject: Seminar - Reinforcement Learning  (CMU)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Richard Sutton, from U. Mass., will be speaking at next Tuesday's AI
Seminar.  WeH 5409 at 3:30 pm.  If you'd like to speak with him
during his visit, please contact Geoff Hinton.

REINFORCEMENT LEARNING:  LEARNING METHODS FOR COMPLEX SYSTEMS

   Reinforcement learning is the process of learning to make decisions
based on the observed results of previous decisions.  It is
distinguished from other forms of machine learning in that it does not
require instruction as to what the learning system should do, only
evaluation of what it does do.  In this sense reinforcement learning
requires less help from its environment and is more powerful and robust
than other forms of learning.  In complex learning systems it is
particularly difficult to specify in detail what the learning system
should do, and reinforcement learning is particularly relevant.
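
[The distinction drawn above -- evaluation of what the system does, rather
than instruction in what it should do -- can be illustrated with a minimal
sketch.  This is an assumed epsilon-greedy multi-armed-bandit learner in
Python, not one of the algorithms compared in the talk; all names and
parameters here are illustrative only:]

```python
import random

def bandit_learner(true_values, steps=10000, epsilon=0.1, seed=0):
    """Learn which arm of a bandit pays best from evaluative feedback
    alone: the environment never says which action was correct, it
    only returns a noisy reward for the action actually taken."""
    rng = random.Random(seed)
    n = len(true_values)
    estimates = [0.0] * n  # running value estimate per arm
    counts = [0] * n
    for _ in range(steps):
        # explore occasionally; otherwise exploit the current best estimate
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda i: estimates[i])
        reward = true_values[action] + rng.gauss(0, 1)  # evaluation, not instruction
        counts[action] += 1
        # incremental sample mean of the rewards observed for this arm
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = bandit_learner([0.2, 1.0, 0.5])
```

[After a few thousand trials the learner ranks the arms correctly even
though it was never told which arm to pull.]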

   Nevertheless, reinforcement learning has been studied very little.
This talk will present computational experiments comparing the
performance of many previously-studied algorithms and several new
ones.  In many cases the previously proposed algorithms were found to
perform very poorly, much worse than the new algorithms.  Since in many
cases the new algorithms are only slightly different from the old,
these results suggest that the space of possible reinforcement learning
algorithms is mostly unexplored.  Among the previously-studied
algorithms compared are those due to Minsky, Rosenblatt, Farley and
Clark, Widrow, Samuel, and Michie and Chambers.  The most sophisticated
of the new algorithms appears to be a refinement and generalization of
the algorithm used in Samuel's celebrated checker-player to modify and
improve its static evaluation function.

   This talk will emphasize (1) the difference between reinforcement
learning and other basic forms of learning which have already been
thoroughly studied, (2) the demonstration of improvement over
previously-studied methods, and (3) areas of possible application of
reinforcement learning methods.

------------------------------

End of AIList Digest
********************

From:	CSVPI          12-DEC-1984 00:34  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a002548; 11 Dec 84 13:41 EST
Date: Tue 11 Dec 1984 09:54-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #175
To: AIList@SRI-AI
Received: from rand-relay by vpi; Wed, 12 Dec 84 00:28 EST


AIList Digest            Tuesday, 11 Dec 1984     Volume 2 : Issue 175

Today's Topics:
  AI Tools - Tapes on LM & XLISP Availability,
  AI News & Expert Systems - Recent Articles & Machine Poker,
  AI Tools - Parallel Processing and OPS5,
  Humor - Lardware & History of Computing Qual,
  Seminars - Connection Language for Parallel Computers  (MIT) &
    Instructionless Learning  (CMU)
  Course - Sets and Processes  (SU)
----------------------------------------------------------------------

Date: 10 Dec 1984 at 1125-EST
From: jim at TYCHO.ARPA  (James B. Houser)
Subject: Tapes on LM

Hi
        We just got a new "industry standard" 9-track tape drive from
Symbolics for our 36??.  Has anyone worked out how to convert tape formats
so you can interchange with other processors?  We are especially interested
in LMI and UNIX.

                                Cheers

                                        Jim

------------------------------

Date: Mon, 10 Dec 84  3:52:54 EST
From: "Martin R. Lyons" <991@NJIT-EIES.MAILNET>
Subject: XLISP availability


     Does anyone have the C source of XLISP lying around?  The copy that was
forwarded to us was destroyed when we had a system crash.  I believe it was
version 1.2, but this is a best guess.

     If anyone has any information regarding other public domain LISPs written
in C I would appreciate pointers as to who to contact, etc.  to get a copy.

     As always, thanks in advance...

 MAILNET: Marty@NJIT-EIES.Mailnet
 ARPA:    Marty%NJIT-EIES.Mailnet@MIT-MULTICS.ARPA
 USPS:    Marty Lyons, CCCC/EIES @ New Jersey Institute of Technology,
          323 High St., Newark, NJ 07102    (201) 596-2932
 "You're in the fast lane....so go fast."

------------------------------

Date: Sat, 8 Dec 84 06:42:05 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: AI News


Datamation December 1 1984 Page 172
Ovum Ltd. announces its report "The Commercial Application of Expert
Systems Technology."  It costs $395 and is available from Ovum Ltd.,
14 Penn Rd. London N7 9RD, England (including air mail).


Byte, December 1984
Page 412 - Ad: Walt Lisp for CP/M for $169.00.  It is substantially
compatible with Franz Lisp and similar to MacLisp.
1-800-LIP-4000 from ProCode International 15930 SW Colony Pl.
Portland, Or 97224

Page 355: Review of micro-Prolog: Available from
Programming Logic Systems 31 Crescent Drive, Milford, CT 06460


Electronic News, December 3 1984
Page E
Symbolics has signed a contract valued at > $3,000,000 to supply 50 3600
Series Lisp Machines to Carnegie Group Inc.

Page 44
Announcement of Inforite Tablet which recognizes hand printed characters,
graphics and sketches.

------------------------------

Date: Fri 7 Dec 84 17:49:26-EST
From: SRIDHARAN@BBNG.ARPA
Subject: Excerpt from "games" mag

From the Jan 85 issue of GAMES, pp. 6-7
"How do you beat a poker player blessed with the supreme poker face?
That's one of the problems that will confront the winner of a $100,000
poker tournament to be held this month at the Bicycle Club in Bell Gardens,
California.

Whoever takes the event's high-draw competition must face a poker-playing
computer named ORAC in a head-to-head, no-limit game of draw poker.  ORAC
was developed by Mike Caro [Why is the program called ORAC?.. nss],
a top Las Vegas poker pro and computer whiz.  Not only is ORAC programmed
to beat people, it is also capable of explaining in English the strategy
used.

ORAC has not had an easy life thus far.  Its first trial by fire was last
April at the 1984 World Series of Poker in Las Vegas, where it played a
heads-up game against the then reigning world champion of poker, Tom
McEvoy. Though ORAC normally generates its own cards, a human dealer
was used at the World Series to allay any suspicion of cheating.  The
computer read its hand with a special optical scanner similar to the ones
used in supermarket checkout counters.

Man and machine played just about dead even for three-quarters of an hour
until ORAC moved all its chips in with an ace-queen of diamonds against
McEvoy's ace-nine off suit.  (The game was hold'em, a variation of
seven-card stud).  McEvoy held by far the worst hand, but he was lucky
enough to draw a pair of 9's and claim victory.  Commented the world champ:
"The fact that the computer went in with the best hand and got drawn
out on proves it's only human" [Hmm.!]

... The computer has proved itself a world-class competitor.  As for
the upcoming match at the Bicycle Club, Caro is full of confidence:
"ORAC will not only win," he says sanguinely, "but immediately afterwards,
it will write its own press release, explaining its actions during the match."

------------------------------

Date: 11 December 1984 0140-EST
From: Joseph Kownacki@CMU-CS-A
Subject: Parallel Processing and OPS5

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

OPS3/CM* is a facility on CM* which can execute OPS3/OPS5 programs in
parallel. The current version is not a complete system, but it is capable
of executing a representative subset of the TicTacToe program in parallel.

This post is a request for OPS5 test programs, especially those of moderate
size, which demonstrate (or counter-demonstrate) the usefulness of parallel
processing in this application.  A complex version of TTT or 8-puzzle
would be immediately handy.

Any assistance or suggestions in obtaining such examples would be greatly
appreciated.  Could you also forward this message to any other people or
groups that might be able to help?

You can obtain background information on OPS3/CM* from my Plan file -
just finger J. Kownacki.

------------------------------

Date: Fri, 7 Dec 84 13:12:29 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Special purpose hardware

There are still some open questions regarding the optimality of
Buell's sorting malgorithm (generate all N! possible permutations of the
N items to be sorted and then test each permutation to see if it is the
sorted result).  Nevertheless, the malgorithm does offer some interesting
properties when one considers the possibility of using an array of
parallel processors to implement the malgorithm.  One can show that
an array of N numbers can be sorted in constant time by an N by N!
array of processors and a data memory of the same size plus an
auxiliary memory that consists of one bit per processor.

We divide the set of processors into N! one-dimensional arrays of
N elements. Each of these arrays is responsible for generating and
testing one of the N! permutations of the items to be sorted.

In the first step, each of the N processors in each array loads one
of the items to be sorted and stores it at a predetermined location
to generate the permutations.  In the second step, each processor
compares two neighboring items in its permutation and sets a bit
in the auxiliary memory if the items are in the proper order.  Finally,
one of the processors in each array examines its set of N bits in
the auxiliary memory to determine which of the permutations is
in the proper order.
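
[The generate-and-test malgorithm, with the comparison and reduction steps
described above, can be sketched in Python.  This is a sequential emulation
only -- the N by N! processor array is folded into ordinary loops, so the
constant-time property naturally does not survive the simulation:]

```python
from itertools import permutations

def mal_sort(items):
    """Buell's malgorithm: generate all N! permutations of the input,
    then test each one for sortedness -- O(N * N!) sequential work."""
    for perm in permutations(items):
        # step 2: each "processor" compares one adjacent pair and
        # records a bit in the auxiliary memory
        bits = [perm[i] <= perm[i + 1] for i in range(len(perm) - 1)]
        # step 3: one processor per array checks whether all bits are set
        if all(bits):
            return list(perm)

print(mal_sort([3, 1, 2]))  # → [1, 2, 3]
```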

A nice feature of this architecture is that it readily extends
to support descendants of the malgorithm, such as that recently
suggested by Lee and Brownson.

Question: if bad algorithms are called malgorithms, what should we call
architectures designed to implement malgorithms?  Cross suggests lardware.

------------------------------

Date: 10 Dec 84 20:06:06 EST
From: Ed.Frank@CMU-CS-UNH
Subject: History of Computing Qual

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

It's clear from an earlier post [on the CMU bboard, asking about
memory cores,] that there is a need in the department for a qual on
the history of various aspects of computing: anachronisms, trivia, etc.
Such a qual will be given on Black Friday, at 3pm in the lounge.
Following standard practice, the syllabus will not be available until
next semester, the qual will not be pretested, and many of the
questions will be unclear.  This qual will cover all four areas.  Send
questions for inclusion in the qual to me and I'll forward them to the
History of Computing Qual committee. Anyone interested in being on
the qual committee should also send me mail.

Some sample questions (Please don't send me the answers to these
questions. Just send me more questions.):

Computing Systems:
What's a drum card?

Programming Systems:
Describe a technique for getting a computer into an infinite
loop without ever executing a branch instruction. Name a machine
with this feature.

Theory
Describe the fundamental difference between Eniac and the Manchester
Mark I.

AI
What do CAR and CDR mean? On what machine?

------------------------------

Date: 7 Dec 1984  16:35 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Connection Language for Parallel Computers  (MIT)


             [Forwarded from the MIT bboard by SASW@MIT-MC.]

               === === === AI REVOLVING SEMINAR === === ===

                               Alan Bawden

                        A Programming Language for
                       Massively Parallel Computers

                                    or

                      Help Stamp Out "Pointerthink"!


        Wednesday, December 12, 1984    4:00pm  8th floor playroom

The notion of a "pointer" is built deeply into many modern programming
languages.  Pointers are routinely used as the cement to build complex data
structures, even where other mechanisms would suffice, because on
conventional sequential computers they are cheap and their hazards are easy
to control.  Unfortunately the pointer is expensive and clumsy to support
on a massively parallel computer.  The notion of a "connection" will be
offered as a suitable substitute for the pointer.  Connections are a
minimal mechanism to allow communication; they are more constrained than
pointers and are less of a hazard in a parallel environment.  Most uses of
pointers are trivial enough that connections can be used instead.  This
makes it feasible to construct a programming language using connections,
instead of pointers, as the primitive cement for building data structures.

There are many consequences of making the switch from pointers to
connections.  Due to the symmetry of the connection mechanism, the concepts
of "object" and "type" become exact duals of the concepts of "message" and
"operation".  The notion of "state" emerges not as an aspect of objects,
but as an aspect of the interface between processes.  The problems of
method inheritance in a Flavor-like system are revealed to be even nastier
than previously suspected.  The "futures" mechanism, popular among parallel
programming languages, emerges as a natural consequence of the connection
mechanism.
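
[The futures mechanism mentioned at the end of the abstract survives in the
standard libraries of several modern languages.  A minimal illustration in
Python, used here purely as an assumed stand-in decades after the talk:]

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # stands in for work done by some remote processor
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns immediately with a future: a connection to a
    # value that may not exist yet, rather than a pointer to one that does
    futures = [pool.submit(square, n) for n in range(5)]
    # result() blocks only at the moment the value is actually demanded
    results = [f.result() for f in futures]

print(results)  # → [0, 1, 4, 9, 16]
```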

------------------------------

Date: 8 December 1984 2230-EST
From: Jeff Shrager@CMU-CS-A
Subject: Seminar - Instructionless Learning  (CMU)

             [Forwarded from the CMU bboard by Laws@SRI-AI.]

                        Instructionless Learning
                  A Proposal for Dissertation Research

                              Jeff Shrager

                        Department of Psychology
                       Carnegie-Mellon University

        On: Friday December 14
        At: 10:30am-Noon
        In: Baker Hall 336B

We investigate the mechanisms of instructionless learning by asking
undergraduates to "figure out" a programmable toy, without instructions or
advice.  From protocols, we obtain learners' hypotheses and the behaviors
they exhibit that lead to learning a schema for the device.  Behaviors
include performing hypothesis-testing experiments, exploring various
aspects of the device and the incomplete schema, and solving problems to
exercise the schema.  The present proposal is to construct and
validate a theory of instructionless learning of the BigTrak.  The theory
includes mechanisms of hypothesis formation, experimental test construction,
and overall learning control.  This work advances theories of concept
learning in complex, realistic domains; mental models of complex systems,
in particular their acquisition; and cognitive modelling and its validation.

[Copies of the proposal are available in the Psych Lounge.]

------------------------------

Date: 07 Dec 84  0845 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Course - Sets and Processes  (SU)

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]



                      SETS AND PROCESSES


             MATH 294 (PHIL 394) WINTER QUARTER.
                       COURSE ANNOUNCEMENT

             provisional time: Fridays, 1:15--3:15.

The standard universe of well-founded sets can be completed in a
natural way so as to incorporate every possible non-well-founded set.
The new completed  universe will still model all the axioms of set
theory except that the foundation axiom must be replaced by an
anti-foundation axiom.  The first part of the course will be concerned
with this new axiom, its model and its consequences. Several
interesting variants of the axiom will also be examined.
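For readers unfamiliar with it, the axiom presumably meant here is Aczel's
anti-foundation axiom (AFA); a sketch of its standard formulation (not taken
from the announcement itself) is:

```latex
% Aczel's anti-foundation axiom (AFA): every graph has a unique decoration.
% A decoration assigns to each node the set of the values assigned to its
% children under the edge relation.
\textbf{AFA.} For every graph $(N, \rightarrow)$ there is exactly one
function $d$ on $N$ such that
\[
  d(n) = \{\, d(m) : n \rightarrow m \,\} \qquad \text{for all } n \in N.
\]
% Example: the one-node graph with a self-loop is decorated by the unique
% set $\Omega$ satisfying $\Omega = \{\Omega\}$, a non-well-founded set.
```

Replacing the foundation axiom by AFA is what allows the completed universe
to contain solutions of such circular membership equations.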

The second part of the course will be concerned with an axiomatic
approach to a general notion of abstract sequential process.  These
processes are capable of interacting with each other so that a variety
of operations for their parallel composition will be available.  The
notion is intended to form the foundation for an approach to the
semantics of programming languages involving concurrency.  A model for
the axiom system can be extracted from recent work of Robin Milner.
But by using the anti-foundation axiom a simple purely set theoretic
model will be given.

Some familiarity with the axiomatic theory of sets and classes will be
presupposed.  An understanding of the notion of a class model of ZFC
will be needed.  Definition by recursion on a well-founded relation
and Mostowski's collapsing lemma will be relevant.  But topics such as
the constructible universe, forcing or large cardinals will NOT be
needed. Some familiarity with computation theory would be useful.

Underlying the model constructions in both parts of the course is a
general result whose appreciation will require some familiarity with
the elements of universal algebra and category theory.

Background references will be available at the start of the course.

Auditors are very welcome.  The course may be of interest to both
mathematicians and computer scientists.


                                           PETER ACZEL

------------------------------

End of AIList Digest
********************

From:	COMSAT         14-DEC-1984 01:48  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a011503; 13 Dec 84 16:28 EST
Date: Thu 13 Dec 1984 12:00-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #176
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 14 Dec 84 01:29 EST


AIList Digest           Thursday, 13 Dec 1984     Volume 2 : Issue 176

Today's Topics:
  AI Companies - Survey,
  Machine Translation - Folklore & Aymara',
  Linguistics - Language Deficiencies,
  Humor - Nondeficient Christmas Tidings,
  Conferences - Machine Translation & JASIS Call for Papers
----------------------------------------------------------------------

Date: Thursday, 13 December 1984 00:24:23 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Information about AI companies

I am trying to put together a survey of the various tools available on the
market for AI work. In particular, I am interested in assessments of the
tools (user experiences). I would also appreciate any information about the
kinds of systems that AI companies are building.

sriram@cmu-ri-cive.arpa

------------------------------

Date: 10 Dec 84 11:37 EST
From: Gocek.henr@XEROX.ARPA
Subject: Re: Automatic Chinese translation

I remember the story about "Out of sight, out of mind" differently.  The
phrase was translated into Chinese and then retranslated into English.
The result was "invisible idiot".  Again, the person requesting the
translation was a government official.

Gary Gocek (Gocek.Henr@Xerox.ARPA)


[I first heard it as "blind idiot".  -- KIL]

------------------------------

Date: Sun, 9 Dec 1984  16:14 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Translation Folklore   V2 #174

I'm getting sick of hearing those two stories: "Blind and Stupid,"
and "The drinks were good but the meat was rotten."


It is time for you computer-people to start being serious!  Those
stories are only folklore, and did not come from the
machine-translation milieu at all; they circulated long before
computers, and were invented by cynics to make fun of bad human
translators!

If you think about it for a minute, you will realize that none of the
old translating machines were even nearly subtle enough to make such
coherent mistakes!  Modern ones are only a little better, and probably
not quite up to that standard yet.

Has anyone heard of a genuine translation blunder by a working
translation machine -- that is, one which is bad enough to be
considered really funny?  I consider a few of the paraphrases
produced by FRUMP to be in that class.

------------------------------

Date: Tue 11 Dec 84 09:46:43-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Aymara'

I just ran across an article in the S.F. Sunday Examiner and Chronicle
by Peter McFarren, Associated Press, Sept. 23, 1984, p. A17.  Most of
the content has been published in AIList already, but the following
may be new.

"Atamiri [Guzman de Rojas' program] is 10 times faster than any of the
others," said Bill Page, a computer specialist at the International
Research Center in Ottawa, Canada.  The center published Guzman de
Rojas' first study of Atamiri's potential in 1980, and Wang has just
offered him a $50,000 grant and a $100,000 computer to refine his
system.

The creator of Atamiri hopes to expand its vocabulary from the current
3,000 to 8,000 words per language [English, French, German, Portuguese,
and Spanish] to about 30,000.  Then, he says, it will be possible to
translate prosaic texts such as newspaper articles with about 90 percent
accuracy.  Literary translations would come later, but human translators
will always have to be around to make corrections.

                                        -- Ken Laws

------------------------------

Date: Tue, 11 Dec 84 16:21:58 pst
From: ucdavis!lakhota@Berkeley (Lakhota)
Subject: Language deficiencies (AI List Digest 2:167,168,172)


   The interesting discussion of possible language deficiencies was triggered
by two anecdotes, one involving Australian Aborigines and the other American
Indians.  It would be useful in this context to look at some empirical facts
relating to these languages.  Australian languages are perfectly capable of
forming conditional and hypothetical expressions.  Examples of languages with
references follow below:

Dyirbal - Dixon, THE DYIRBAL LANGUAGE OF NORTH QUEENSLAND. CUP, 1972.
Tiwi - Osborne, THE TIWI LANGUAGE. Australian Institute of Aboriginal Studies
       [AIAS], Australian Aboriginal Studies no. 55, Linguistic Series no. 21,
       1974.
Walmatjari - Hudson, THE CORE OF WALMATJARI GRAMMAR. AIAS, 1978.
Guugu Yimidhirr - Haviland, Guugu Yimidhirr. In Dixon & Blake (eds.),
       HANDBOOK OF AUSTRALIAN LANGUAGES [HAL], v. 1. John Benjamins, 1979.
Djapu - Morphy, Djapu, a Yolngu dialect. HAL, v. 3. John Benjamins, 1983.
Yukulta - Keen, Yukulta. HAL, v. 3.
Nunggubuyu - Heath, A FUNCTIONAL GRAMMAR OF NUNGGUBUYU. Humanities Press, 1984.

   The same holds true for American Indian languages.  It is worth mentioning
that there is now more on Hopi than Whorf's papers.  E. Malotki has written two
700 page books on Hopi concepts of space and time: HOPI TIME, Mouton, 1983, and
HOPI RAUM (not yet translated into English).  These volumes should lay to rest
speculation about what Hopi does and doesn't have.  Examples of American Indian
languages and references follow:

Nootka - Sapir & Swadesh, NOOTKA TEXTS. LSA, 1939.
Yokuts - Newman, YOKUTS LANGUAGE OF CALIFORNIA. VFPA 2, 1944.
Cree - Wolfart, PLAINS CREE: A GRAMMATICAL SKETCH.  Trans. APS, 1973.
Takelma - Sapir, The Takelma Language of Southwestern Oregon. HANDBOOK OF
      AMERICAN INDIAN LANGUAGES [HAIL], v. 2 (BBAE 40, 2), 1922.
Tunica - Haas, TUNICA. HAIL, v. 4. J.J. Augustin, 1940.
Uto-Aztecan - Langacker, AN OVERVIEW OF UTO-AZTECAN LANGUAGES. STUDIES IN
      UTO-AZTECAN GRAMMAR, v. 1. SIL Publ. in Ling. 56, 1977.

   There are hundreds of Aboriginal and American Indian languages, and these
are only a handful of examples.  Nevertheless, they illustrate the point that
these languages do have the capacity for forming conditional, counterfactual,
and hypothetical expressions.  If anyone desires any further references, I'd
be happy to supply them.

   Robert Van Valin (ucdavis!lakhota@BERKELEY)
   Linguistics, UC Davis

------------------------------

Date: Wed, 12 Dec 84 10:40:02 pst
From: Peter Karp <karp@diablo>
Subject: Christmas Tidings

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


                        A VISIT FROM ST. NICHOLAS
                        -------------------------

Twas the nocturnal segment of the diurnal period preceding the annual
yuletide celebration, and throughout our place of residence, kinetic
activity was not in evidence among the possessors of this potential,
including that species of domestic rodent known as Mus musculus.
Hosiery was meticulously suspended from the forward edge of the wood
burning caloric apparatus, pursuant to our anticipatory pleasure
regarding an imminent visitation from an eccentric philanthropist
among whose folkloric appellations is the honorific title of St.
Nicholas.

The prepubescent siblings, comfortably ensconced in their respective
accommodations of repose, were experiencing subconscious visual
hallucinations of variegated fruit confections moving rhythmically
through their cerebrums.  My conjugal partner and I, attired in our
nocturnal head coverings, were about to take slumbrous advantage of
the hibernal darkness when upon the avenaceous exterior portion of
the grounds there ascended such a cacophony of dissonance that I felt
compelled to arise with alacrity from my place of repose for the
purpose of ascertaining the precise source thereof.

Hastening to the casement, I forthwith opened the barriers sealing
this fenestration, noting thereupon that the lunar brilliance without,
reflected as it was on the surface of a recent crystalline
precipitation, might be said to rival that of the solar meridian
itself-- thus permitting my incredulous optical sensory organs to
behold a miniature airborne runnered conveyance drawn by eight
diminutive specimens of the genus Rangifer, piloted by a minuscule,
aged chauffeur so ebullient and nimble that it became instantly
apparent to me that he was indeed our anticipated caller.  With his
ungulate motive power travelling at what may possibly have been more
vertiginous velocity than patriotic alar predators, he vociferated
loudly, expelled breath musically through contracted labia, and
addressed each of the octet by his or her respective cognomen - "Now
Dasher, now Dancer..." et al. - guiding them to the uppermost exterior
level of our abode, through which structure I could readily
distinguish the concatenations of each of the 32 cloven pedal
extremities.

As I retracted my cranium from its erstwhile location, and was
performing a 180-degree pivot, our distinguished visitant achieved -
with utmost celerity and via a downward leap - entry by way of the
smoke passage.  He was clad entirely in animal pelts soiled by the
ebon residue from oxidations of carboniferous fuels which had
accumulated on the walls thereof.   His resemblance to a street vendor
I attributed largely to the plethora of assorted playthings which he
bore dorsally in a commodious cloth receptacle.

His orbs were scintillant with reflected luminosity, while his
submaxillary dermal indentations gave every evidence of engaging
amiability.  The capillaries of his malar regions and nasal
appurtenance were engorged with blood which suffused the subcutaneous
layers, the former approximating the coloration of Albion's floral
emblem, the latter that of the Prunus avium, or sweet cherry.  His
amusing sub- and supralabials resembled nothing so much as a common
loop knot, and their ambient hirsute facial adornment appeared like
small, tabular and columnar crystals of frozen water.

Clenched firmly between his incisors was a smoking piece whose gray
fumes, forming a tenuous ellipse about his occiput, were suggestive of
a decorative seasonal circlet of holly.  His visage was wider than it
was high, and when he waxed audibly mirthful, his corpulent abdominal
region undulated in the manner of impectinated fruit syrup in a
hemispherical container.  He was, in short, neither more nor less than
an obese, jocund, multigenarian gnome, the optical perception of whom
rendered me risibly frolicsome despite every effort to refrain from so
being.  By rapidly lowering and then elevating one eyelid and rotating
his head slightly to one side, he indicated that trepidation on my part
was groundless.

Without utterance and with dispatch, he commenced filling the
aforementioned appended hosiery with various of the aforementioned
articles of merchandise extracted from his aforementioned previously
dorsally transported cloth receptacle.  Upon completion of this task,
he executed an abrupt about-face, placed a single manual digit in
lateral juxtaposition to his olfactory organ, inclined his cranium
forward in a gesture of leave-taking, and forthwith effected his egress
by renegotiating (in reverse) the smoke passage.  He then propelled
himself in a short vector onto his conveyance, directed a musical
expulsion of air through his contracted oral sphincter to the antlered
quadrupeds of burden, and proceeded to soar aloft in a movement
hitherto observable chiefly among the seed-bearing portions of a
common weed.  But I overheard his parting exclamation, audible
immediately prior to his vehiculation  beyond the limits of
visibility:  "Ecstatic yuletide to the planetary constituency, and to
that selfsame assemblage, my sincerest wishes for a salubriously
beneficial and gratifyingly pleasurable period between sunset and
dawn."

-- From Eleonore Johnson at Teknowledge

------------------------------

Date: Tue, 11 Dec 84 00:06 EST
From: Sergei Nirenburg <nirenburg%umass-cs.csnet@csnet-relay.arpa>
Subject: Conference - Machine Translation


               CALL  FOR  PAPERS

CONFERENCE ON THEORETICAL AND METHODOLOGICAL ISSUES

   IN MACHINE TRANSLATION OF NATURAL LANGUAGES

              Colgate  University
              Hamilton  NY  13346
              August 14-16, 1985

The program of the conference will be biased toward invited lectures and
panel discussions.  However, a restricted number of excellent submitted
papers will also be included.

The major topics of the conference are as follows :

-- Machine Translation (MT) as an application area for Theoretical
   Linguistics (including stylistics and discourse analysis)

-- MT as an application area for Artificial Intelligence (including the
   choice of the representation schemata for MT)

-- Theory and methodology of translation and machine translation

-- Sublanguages, restricted domains and MT

-- MT as a case study in software system development

-- Computational tools for MT, human engineering aspects,
   management and evaluation of MT projects.


The papers should not exceed 3,000 words, should contain a 250-word abstract
and a list of index terms.  Send them (and address all inquiries) to

Sergei Nirenburg
MT Conference Program Chair
Department of Computer Science
Colgate University
Hamilton  NY  13346
(315) 824-1000 x586

Every paper will be read by two members of the program committee whose
members are:

Christian Boitet, University of Grenoble
Jaime Carbonell, Carnegie-Mellon University
David MacDonald, University of Massachusetts
James Pustejovsky, University of Massachusetts
Allen Tucker, Colgate University
Don Walker, AT&T Laboratories

The emphasis of the conference is on theoretical and methodological
issues.  Therefore, papers that do not address such issues will not
be considered.

Dates: Submission deadline        -- March 11, 1985
       Notification of acceptance -- May 15, 1985
       Final version due          -- June 17, 1985


>>>>> the above will provide a good opportunity to conduct more lively
>>>>> discussions of Aymara, Sastric Sanskrit, Esperanto, etc., the
>>>>> problem of translatability, theory of translation (not
>>>>> necessarily automatic), interlinguae and their structure...

------------------------------

Date: Tue, 11 Dec 84 11:20:41 cst
From: Don Kraft <kraft%lsu.csnet@csnet-relay.arpa>
Subject: JASIS Call for Papers

As the new editor of the JOURNAL OF THE AMERICAN SOCIETY FOR
INFORMATION  SCIENCE  (JASIS),  I  am sending out a call for
papers.  We are  a  refereed  professional  journal  seeking
scholarly, relevant articles in the area of information sci-
ence.  To submit an article, please send three copies of the
manuscript to me at

     Donald H. Kraft
     Department of Computer Science
     Louisiana State University
     Baton Rouge, LA  70803.

If you have any questions, I can also be reached at
(504) 388-1495 or
kraft%lsu@csnet-relay.

I have attached below a list of topics considered  relevant.
Please  note  the presence of artificial intelligence, which
has become of interest, especially in the area  of  informa-
tion  retrieval (intelligent front ends, expert systems, and
the use  of  natural  language  processing  seem  especially
relevant  to  my  readers  at  the moment).  You may wish to
check out the September, 1984 (v. 35,  n.  5)  issue,  which
featured a series of articles on AI.


                  CALL FOR PAPERS -- JASIS

1. Theory of Information Science          4. Applied Information Science

   Foundations of Information Science        Information systems design --
   Information theory                            tools, principles, applications
   Bibliometrics                             Case histories
   Information retrieval --                  Information system operations
      models and principles                  Standards
   Evaluation and measurement                Information technology -- hardware
   Representation, organization, and             and software
       classification of information         Automation of information systems
   ARTIFICIAL INTELLIGENCE and natural       Online retrieval systems
       language processing                   Office automation and records
                                                 management

2. Communication                          5. Social and Legal Aspects of
                                                 Information
   Theory of communication
   Non-print media                           Impact of information systems and
   Man-machine interaction                       technology upon society
   Network design, operation, and            Ethics and information
       management                            Legislative and regulatory aspects
   Models and empirical findings about       History of information science
       information transfer                  Information science education
   User and usage studies                    International issues

3. Management, Economics, and Marketing

   Economics of information
   Management of information systems
   Models of information management decisions
   Marketing and market research studies
   Special clientele -- arts and humanities,
        behavioral and social sciences, biological
        and chemical sciences, energy and environment,
        legal, medical, and education.


Authors may also  send in  brief  communications,  scholarly
opinion pieces, and even letters to the editor. In addition,
we also have a fine book review section.

Thank you in advance for your consideration of JASIS.

Don Kraft
kraft%lsu@csnet-relay

------------------------------

End of AIList Digest
********************

From:	COMSAT         14-DEC-1984 01:49  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a012590; 13 Dec 84 21:28 EST
Date: Thu 13 Dec 1984 17:29-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #177
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 14 Dec 84 01:39 EST


AIList Digest            Friday, 14 Dec 1984      Volume 2 : Issue 177

Today's Topics:
  AI in Engineering - SIGART Special Issue,
  Expert Systems - Micro Survey & Poker & Personal Assistants,
  Planning - Constraint Propagation and Planning,
  Report - Reflection and Semantics in LISP
  Humor - Scientific Method,
  Seminars - Three-valued Hintikkian Epistemic Logic  (CSLI) &
    The Sequential Nature of Unification  (IBM-SJ)
----------------------------------------------------------------------

Date: Thursday, 13 December 1984 00:28:33 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: SIGART special issue on AI in Engineering

The deadline for submissions of abstracts for the special issue is extended
to  January 15th  for all Arpanet mailers. For more information on this
issue see SIGART newsletter dated July 1984.  All submissions should be sent
to rj@cmu-cs-h.arpa.

Sriram

------------------------------

Date: Tue, 11 Dec 84 16:05:23 mst
From: "Arthur I. Karshmer" <arthur%nmsu.csnet@csnet-relay.arpa>
Subject: Expert systems


I am interested in obtaining information about expert systems that
run on microcomputers, and about software for developing expert
systems on microprocessors. We are currently using a variety of
micros, including IBM PCs and IBM ATs.

Arthur I. Karshmer
arthur.nmsu@csnet-relay

------------------------------

Date: 12 Dec 84 10:35:28 EST
From: Jeffrey Shulman <SHULMAN@RUTGERS.ARPA>
Subject: ORAC's Poker Game

        This past weekend (Sunday 12/9) "Ripley's Believe It or Not" had a
segment on ORAC's poker game.  You should try to catch it in rerun.

                                                        Jeff

------------------------------

Date: Wed, 12 Dec 84 17:28:04 EST
From: David_West%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: McGuire's Speculations on Personal Assistants (v2 #174)

   -Of course, the biodisks of a future Walt Whitman would be
exhaustively analyzed not by a future Louis Untermeyer, but
by the latter's automated personal assistant, and the resulting
voluminously definitive biography would be read and enjoyed by the public's
personal assistants.  Thus we would all be freed from untold
drudgery, to fulfil the vision of Villiers de L'Isle-Adam (1890):
  "Living? Our servants will do that for us."         :-)

------------------------------

Date: 12 Dec 84 13:30:24 EST
From: Louis Steinberg <STEINBERG@RUTGERS.ARPA>
Subject: Constraint Propagation and Planning

A recent message from chandra@uiucuxc@uiucdcs@RAND-RELAY.ARPA asked
about people working on Constraint Propagation and Planning ala
Stefik's MOLGEN.

The AI/VLSI Project at Rutgers is using this approach in building a
system to do design.  Our thesis is that:
        Design = Top Down Refinement + Constraint Propagation
Our current system aids in the design of digital VLSI circuits, but we
believe the ideas apply to the design of other kinds of things as
well.  Design and the sort of planning chandra was talking about are
essentially the same problem, although there are some peculiar things
about blocks-world style domains that make planning/design issues a
bit different than they are in design of circuits or, to some extent,
programs.
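The flavor of constraint propagation can be sketched with a toy example
(illustrative only, not the Rutgers system or CRITTER; the `Cell` and
`Product` names are invented here).  Fixing some circuit parameters lets the
network deduce the others:

```python
# Minimal constraint-propagation sketch (hypothetical, not the Rutgers code).
# Cells hold circuit parameters; constraints fire whenever a cell is filled
# in, deducing any cell that becomes determined by the others.

class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.constraints = []

    def set(self, value):
        if self.value is None:
            self.value = value
            for c in self.constraints:   # wake every attached constraint
                c.propagate()
        elif self.value != value:
            raise ValueError(f"contradiction at {self.name}")

class Product:
    """Constraint a * b = c: deduces any one cell from the other two."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.constraints.append(self)

    def propagate(self):
        a, b, c = self.a, self.b, self.c
        if a.value is not None and b.value is not None:
            c.set(a.value * b.value)
        elif a.value is not None and c.value is not None:
            b.set(c.value / a.value)
        elif b.value is not None and c.value is not None:
            a.set(c.value / b.value)

# Ohm's law as a constraint: fixing any two of V, I, R determines the third.
v, i, r = Cell("V"), Cell("I"), Cell("R")
Product(i, r, v)          # V = I * R
v.set(12.0)
r.set(4.0)
print(i.value)            # deduced current: 3.0
```

In a design system, each module's behavior contributes such constraints, and
top-down refinement adds cells and constraints as the design is elaborated.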

The only paper I can point you to on our design stuff is:

        Mitchell, Steinberg, and Shulman, "A Knowledge Based Approach to
        Redesign", Proceedings of IEEE workshop on Principles of Knowledge
        Based Systems, Denver, December 3-4, 1984

Also, many of our ideas flow from previous work on REdesign and on
constraint propagation in circuits - see, for instance:

        Steinberg, L. and Mitchell, T., "A Knowledge Based Approach to
                VLSI CAD", Proceedings of 21st Design Automation
                Conference, June, 1984.

        Kelly, V.  "The CRITTER System: Automated Critiquing of Digital
                Circuit Designs", Proceedings of 21st Design Automation
                Conference, June, 1984.

        Mitchell, T., Steinberg, L., Kedar-Cabelli, S., Kelly, V., Shulman,
                J., Weinrich, T., "An Intelligent Aid for Circuit Redesign",
                Proceedings of the National Conference on Artificial
                Intelligence, 1983, pp. 274-278.

        Kelly, V., and Steinberg, L., "The CRITTER System:  Analyzing Digital
                Circuits by Propagating Behaviors and Constraints",
                Proceedings of the National Conference on Artificial
                Intelligence, 1982, pp. 284-289.  Also Report LCSR-TR-30,
                Dept. of Computer Science, Rutgers University.

        Mitchell, T., Steinberg, L.,  Smith, R. G., Schooley, P.,  Kelly, V.,
                and  Jacobs,  H.,  "Representations  for  Reasoning  about
                Digital Circuits," Proceedings of the Seventh International
                Joint Conference on Artificial Intelligence, 1981, pp. 343-344.

------------------------------

Date: Wed 12 Dec 84 17:54:25-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Report - Reflection and Semantics in LISP

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                   NEW EDITION OF CSLI REPORT NO. 8

The final edition of Report No. CSLI--84--8, entitled ``Reflection and
Semantics in LISP'' by Brian Smith, has now been published. Copies
of this report may be obtained by writing to Dikran Karagueuzian at CSLI.

------------------------------

Date: Thu, 6-Dec-84 00:42:44 PST
From: reid@Glacier.ARPA
Subject: Scientific Method

        [Forwarded to the Xerox bikers' bboard by Trow.PA@XEROX.]
        [Forwarded to the Xerox bboard by Ayers.PA@XEROX.]
        [Forwarded from the Xerox bboard by PolleZ.PA@XEROX.]
        [Forwarded to the Stanford bboard by Jock@SU-SCORE.]
        [Forwarded from the Stanford bboard by Laws@SRI-AI.]


Subject: net.bicycle.freewheel.cleaning: a reprise

As avid readers of this group may remember, we had a big row about cleaning
freewheels this summer, which was sort of ended when Fred at Varian, who is an
analytical chemist, and I, Brian at Stanford, who is a professor of CS, got
into a disagreement about something having to do with chemistry and Brian at
Stanford had the rare sense to keep his mouth shut.

However, despite being merely a computer scientist, and being quite willing to
work out of doors where the fumes won't kill him as fast, Brian remained
slightly unconvinced that the chemicals suggested by Fred at Varian were in
fact better at cleaning freewheels than the junk currently used by Brian at
Stanford. Brian had this vague suspicion that Fred the Chemist from Varian had
been exposed to lectures telling him to stay away from the kind of toxic
chemicals that Brian liked to use to clean freewheels, in much the same way
that Brian the CS professor lectures his students to stay away from Fortran
and IBM PC's.

So Brian went out in the rain and did some experiments. Actually, he had
another attack of good sense and stayed on his back porch, where the rain did
not fall directly on his head or on his freewheels or into his chemicals.

Now here a problem developed. Computer Scientists do not customarily do
experiments. Computer Scientists normally just say things because it makes
them feel good, and if they say them loudly and brashly enough then the things
become true. The current U.S. 5th generation computer project is a good
example of this.

But Brian at Stanford was once a physics major at the University of Maryland,
and he remembered how to run experiments after some consultation with his old
Physics 171 lab notebooks. The gist of it seemed to be that you were supposed
to do something twice, and the second would be identical to the first in every
way except for one controlled variable, and then if there were any differences
you could chalk them up to that variable. I think you're supposed to do a Chi
Square test in there too, or maybe draw some graphs, but this was just an
amateur experiment.

As the light dawned, Brian realized that he could do this experiment using
some hardware that was near and dear to his hacker's heart.  Brian's wife had
given him a birthday present consisting of a real mother of a power saw, a
Milwaukee worm drive power saw, with a finetooth carbide blade. That saw is
just the cat's meow--you put the carbide blade on it, put on the requisite eye
and lung protectors, and wow, you can rip up anything you can reach.  Joe-Bob
Briggs would be thrilled. The same feeling that you get when you first run
some code on a Cray, that feeling of almost limitless power, can be had much
more cheaply with a Milwaukee worm drive saw with a good carbide blade.

In particular, a Milwaukee worm drive saw with a carbide blade will saw a
freewheel clean in half. Lots of wild sparks shooting everywhere, but since
it's raining they probably won't set very much on fire. Ball bearings getting
caught in the carbide teeth and being whipped around at 200 mph and shot
across the yard, scaring the squirrels. Oh, this was great fun.

After counting his fingers and finding them all still intact, Brian took these
two demi-freewheels and stuck them in two old margarine tubs, which are one of
the principal tools of the serious amateur freewheel cleaner.  Brian got out a
beaker (after all, this was an experiment, right?  Experiments use beakers)
and measured out a beakerful of Berryman's Carburetor Cleaner [brian's
favorite toxic chemical for cleaning freewheels].  This beakerful didn't cover
the freewheel much, because it was a 60ml beaker, so then Brian poured a bunch
of glugs of Berryman's on top of the freewheel, until it was immersed. Brian
figured he would face the issue of how to clean the beaker and return it to
his kitchen at a later time.  The label on the Berryman's can says it contains
Methylene Chloride, Cresylic acid, and Perchloroethylene.

Into the other margarine tub Brian put the other half of the freewheel, and
then poured out a bunch of glugs of "Gunk" brand degreasing liquid. The label
on the Gunk can says it contains Petroleum Distillates.

Brian is sufficiently afraid of Berryman's Carburetor Cleaner that he didn't
want to go messing with it by stirring it or sticking a brush into it, but it
was quite clear to Brian from the moment this experiment started that the Gunk
was going to need some help, so he took an acid brush and used it to scrub
parts of the surface of the freewheel that was soaking in Gunk.

Brian then went to eat a chicken chimichanga (hold the sour cream) and came
back about 20 minutes later to inspect the results of the experiment.

The result was that there was no grease on either freewheel half, but there
was still a pile of rust and black goop and garbage on the Gunk half, though
not as much in the places where it had been brushed. The Berryman's Carburetor
Cleaner half was as clean as a new whistle, gleaming metal. A dead insect of
some sort was floating in the Berryman's, busily dissolving.

Brian longed for the skills of a real physical scientist--to weigh these
bisected freewheels on a microbalance, or look at them under high-powered
microscopes, or grind them up and feed them to a mass spectrometer, but none
of these machines were in evidence in the back yard, so instead he just washed
them off with soap and water and looked at them under a bright light.

What he saw is that the Berryman's Carburetor Cleaner gets freewheel halves
(and therefore, presumably, freewheels) really really clean, by dissolving or
decaying or disintegrating the grease and the rust and the insects.  And that
the Gunk gets the grease off of freewheels, and if you scrub it will get the
dirt off, but it leaves the rust behind.

The moral of this story seems to be that if you are a responsible freewheel
owner and you clean it as often as it wants to be cleaned and you avoid
letting it get built up with dirt and you keep it out of the rain, all of
which are good things to do to a freewheel, that Gunk degreaser (or other
similar chemicals) works just fine. But if you let your freewheel go too far,
to get to the point where if it were teeth you know your dentist would give
you a long lecture about flossing, that you should clean it with some sort of
toxic waste such as Berryman's Carburetor Cleaner (which has been found "more
effective" in scientific experiments at a major university.....)

        Brian Reid      Reid@SU-Glacier.ARPA    decwrl!glacier!reid

------------------------------

Date: Wed 12 Dec 84 17:54:25-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Three-valued Hintikkian Epistemic Logic  (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                 SUMMARY OF LAST WEEK'S NL1 SEMINAR
             ``Three-valued Hintikkian Epistemic Logic''
                          By Lauri Carlson

Hintikka's system of epistemic logic in K&B and Models for Modalities
contains a number of peculiar features (restricted range feature,
treatment of irreducible existential formulae) which skew the natural
interpretation of certain formulae and make it hard to ascertain
completeness of the system(s).  For instance the formula (x)(Ey)Kx=y
is valid (and does not mean I "know who everyone is"), while
(Ex)(Ey)(x=y & -Kx=y) is inconsistent (and does not mean "There is
someone who might be two different people as far as I know").  Lauri
Carlson presented a version of epistemic logic which overcomes these
difficulties and can be shown complete with respect to its intended
Kripkean style semantics.

------------------------------

Date: Thu, 13 Dec 84 09:35:13 PST
From: IBM San Jose Research Laboratory Calendar
      <calendar%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - The Sequential Nature of Unification  (IBM-SJ)

                 [Forwarded from the SRI-AI bboard.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193


  Mon., Dec. 17 Computer Science Seminar
  2:00 P.M.   ON THE SEQUENTIAL NATURE OF UNIFICATION
  Audit. A     Unification of terms is a crucial step in resolution
            theorem proving with applications to a variety of
            symbolic computation problems.  It will be shown that
            the general problem is log-space complete for P, even
            if infinite substitutions are allowed.  Thus, it is
            "popularly unlikely" that unification can enjoy
            substantial speed-up in a parallel model of
            computation.  A fast parallel (NC) algorithm for term
            matching, an important subcase of unification, will
            also be presented.  This talk assumes no familiarity
            with unification or its applications.

            Dr. C. Dwork, Massachusetts Institute of Technology,
                Laboratory for Computer Science
            Host:  J. Halpern

  [...]
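[The unification operation the abstract describes can be sketched briefly.
The following is a naive textbook version, not anything from the talk; the
term representation (tuples headed by a function symbol) and the variable
convention (strings beginning with `?') are illustrative assumptions.  Note
that it omits the occurs check, which relates to the abstract's point about
allowing infinite substitutions. -- Ed.]

```python
# Minimal first-order unification sketch.  Terms are tuples ('f', arg1, ...);
# variables are strings starting with '?'.  Both conventions are assumptions
# made for this illustration.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings until an unbound term is reached.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}   # no occurs check: rational trees allowed
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and \
            len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# f(?x, g(a)) unifies with f(b, g(?y)) under {?x: b, ?y: a}
print(unify(('f', '?x', ('g', 'a')), ('f', 'b', ('g', '?y'))))
```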

------------------------------

End of AIList Digest
********************

From:	CSVPI          16-DEC-1984 20:56  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a000455; 16 Dec 84 17:07 EST
Date: Sun 16 Dec 1984 13:19-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #178
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 16 Dec 84 20:46 EST


AIList Digest            Sunday, 16 Dec 1984      Volume 2 : Issue 178

Today's Topics:
  Linguistics - Nonverbal Semantics
----------------------------------------------------------------------

Date: Fri, 14 Dec 84 13:13:04 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Nonverbal Semantics  [Long Message]


    >It's . . . convenient to talk about natural language as if
    >it's something "on its own".  However, I view this attitude
    >as scientifically unhealthy, since it leads to an
    >overemphasis on linguistic structure.  Surely the
    >interesting questions about NL concern those cognitive
    >processes involved in getting from NL to thoughts in memory
    >and back out again to language.  These processes involve
    >forming models of what the speaker/listener knows, and
    >applying world knowledge and context.  NL structure plays
    >only a small part in these overall processes, since the main
    >ones involve knowledge application, memory interactions,
    >memory search, inference, etc.

                Dyer V2 #160

    >Bravo, Dyer!  As you suggest, there is indeed much to learn
    >from the study of natural language -- but not about "natural
    >language itself"; we can learn what kinds of manipulations
    >and processes occur in the under-mind with enough frequency
    >and significance that it turns out to be useful to signify
    >them with surface language features.

    >. . . All that is very fine.  We should indeed study
    >languages.  But to "define" them is wrong.  You define the
    >things YOU invent; you study the things that already exist.
    >. . . But when one confuses the two situations, as in the
    >subjects of generative linguistics or linguistic competence
    >-- ah, a mind is a terrible thing to waste, as today's
    >natural language puts it.

                Minsky V2 #162


I suspect that the antipathy to natural-language parsers, grammars, and
theories that we often encounter in AI literature reflects a healthy
revulsion from the excesses of generative linguistics.  In all of its
many schismatic forms, generative grammar posits, as the secret inner
mechanism of language, one of various language-like systems that share
historical roots with programming languages, and uses natural-language
data only in a fragmentary and anecdotal way to advance or refute the
latest version.  These systems can be quite hairy, but I am convinced
that the hair is mostly inside the heads of the theorists.

Any natural phenomenon, or any artifact of human culture, is a
legitimate object of study.  Natural language is both an artifact of
human culture, and a natural phenomenon.  There are some who are
studying language, as opposed to the grammatical machinery of
language-like systems.

I recently reviewed a book by the linguist from whom Noam Chomsky
learned about linguistic transformations (among other things).  It will
appear in AJCL vol. 10 nos. 3 and 4 (a double issue).  The following
excerpt gives an outline of the model of language he has developed:

          I refer to the Harrisian model of language as `constructive
          grammar' and to the Harrisian paradigm for linguistics as
          `constructive linguistics'.  A constructive grammar has
          at least the following six characteristics:

          1    The semantic primes are words in the language, a base
               vocabulary that is a proper subset of the vocabulary of
               the language as a whole.

          2    Generation of sentences in the base is by word entry,
               beginning with entry of (mostly concrete) base nouns.
               The only condition for a word to enter is that its
               argument requirement must be met by some previously
               entering word or words, generally the last entry or
               entries, which must not already be in the argument of
               some other word.  The base vocabulary has thus a few
               simple classes of words:

               N         base nouns with null argument
               On, Onn   operators requiring base nouns as arguments
               Oo, Ooo   requiring operators as arguments
               Ono, Oon  requiring combinations of operators and base
                         nouns
               [NOTE:  these are intended to be O with subscripts]

               This does not exhaust the base vocabulary.  In addition
               to these, almost all of the operators require
               morphophonemic insertion of `argument indicators' such
               as -ing and that.  (These were termed the `trace' of
               `incremental transformations' in Harris 1965 and 1968.)

          3    The base generates a sublanguage which is
               informationally complete while containing no
               paraphrases.  This is at the expense of redundancy and
               other stylistic awkwardness, so that utterances of any
               complexity in the base sublanguage are unlikely to be
               encountered in ordinary discourse.  As in prior reports
               of H's work, base sentences are all assertions, other
               forms such as questions and imperatives being derived
               from underlying performatives I ask, I request, and the
               like.

          4    A well-defined system of reductions yields the other
               sentences of the language as paraphrases of base
               sentences.  The reductions were called the
               `paraphrastic transformations', and `extended
               morphophonemics' in earlier reports.  They consist of
               permutation of words (movement), zeroing, and
               morphophonemic changes of phonological shape.  Each
               reduction leaves a `trace' so that the underlying
               redundancies of the base sublanguage are
               recoverable. Linearization of the operator-argument
               dependencies--in English either `normal' SVO or a
               `topicalizing' linear order--is accomplished by the
               reduction system, not the base.  The reduction system
               includes much of what is in the lexicon in generative
               grammar (cf. Gross 1979).


          5    Metalinguistic information required for many
               reductions, such as coreferentiality and lexical
               identity, is expressed within the language by conjoined
               metalanguage sentences, rather than by a separate
               grammatical mechanism such as subscripts.
               Similarly, `shared knowledge' contextual and pragmatic
               information is expressed by conjoined sentences
               (including ordinary dictionary definitions) that are
               zeroable because of their redundancy.  [Harris's book
               Mathematical Structures of Language (Wiley 1968) shows
               that the metalanguage of natural language necessarily
               is contained within the language.]

          6    The set of possible arguments for a given operator (or
               vice-versa) is graded as to acceptability.  These
               gradings correspond with differences of meaning in the
               base sublanguage, and thence in the whole language.
               They diverge in detail from one sublanguage or
               subject-matter domain to another.  Equivalently, the
               fuzzy set of `normal' cooccurrents for a given word
               differs from one such domain to another within the base
               sublanguage.

               In informal, intuitive terms, a constructive grammar
          generates sentences from the bottom up, beginning with word
          entry, whereas a generative grammar generates sentences from
          the top down, beginning with the abstract symbol S.  The
          grammatical apparatus of constructive grammar (the rules
          together with their requirements and exceptions) is very
          simple and parsimonious.  H's underlying structures, the
          rules for producing derived structures, and the structures
          to be assigned to surface sentences are all well defined.
          Consequently, H's argumentation about alternative ways of
          dealing with problematic examples has a welcome concreteness
          and specificity about it.

               In particular, one may directly assess the semantic
          wellformedness of base constructions and of each intermediate
          stage of derivation, as well as the sentences ultimately
          derived from them, because they are all sentences.  By
          contrast, in generative argumentation, definitions of base
          structures and derived structures are always subject to
          controversy because the chief principle for controlling them
          is linguists' judgments of paraphrase relations among
          sentences derived from them.  Even if one could claim to
          assess the semantic wellformedness of abstract underlying
          structures, these are typically so ill-defined as to compel us
          to rely almost totally on surface forms to choose among
          alternative adjustments to the base or to the system of rules
          for derivation.  And as we all know, a seemingly minor tweak
          in the base or derivation rules can and usually does have
          major and largely unforeseen consequences for the surface
          forms generated by the grammar.

This model of language offers an interesting approach to the problem
brought up by Young in V2 #162, 174: how to represent the meaning of
words without (circularly) using words?

Most approaches amount to what I call `translation semantics':  having
found a set of language-universal semantic primes, one translates
sentences of a given natural language into those primes and, voila'!,
one has represented the `meaning' of those NL sentences.

Let us ignore the difficulty of finding a set of semantic universals (a bit
of hubris there, what!).  The `representation of meaning' is itself a
proposition in a more-or-less artificial language that has its own
presumably very simple syntax (several varieties of logic are promoted
as most suitable) and--yes--its own semantics.  Logics boil `meaning'
down to sets of `values' on propositions, such as true/false.

`But my system', rejoins Young, `uses actual nonverbal modalities, it
has real hooks into the neurological and cognitive processes that human
beings use to understand and manipulate not only language, but all other
experience as well'.  That may be.  It does beg the question to what
degree cognitive processes and even neurological processes are molded by
language and culture.  (In Science 224:1325-1326 Nottebohm reports that
the part of the forebrain of adult canaries responsible for singing
becomes twice as large coincident with (a) increased testosterone and
(b) learning of songs.  This is the same whether the testosterone
increase is annually in the Spring or experimentally, and the latter
even in females, who consequently learn to sing songs as if they were
males.  Vrenson and Cardozo report in Brain Research 218:79-97
experiments indicating that both the size and shape of synapses in the
visual cortices of adult rabbits changed as a result of visual training.
Cotman and Nieto-Sampedro survey research on synapse growth and brain
plasticity in adult animals in Annual Review of Psychology 33:371-401.
Roger Walsh documents other research of this sort in his book Towards an
Ecology of Brain.  Conventional wisdom of brain science, that no new
neurons are formed after infancy, is unwise.)

The padres of yore surveyed the primitive languages around their
missions and found so many degenerate forms of Latin.  Their grammars
lay these languages on the procrustean bed of inflections and
declensions in a way that we see today as obviously ethnocentric and
downright silly.  We run the same risk today, because like those padres
we cannot easily step out of the cultural/cognitive matrix with which we
are analyzing and describing the world.  Ask a fish to describe water:
the result is a valid `insider's view', but of limited use to nonfish.

Mr. Chomsky characterized his mentor in linguistics as an Empiricist and
himself as a Rationalist, and in the Oedipal struggle which ensued
mother Linguistics has got screwed.  Given that systems based on
constituent analysis are inherently overstructured, with layers of
pseudo-hierarchy increasingly remote from the relatively concrete words
and morphemes of language, an innate language-learning device is
ineluctable:  how else could a child learn all of that complexity in so
short a time on so little and so defective evidence?  The child cannot
possibly be an Empiricist, she must be a Rationalist.  Given a
biologically innate language-acquisition device, there must be a set of
linguistic universals that all children everywhere come into the world
just knowing, and all languages must be specialized realizations of
those archetypes--phenotypes of that genotype, as it were.  (Chomsky did
not set out to `define' natural language but to explain it.  It is
principally because his `underlying', `innate' constructs have a
connection to empirical data that is remote at best--rather like the
relation of a programmer's spec to compiled binary--that they appear
to be (are?) definitions.)

But consider a model in which the structure of language is actually
quite simple.  Might the characteristics of that model not turn out to
be those of some general-purpose cognitive `module'?  I believe Harris's
model, sketched above, presents us this opportunity.

Now about Jerry Fodor's book The Modularity of Mind, which Young mentions.
The following is from the review by Matthei in Language 60.4:979,

        F presents a theory of the structure of the mind in which two
        kinds of functionally distinguishable faculties exist:
        `vertical' faculties (modules) and `horizontal' faculties
        (central processes). . . . F identifies the modules with the
        `input systems', whose function is to interpret information
        about the outside world and to make it available to the central
        cognitive processes.  They include [five modules for] the
        perceptual systems . . . and [one for] language. . . .

        The central processes, as horizontal faculties, can be
        functionally distinguished from modular processes because their
        operations cross content domains.  The paradigm example of their
        operation is the fixation of belief, as in determining the truth
        of a sentence.  What one believes depends on an evaluation of
        what one has seen, heard, etc., in light of background
        information. . . .

                . . . the condition for successful science is that
                nature should have joints to carve it at:  relatively
                simple subsystems which can be artificially isolated and
                which behave, in isolation, in something like the way
                that they behave in situ. (128)

        [The above, by the way, suggests that, while studying language
        in isolation--severing its `joints' with other systems--may be
        of limited interest to AI researchers seeking to model language
        users' performance, rather than their competence, it is not
        `scientifically unhealthy'.  It also points to the central
        problem of semantics, as Matthei points out . . .]

        Modules, F says, satisfy this condition; central processes do
        not.  If true, this is bad news for those who wish to study
        semantics.  The burden which F puts on them is that they must
        demonstrate that computational formalisms exist which can
        overcome the problems he enumerates.  These formalisms will have
        to be invented, because F maintains that no existing formalisms
        are capable of solving the problems.

I, too, feel that notions of modules and modularity, or at least Fodor's
attempt to consolidate them, make a great deal of sense.  However, the
caveat about the study of semantics underscores my contention that
semantics properly must be based on an `acceptability model': a body of
knowledge stated in sentences in the informationally complete
sublanguage of Harris's base, whose acceptability is known.  This is
akin to a `truth model' in aletheic approaches to semantics in logic.
It is also very simply conceived of as a database such as is constructed
by Sager's LSP systems at NYU.  We should note that the sentences of
this base sublanguage correspond very closely across languages (cf. e.g.
the English-Korean comparison in Harris's 1968 book), and that the
vocabulary of the base sublanguage is a subset of that of the whole
language (allowing for derivation, morphological degeneracy, and the
like), much closer to Young's categories than the vocabulary with which
he expresses so much frustration.

There is one pointer I can give to another version of `translation
semantics' that probably satisfies Young's sense of `nonverbal':
Leonard Talmy developed an elaborate system for representing the
semantics and morphological derivation of some pretty diverse languages
in his (1974?) PhD dissertation at UC Berkeley.  The languages included
Atsugewi (neighbor and cousin to the Native American language I worked
on), Spanish, and Yiddish.  He went to SRI after graduation, but I have
no idea where he is now or what he is doing.

        Bruce Nevin (bn@bbncch)

------------------------------

End of AIList Digest
********************

From:	COMSAT         20-DEC-1984 01:34  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a003076; 19 Dec 84 15:44 EST
Date: Wed 19 Dec 1984 11:19-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #179
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 20 Dec 84 00:40 EST


AIList Digest           Wednesday, 19 Dec 1984    Volume 2 : Issue 179

Today's Topics:
  AI Tools - Micro-PROLOG & SmallTalk AI Systems,
  Applications - Expert Legal Systems & Intelligent Skimmer,
  Planning - Constraint Propagation and Design,
  Reports - SEAI Publications,
  Politics - Visitors from USSR,
  Lab Description - NRL,
  Workshop - Logic and Computer Science
----------------------------------------------------------------------

Date: Mon, 17 Dec 1984  17:36 EST
From: Chunka Mui <CHUNKA%MIT-OZ@MIT-MC.ARPA>
Subject: micro-PROLOG info request


We are looking for PROLOG packages which run on micros, especially the
IBM PC.  If you are familiar with any PROLOG interpreters for the PC,
especially one with a tutorial package, I appreciate any information
that you could give me.

Thanks,

Chunka Mui
Chunka%mit-oz@mit-mc

------------------------------

Date: 17 Dec 84 12:21:53 EST
From: Mike.Rychener@CMU-RI-ISL2
Subject: SmallTalk AI systems?

      [Forwarded from the CMU bboard by Laws@SRI-AI.]

Does anyone know of any successful AI applications coded in SmallTalk?
This was stimulated by the new Tektronix AI machine, whose blurb touts
its SmallTalk as useful for developing expert systems.

------------------------------

Date: 17 Dec 1984 12:21-EST
From: Alexander.Hauptmann@CMU-CS-G.ARPA
Subject: expert legal systems?

I am looking for references to publications about expert systems for
legal reasoning. If you know of anybody who has done work in this area,
please let me know (Alexander.Hauptmann@CMU-CS-G.ARPA). Among other
things, I have heard that Roger Schank has done work in this area, but have
been unable to find citations. Thanks.

                                        Alex.

------------------------------

Date: 17 Dec 84 06:44:45 EST
From: Robert.Thibadeau@CMU-CS-H
Subject: expert legal system

      [Forwarded from the CMU bboard by Laws@SRI-AI.]

Extensive work in legal reasoning was done by Thorne McCarty.  Thorne published
in the Harvard Law Review back in 1977ish.  His topic was legal reasoning
in corporate tax law -- one of the areas where the Supreme Court effectively
makes the law.  Thorne, educated at Harvard in Law and Stanford in AI,
and a tenured professor of Law at Rochester, evaluated Yale, way back,
but decided to do his AI work at Rutgers.  While I regard Roger Schank as
absolutely excellent, I find it unfortunate that like natural language
understanding systems vis a vis Yorick Wilks, belief systems vis a vis
Chuck Schmidt and N. Sridharan, Memory vis a vis 100 years of thought
in German and British psychology, we find now Roger implied at the leading
edge in legal reasoning.  Roger does good work, but he takes a long time
to see the light and he tends to ignore his surround.
I would hope the people on the frontiers not be forgotten this time around.

------------------------------

Date: 17 Dec 84 16:05:04 EST
From: BIESEL@RUTGERS.ARPA
Subject: Intelligent skimmer suggestion.

As the volume of mail in this and other lists increases I find that
I spend more and more time only skimming the text, searching for the
message or two that is of interest to me. It occurs to me that an
intelligent program for skimming text would be of some help in this.

This program would scan a message, break up its sentences into
grammatical tokens, and would first display only nouns and verbs - in
their correct places on the screen. As the text scrolls upward adjectives,
adverbs and pronouns appear, and by the time the text has traversed
2/3 of the screen, all words in each sentence are filled in. A smarter
system would also keep track of the rate at which CTL-S/CTL-Q is sent,
and adjust its transfer rate accordingly. A really smart program would
keep track of keywords in those pieces of text which the user actually
reads, determined by how often he slows down the skimming presentation,
and would automatically present more fleshed out versions of messages
which contained such keywords.
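[The staged-reveal scheme can be sketched in a few lines.  Part-of-speech
tags are supplied by hand below -- a real skimmer would need a tagger, which
is assumed rather than implemented -- and the grouping of word classes into
stages is only a guess at the proposal's intent. -- Ed.]

```python
# Crude sketch of the staged-reveal skimmer.  Tags and stage sets here are
# illustrative assumptions, not part of the original proposal.

# Which word classes are visible at each stage of the scroll.
STAGES = [
    {'N', 'V'},                       # top of screen: nouns and verbs only
    {'N', 'V', 'ADJ', 'ADV', 'PRO'},  # mid-screen: modifiers appear
    None,                             # lower third: every word filled in
]

def render(tagged, stage):
    """Show tagged words for a stage, padding hidden words to keep position."""
    visible = STAGES[stage]
    return ' '.join(
        word if visible is None or tag in visible else ' ' * len(word)
        for word, tag in tagged
    )

sentence = [('the', 'DET'), ('quick', 'ADJ'), ('fox', 'N'),
            ('jumps', 'V'), ('lazily', 'ADV')]

for stage in range(3):
    print(repr(render(sentence, stage)))
```

Hidden words are padded with blanks so that each word stays in its correct
place on the screen, as the proposal requires.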

There is no good reason why text has to be displayed in a letter-
sequential form. We have a whole 2-D array to work with; let's try
to use it to enhance rather than obfuscate communication.

Biesel@rutgers

------------------------------

Date: Saturday, 15 December 1984 03:46:38 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: Planning, Constraint Propagation and Design

A part of the January 1983 SIGART newsletter was dedicated to Planning.
A number of abstracts on (then) current research were compiled by
Ann Robinson.

I would like to add the following to Steinberg's equation about design:

   Heuristic Knowledge  (HK) +  Well-structured Programs (Algorithms) (WP)
                           = Good Engineering Programs (GEP)

If we add Causal knowledge (CK) to the LHS of above equation then we have

 HK + WP + CK = EEP (Efficient Engineering Programs)

Any comments?

Has anyone tried the task suspension method instead of constraint
propagation? Task suspension works in the following manner (there is more to
it):
   IF a constraint in a certain part of the design cannot be satisfied
   THEN suspend that task and get the values needed to satisfy the constraint
In other words if you are designing Module-1 and find that there is a
constraint relating Module-1 to Module-2 then suspend the task of performing
Module-1 and design that part of Module-2 which satisfies the constraint.
I tried this in structural design [1] using a Hearsay-type approach.
However, I ran into problems when a constraint involved interaction
between 3 or more components.
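[A toy rendering of the suspension rule, for concreteness.  The module
names and the agenda mechanics below are invented for illustration; they
are not from the ALL-RISE system. -- Ed.]

```python
# Toy task-suspension sketch: designing Module-1 hits a constraint that
# needs a value from Module-2, so Module-1 is suspended until that value
# exists.  Names and agenda mechanics are illustrative assumptions.

def design(module, needs, values, agenda):
    for dep in needs.get(module, []):
        if dep not in values:
            # Constraint unsatisfied: suspend this task and design the
            # part of the other module that satisfies the constraint.
            agenda.append(module)           # remember the suspended task
            return design(dep, needs, values, agenda)
    values[module] = f'design of {module}'  # all constraints satisfied
    # Resume the most recently suspended task, if any.
    if agenda:
        return design(agenda.pop(), needs, values, agenda)
    return values

# Module-1's design is constrained by a value from Module-2.
result = design('Module-1', {'Module-1': ['Module-2']}, {}, [])
print(list(result))   # Module-2 is finished first, then Module-1 resumes
```

This handles only a single chain of pending dependencies; constraints that
tie three or more components together (the failure case noted above), and
circular dependencies, are beyond it.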

[1] ALL-RISE : A Case Study in Constraint-Directed Design, Working Paper,
               Department of Civil Engineering, C-MU, Pittsburgh, PA 15213

Sriram

------------------------------

Date: Tue 18 Dec 84 10:47:20-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: SEAI Publications

A brochure from the SEAI Institute has crossed my desk.  They are offering
a two-volume survey of commercial and near-commercial AI systems as of
August 1984.  The two 200-page surveys, AI Applications for Manufacturing
and AI Applications for Business Management, include 136 products and
in-house systems at over 100 corporations, including 28 expert-system
toolkits and 10 natural-language systems.  The reports are $110 each, or
$200 together.  SEAI also offers a three-volume set on Machine Vision for
Robotics and Automated Inspection and several other reports on robots
in industry, AI, expert systems, and automated guided vehicle systems.
You can contact them at Box 590, Madison, GA 30650, (404) 342-9638.


[Note: I have no connection with the company, and pass this along only in
the hopes that it will be of use to the Arpanet or AI research communities.
I obviously cannot report on every AI book offered by every publisher, but
see no harm in forwarding book reviews or notices about obscure reports.
Correspondence about this policy should be directed to AIList-Request@SRI-AI.
-- KIL]

------------------------------

Date: Tue, 18 Dec 84 21:34:41 PST
From: Judea Pearl <judea@UCLA-LOCUS.ARPA>
Subject: Visitors from USSR

 I wish to share with the readers of the AI-Digest this
letter, which I wrote to Professor Viktor V. Aleksandrov,
Head, Leningrad Research Computer Center, who is currently
visiting the U.S. and who is particularly interested in
meeting AI researchers.


Dear Professor Alexandrov,

           I would have liked very much to meet you during
your current visit to UCLA, but the following circumstances
will not allow me to do so in good faith:

           I have received from the Association of Computing
Machinery (ACM) a long list of Soviet computer scientists who,
for the past several years, have been barred from scientific
activity and have been denied permission to participate
in scientific meetings, domestic as well as international.
Some of these people would like to present papers at the
International Joint Conference on Artificial Intelligence
which will take place at UCLA, August 1985, but will be
prevented from leaving your country.

           I am particularly familiar with the stories of:

                 Alexander Lerner, Moscow
                 Isai Goldstein, Tbilisi
                 Gregory Goldstein, Tbilisi

whom I met at the International Joint Conference on
Artificial Intelligence - 1973, Tbilisi, Georgia, and with
whom I tried to keep in touch. To my dismay, I find
these three cited in the 1984 Report of the ACM
Committee on Scientific Freedom and Human Rights as being
harassed and prevented from engaging in scientific
activities. In 1973, I personally witnessed
Isai Goldstein being barred from entering the lecture hall of
the Tbilisi conference, so I feel obliged to express my
concern that today, eleven years later, the method of
professional deprivation is still practiced in your country.

        Although I would like to contribute to improved
scientific cooperation between our two countries, my
understanding has been that a prerequisite to true
cooperation is the freedom for individuals to engage in
scientific pursuits and to communicate their findings to
other scientists.  Your government apparently has a
different perception of cooperation, and I will be happy to
discuss with you these differences. However, because you are
an official Soviet visitor, I cannot meet with you in good
faith to engage in a purely professional discussion. To do
so would be to betray Professor Lerner, who personally
pleaded with me to refrain from participation in U.S.-USSR
cooperative programs until minimum standards of scientific
freedom are agreed upon.

         I hope you understand my position and will
 convey my regrets to your colleagues at the Leningrad
 Computer Research Center.

                              Yours Sincerely,

                                         Judea Pearl
                           Professor, Computer Science Dpt.
                                  University of California
                                        Los Angeles


A note to the reader:
        The 1984 report of the ACM Committee on Scientific
Freedom and Human Rights is available from my office. It is
scheduled for publication in the January-85 issue of the
Communications of the ACM.

        If you  meet with Professor Alexandrov, or other
Soviet visitors, you may find it appropriate to
express your sensitivity to two allegations made
in the ACM report:

1. That Soviet scientists are dismissed from their jobs
   (or demoted) once they apply for exit visas.

2. That these scientists are prevented from attending
   professional meetings (even in the privacy of their homes)
   or from submitting papers to international meetings,
   e.g., IJCAI-83.

       If you would kindly send me a summary of Professor
Alexandrov's replies, especially regarding the practices at
his own Institute, I will be glad to bring them to the
attention of the ACM Committee.
                                      J.Pearl
                                <judea@ucla-locus.arpa>

------------------------------

Date: Thu, 13 Dec 84 16:30:45 est
From: Rod Johnson <johnson@nrl-css>
Subject: Lab Description - NRL (Computer Science & Systems Branch)

                       [Edited by Laws@SRI-AI.]



                       NAVAL RESEARCH LABORATORY
                  Computer Science and Systems Branch


The Computer Science and Systems Branch of NRL is active in:

  >> software engineering  >> computer security  >> information theory
  >> search theory         >> expert systems     >> message processing
  >> software measurement  >> speech and signal processing
  >> formal software specifications.

Our interests also include performance modeling and evaluation, human-
computer interfaces, and program specification and verification tools.

    OUR GROUP is small, close-knit, and informal, with a research staff
of 22 members; 9 hold PhDs.  Attendance at conferences and publication
in the open literature are encouraged.  There are ample opportunities
for educational support toward graduate degrees.  Several branch
members also teach at local universities.

    COMPUTING RESOURCES at NRL are being expanded to include a Cray
X-MP/12 system.  This unique system will include a front end consisting
of a cluster of VAX 11/785s with connections to the ARPANET and to a
broadband network linking other NRL computers.  The Branch maintains
VAX 11/780, Sun, and VAX 11/750 machines running UNIX and VMS, and a
Symbolics Lisp Computer.  Each office includes a terminal with a
high-speed link to these systems, which are also linked to the ARPANET.

    THE NAVAL RESEARCH LABORATORY is a government laboratory located on
a 129-acre campus on the banks of the Potomac River in Washington,
D.C.  It was founded at the suggestion of Thomas Edison more than 60
years ago and carries out a wide variety of basic and applied
research.  The Washington area offers a temperate climate and an
outstanding cultural environment, including the museums of the
Smithsonian Institution, the Kennedy Center for the Performing Arts,
and several excellent professional and collegiate theatre groups.

    For more information, contact:

    Mr. S. H. Wilson
    Head, Computer Science and Systems Branch
    Code 7590                      Phone:  (202) 767-2518
    Naval Research Laboratory      Arpanet:  Wilson@NRL-CSS
    Washington, D.C.  20375        uucp:  ...!decvax!nrl-css!wilson

------------------------------

Date: Mon, 10 Dec 84 16:07:59 est
From: ukma!marek@ANL-MCS.ARPA (Wiktor Marek)
Subject: Workshop - Logic and Computer Science

                    FIRST COMMUNICATION

                        Workshop on

                 LOGIC AND COMPUTER SCIENCE

               Lexington, KY, June 9-14, 1985.

     In the first half of June 1985 a workshop on Logic  and
Computer Science will take place in Lexington, Kentucky.

     The workshop will take 4 and 1/2 working days.

     The workshop will cover those parts of Computer Science
where  an  active part is played by logic-inclined research-
ers, in particular:

                   Theory of Computation
                    Theory of Databases
                  Artificial Intelligence
        Theory of Operating Systems (Temporal Logic)
                    Program Verification
                     Logic Programming

All inquiries should be sent to:

                 Logic and Computer Science
               Department of Computer Science
                   University of Kentucky
                 Lexington, KY, 40506-0027
                       (606) 257-3961
or:


    Logic and Computer Science
    ARPA:  "ukma!logic-and-cs"@ANL-MCS   (Note the quote marks.)
    UUCP->  unmvax -----------\
    UUCP->  research ----------\____ !anlams --\
    UUCP->  boulder -----------/                >-!ukma!logic-and-cs
    UUCP->  decvax!ucbvax ----/                /
                       cbosgd!hasmed!qusavx --/


                 Organizational Committee:
          Forbes Lewis  Wiktor Marek  Anil Nerode

Lexington, December 1984

------------------------------

End of AIList Digest
********************

From:	CSVPI          22-DEC-1984 03:28  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a017955; 21 Dec 84 16:59 EST
Date: Fri 21 Dec 1984 10:18-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #180
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sat, 22 Dec 84 03:25 EST


AIList Digest            Friday, 21 Dec 1984      Volume 2 : Issue 180

Today's Topics:
  Humor - Jokes & Limericks & Linguistics & D/B Theory & Lardware &
    Computer Museum Traveling Exhibit
----------------------------------------------------------------------

Date: Thu 13 Dec 84 09:21:09-EST
From: Bob Hall <RJH%MIT-OZ@MIT-MC.ARPA>
Subject: AI Jokes

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                      Announcing the only annual

                         AI Joke Contest

Come up with a good cocktail-party-worthy joke about some aspect of
AI and win a U.C., Berkeley T-shirt!  Enter as many times as you like.
Winner (exactly one) will be judged solely on the number of ``HA''s
evoked from the impartial panel of judges.  Ties will be broken by
earliest postmark and contest ends after a sufficiently long time with
no entries.

To be eligible for a prize, you must include your address and t-shirt size.
Entries become property of the judges.

To Enter:

Mail via US Mail your entry in any legible format to

                       AI Jokes
                       1717 Allston Way
                       Berkeley, CA  94703

Please do not send any entries to me, as I am just posting this.  I can,
however, answer limited questions on this, like "Is it legit?" (Yes.)

Enter Now!

------------------------------

Date: Tue 18 Dec 84 16:38:26-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Call for Computer Science Limericks--ABACUS

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The journal Abacus will pay $25 for each original limerick related to computing
that is accepted and published.  Send entries to Mr. Eric A. Weiss, Box 222,
Springfield, PA 19064.  Submissions should be better than the following
samples:

Said a recent B.S. in E.E.
"Three things are important to me:
 How much do you pay?
 Must I work every day?
And the proof of correctness of C."

A professor (whose last name is Wirth),
After seeing Pascal through its birth,
  Said, "It's better than Snobol,
  More structured than Cobol,
And soon will take over the earth!"

This is strictly a public service announcement for those students who want to
make some extra money.  I will make no comments about the above examples nor
my personal view of limericks in general.

Harry Llull

------------------------------

Date: 11 Dec 84 19:29:24 GMT
From: sms@eisx.UUCP (Samuel Saal)
Subject: Oxymorons, Pleonasms and various forms of Bull

              [Forwarded from net.jokes by SASW@MIT-MC.]


       From "More on Oxymorons, Foolish Wisdom in Words and
       Pictures".

       Oxymoron:   two antithetical words, adj vs. noun.  eg.
                   Living Death (can be extended: They agreed to
                   disagree)

       Pleonasm:   sort of the opposite of an oxymoron, the adj or
                   adv agrees with the noun. eg. Wet Water  (what
                   else could water be?) Tautology: a pleonasm
                   whose terms are joined by a copula.  eg. At the
                   center is the middle.

       Bull:       the linguistic name for such linguistic pearls
                   of logic, enabling one to label examples of:

                      - self-contradiction (To be ignorant of one's
                        ignorance is the malady of the ignorant).

                      - self reference (Brain: an apparatus with
                        which we think that we think).

                      - the obvious (Who died? I'm not sure, but I
                        think it's the one in the hearse).

                      - "read the sentence twice and be amazed that
                        it was written" (The sudden rise in
                        temperature was responsible for the
                        intolerable heat) (Nobody goes to that
                        restaurant anymore, it's too crowded)



       Sort the following according to the above rules:

         1.  The best cure for insomnia is to get a  lot of sleep.
             (W.C.Fields)

         2.  You will always find something in the last place you
             look.

         3.  He hadn't a single redeeming vice. (Oscar Wilde)

         4.  Nothing succeeds like success. (Alexandre Dumas)

         5.  New Innovation.

         6.  In these matters the only certainty is that there is
             nothing certain. (Pliny the Elder)

         7.  For those who like this sort of thing, this is the
             sort of thing they like.  (Abraham Lincoln)

         8.  Anyone who goes to a psychiatrist ought to have his
             head examined. (Samuel Goldwyn)

         9.  To visually see.

        10.  Bachelors' wives and old maids' children are always
             perfect. (Nicolas Chamfort)

        11.  One effect of the better lighting is the improved
             visibility.

        12.  He lived his life to the end.

        13.  I have made mistakes but I have never made the mistake
             of claiming that I have never made one. (James Gordon
             Bennett)

        14.  She's genuinely bogus.

       HINT: There are 2 examples of each category (not counting
       "Bull" but rather the subdivisions mentioned)

       "Words are but a window on the word...."

       Sam Saal
       ...ihnp4!eisx!sms

------------------------------

Date: 08 Dec 84 20:27 CDT
From: Maxwell_L%VANDERBILT.MAILNET@MIT-MULTICS.ARPA
Reply-to: Maxwell_L%VANDERBILT.MAILNET@MIT-MULTICS.ARPA,
Subject: Language Deficiencies

There is a legend of a remote tribe of Indians in the Peruvian
Andes, the language of which has no word for "No."  Should a
member of this tribe wish to communicate a negative response,
he will nod his head and say "I'll get back to ya."    :-)

------------------------------

Date: 17 December 1984 2105-EST
From: Jeff Shrager@CMU-CS-A
Subject: Discipline&Bondage Theory

      [Forwarded by Laws@SRI-AI from a file typed from hardcopy
      and made available by Jeff Shrager@CMU-CS-A.  The original
      author is James A. Matisoff of Berkeley.]


                Announcing a new theory of language:

                   DISCIPLINE AND BONDAGE THEORY

                        Ffositam A. Semaj
                             Yelekreb

                        February 23, 1984

       [APPLICATION TO THE GROAN FOUNDATION, WASHINGTON, D.C.]

        It has become increasingly clear that the current linguistic theories
are inadequate to explain much of anything about language.

        Yesterday, however, I conceived a new theory of language, which is,
finally, the correct one.  Already I have found the solutions to virtually
all linguistic problems.  A few details remain to be worked out, but this can
certainly be accomplished during the grant period.

        Despite its explanatory and predictive power, my theory rests on a
very few simple ideas.

        (1) The chief organizing principle of language is CONTROL.

That is to say, certain words should boss others around.  This idea is
perhaps not entirely new, but my theory is the first to carry it one step
further, to the meta-theoretical level:

        (2) The linguist must control language, not vice versa.

        At no time must the theoretician allow himself to be hog-tied by mere
data.  Too much unmotivated detail clogs the mind, and can lead to "control
slippage."  Endless time can be wasted on brute undisciplined facts.  That
leads to our third axiom:

        (3) The most highly valued theory is based on the most limited
            and carefully selected data, preferably data gained from
            solitary introspection by the linguist himself.

(In difficult cases, however, it is not methodologically unsound to seek
confirmation of one's grammaticality judgments from other linguists,
provided they are working within the same theory.  It is for this reason that
I have included within this proposal a request for funds for consultation in
D/B Theory at other institutions.)

        D/B Theory is correct precisely because it succeeds in *controlling*
and *dominating* language.  The unique terminology required by our theory
reflects this orientation.  (See below, GLOSSARY OF TECHNICAL TERMS.)

        D/B Theory relates in the most efficient way imaginable to its
data base.  I have, in fact, succeeded in formulating a single sentence that
is so rich in theoretical implications, that once it is properly
disciplined-and-bound it will serve all by itself as the corpus of data for
the whole theory.  Here it is:

        (4) Helmut asked her if Fatima could say wow what a nice day
            to them sorta only if the beige one circumcised her with
            a knout.

It need hardly be emphasized that my theory also applies to other languages
than English, indeed universally to the class of all possible languages.
Firm plans are in place to have (4) translated into French during the next
(1985-86) grant period.

        On a more mundane level, note that D/B Theory uses much better names
in its example sentences than any other theory.   While some theories use
anodyne names like John and Mary, and others offer unmotivatedly cutesy-poo
ones (e.g., Mortimer, Seymour, Snurdley), D/B Theory goes in exclusively for
names like Butch, Helmut and Fatima, thereby enhancing its predictive power
in pragmatic situations where discipline and control are at issue.

        Notice also that (4) could never have been arrived at by the
"butterfly-collector" method of recording natural utterances.  Fortunately,
D/B Theory enabled me to predict that the odds of (4) occurring in a natural
conversation would be quite low.  If I had waited around to hear this
sentence uttered spontaneously I could never have formulated my theory so
rapidly, and would probably have missed the application deadline.

        D/B Theory enables us to account in a principled way for the
otherwise puzzling fact that (4) is fully grammatical, while (5), (6), and
(7) are totally unacceptable:

        (5) * Helmut sorta circumcised her with a knout.

        (6) * Wow what a nice day sorta.

        (7) * Helmut could say beige.

Even previous theories of language recognize that (7) violates a felicity
condition whereby the features [+male] and [+beige] are mutually exclusive.
If the feature specification for "Helmut" does indeed include [+male], these
theories would predict, quite correctly in this case, that (7) is
infelicitous.  Only D/B Theory, however, explains why the acceptability of (7)
increases when it is disciplined by a strappadoed clause, as in (4).

        Space constraints preclude our going into further detail here, and in
any event this discussion must necessarily appear somewhat abstract before
the special terminology required by D/B theory has been mastered.  As a
warning to the reader, the following Glossary of Technical Terms has been
provided.

        Learn them, and learn them now!

                GLOSSARY OF TECHNICAL TERMS

CAT-O'-NINE-TAIL-MENT.

        A clause which is reluctant to fit into our framework may be whipped
into shape by this operation, according to which any nine constituents may be
entailed by any nine others.  Thus (8) may be cat-o'nine-tailed into (9):

        (8) Fatima sucked the sherbert through a straw while her
            Shiite eunuch guards leafed through a stack of girlie
            magazines without much interest.

        (9) The Queen of England opened Parliament with a knout.

As always, however, rigorous disciplinary techniques like this should
not be resorted to prematurely.  It is usually advisable to try FROTTAGE
first, in order to relax the clause and throw it off its guard.

CLAUSE-ABUSE.

        A cover-term for several more specific operations described below.
Occasionally a deeply embedded clause may be forced into self-abuse to avoid
subjugation or subincision at the hands of a clause that ranks higher on the
BOUNDEDNESS HIERARCHY.

CLAUSE-CASTRATION.

        A clause is said to have undergone castration when certain members
have been removed in order to allow a rule to work more insightfully.  Thus
(11) may be generated from (10) by this operation, which is actually
justified on independent grounds anyway, so that no special ad hoc rules need
be added to the grammar:

        (10) What's all this ballyhoo about that balloon that was
             embellished by the ballistic missile?

        (11) What's all this yhoo about that oon that was embellished
             by the istic missile?

Note that our theory correctly predicts that "embellishment" does not satisfy
the conditions for the operation of this rule, despite its surface similarity
to the castratable constituents.  "Embellished" therefore survives
(temporarily) to undergo other sorts of clause-abuse that occur later in the
grammar.

CLAUSE-CRUCIFIXION.

        A crucified clause is one which has been generated by entailment.
The head of the clause remains free to move slightly, but the rest is bound
tightly to the tree.  Ex-cruciated constituents are usually found to be much
more amenable to persuasion than before the operation applied.

CLAUSE-FROTTAGE.

        An important preliminary discourse strategy that opens clauses up
for further discipline.  Unlike its extreme form, KEELHAULING, which can
involve scraping the clause up one side and down the other, FROTTAGE requires
only a light movement from left to right and back again on the nodule which
is F-commanded by the subjugating member.

CLAUSE-STRAPPADO.

        The weakest NP's hands are tied behind its back and attached to a
pulley by means of which it is pulled out from under the VP that had been
disciplining it and raised to the next higher clause, after which it is
suddenly dropped halfway back down with a jerk.  Thus (12) may be strappadoed
into (13).

        (12) Butch said fuck you or I'll take away your teddybear
             with a knife.

        (13) Butch said fuck you, teddybear, or with a knife I'll take
             yours away, jerk.

Note that jerk-insertion must be ordered with respect to frottage, to avoid
generating such ungrammatical strings as:

        (14) *Butch said fickledy-fuckledy you, teddldy-bearidy, jerk.

PROCRUSTEAN PRUNING.

        A powerful process whereby unwanted constituents are lopped off
either from the beginning or the end of a clause, or both.  This is related
to Pham Phuc Dong's 'constituent gerrymandering', though it is much more
rigorously applied within the D/B framework.  Thus (16) may be derived from
(15) by "equi-PP":

        (15) The chomeur had no place to go during the earthquake, so he
             sat down by default, the chomeur had no place to go during the.

        (16) Earthquake, so he sat down by default.

The questionable grammaticality of (16) is accounted for by the fact that
neither the pre-pruned nor the post-pruned constituents were willing to cross
the picket line.

PROTO-HYPE THEORY.

        Proto-hype theory is an important adjunct to D/B analysis.  Generally
speaking, it enables us to recognize whether a token is behaving
satisfactorily as a member of its type.  (If a constituent is lacking in
discipline, we have ways to make it talk.)  The following data are from
French:

        (17) *Mordxe hot zix nebex aroysgeshnitn di kishkes mit a
              tsibele-kuxn.

             (Mortimer ripped out his guts on a buzz-saw, poor guy.)

Proto-hype theory enables us to predict that "tsibele-kuxn", literally:
"onion-roll", is nowhere near being a prototypical cutting instrument (though
sometimes in particular pragmatic situations poppy seeds may be rather
sharp).  We thus reject (17) as ungrammatical.

TOUGH B-MOVEMENT.

        Applies when a clause has become constipated through lack of roughage.
This is one of the more severe operations permitted by our theory, and should
only be used after milder processes like frottage and proto-hyping have
failed to dislodge the construction.  Consider the following:

        (18) *To do it squeezing over a pit full of viper without
              bran muffins or prune juice is tough duty.

This is clearly ungrammatical and infelicitous as it stands, though, as my
theory predicts, a perfectly good reading is obtained if tough b-movement is
not allowed to apply until after the sentence has been sphincter-bound, as in
(19):

        (19) It is tough duty to do it without squeezing bran muffins
             or prune juice over a pit full of vipers.

The 3-way ambiguity of this sentence is likewise predicted by the theory.

                                ***

All previous linguistic theories have been thinly disguised notational
variants of the flabbily sentimental "philology" of the past.  With
Discipline and Bondage Theory, we serve notice on language that it is to be
coddled no longer.  Broad new vistas of control have opened up.  Let 1984 be
the year that we get back at language once and for all.

                                        MAJ, Principal Investigator.

------------------------------

Date: Thu, 13 Dec 84 14:06:32 cst
From: "Walter G. Rudd" <rudd%lsu.csnet@csnet-relay.arpa>
Subject: Architecture for Malgorithms


Kathy Daley, one of our graduate students, suggests the following:

Since the "hardware" will be running "underneath" the malgorithm,
why not call it "UNDERWARE"?????

------------------------------

Date: Thu, 20 Dec 84 07:09:02 pst
From: Paul A. Ehrler <ehrler%cod@Nosc>
Subject: Lardware

    My nomination  for  Lardware of the month goes to IBM. I recall seeing a
    reference to an attempt of theirs to build  a  computer  without an ALU.
    The trick was to do everything with table  look  up, even arithmetic.  I
    guess  they reasoned that first graders are pretty good at that sort  of
    thing, so why not automate it.  It worked  to  some extent, but needless
    to say was not an overwhelming commercial success.

------------------------------

Date: Thu, 13 Dec 84 14:41:50 est
From: Walter Hamscher <walter at mit-htvax>
Subject: Computer Museum Traveling Exhibit

      [Forwarded from the MIT bboard by SASW@MIT-MC.]


        NOON, FRIDAY, IN THE 8TH FLOOR PLAYROOM

               THE BOSTON COMPUTER MUSEUM
                  In conjunction with
              THE REVOLTING SEMINAR SERIES
Presents a traveling exhibit especially for Graduate Students

            COMPUTER POWER AND HUMAN FASHION

                       Featuring

               THE VON NEUMANN TURTLENECK
                          Plus
           NILS NILSSON'S ALPHA-BETA CUTOFFS

   Also featuring a rare Huffman-clothes encoating and
    a dress once worn by Herb Simon's wandering Aunt.

         Hosts: Bonnie Dorr and Dave Braunegg.

------------------------------

End of AIList Digest
********************

From:	CSVPI          21-DEC-1984 22:10  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a018588; 21 Dec 84 20:14 EST
Date: Fri 21 Dec 1984 14:27-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #181
To: AIList@SRI-AI
Received: from rand-relay by vpi; Fri, 21 Dec 84 22:04 EST


AIList Digest           Saturday, 22 Dec 1984     Volume 2 : Issue 181

Today's Topics:
  Math - Fermat's Last Theorem,
  AI Tools - XLISP Interpreter & PROLOG & Expert System Tools,
  Reports - SEAI Survey & Winograd on Semantics & Barwise on Logic,
  Opinion - Skeptical Viewpoints,
  Seminar - REVE: Solving Problems in Equational Theories  (CSLI),
  Course - Reasoning About Knowledge  (SU)
----------------------------------------------------------------------

Date: 19 December 1984 1724-EST
From: Oswald Wyler@CMU-CS-A
Subject: Fermat's Last Theorem

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The first two sentences of an AMS abstract, 816-11-188, by Chen Wenjen,
read: The missing proof of Fermat's Last Theorem has been rediscovered.
The proof is elementary, zigzag, and truly wonderful as claimed by
Fermat nearly three and a half centuries ago.
Anyone know more about this?

------------------------------

Date: 19 Dec 1984 2001 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: Xlisp interpreter

Some time back David Betz announced he'd placed into the public domain
a Lisp interpreter with object-oriented extensions.  Where is it stored
in FTPable form?  Thanks.
                                        Larry @ jpl-vlsi

------------------------------

Date: Thu, 20 Dec 84 00:06 MST
From: May%pco@CISL-SERVICE-MULTICS.ARPA
Subject: Re Issue 179, "micro-PROLOG info request"

Dr.  George Luger, at the University of New Mexico, is developing a
Prolog that runs on PC-compatibles.  It is currently in beta-test.  (no
phone # available)

Also, the University of York, Heslington, York, YO1 5DD, England, has a
C&M Prolog that is written in standard Pascal.  It requires three
file-system-specific procedures to be written for the host, which is
usually a minor job.  The original version compiled cleanly under
Turbo-Pascal but we haven't yet checked it out for correct execution.
The same source compiled and executed cleanly on a mainframe host.
Contact Mrs.  Jenny Turner, Secretary, Software Technology Centre,
telephone 0904 59861, or at the above address.  (A few months ago, they
were charging 200 Pounds.)

------------------------------

Date: Thu, 20 Dec 84 15:20:44 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Prolog on Micros.

There is an article in the December 1984 issue of Byte on
`micro-Prolog', which runs on CP/M and MS-DOS machines
(including the IBM PC).  It is distributed in the United
States by Programming Logic Systems, 31 Crescent Drive,
Milford, Connecticut 06460, 203 877 7988.
                                            -- Harry

------------------------------

Date: Thu, 20 Dec 84 07:07:46 pst
From: Paul A. Ehrler <ehrler%cod@Nosc>
Subject: Expert System Tools

    Are there any head-to-head comparisons of the so-called 'fifth
    generation' expert system building tools like KEE, ART, S1, SRL, and
    LOOPS?  I've heard that ART has been improved since the AAAI
    conference.  The demo I saw then was not very informative, since they
    didn't have an extra Symbolics to put in their hotel suite for serious
    shoppers; I was more favorably impressed by KEE at the time.  As for
    the others, first impressions are that S1 was out of date, SRL was
    underdeveloped and overpriced ($70K), and LOOPS was unsupported but
    had lots of potential.  Anything more concrete (performance, ease of
    use, robustness, support provided, etc.) would be welcome, especially
    direct comparisons.  If I missed any of importance (not of the EMYCIN
    generation, please), that would also be useful to know.

    Speaking of prices, are they serious about the exorbitant prices for
    secondary copies of the software?  I can understand, given the
    tradition of charging whatever the market will bear, that something
    extra must be charged for additional copies, but we have a LAN of
    five 1108's all on the same project, and I can't see charging more
    for the secondary copies than the machines cost -- that's a big
    reason we're using LOOPS now.  Maybe they're thinking like the micro
    houses, assuming that since most of their customers are going to
    cheat, they'll use the honest suckers to subsidize them.

------------------------------

Date: Thursday, 20 December 1984 01:26:44 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: SEAI Publications

Another report by SEAI  titled "Artificial Intelligence: A New Tool for
Industry and Business" discusses a number of products in the market.  The
utility of this book, which costs $485, is summarized by Price (see SIGART
Newsletter, Oct. 1984) as "it is expensive but it would cost more to
assemble the same information. It is not directed towards researchers but
managers who want to determine how AI can be effectively used in their
business". I wonder if there is a significant difference in content
between this one and the ones mentioned by Ken Laws!

Sriram

------------------------------

Date: Wed 19 Dec 84 18:32:28-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Reports - Winograd & Barwise

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                         CSLI REPORTS

``Moving the Semantic Fulcrum'' by Terry Winograd (Report No. CSLI--84-17)
has just been published. Report No. CSLI--84-2, ``The Situation in
Logic--I'' by Jon Barwise, which has been out of print, is now available.
To obtain a copy of these reports write to Dikran Karagueuzian, CSLI,
Ventura Hall, Stanford, CA 94305, or send net mail to Dikran at SU-CSLI.

------------------------------

Date: 18 Dec 84 13:03:55 CST (Tue)
From: ihnp4!utzoo!henry@Berkeley
Subject: Re: Personal Assistants -- a skeptical viewpoint

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

> Dear sir--oh, my very dear sir.  Is NOTHING going to cheer you
> up?  Can the micro revolution do nothing to help you?

Nope, I'd rather be grumpy and play Devil's Advocate.  Bah.  Humbug.
(Who is that odd fellow with the chains coming through my wall...?)

> For me, I keep remembering what a joy Electric Pencil was after
> typing millions of words on a Selectric; and while nothing that
> has come after Pencil has been the quantum step up that Pencil
> was in 1977, there has been steady improvement.  Computers make
> my life simpler.  (Well, actually more complex; but I get more
> done, and spend more of  my time doing that which I LIKE
> doing...)

I have similar memories of encountering computerized text editing for
the first time, back in 1972.  I've never written anything substantial
on a typewriter since, and have no wish to.  I do appreciate the vast
improvement computers have brought, and the continuing improvements in
the situation.

What I do dislike is sales hype, or the equivalent, which claims that
innovation X is going to bring about Nirvana here on Earth in just a
few years.  I.e., Real Soon Now.  (Yes, I read and enjoy your column
in Byte.)  In particular, the next time somebody tells me that applied
AI and/or the Fifth Generation is going to solve all my problems, I
think I'm gonna throw up.  The AI folks are notorious for exuberant
promises followed by failure and disillusionment.  I would have
thought they, of all people, would be a bit more cautious about
predicting the Millennium yet again.  Nope, same old snake oil...

What I should have made clearer, in my earlier note, was that I do
expect some very interesting by-products from the inevitable failures.
I have no quarrel with anyone who merely predicts significant advances
and the appearance of useful new tools.  This cloud is indeed likely
to have a silver lining, even though it's not going to be solid
platinum as its proponents claim.

                           Henry Spencer @ U of Toronto Zoology
                            {allegra,ihnp4,linus,decvax}!utzoo!henry

------------------------------

Date: 20 December 1984 00:46-EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Personal Assistants -- a skeptical viewpoint

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

Ah well, I suppose I must agree regarding the hype.
As to AI: there is a famous story.

John McCarthy some years ago is said to have bought a Heathkit
television for the Stanford AI lab.  When it arrived a student
eagerly fell upon it, but was restrained.
        "We will construct a robot to build the kit," McCarthy
is said to have said.
        Last I heard the box was unopened.

        The story is probably apocryphal, but I do recall
the Great Foreign Language Translation Revolution predicted in
the 60's...

------------------------------

Date: Wed 19 Dec 84 18:32:28-PST
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - REVE: Solving Problems in Equational Theories 
         (CSLI)

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                SUMMARY OF NOVEMBER 21 AREA C MEETING

Topic:     REVE: A system for solving problems in equational theories,
              based on term rewriting techniques
Speaker:   Jean-Pierre Jouannaud, Professor at University of NANCY, FRANCE,
              on leave at SRI-International and CSLI.

Equational logic has long been used by mathematicians and, more
recently, by computer scientists.  Specifications in OBJ2, an
``object-oriented'' language designed and implemented at
SRI International, use equations to express relations between
objects.  To express computations in this logic, equations are used
in one direction, i.e., as rewrite rules.  Making proofs with rules
in this logic requires the so-called ``confluence'' property, which
expresses that the result of a computation is unique no matter in
which order the rules are applied.  Proofs and computations are
therefore integrated in a very simple framework.  When a set of rules
does not have the confluence property, it is augmented with new
rules, using the Knuth-Bendix completion algorithm, until the
property is satisfied.  This algorithm requires the set of rules to
have the termination property, i.e., that no expression can be
rewritten forever.  It has been proved that this algorithm allows one
to perform inductive proofs without explicitly invoking an induction
principle, and to solve equations (unification) in the corresponding
equational theory as well.
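[The ``equations used in one direction'' idea can be illustrated with a
toy rewriter.  This is NOT REVE's implementation -- just a minimal
Python sketch, with hypothetical names, that rewrites a term to its
normal form under a terminating, confluent rule set for Peano
addition. -- Ed.]

```python
# Terms are nested tuples like ('plus', ('s', '0'), '0'); variables in
# rule patterns are strings beginning with '?'.

def match(pattern, term, env):
    """Bind pattern variables against term; return the extended
    environment, or None if the pattern does not match."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in env:
            return env if env[pattern] == term else None
        env = dict(env)
        env[pattern] = term
        return env
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term) and pattern[0] == term[0]):
        for p, t in zip(pattern[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

def subst(template, env):
    """Instantiate a rule's right-hand side with the bindings in env."""
    if isinstance(template, str) and template.startswith('?'):
        return env[template]
    if isinstance(template, tuple):
        return (template[0],) + tuple(subst(t, env) for t in template[1:])
    return template

def rewrite_once(rules, term):
    """Apply the first rule that matches anywhere, outermost first."""
    for lhs, rhs in rules:
        env = match(lhs, term, {})
        if env is not None:
            return subst(rhs, env), True
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], start=1):
            new, changed = rewrite_once(rules, sub)
            if changed:
                return term[:i] + (new,) + term[i + 1:], True
    return term, False

def normal_form(rules, term):
    """Rewrite until no rule applies; the rules must be terminating."""
    changed = True
    while changed:
        term, changed = rewrite_once(rules, term)
    return term

# Rules for Peano addition:  0 + y -> y,   s(x) + y -> s(x + y).
RULES = [
    (('plus', '0', '?y'), '?y'),
    (('plus', ('s', '?x'), '?y'), ('s', ('plus', '?x', '?y'))),
]

two, one = ('s', ('s', '0')), ('s', '0')
print(normal_form(RULES, ('plus', two, one)))  # ('s', ('s', ('s', '0')))
```

[Because this rule set is confluent, the same normal form is reached
whatever order the rules are tried in. -- Ed.]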

------------------------------

Date: Fri, 14 Dec 84 16:15:12 PST
From: Joe Halpern <halpern%ibm-sj.csnet@csnet-relay.ARPA>
Subject: Course on reasoning about knowledge

I'll be teaching a course on reasoning about knowledge at Stanford
in the winter quarter, along much the same lines as [my IBM-SJ] seminar.
Here are the details:

Reasoning About Knowledge (CS400B)
Knowledge seems to play a crucial role in such diverse areas as
distributed systems, cryptography, and artificial intelligence.
We will examine various attempts at formalizing reasoning about
knowledge, and see to what extent they are applicable to the areas
mentioned above.  In particular we will consider such problems as
resource-bounded reasoning, inconsistency of beliefs, belief revision,
and knowledge representation.  There is no text for the course; we
will be concentrating on current research.

Officially the course meets on Tuesdays in the winter quarter,
from 2:45-5:00.  I would be willing to consider moving that time
to another afternoon (although I suspect it might be hard to
reach agreement).  It might be nice to push the meeting time forward
to 1:30-3:45, so those interested can attend the CS Colloquium.
I've enclosed a brief (tentative!) outline for the course.  As of now,
the emphasis is on material I'm most familiar with (i.e., papers
I've written), but I would be interested in hearing suggestions
from participants in the course on other material to cover.
Auditors are welcome.

Weeks 1 and 2: Philosophical background and thorough introduction to
               possible-worlds semantics for knowledge.
  References:  W. Lenzen, Recent work in epistemic logic, Acta
               Philosophica Fennica, 1978.
               J.Y. Halpern and Y.O. Moses, A guide to the modal logics
               of knowledge and belief, to appear as an IBM RJ, 1985.
Week 3:        The "knowledge structures" approach
  References:  R. Fagin, J.Y. Halpern, and M.Y. Vardi, A
               model-theoretic analysis of knowledge, in "Proceedings
               of the 25th Annual Symposium on Foundations of
               Computer Science", 1984, pp. 268-278.
Week 4:        Knowledge in distributed systems
  References:  J.Y. Halpern and Y.O. Moses, Knowledge and common
               knowledge in a distributed environment, in "Proceedings
               of the 3rd ACM Conference on Principles of Distributed
               Computing", 1984; IBM RJ 4421, 1984.
               R. Strong and D. Dolev, Byzantine agreement, IBM RJ 3714,
               1982.
Weeks 5 and 6: Resource-bounded and incomplete knowledge, relevance
               logic, the "syntactic approach"
  References:  H.J. Levesque, A logic of implicit and explicit belief,
               Proceedings of the National Conference on Artificial
               Intelligence, 1984, pp. 198-202.
               K. Konolige, A deduction model of belief, Ph.D. Thesis,
               Stanford University, 1984.
               R. Fagin and J.Y. Halpern, Knowledge and awareness,
               unpublished manuscript, 1985.
               S. Shapiro and M. Wand, The relevance of relevance,
               Indiana University Technical Report No. 46, 1976.
Weeks 7 and 8: Belief revision and non-monotonic reasoning
  References:  D. McDermott and J. Doyle, Non-monotonic logic I,
               Artificial Intelligence, Vol. 13, Nos. 1-2, 1980,
               pp. 41-72.
               R. Reiter, A logic for default reasoning,
               Artificial Intelligence, Vol. 13, Nos. 1-2, 1980,
               pp. 81-132.
               J. McCarthy, Circumscription - a form of non-monotonic
               reasoning, Artificial Intelligence, Vol. 13, Nos. 1-2,
               1980, pp. 27-39.
               W.R. Stark, A logic of knowledge, Zeitschrift fur
               Mathematische Logik und Grundlagen der Mathematik 27,
               pp. 371-374, 1981.
               D. McDermott, Non-monotonic logic II: non-monotonic modal
               theories, Journal of the ACM, Vol. 29, No. 1, 1982,
               pp. 35-57
               R.C. Moore, Semantical considerations on non-monotonic
               logic, SRI Technical Note 284, 1983.
               H.J. Levesque, A formal treatment of incomplete knowledge
               bases, Fairchild Technical Report No. 614, FLAIR Technical
               Report No. 3, 1982.
               K. Konolige, Circumscriptive ignorance, Proceedings of
               the National Conference on Artificial Intelligence, 1982,
               pp. 202-204.
               J.Y. Halpern and Y.O. Moses, Towards a theory of knowledge
               and ignorance, Proceedings of Workshop on Non-monotonic
               Reasoning, 1984; IBM RJ 4448, 1984.
               R. Parikh, Monotonic and non-monotonic logics of
               knowledge, unpublished manuscript, 1984.
Week 9:        Knowledge bases
  References:  H.J. Levesque, A formal treatment of incomplete knowledge
               bases, Fairchild Technical Report No. 614, FLAIR Technical
               Report No. 3, 1982.
               K. Konolige, A deduction model of belief, Ph.D. Thesis,
               Stanford University, 1984.
Week 10:       Knowledge and cryptography; puzzles
  References:  M.J. Merritt, Cryptographic protocols, Ph.D. Thesis,
               Georgia Institute of Technology, 1983.
               S. Goldwasser, S. Micali and C. Rackoff, Knowledge
               complexity, unpublished manuscript, 1984.
               X. Ma and W. Guo, W-JS: a modal logic about knowing,
               Proceedings of the 8th International Joint Conference
               on Artificial Intelligence, 1983.
               D. Dolev, J.Y. Halpern and Y.O. Moses, Cheating husbands
               and other stories, unpublished manuscript, 1984.

------------------------------

End of AIList Digest
********************

From:	CSVPI          26-DEC-1984 21:32  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a002015; 26 Dec 84 3:30 EST
Date: Tue 25 Dec 1984 23:39-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #182
To: AIList@SRI-AI
Received: from rand-relay by vpi; Wed, 26 Dec 84 21:24 EST


AIList Digest           Wednesday, 26 Dec 1984    Volume 2 : Issue 182

Today's Topics:
  AI Tools - Prolog for PCs,
  Linguistics - Oxymorons,
  Humor - Malgorithm Contest,
  Bindings - Navy Center for Applied Research in AI,
  News - Recent Articles,
  Opinion - Personal Assistants,
  Workstations - Very Inexpensive LISP Machine,
  Courses - Intelligent Tutoring Systems  (SU) &
    Computational Semantics  (SU)
----------------------------------------------------------------------

Date: Sat, 22 Dec 84 21:12 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Prolog for PC-type machines


Expert Systems Limited has a Prolog for PC-type machines that seems
pretty good.  It is Clocksin & Mellish compatible.  We've run it with
no problems on both an IBM PC and a DEC Rainbow, so it will probably
run on any MS-DOS machine.  There is also a CP/M version.  This is the
Prolog that Teknowledge used to implement M.1 in.  The home address for
the company is:

        Expert Systems Limited
        9 West Way
        Oxford OX2 0JB
        England

There is a U.S. affiliate, located in the Philadelphia area, that
has the US rights.  I don't have the address at the moment.

------------------------------

Date: Sat, 22 Dec 84 21:29 EST
From: Tim Finin <Tim%upenn.csnet@csnet-relay.arpa>
Subject: Oxymorons, Pleonasms and various forms of Bull


Saul Gorn has published a compendium of material related to the recent
note on "Oxymorons, Pleonasms and various forms of Bull" that he has
collected in his 50-year career as a mathematician and computer
scientist.  It is available as "Self-Annihilating Sentences; Saul
Gorn's Compendium of Rarely Used Cliches"; Technical Report
MS-CIS-83-22.  It can be obtained by writing:

        Publications
        Computer and Information Science
        The Moore School
        University of Pennsylvania
        Philadelphia, PA 19104

Tim

------------------------------

Date: Fri, 21 Dec 84 14:53:45 mst
From: jlg@LANL (Jim Giles)
Subject: Contest

It's the first annual Complete the Book Title Contest (no prizes will
be awarded; none were donated).

'Malgorithms + Data Scrambling = ___________________'

First prize (which is worth twice as much as the other prizes) will be
awarded to the person who guesses the author of the above work.

Send answers to jlg@lanl.ARPA and I will summarize.

------------------------------

Date: Wed, 19 Dec 84 10:25:36 est
From: Dennis Perzanowski <dennisp@nrl-aic>
Subject: erratum

Please be advised of the following correction in the address for the
Navy Center for Applied Research in Artificial Intelligence which was
recently broadcast:

     U.S. Navy Center for Applied Research in Artificial Intelligence
     Naval Research Laboratory - Code 7510
     Washington, DC  20375-5000

The address of the Civilian Personnel Office to which all resumes and
inquiries should be sent is correct as printed in the announcement.
Sorry for any inconvenience.  Thank you.

------------------------------

Date: Sun, 23 Dec 84 12:46:37 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: AI News


The Institute, Volume 9 Number 1 January 1985 Page 10
Experts Envision New Applications for AI Technology on the Shop Floor
Describes work for automatically constructing part programs for milling.
Also discusses applications of AI to such industries as paperboard
packaging.  Proceedings of an Expert System Session of Autofact 6,
which include papers on these subjects, are available from SME, One
SME Drive, P.O. Box 930, Dearborn, Mich. 48121.


Electronics Week, October 15 1984 page 14
Discusses various Fifth Generation projects in America and Japan


IEEE Computer November 1984 Volume 17 No. 11

Page 117 Three-paragraph review of the National Conference on Artificial
Intelligence in Austin by Elaine Rich

Page 114 summarizes talk by Robert Miller, senior vice president at Data
General, on "personal expert systems"

Page 65 The Library of Computer and Information Science is again offering
the three volume Handbook of Artificial Intelligence for only $4.95 as
a sign-up bonus.


Electronics Week October 29, 1984, page 34
Discusses Quintus Computer Systems Prolog systems and development environments
for Prolog.


Electronics Week September 24, 1984 page 59
Interview with Larry Harris, president of Artificial Intelligence
Corp., the people behind the Intellect natural language database
interface.


Communications of the ACM December 1984 Volume 27 Number 12 page 1227
Discusses a solution to the travelling salesman problem with thousands
of nodes.  The solution was used for determining paths in drilling holes in
PC boards.  Uses a cluster-based approach.
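[The article's actual algorithm is not reproduced in this summary.  As
a generic illustration of the cluster-then-route idea, the sketch
below groups hole coordinates into grid cells, routes each cell with a
greedy nearest-neighbor pass, and concatenates the legs; the grid
clustering and the snake cell ordering are illustrative choices, not
the CACM authors' method. -- Ed.]

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def grid_clusters(points, cell=25.0):
    """Bucket points into square grid cells -- a crude clustering."""
    buckets = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        buckets.setdefault(key, []).append(p)
    # Visit cells in boustrophedon (snake) order so consecutive
    # clusters are spatially adjacent.
    def snake(key):
        x, y = key
        return (x, y if x % 2 == 0 else -y)
    return [buckets[k] for k in sorted(buckets, key=snake)]

def nearest_neighbor(points, start):
    """Greedy tour through one cluster, beginning near `start`."""
    tour, rest, cur = [], points[:], start
    while rest:
        nxt = min(rest, key=lambda p: dist(cur, p))
        rest.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour

def drill_path(points):
    """Concatenate per-cluster tours into one drilling path."""
    tour, cur = [], (0.0, 0.0)
    for cluster in grid_clusters(points):
        leg = nearest_neighbor(cluster, cur)
        tour.extend(leg)
        cur = leg[-1]
    return tour

random.seed(1)
holes = [(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(200)]
path = drill_path(holes)
assert sorted(path) == sorted(holes)  # every hole drilled exactly once
```

[Restricting the greedy search to one cluster at a time is what keeps
the cost manageable for thousands of nodes. -- Ed.]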

------------------------------

Date: Fri 21 Dec 84 20:40:13-EST
From: Wayne McGuire <MDC.WAYNE@MIT-OZ>
Subject: Personal Assistants

     I agree with Henry Spencer that many claims from the AI community
are overblown, and that we need to maintain a healthy stance of
skepticism about the Next Big Revolutionary Breakthroughs that are
forecast every week.  However:

     (1) I don't think the present generation of outliners, natural
language interfaces, and free-form databases, which are some of the
basic building blocks of idea processors, are, as you insist, a "fad."
Products like Thinktank and Intellect are not vaporware: they have
firmly established themselves in the marketplace, and are not going to
disappear.  They are a permanent and welcome fixture in the world of
microcomputer and (in the case of Intellect) mainframe software.

     (2) Mitch Kapor's remarks about AI are not, as you put it, a lot
of "marketing hype." As I understand it, a company has been spun off
from Lotus which is doing serious research in natural language
processing.  That company will probably develop a product somewhat
like Intellect or Clout which will become an essential element in
future integrated software from Lotus.

     (3) A pencil and paper is fine, but I much prefer a Model 100 as
a portable device for recording and shaping notes and ideas.  A Model
100 with significantly greater memory, built-in idea processing
software, and a connector to an optical disk storage device would, I
suspect, wean many people away from paper and pencils for good.

     (4) Building a powerful idea processor is very much a function of
available memory.  Framework, for instance, would be a much more
effective product if the quality of its word processor and database
management system could be raised to the level of XyWrite II Plus and
MDBS III.  To acquire that kind of power would require an extra
megabyte or two of memory.

     (5) The privacy issue in regard to optical disks is a red
herring.  The federal government already has easy access to much of
the sensitive information which would be stored on a personal disk.  A
biodisk might give individuals an opportunity to know as much about
themselves as the government does.

-- Wayne McGuire <wayne%mit-oz@mit-mc>

------------------------------

Date: 24 Dec 1984 00:07-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Very Inexpensive LISP Machine

I have recently been toying with the idea of very inexpensive lisp
machines (VILMs).  The ideal VILM would support a hi-res display, a
keyboard, mouse, RS-232/422 interface, and floppies (5 1/4 or 3 1/2
inch); include an interpreter, compiler, and other handy functions
(fasl, debugger, trace, maybe an object language); provide a window
package (multiple fonts, editor, etc.); be portable (so I can drag it
back and forth to work easily); and support, as options: virtual
memory with a hard disk (10M, 20M, or whatever is cheap), Ethernet,
and different sizes of physical memory (512K, 1M, 2M).

As I see it, the technology exists right now to build such a beast (by
"right now" I mean "order it from BYTE magazine").  The hi-res display,
keyboard, mouse, RS-232/422, and floppies are supplied by an Apple
Macintosh (approximately $1700-2800).  The remaining non-optional stuff
would be supplied (initially) by a box similar in size to the Mac
containing an
8 slot Multi-Bus card rack, power supply, fan,
M68010 processor card, ROM card (interpreter, compiler, other handy
stuff), RAM card or cards (512K or more), interface logic to talk to the
Mac (total < $5,000).

The LISP would be Portable Standard Lisp (PSL), which is cheap,
available, and could be loaded into ROMs.  The Mac would handle the
display and filing functions.  It would be portable, since the Mac
will zip into a bag and so could the additional box.

Total cost would be around $8,500 to build from scratch (the
Imagine IMPRINT laser printers use this concept, so I know it's
workable).

Some tense hacking plus a disk controller card and 10-30M Winchester
could make a single process, virtual memory system possible
for an additional $5,000 (total price ~ $13,500).

Enhancements could include a bit-slice processor board with a real
instruction set, tape cartridge backup, more disk, and a real
operating system with files, multiple processes, and ether/arc/apple
net.

My goal is a VILM which is affordable, flexible, and
able to support truly tense lisp hacking in a useful way.  Is there
any such thing out there?  I would like to correspond with anyone
having interest in VILMs (ideas, designs, hardware and software
implementations).

                                                        -Todd K.

------------------------------

Date: Fri 14 Dec 84 23:29:42-PST
From: Derek Sleeman <SLEEMAN@SUMEX-AIM.ARPA>
Subject: Intelligent Tutoring Systems course - Winter Quarter

    [Forwarded from the Stanford bboard by Laws@SRI-AI.]

This course was given for the first time last session; this year the
course will have more of a workshop flavour.


Topic:  Some issues in Intelligent Tutoring Systems (ITSs) CS 324X & Ed. 495X

Instructor:  D Sleeman

Time/Location: Winter Quarter: Wednesday, 4-6 p.m., Room 334 Cubberley
Audience: Graduate Students in Computer Science, Education & Psychology.
Prerequisites: Consent of Instructor required
Number of units: 2-3

The seminar will highlight research problems encountered in
implementing automated teaching systems, principally from an AI
perspective and secondarily from Cognitive Science and instructional
perspectives.  In particular we will review the "traditional" CAI
systems and the more recent activities in ITSs within these
frameworks, and point out the currently perceived shortcomings,
which include:

        -  inappropriate feedback due to inadequate student models
        -  inadequate conceptualization of the domain
        -  unprincipled tutoring strategies
        -  user interaction with the system is too restricted

The systems which have concentrated on the issue of inferring a
student model, namely BUGGY and PIXIE (formerly LMS), will be studied
in some depth.  Inferring a model of a student's problem solving, even
in a restricted domain, is very complex: given N rules, there are
potentially N! models to be considered.  We shall discuss how these
modelling systems have addressed and "solved" the combinatorial
explosion problem.  We will then consider how some of these techniques
could be applied to the more general problem of modelling a user of a
computer system or package.  The class will have access to several
mini-versions of ITSs which have very recently been transferred to the
IBM PC -- these include a version of BUGGY, PROUST, and the
instructor's PIXIE system.  Indeed, the principal task for the class
will be to implement a database for the PIXIE system.
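[The N!-models remark can be made concrete.  If a candidate student
model is taken to be an ordering of the diagnostic rule set -- one
simplistic reading, not PIXIE's actual representation, and with
invented rule names -- the candidate space grows factorially. -- Ed.]

```python
from itertools import permutations
from math import factorial

# Three hypothetical diagnostic rules (illustrative names only).
rules = ['borrow', 'carry', 'zero-pattern']

# One simplistic reading of "model": an ordering of the rule set.
models = list(permutations(rules))
print(len(models))            # 6 == 3!
assert len(models) == factorial(len(rules))

# The space explodes quickly -- hence the need for pruning heuristics:
print(factorial(10))          # 3628800 orderings for only 10 rules
```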


The course will conclude with a discussion of open research issues in the area.

Literature:  The principal source will be Intelligent Tutoring
Systems, Academic Press, 1982 (eds. Sleeman and Brown), plus
additional BUGGY and LMS papers and selected papers from Mental
Models, Erlbaum, 1983 (eds. Gentner & Stevens).


Queries may be addressed to SLEEMAN@SUMEX, or 497-3257.

D. Sleeman, 10 December 1984

------------------------------

Date: 18 Dec 84  1105 PST
From: Terry Winograd <TW@SU-AI.ARPA>
Subject: Course on Computational Semantics - Ling/CS 276

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Computer Science 276 / Linguistics 276
Computational Models for the Semantics of Natural Language
Winter 1985
Terry Winograd

MWF 10-11, Terman 156 (televised)

In this course we will develop the theoretical basis for the implementation
of computer systems dealing with the meaning of natural language.  We
will cover a variety of semantic and pragmatic areas, developing three
aspects of each:

1) The formal theories relevant to the area, drawn from work in linguistics
   and the philosophy of language

2) Computational issues that arise, and the computational mechanisms that
   have been developed to augment or supplant the standard formal framework

3) Limitations of the formalization and problems in extending it to cover
   the full range of related phenomena.

Areas covered will include lexical meaning, compositionality, quantification
and reference, temporality, speech acts, and schematic structures.

I will describe a number of existing AI systems in light of these
theoretical foundations, but will not attempt to provide a comprehensive
coverage of the currently available systems or to deal in depth with
details of implementation.  The course is intended to serve as a basis for
understanding what is being done and what can be done, not as a practical
"how-to-do-it" course.

There will be three lectures a week, and some homework assignments.  There
will be a mid-term and a final exam.  No computer programming exercises or
project will be required.

There is no regular textbook.  Course notes will be duplicated and made
available, based partly on a textbook I am writing.

The course will assume a background (either prior, or through additional
study during the course) in two areas: formal logic and basic techniques
of artificial intelligence.  Two books are recommended:

  Logic in Linguistics, by Allwood, Andersson and Dahl, is recommended to
  anyone not already well versed in the logical formalisms used in
  semantics, including basic set theory, propositional and predicate logic,
  deduction rules, and rudiments of modal and intensional logic.

  Principles of Artificial Intelligence, by Nils Nilsson, is recommended
  as an introduction to basic AI techniques for planning, deduction, and
  representation.

We will not cover most of this material in class, but will provide
tutorial opportunities for those students who need to fill in the
background as we go.  There are no other prerequisites in either
computation or linguistics, except for a general familiarity with concepts
of programming (as gained from any programming course or experience).

------------------------------

End of AIList Digest
********************

From:	CSVPI           1-JAN-1985 02:59  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a007398; 31 Dec 84 15:48 EST
Date: Mon 31 Dec 1984 11:42-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #183
To: AIList@SRI-AI
Received: from rand-relay by vpi; Tue, 1 Jan 85 02:57 EST


AIList Digest            Monday, 31 Dec 1984      Volume 2 : Issue 183

Today's Topics:
  Projects - Cognitive Science Dictionary,
  AI Tools - Cheap Lisp Machines & Xerox,
  News - Recent Articles & Thinking Machines Corporation & Space Shuttle,
  Courses - Massively Parallel Models of Intelligence  (CMU) &
    Reasoning about the Physical World  (UIUC)
----------------------------------------------------------------------

Date: Sun, 30 Dec 84 21:53:17 est
From: 20568%vax1@cc.delaware (FRAWLEY)
Subject: Cognitive Science Dictionary


I recently spoke with a publisher about the possibility of compiling
a Dictionary of Cognitive Science. I'm sending out this preliminary
inquiry to you all to see what you think of the idea. I'd appreciate
responses to any or all of the following:

1. Is the idea of such a dictionary good, bad, ridiculous...?

2. Is such a dictionary a feasible project?

3. If the project is feasible, what areas of Cognitive Science
ought to be covered?

4. What do you think of the marketability of such a dictionary?

5. If the project is feasible, what form should the dictionary take
(i.e., standard dictionary form, encyclopedic form, etc.)?

You can send your responses via the AIList or to me directly.

Thanks,

Bill Frawley
Linguistics
U. of Delaware

20568.ccvax1@udel

------------------------------

Date: Thu, 27 Dec 84 17:07:33 pst
From: hplabs!sdcrdcf!darrelj@Berkeley (Darrel VanBuer)
Subject: A Very Cheap Lisp Machine

To be slightly partisan toward the machines I
use, Xerox Dandelions can be had for under $19,000 in some configurations.
For not much over the high end of the proposal in V2 #182, you GET the
high-end machine (plus Ethernet and a display with 6 times the pixels
of the Macintosh).  About a third of the cost of a Dandelion is for
the Interlisp software (inferred from the unbundled Star price list).
This is a reasonable cost given the complexity of a full-blown display-oriented
Lisp environment and the (relatively) small market for Lisp machines.

Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                            !sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: 26 Dec 1984 1757 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: Xerox AI

Paul Ehrler's message reminds me: the latest Computerworld has a full-page
ad with the banner XEROX ANNOUNCES A 15-YEAR HEADSTART IN ARTIFICIAL
INTELLIGENCE.  It seems they're now selling and supporting what they call
the Xerox AI System.  It includes a combination of 1108 or 1132 workstations,
Interlisp D and LOOPS, and training as well as support.  Added info can be
gotten from
                        attn: AI Marketing, MS 1245
                        Xerox Special Information Systems
                        Artificial Intelligence Business Unit
                        250 N. Halstead St., PO Box 7018
                        Pasadena, CA 91109

------------------------------

Date: Sat, 29 Dec 84 06:24:05 cst
From: Laurence Leff <leff%smu.csnet@csnet-relay.arpa>
Subject: Recent AI Articles


New Scientist November 8, 1984 Volume 104 No. 1429 p. 10
Japan unveils its fifth generation


New Scientist November 15, 1984 Volume 104 No. 1430
AI is Stark Naked from the Ankles Up.  [An entertaining article
claiming that the emperor's new clothes (AI) consist only of
sneakers (20-year-old expert systems technology).  -- KIL]

Distributed Computing
APIC Studies in Data Processing, Volume 20
Edited by F. B. Chambers, D. A. Duce, and G. P. Jones
Academic Press $22.50
The following titles in this compendium might be of interest:
  Using Algebra for Concurrency
  Reasoning about Concurrent Systems
  Functional Programming
  Logic Programming and Prolog

------------------------------

Date: Mon 31 Dec 84 11:40:57-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Thinking Machines Corporation

From the January, 1985, issue of Omni, p. 33, by Edward Rosenfeld:

[...]
The latest fusion of academe and venture capital is Thinking
Machines Corporation (TMC), a Cambridge, Mass., company that
boasts Marvin Minsky, cofounder of MIT's AI Laboratory and one
of the pioneers of AI, as one of its founders.

A group of investors headed by CBS founder William Paley has
reportedly put up a $10 million stake to get TMC off the ground.
AI insiders refer to the company as the Marv and Marv Show because,
in addition to Marvin Minsky, TMC has also acquired the services
of Marvin Denicoff, who formerly guided the AI programs at the
Office of Naval Research.

The company's first product, currently in prototype development,
will be the connection machine, a parallel-processing supercomputer
designed by W. Daniel Hillis, of MIT.  [...]

                                        -- Ken Laws

------------------------------

Date: Fri, 28 Dec 84 14:04:56 est
From: nikhil@mit-fla (Rishiyur S. Nikhil)
Subject: AI and the Shuttle


Here are some items of interest from Aviation Week and Space Technology:

++++ AWST Sep 17, 1984, page 79

Johnson Space Center (Houston) officials expect to use AI techniques in
future Shuttle missions, beginning late 1984 or in 1985.

The first use will be Navex, a "navigational expert system". Currently, the
navigation console position is manned in 4 shifts, with 3 controllers per
shift. Each person needs 2 years of training to make high-speed decisions
about shuttle velocity and trajectory.
JSC officials expect to man it with one controller per shift in conjunction
with Navex.

Navex is built on ART (Automated Reasoning Tool), which is written in Lisp.
ART is a product of Inference Corp. of Los Angeles. Navex was developed by
Inference Corp. and LinCom Corp. of Houston.

++++ AWST Dec 10, 1984, page 24

NASA will test Navex along with its human counterparts in Jan 1985. A Symbolics
computer will run in a lab near Mission Control at Johnson Space Center,
Houston, and will be wired to the navigator console position. They expect
it to make decisions about Shuttle velocity and trajectory six times faster
than humans.

By March, an AI program will perform Shuttle electrical system checks during
pre-launch ground preparations. The actual program is finished, but
documentation to explain it will take 3 months. (!!)

By late summer 1985, Johnson Space Center will complete an expert system
that captures the expertise of a person whose job would be to talk the
shuttle down during re-entry if it were to emerge from a radio blackout
with malfunctioning navigation instruments. It will take 2 months to build,
and will run in Mission Control as an advisor to flight controllers.

------------------------------

Date: 22 Dec 1984 1152-EST
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Course - Massively Parallel Models of Intelligence

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

                      Advanced Course on:

           MASSIVELY PARALLEL MODELS OF NATURAL INTELLIGENCE

                  Geoffrey Hinton & Scott Fahlman

This is a 7 week advanced course. It meets from 11.30 - 12.50 on Wednesdays
and Fridays in 5409, starting on Wednesday Jan 16.  A reading list and a brief
description of each lecture will be available from Geoff Hinton on Jan 15th.

The course covers models of SEARCH, REPRESENTATION, and LEARNING in
networks of simple processing elements that are richly interconnected.  The
emphasis will be on the computational properties of these networks, but we will
also cover the psychological and neurophysiological evidence for and against
various models.

SEARCH
The main search technique used in these networks is iterative relaxation.
Five different models of relaxation will be presented and their performance
will be compared on a variety of tasks including stereo-fusion,
surface-interpolation, shape-recognition, and figure-ground segmentation.
Other search methods will also be covered.
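[To make "iterative relaxation" concrete for readers outside the field, here
is a minimal sketch -- my illustration, not course material -- of a
Hopfield-style network settling into a stored pattern by repeated local
updates that reduce a global energy.  -- KIL]

```python
import random

# Minimal sketch of iterative relaxation in a network of simple,
# richly interconnected units: Hebbian weights store a +/-1 pattern,
# and asynchronous updates settle a corrupted input back onto it.

def train(patterns):
    """Hebbian weights for +/-1 patterns (zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def relax(w, state, sweeps=10):
    """Update units one at a time until no unit wants to flip."""
    n = len(state)
    state = list(state)
    for _ in range(sweeps):
        changed = False
        for i in random.sample(range(n), n):   # random update order
            net = sum(w[i][j] * state[j] for j in range(n))
            s = 1 if net >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = train([stored])
noisy = [1, -1, 1, 1, -1, -1, -1, -1]   # one unit corrupted
print(relax(w, noisy))                  # settles back to the stored pattern
```

The same relaxation scheme, with task-specific weights, is what gets applied
to stereo-fusion and the other vision tasks above.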

REPRESENTATION
To make efficient use of the representational capacity of massively parallel
networks, it is often necessary to use novel kinds of representation in which
individual processing elements do not have a simple relationship to the
concepts being represented.  We will cover methods of representing continuous
variables, high-dimensional feature spaces, spatial transformations, simple
associations, schemas, trees, production systems, and Clyde.  We will discuss
the interaction between representational efficiency and ease of search for each
kind of representation.

LEARNING
We will cover the history of attempts to make networks that learn by modifying
connection strengths, and show why these attempts generally failed or worked
only for very circumscribed domains.  The difficult problem in learning is to
construct *new* representations.  We will compare three different models that
create representations by modifying connection strengths.  We will also compare
these connectionist models with more conventional AI learning methods.

------------------------------

Date: Thu, 27 Dec 84 20:57:01 cst
From: Kenneth Forbus <forbus%uiucdcsp@uiuc.ARPA>
Subject: Course - Reasoning about the Physical World  (UIUC)

Course Announcement - U. of Illinois at Urbana

CS 497, Spring 1985
Title: Reasoning about the Physical World
Instructor: Ken Forbus

This graduate seminar will examine principles and methods developed in
Artificial Intelligence for reasoning about problems involving space, time,
processes, and action.  Topics include:  solving word problems; qualitative
physics; planning actions, experiments, assemblies, and routes; analysis,
design, troubleshooting, and control of engineered systems.  A solid AI
background will be assumed.

Outline:

1. Solving Textbook Physics Problems

        Survey of programs: Charniak's CARPS, Novak, Larkin,
                Bundy, de Kleer.

        Transformation from natural language to equations

        Symbolic algebra

2. Qualitative Physics

        Qualitative State representation: ontology,
                making predictions, correlating qualitative
                results with quantitative results, using
                qualitative reasoning to guide search for
                quantitative solutions.

        Qualitative Process theory: processes as mechanisms of
                change, influences as representation of equations,
                basic deductions sanctioned by QP theory, prediction,
                measurement interpretation.

        Qualitative System Dynamics: breakdown of processes when
                system connectivity becomes high, device-centered
                model for physics.  Confluences as representation of
                equations, constraint-satisfaction and propagation
                techniques for solving confluences.

3. Planning

        "Classical" AI planning: GPS, STRIPS, NOAH, MOLGEN.  Limitations
                due to inadequate models of time, space, and action.

        Modelling time: Histories and Chronicles.  Allen's interval-based
                formulation.  Vere's DEVISER. Theories of action.

        Modelling space: symbolic, metric, and analog representations
                of space.  The "visual routines" model of human spatial
                competence.

        Robot planning (routes): Configuration space approach and related
                computational problems.  Quantizing free space into
                "freeways".

        Robot planning (assembly):  Symbolic analysis of errors.  Automatic
                insertion of inspection steps into assembly plans.

4. Engineering Problem Solving

        Analysis: Propagation of constraints, EL.  Qualitative
                analysis for functional recognition.

        Design: SYN, the role of causality in circuit design,
                circuit grammars.

        Troubleshooting: Digital electronics: Davis' group and the DART
                 project.  Continuous systems: SOPHIE.

        Control: Temporal logic for synthesizing control strategies.
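[The "confluences as representation of equations" item in part 2 rests on a
sign algebra over {+, 0, -, ?}.  A minimal sketch -- mine, not course
material -- of qualitative addition and multiplication:  -- KIL]

```python
# Qualitative values: a quantity's sign is all we track.  Adding
# opposite signs is ambiguous, written '?'.

PLUS, ZERO, MINUS, UNKNOWN = '+', '0', '-', '?'

def q_add(a, b):
    """Qualitative sum of two signed quantities."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if UNKNOWN in (a, b):
        return UNKNOWN
    return a if a == b else UNKNOWN   # opposite signs: ambiguous

def q_mul(a, b):
    """Qualitative product; zero dominates, like signs give '+'."""
    if ZERO in (a, b):
        return ZERO
    if UNKNOWN in (a, b):
        return UNKNOWN
    return PLUS if a == b else MINUS

print(q_add(PLUS, PLUS))    # '+'
print(q_add(PLUS, MINUS))   # '?'  -- the source of qualitative ambiguity
print(q_mul(MINUS, MINUS))  # '+'
```

Solving a confluence then amounts to constraint satisfaction over these
values, with '?' marking the branch points where behavior must be enumerated.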

------------------------------

End of AIList Digest
********************

From:	CSVPI           5-JAN-1985 11:18  
To:	ROACH,FOX
Subj:	From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

Received: from sri-ai.arpa by csnet-relay.arpa id a027416; 5 Jan 85 2:16 EST
Date: Fri  4 Jan 1985 20:53-PST
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #184
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sat, 5 Jan 85 11:15 EST


AIList Digest            Saturday, 5 Jan 1985     Volume 2 : Issue 184

Today's Topics:
  Symbolic Algebra - Package Request,
  Expert Systems - Smalltalk Application,
  AI Tools - Inexpensive Lisp Machines,
  Mathematics - Fermat's Last Theorem,
  Cognitive Science - Dictionary Project,
  Anecdote - SAIL TV Story,
  Opinion - 5th Generation Research,
  News - Reading Machines,
  Conferences - Upcoming Submission Deadlines,
  Seminars - Representation and Presentation  (CSLI) &
    Rewrite Rules for Functional Programming  (IBM-SJ)
----------------------------------------------------------------------

Date: Wed, 2 Jan 85 08:34 EST
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Symbolic Algebra Package Request

I would like to obtain a symbolic algebra package which would run on
a VAX/Franz Lisp configuration.  Preferably, I would like one in the public
domain.

D. E. Stevenson,
Department of Computer Science
Clemson University
Clemson, SC 29631
(803) 656-3444

------------------------------

Date: Wed,  2 Jan 85 11:43:21 PST
From: Jan Steinman <jans@mako>
Reply-to: Jan Steinman <jans%mako.uucp@csnet-relay.arpa>
Subject: Smalltalk Expert Systems


    Mike.Rychener@CMU-RI-ISL2:
    Does anyone know of any successful AI applications coded in SmallTalk?
    This was stimulated by the new Tektronix AI machine, whose blurb touts
    its SmallTalk as useful for developing expert systems.

Take a look at the Troubleshooter for the Tektronix 4404.  Although I am not
on the "inside" on this one, it is a rule-based system written in Smalltalk.
One of the program's principals (Jim Alexander) is a cognitive scientist and
not, strictly speaking, a programmer, which attests to the ease with which
such things can be done in Smalltalk.

The Troubleshooter has two graphic and several text windows.  The graphic
windows present a schematic and a parts layout, each having little probes that
move from point to point.  A text window asks questions, such as "Is the
voltage at N19 high?"; the answers to such questions cause the probe(s) to
move to the next test point.  Other text windows can be opened on a parts
database, troubleshooting advice, and the actual rules program, among others.
(Remember, the full power of Smalltalk is always available, which is good and
bad!)  A window can be opened on a scope screen, which shows expected
waveforms at various points.
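[The question-and-probe loop described above can be sketched as a small rule
table.  This is an illustration, not the Tektronix code; only test point N19
appears in the message -- N23, Q4, and U7 are hypothetical names.  -- KIL]

```python
# Each (test point, answer) pair selects a rule naming either the next
# test point to probe or a final diagnosis.

RULES = {
    # (test_point, answer) -> (next test point, diagnosis)
    ('N19', 'high'): ('N23', None),
    ('N19', 'low'):  (None, 'replace Q4'),
    ('N23', 'high'): (None, 'replace U7'),
    ('N23', 'low'):  (None, 'check supply rail'),
}

def troubleshoot(answers, start='N19'):
    """answers maps each probed test point to its measured reading."""
    point = start
    while point is not None:
        nxt, diagnosis = RULES[(point, answers[point])]
        if diagnosis is not None:
            return diagnosis
        point = nxt                     # move the probe onward
    return 'no diagnosis'

print(troubleshoot({'N19': 'high', 'N23': 'low'}))   # check supply rail
```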

I have seen it; it works; it's fun!  I fixed stereos, transceivers, and color
TVs before getting into computers and know that half the battle in
troubleshooting is often using the service literature!  This application is
sort of a smart, graphics-based, hypertext service manual and would really be
useful.  It is not simply an interesting bit of AI research!

I AM NOT A PART OF THIS PROJECT.  Although I don't want to seem anti-social,
please contact your nearest Tek field office for a demo and more information;
do not contact me!

:::::: Jan Steinman             Box 1000, MS 61-161     (w)503/685-2843 ::::::
:::::: tektronix!tekecs!jans    Wilsonville, OR 97070   (h)503/657-7703 ::::::

------------------------------

Date: 2 Jan 1985 09:58:48-EST
From: kushnier@NADC
Subject: VILM


Todd,
We at NAVAIRDEVCEN are also interested in a low-cost portable LISP machine.
The MAC came up as a possible candidate. Could you please tell me more
about Portable Standard LISP (PSL) ?

We are currently considering implementing an EXPERT SYSTEM written in FORTH
which we would translate into MACFORTH.  Unless an external high-speed,
high-capacity memory device can be utilized, the prospect of using LISP does
not look promising.  Keep us informed on your progress.

                                   Ron Kushnier
                                     kushnier@nadc.arpa

------------------------------

Date: Thursday,  3-Jan-85 12:20:36-GMT
From: JOLY QMA (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: Re: Fermat's Last Theorem.


Does the reference to the proof of Fermat's Last Theorem (Vol 2 # 181)
have anything to do with the incorrect proof of Arnold Arnold which
was reported in the Guardian newspaper in October/November 1984 ?

Gordon Joly

gcj@edxa

------------------------------

Date: Wed, 2 Jan 85 15:12:10 est
From: hoffman%vax1@cc.delaware (HOFFMAN)
Subject: Re:  Cognitive Science Dictionary


I think it would be a good idea and might have a good market. I would
hate to be the one doing the compiling, though.

------------------------------

Date: Thu, 3 Jan 85 13:01:25 est
From: chester%vax1@cc.delaware (CHESTER)
Subject: Re:  Cognitive Science Dictionary

A dictionary (with short definitions of terms) would have limited sales,
since it would only be useful to people who are already in the field, or who
already have strong motivation to get into the field and are required to buy
it for a course.

An encyclopedia would be better, but I favor a format like that of The
Handbook of Artificial Intelligence, (Barr and Feigenbaum) or the Handbook of
Human Intelligence (Sternberg).  Such a work would appeal to people who have
a moderate interest in the field and might give them suitable orientation
and motivation to join us.

------------------------------

Date: Friday, 21 Dec 1984 18:12-PST
From: imagen!les@su-shasta.arpa
Subject: TV and the 5th generation

        [Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

In response to your 20 Dec. comments on "Personal Assistants", I can
confirm that the TV story is apocryphal.  I bought the Heathkit
television set for the Stanford AI Lab and it was completely assembled
within a few days after arrival, by gnomes not robots.  Aside from its
use for monitoring "Mary Hartman! Mary Hartman!" it served as a
display for computer-synthesized color images.

A creative student (Hans Moravec) shortly built a remote control ray
gun that worked rather well.  As I recall, that was a few years before
remote control became available on commercial TV sets.

As for the digs at the AI community by you and others, please do not
paint everyone with the same brush.  In any research field, the
lunatic fringe is much more likely to catch headlines and certain
government grants than those who speak rationally.  The Great Machine
Translation fiasco of the '60s was brought about mainly by the CIA's
slavering desire to leap ahead in an area where no one knew how to
walk yet.

An even greater fiasco was the series of "Command and Control" systems
assembled by the Air Force and others in the '50s, '60s, and '70s.
They wanted computers to run the military establishment even though
they hadn't mastered chess yet.  The reason that these largely useless
projects kept going was that the people involved were having a good
time (and making good money) and the Congress never seemed to
understand what was going on.

As for AI and 5th generation computers, I know of very few people in
the AI community who believe in any of that nonsense.  Nevertheless,
some will use it to pry larger grants out of the government or to sell
high-priced seminars to the gullible public.

What keeps happening, it seems, is that people take a few partially
understood facts and principles, then extrapolate a few light years
away and declare that it must be possible to do this new thing.  As
long as such activities are rewarded, they will continue to
proliferate.  Why settle for a trip to the beach when you can head
toward Andromeda?

        Les Earnest

------------------------------

Date: 02 Jan 85  2300 PST
From: Richard Vistnes <RV@SU-AI.ARPA>
Subject: Reading machines & news

I seem to remember someone a while ago asking about the availability
of machines that could `read' a page of text with a camera and produce
computer-readable text.  In the latest issue of Fortune magazine
(Jan 7 '85, p.74) there's an article about speech recognition, and it
mentions that Kurzweil (formerly of MIT, I believe) let Xerox produce his
reading machine, and that this machine can read text in several
different fonts.  Maybe someone at Xerox can supply more information.

                - Richard Vistnes

------------------------------

Date: 02 Jan 85  1107 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Upcoming conference submission deadlines

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

(details in file CONFER.TXT[2,2] at SAIL.)

7-Jan-85: IJCAI-85
10-Jan-85: VLSI-85
12-Jan-85: Theoretical Approaches to Natural Language Understanding
14-Jan-85: Logics of Programs 1985
15-Jan-85: Symposium on Complexity of Approximately Solved Problems
15-Jan-85: Workshop on Environments for Programming-in-the-Large
15-Jan-85: 1985 CHAPEL HILL CONFERENCE ON VLSI
18-Jan-85: Computational Linguistics
31-Jan-85: FUNCTIONAL PROGRAMMING LANGUAGES AND COMPUTER ARCHITECTURE
31-Jan-85: Conference - Intelligent Systems and Machines
4-Feb-85: CONFERENCE ON SOFTWARE MAINTENANCE -- 1985
4-Feb-85: Sigmetrics '85
11-Mar-85: THEORETICAL AND METHODOLOGICAL ISSUES IN MACHINE TRANSLATION OF
        NATURAL LANGUAGES
1-Apr-85: Logic, language and computation meeting
29-Apr-85: FOUNDATIONS OF COMPUTER SCIENCE (FOCS)
1-May-85: Expert Systems in Government Conference

You can get the file to your computer using FTP.

------------------------------

Date: Wed 2 Jan 85 17:16:47-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Representation and Presentation  (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]

                             CSLI SEMINAR
                 ``Representation and Presentation''
             Benny Shanon, Hebrew University of Jerusalem
     Wednesday, January 9 at 4:00 pm in the Ventura conference room

A series of arguments, drawn on the basis of various aspects of
psychological phenomenology, are marshalled against the representational-
computational view of mind.  The argument from context marks the
unconstrained variation of meaning with context, hence the impossibility
of a full, comprehensive semantic representation; the argument from
medium points out that medium is an ineliminable contributor to meaning
and that a variety of psychological patterns do not allow for a
distinction between medium and message, hence they cannot be accounted
for by means of abstract, symbolic representations; the argument from
development notes that the representational view not only cannot
account for the problem of the origin in cognition, but that it leads
to unnatural and even paradoxical patterns whereby what is theoretically
simple is phenomenologically complex and/or developmentally late and
what is theoretically complex is phenomenologically simple and/or
developmentally early.  On the basis of these arguments it is
suggested that cognition be viewed as a dialectic process between two
types of patterns: representational and presentational.

------------------------------

Date: 02 Jan 85  2347 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Seminar - Rewrite Rules for Functional Programming   (IBM-SJ)

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

2:00pm  Monday, Jan. 7
Room 1C-012 (in Building 28 at IBM)

Ed Wimmers
IBM Research San Jose

        What does it mean for rewrite rules to be "correct"?

We consider an operational definition for FP via rewrite rules.  What would it
mean for such a definition to be correct?  We certainly want the rewrite rules
to capture correctly our intuitions regarding the meaning of the primitive
functions.  We also want enough rewrite rules to compute the correct meaning
of all expressions, but not so many that two expressions that should be
different are made equivalent.  And what does it mean for there to be
"enough" rules?  We develop a new formal criterion for deciding whether there
are enough rewrite rules and show that our rewrite rules meet that criterion.
Our proof technique is novel in the way we use the semantic domain to guide an
assignment of types to the untyped language FP; this allows us to adopt powerful
techniques from the typed lambda-calculus theory.

Host: John Backus
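[For readers unfamiliar with the operational approach, a minimal sketch --
mine, not from the talk -- of defining evaluation by rewrite rules:
expressions are rewritten step by step until no rule applies, and that
normal form is the expression's meaning.  -- KIL]

```python
# Terms: ints, function names like 'id' and 'succ', composition
# ('compose', f, g), and application ('app', f, x).

def step(t):
    """One leftmost-outermost rewrite step; None if t is in normal form."""
    if isinstance(t, tuple) and t[0] == 'app':
        f, x = t[1], t[2]
        if f == 'id':
            return x                                  # id:x -> x
        if isinstance(f, tuple) and f[0] == 'compose':
            return ('app', f[1], ('app', f[2], x))    # (f.g):x -> f:(g:x)
        if f == 'succ' and isinstance(x, int):
            return x + 1                              # succ:n -> n+1
        s = step(x)                                   # else rewrite inside
        if s is not None:
            return ('app', f, s)
    return None

def normalize(t):
    """Apply rewrite steps until a normal form is reached."""
    while (s := step(t)) is not None:
        t = s
    return t

expr = ('app', ('compose', 'succ', ('compose', 'succ', 'id')), 3)
print(normalize(expr))   # 5
```

The correctness question in the abstract is whether such a rule set computes
exactly the intended meaning for every expression -- no fewer rules, and no
rules that conflate expressions which should differ.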


------------------------------

End of AIList Digest
********************
