3-Jan-84 15:46:43-PST,10403;000000000001
Mail-From: LAWS created at  3-Jan-84 15:44:16
Date: Tue  3 Jan 1984 15:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #1
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Jan 1984       Volume 2 : Issue 1

Today's Topics:
  Administrivia - Host List & VISION-LIST,
  Cognitive Psychology - Looping Problem,
  Programming Languages - Questions,
  Logic Programming - Disjunctions,
  Vision - Fiber Optic Camera
----------------------------------------------------------------------

Date: Tue 3 Jan 84 15:07:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Host List

The AIList readership has continued to grow throughout the year, and only
a few individuals have asked to be dropped from the distribution network.
I cannot estimate the number of readers receiving AIList through bboards
and remailing nodes, but the existence of such services has obviously
reduced the outgoing net traffic.  For those interested in such things,
I present the following approximate list of host machines on my direct
distribution list.  Numbers in parentheses indicate individual subscribers;
all other hosts (and those marked with "bb") have redistribution systems.
A few of the individual subscribers are undoubtedly redistributing
AIList to their sites, and a few redistribution nodes receive the list
from other such nodes (e.g., PARC-MAXC from RAND-UNIX).  AIList is
also available to USENET through the net.ai distribution system.

    AEROSPACE(8), AIDS-UNIX, BBNA(2), BBNG(1), BBN-UNIX(8), BBN-VAX(3),
    BERKELEY(3), BITNET@BERKELEY(2), ONYX@BERKELEY(1), UCBCAD@BERKELEY(2),
    BRANDEIS(1), BRL(bb+1), BRL-VOC(1), BROWN(1), BUFFALO-CS(1),
    cal-unix@SEISMO(1), CIT-20, CMU-CS-A(bb+11), CMU-CS-G(3),
    CMU-CS-SPICE(1), CMU-RI-ISL1(1), COLUMBIA-20, CORNELL,
    DEC-MARLBORO(7), EDXA@UCL-CS(1), GATECH, HI-MULTICS(bb+1),
    CSCKNP@HI-MULTICS(2), SRC@HI-MULTICS(1), houxa@UCLA-LOCUS(1),
    HP-HULK(1), IBM-SJ(1), JPL-VAX(1), KESTREL(1), LANL, LLL-MFE(2),
    MIT-MC, NADC(2), NOSC(4), NOSC-CC(1), CCVAX@NOSC(3), NPRDC(2),
    NRL-AIC, NRL-CSS, NSF-CS, NSWC-WO(2), NYU, TYM@OFFICE(bb+2),
    RADC-Multics(1), RADC-TOPS20, RAND-UNIX, RICE, ROCHESTER(2),
    RUTGERS(bb+2), S1-C(1), SAIL, SANDIA(bb+1), SCAROLINA(1),
    sdcrdcf@UCBVAX(1), SRI-AI(bb+6), SRI-CSL(1), SRI-KL(12), SRI-TSC(3),
    SRI-UNIX, SU-AI(2), SUMEX, SUMEX-AIM(2), SU-DSN, SU-SIERRA@SU-DSN(1),
    SUNY-SBCS(1), SU-SCORE(11), SU-PSYCH@SU-SCORE(1), TEKTRONIX(1), UBC,
    UCBKIM, UCF-CS, UCI, UCL-CS, UCLA-ATS(1), UCLA-LOCUS(bb+1),
    UDel-Relay(1), UIUC, UMASS-CS, UMASS-ECE(1), UMCP-CS, UMN-CS(bb+1),
    UNC, UPENN, USC-ECL(7), USC-CSE@USC-ECL(2), USC-ECLD@USC-ECL(1),
    SU-AI@USC-ECL(4), USC-ECLA(1), USC-ECLB(2), USC-ECLC(2), USC-ISI(5),
    USC-ISIB(bb+6), USC-ISID(1), USC-ISIE(2), USC-ISIF(10), UTAH-20(bb+2),
    utcsrgv@CCA-UNIX(1), UTEXAS-20, TI@UTEXAS-20(1), WISC-CRYS(3),
    WASHINGTON(4), YALE

                                        -- Ken Laws

------------------------------

Date: Fri, 30 Dec 83 15:20:41 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Are you interested in a more specialized "VISION-LIST"?

        I've been feeling frustrated (again).  I really like AIList,
since it provides a nice forum for general AI topics.  Yet, like
many of you out there, I am primarily a vision researcher looking into
ways to facilitate machine vision and trying to decipher the strange,
all-too-often unknown mechanisms of sight.  What we need is a
specialized VISION-LIST to provide a more specific forum that will
foster a greater exchange of ideas among vision researchers.
So...one question and one request:  1) is there such a list in the
works?, and  2) if you are interested in such a list PLEASE SPEAK UP!!

                        Thanks!
                        Philip Kahn
                        UCLA

------------------------------

Date: Fri 30 Dec 83 11:04:17-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: Loop detection

Mike,
        It seems to me that we have an inbuilt mechanism which remembers
what is done (thought) at all times; i.e., we know and remember (more or
less) our train of thought.  When we get into a loop, the mind is
immediately alerted: at the first element we think it could be a
coincidence, but as more elements are found matching the loop, the more
convinced we become that there is a repeat.  The reading example is
quite good: even when just one word appears in the same sentence context
(meaning rather than syntactical context), my mind is triggered and I go
back and check whether there is actually a loop or not.  Thus, to
implement this property in a computer we would need a mechanism able to
remember the path and to check, at each step, whether it has been
followed already (and how far).  Detection of repeats of logical rather
than word-for-word sentences (or sets of ideas) is still an open
problem.
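
The mechanism described here -- remember the path taken and check each
new step against it -- is, in its simplest computational form, a "seen
list" consulted at every step.  A minimal sketch (in Python, purely
illustrative; the state-transition function is invented):

```python
def follow(start, next_state):
    """Iterate next_state from start; stop when a state repeats."""
    seen = []
    state = start
    while state not in seen:     # the check performed "at each step"
        seen.append(state)       # the remembered path
        state = next_state(state)
    return seen, state           # the path so far, and the repeated state

# Example: iterating n -> (n * 3) mod 10 from 6 cycles 6, 8, 4, 2, 6 ...
path, repeat = follow(6, lambda n: (n * 3) % 10)
# the loop is detected on the second visit to 6
```

As the posting notes, this only catches literal repeats; recognizing a
logically equivalent (rather than identical) state is the open problem.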
        I think that the loop detection mechanism is part of the
memorization process, which is an integral part of the reasoning engine,
and is not sitting "on top" and monitoring the reasoning process from
above.

Rene

------------------------------

Date: 2 January 1984 14:40 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: stupid questions....

Speaking as an interested outsider to AI, I have a few questions that
I hope someone can answer in non-jargon.  Any help is greatly appreciated:

1. Just why is a language like LISP better for doing AI stuff than a
language like PASCAL or ADA?  In what sense is LISP "more natural" for
simulating cognitive processes?  Why can't you do this in more tightly
structured languages like PASCAL?

2. What is the significance of not distinguishing between data and
program in LISP?  How does this help?

3. What is the difference between decisions made in a production
system (as I understand it, a production is a construct of the form IF
X is true, then do Y, where X is a condition and Y is a procedure),
and decisions made in a PASCAL program (in which IF statements also
have the same (superficial) form).


many thanks.

------------------------------

Date: 1 Jan 84 1:01:50-PST (Sun)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Re: a trivial reasoning problem? - (nf)
Article-I.D.: fortune.2135

Gee, and to a non-Prolog person (me) your problem seemed so simple
(even given the no-exhaustive-search rule). Let's see,

        1. At least one of A or B is on = (A v B)
        2. If A is on, B is not         = (A -> ~B) = (~A v ~B)  [def'n of ->]
        3. A and B are binary conditions.

From #3, we are allowed to use first-order Boolean algebra (WFF'n'PROOF game).
(That is, #3 is a meta-condition.)

So, #1 and #2 together are just (#1) ^ (#2) [using caret ^ for conjunction]

or,             #1 ^ #2 = (A v B) ^ (~A v ~B)
(distributivity)        = (A ^ ~A) v (A ^ ~B) v (B ^ ~A) v (B ^ ~B)
(from #3 and ^-axiom)   = (A ^ ~B) v (B ^ ~A)
(def'n of xor)          = A xor B
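
The derivation can also be checked by brute force over the four states;
a short sketch (in Python, purely illustrative):

```python
# Enumerate the four states of the two binary variables and confirm
# that (#1) ^ (#2), i.e. (A v B) ^ (~A v ~B), holds exactly when
# A xor B does.

def constraints(a, b):
    # (A v B) ^ (~A v ~B)
    return (a or b) and ((not a) or (not b))

for a in (False, True):
    for b in (False, True):
        assert constraints(a, b) == (a != b)   # "a != b" is a xor b
```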

Hmmm... Maybe I am missing your original question altogether. Is your real
question "How does one enumerate the elements of a state-space (powerset)
for which a certain logical proposition is true without enumerating (examining)
elements of the state-space for which the proposition is false?"?

To me (an ignorant "non-ai" person), this seems excluded by a version of the
First Law of Thermodynamics, namely, the Law of the Excluded Miraculous Sort
(i.e. to tell which of two elements is bigger, you have to look at both).

It seems to me that you must at least look at SOME of the states for which the
proposition is false, or equivalently, you must use the structure of the
formula itself to do the selection (say, while doing a tree-walk).  The problem
with the former approach is that the number of "bad" states must be kept
small (for efficiency), leading to all kinds of pruning heuristics; while
with the latter method the problem of eliminating duplicates (assuming
parallel processing) leads back to the former method!

In either case, however, reasoning about the variables does not seem to
solve the problem; one must reason about the formulae. If Prolog admits
of constructing such meta-rules, you may have a chance.  (I.e., "For any
true formula 'X xor Y', only X need be considered when ~Y, and vice versa.")

In any event, I think your problem can be simplified to:

        1'. A xor B
        2'. A, B are binary variables.


Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 28 Dec 83 4:01:48-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: REFERENCES FOR SPECIALIZED CAMERA DE - (nf)
Article-I.D.: fortune.2114

Please clarify what you mean by "get close to the focal point of the
optical system". For any lens system I've used (both cameras and TVs),
the imaging surface (the film or the sensor) already IS at the focal point.
As I recall, the formula (for convex lenses) is:

         1     1     1
        --- = --- + ---
         f    obj   img

where "f" is the focal length of the lens, "obj" the distance to the "object",
and "img" the distance to the (real) image. Solving for minimum "obj + img",
the closest you can get a focused image to the object (using a lens) is 4*f,
with the lens midway between the object and the image (1/f = 1/2f + 1/2f).
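
The 4*f claim is easy to check numerically; a small sketch (in Python,
illustrative only -- the focal length and the grid of object distances
are arbitrary):

```python
# Thin-lens equation: 1/f = 1/obj + 1/img.  Scan object distances and
# confirm that obj + img is minimized at 4*f, with obj = img = 2*f.

def total_distance(f, obj):
    img = 1.0 / (1.0 / f - 1.0 / obj)   # solve 1/f = 1/obj + 1/img for img
    return obj + img

f = 50.0                                 # focal length, arbitrary units
best = min((total_distance(f, obj), obj)
           for obj in [f * (1.01 + 0.01 * k) for k in range(400)])
assert abs(best[0] - 4 * f) < f * 0.01   # minimum obj + img is ~4*f
assert abs(best[1] - 2 * f) < f * 0.05   # achieved near obj = 2*f
```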

Not sure what a bundle of fibers would do for you, since without a lens each
fiber picks up all the light around it within a cone of its numerical
aperture (NA). Some imaging systems DO use fiber bundles directly in contact
with film, but that's generally going the other way (from a CRT to film).
I think Tektronix has a graphics output device like that. I suppose you
could use it if the object were self-luminous...

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  4-Jan-84 16:38:41
Date: Wed  4 Jan 1984 16:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #2
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 2

Today's Topics:
  Hardware - High Resolution Video Projection,
  Programming Languages - LISP vs. Pascal,
  Net Course - AI and Mysticism
----------------------------------------------------------------------

Date: 04 Jan 84  1553 PST
From: Fred Lakin <FRD@SU-AI>
Subject: High resolution video projection

I want to buy a hi-resolution monochrome video projector suitable for use with
generic LISP machine or Star-type terminals (i.e., approx. 1000 x 1000 pixels).
It would be nice if it cost less than $15K and didn't require expensive
replacement parts (like light valves).

Does anybody know of such currently on the market?

I know, chances seem dim, so on to my second point: I have heard it would be
possible to make a portable video projector that would cost $5K, weigh 25lb,
and project using monochrome green phosphor.  The problem is that industry
does not feel the market demand would justify production at such a price ...
Any ideas on how to find out the demand for such an item?  Of course if
all of you who might be interested in this kind of projector let me know
your suggestions, that would be a good start.

Thanks in advance for replies and/or notions,
Fred Lakin

------------------------------

Date: Wed 4 Jan 84 10:25:56-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Re: stupid questions (i.e. Why Lisp?)

        You might want to read an article by Beau Sheil (Xerox PARC)
in the February '83 issue of Datamation called "Power tools for
programmers."  It is mostly about the Interlisp-D programming
environment, but might give you some insights about LISP in general.
        I'll offer three other reasons, though.
        Algol family languages lack the datatypes to conveniently
implement a large number of knowledge representation schemes.  Ditto
wrt. rules.  Try to imagine setting up a pascal record structure to
embody the rules "If I have less than half of a tank of gas then I
have as a goal stopping at a gas station" & "If I am carrying valuable
goods, then I should avoid highway bandits."  You could write pascal
CODE that sort of implemented the above, but DATA would be extremely
difficult.  You would almost have to write a lisp interpreter in
pascal to deal with it.  And then, when you've done that, try writing
a compiler that will take your pascal data structures and generate
native code for the machine in question!  Now, do it on the fly, as a
knowledge engineer is augmenting the knowledge base!
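
What "rules as DATA" buys you can be sketched in a few lines (a
hypothetical sketch in a modern dynamic language, Python; the rule names
and world-state fields are invented for illustration):

```python
# Each rule is an ordinary data object pairing a condition with an
# action; new rules can be appended at run time, with no recompilation.

rules = [
    ("refuel",
     lambda w: w["tank"] < 0.5,
     lambda w: w["goals"].append("stop at gas station")),
    ("caution",
     lambda w: w["cargo"] == "valuable",
     lambda w: w["goals"].append("avoid highway bandits")),
]

def run_rules(world):
    for name, condition, action in rules:
        if condition(world):
            action(world)
    return world["goals"]

world = {"tank": 0.3, "cargo": "valuable", "goals": []}
run_rules(world)    # both rules fire; goals now holds both actions
```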
        Algol languages have a tedious development cycle because they
typically do not let a user load/link the same module many times as he
debugs it.  He typically has to relink the entire system after every
edit.  This prevents much in the way of incremental compilation, and
makes such languages tedious to debug in.  This is an argument against
the languages in general, and doesn't apply to AI explicitly.  The AI
community feels this as a pressure more, though, perhaps because it
tends to build such large systems.
        Furthermore, consider that most bugs in non-AI systems show up
at compile time.  If a flaw is in the KNOWLEDGE itself in an AI
system, however, the flaws will only show up in the form of incorrect
(unintelligent?) behavior.  Typically only lisp-like languages provide
the run-time tools to diagnose such problems.  In Pascal, etc, the
programmer would have to go back and explicitly put all sorts of
debugging hooks into the system, which is both time consuming, and is
not very clean.  --Christopher

------------------------------

Date: 4 Jan 84 13:59:07 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Re: Herb Lin's questions on LISP etc.

Herb:
Those are hardly stupid questions.  Let me try to answer:

        1. Just why is a language like LISP better for doing AI stuff than a
        language like PASCAL or ADA?

There are two kinds of reasons.  You could argue that LISP is more
oriented towards "symbolic" processing than PASCAL.  However, probably
more important is the fact that LISP provides a truly outstanding
environment for exploratory programming, that is, programming where
you do not completely understand the problem or its solutions before
you start programming.  This is normally the case in AI programming -
even if you think you understand things you normally find out there
was at least something you were wrong about or had forgotten.  That's
one major reason for actually writing the programs.

Note that I refer to the LISP environment, not just the language.  The
existence of good editors, debuggers, cross reference aids, etc. is at
least as important as the language itself.  A number of features of LISP
make a good environment easy to provide for LISP.  These include the
compatible interpreter/compiler, the centrality of function calls, and the
simplicity and accessibility of the internal representation of programs.

For a very good introduction to the flavor of programming in LISP
environments, see "Programming in an Interactive Environment, the LISP
Experience", by Erik Sandewall, Computing Surveys, V. 10 #1, March 1978.

        2. What is the significance of not distinguishing between data
        and program in LISP?  How does this help?

Actually, in ANY language, the program is also data for the interpreter
or compiler.  What is important about LISP is that the internal form used
by the interpreter is simple and accessible.  It is simple in that
the internal form is a structure of nested lists that captures most of
both the syntactic and the semantic structure of the code.  It is accessible
in that this structure of nested lists is in fact a basic built in data
structure supported by all the facilities of the system, and in that a
program can access or set the definition of a function.

Together these make it easy to write programs which operate on other programs.
E.g.  to add a trace feature to PASCAL you have to modify the compiler or
interpreter.  To add a trace feature to LISP you need not modify the
interpreter at all.
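
The point can be sketched in any language that shares this property;
here is a purely illustrative Python version (Lisp's TRACE works
analogously -- this is not its implementation):

```python
# Because a function definition is accessible data that can be read and
# reassigned at run time, a trace facility is an ordinary program,
# not a modification to the interpreter.
import functools

def trace(fn):
    @functools.wraps(fn)
    def traced(*args):
        print("entering", fn.__name__, args)
        result = fn(*args)
        print("exiting ", fn.__name__, "->", result)
        return result
    return traced

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

fact = trace(fact)   # rebind: all calls, including recursive ones, are traced
fact(3)
```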

Furthermore, it turns out to be easy to use LISP to write interpreters
for other languages, as long as the other languages use a similar
internal form and have a similarly simple relation between form and
semantics.  Thus, a common way to solve a problem in LISP is to
implement a language in which it is easy to express solutions to
problems in a general class, and then use this language to solve your
particular problem.  See the Sandewall article mentioned above.

        3. What is the difference between decisions made in a production
        system and decisions made in a PASCAL program (in which IF statements
        also have the same (superficial) form).

Production Systems gain some advantages by restricting the languages
for the IF and THEN parts.  Also, in many production systems, all
the IF parts are evaluated first, to see which are true, before any
THEN part is done.  If more than one IF part is true, some other
mechanism decides which THEN part (or parts) to do.  Finally, some
production systems such as EMYCIN do "backward chaining", that is, one
starts with a goal and asks which THEN parts, if they were done, would
be useful in achieving the goal.  One then looks to see if their
corresponding IF parts are true, or can be made true by treating them
as sub-goals and doing the same kind of reasoning on them.
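
The match-all-IF-parts-then-resolve cycle described above can be
sketched compactly (a hypothetical Python sketch of forward chaining
only; the rules, facts, and "first match wins" conflict resolution are
invented for illustration):

```python
# A toy production system: working memory is a set of facts; each rule
# is (name, IF-part over memory, fact added by the THEN-part).

productions = [
    ("wet-ground",    lambda m: "raining" in m,                    "ground-wet"),
    ("need-umbrella", lambda m: "raining" in m and "outside" in m, "take-umbrella"),
    ("slippery",      lambda m: "ground-wet" in m,                 "walk-carefully"),
]

def run(memory):
    while True:
        # match phase: evaluate ALL the IF parts first
        conflict_set = [(name, fact) for name, cond, fact in productions
                        if cond(memory) and fact not in memory]
        if not conflict_set:
            return memory
        # conflict resolution: fire only the first applicable rule
        name, fact = conflict_set[0]
        memory.add(fact)

run({"raining", "outside"})
```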

A very good introduction to production systems is "An Overview of Production
Systems" by Randy Davis and Jonathan King, October 1975, Stanford AI Lab
Memo AIM-271 and Stanford CS Dept. Report STAN-CS-75-524.  It's probably
available from the National Technical Information Service.

------------------------------

Date: 1 Jan 84 8:42:34-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide Course -- AI and Mysticism!!
Article-I.D.: psuvax.395

*************************************************************************
*                                                                       *
*            An Experiment in Teaching, an Experiment in AI             *
*       Spring Term Artificial Intelligence Seminar Announcement        *
*                                                                       *
*************************************************************************

This Spring term Penn State inaugurates a new experimental course:

        "THE HUMAN CONDITION: PROBLEMS AND CREATIVE SOLUTIONS".

This course explores all that makes the human condition so joyous and
delightful: learning, creative expression, art, music, inspiration,
consciousness, awareness, insight, sensation, planning, action, community.
Where others study these DESCRIPTIVELY, we will do so CONSTRUCTIVELY.  We
will gain familiarity by direct human experience and by building artificial
entities which manifest these wonders!!

We will formulate and study models of the human condition -- an organism of
bounded rationality confronting a bewilderingly complex environment.  The
human organism must fend for survival, but it is aided by some marvelous
mechanisms: perception (vision, hearing), cognition (understanding, learning,
language), and expression (motor skill, music, art).  We can view these
respectively as the input, processing, and output of symbolic information.
These mechanisms somehow encode all that is uniquely human in our experience
-- or do they??  Are these mechanisms universal among ALL sentient beings, be
they built from doped silicon or neural jelly?  Are these mechanisms really
NECESSARY and SUFFICIENT for sentience?

Not content with armchair philosophizing, we will push these models toward
the concreteness needed for physical implementation.  We will build the tools
that will help us to understand and use the necessary representations and
processes, and we will use these tools to explore the space of possible
realizations of "artificial sentience".

This will be no ordinary course.  For one thing, it has no teacher.  The
course will consist of a group of highly energetic individuals engaged in
seeking the secrets of life, motivated solely by the joy of the search
itself.  I will function as a "resource person" to the extent my background
allows, but the real responsibility for the success of the expedition rests
upon ALL of its members.

My role is that of "encounter group facilitator":  I jab when things lag.
I provide a sheltered environment where the shy can "come out" without
fear.  I manipulate and connive to keep the discussions going at a fever
pitch.  I pick and poke, question and debunk, defend and propose, all to
incite people to THINK and to EXPRESS.

Several people who can't be at Penn State this Spring told me they wish
they could participate -- so: I propose opening this course to the entire
world, via the miracles of modern networks!  We have arranged a local
mailing list for sharing discussions, source-code, class-session summaries,
and general flammage (amid the chaff there will surely be SOME wheat).  I'm aware
of three fora for sharing this: USENET's net.ai, Ken Laws' AIList, and
MIT's SELF-ORG mailing list.  PLEASE MAIL ME YOUR REACTIONS to using these
resources: would YOU like to participate? would it be a productive use of
the phone lines? would it be more appropriate to go to /dev/null?

The goals of this course are deliberately ambitious.  I seek participants
who are DRIVEN to partake in this journey -- the best, brightest, most
imaginative and highly motivated people the world has to offer.

Course starts Monday, January 16.  If response is positive, I'll post the
network arrangements about that time.

This course is dedicated to the proposition that the best way to secure
for ourselves the blessings of life, liberty, and the pursuit of happiness
is reverence for all that makes the human condition beautiful, and the
best way to build that reverence is the scientific study and construction
of the marvels that make us truly human.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 1 Jan 84 8:46:31-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide AI Course -- Part 2
Article-I.D.: psuvax.396

*************************************************************************
*                                                                       *
*         Spring Term Artificial Intelligence Seminar Syllabus          *
*                                                                       *
*************************************************************************


  MODELS OF SENTIENCE
    Learning, Cognitive Model Formation, Insight, Discovery, Expression;
    "Subcognition as Computation", "Cognition as Subcomputation";
    Physical, Cultural, and Intellectual Evolution.

      SYMBOLIC INPUT CHANNELS: PERCEPTION
        Vision, hearing, signal processing, the "signal/symbol interface".

      SYMBOLIC PROCESSING: COGNITION
        Language, Understanding, Goals, Knowledge, Reasoning.

      SYMBOLIC OUTPUT CHANNELS: EXPRESSION
        Motor skills, Artistic and Musical Creativity, Story Creation,
        Prose, Poetry, Persuasion, Beauty.

  CONSEQUENCES OF THESE MODELS
    Physical Symbol Systems and Godel's Incompleteness Theorems;
    The "Aha!!!" Phenomenon, Divine Inspiration, Extra-Sensory Perception,
    The Conscious/Unconscious Mind, The "Right-Brain/Left-Brain" Dichotomy;
    "Who Am I?", "On Having No Head"; The Nature and Texture of Reality;
    The Nature and Role of Humor; The Direct Experience of the Mystical.

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN HUMANS
    Meditation, Musical and Artistic Experience, Problem Solving,
    Games, Yoga, Zen, Haiku, Koans, "Calculus for Peak Experiences".

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN MACHINES

    REVIEW OF LISP PROGRAMMING AND FORMAL SYMBOL MANIPULATION:
      Construction and access of symbolic expressions, Evaluation and
      Quotation, Predicates, Function definition; Functional arguments
      and returned values; Binding strategies -- Local versus Global,
      Dynamic versus Lexical, Shallow versus Deep; Compilation of LISP.

    IMPLEMENTATION OF LISP:  Storage Mapping and the Free List;
      The representation of Data: Typed Pointers, Dynamic Allocation;
      Symbols and the Symbol Table (Obarray); Garbage Collection
      (Sequential and Concurrent algorithms).

    REPRESENTATION OF PROCEDURE:  Meta-circular definition of the
      evaluation process.

    "VALUES" AND THE OBJECT-ORIENTED VIEW OF PROGRAMMING: Data-Driven
      Programming, Message-Passing, Information Hiding; the MIT Lisp Machine
      "Flavor" system; Functional and Object-Oriented systems -- comparison
      with SMALLTALK.

    SPECIALIZED AI PROGRAMMING TECHNIQUES:  Frames and other Knowledge
      Representation Languages, Discrimination Nets, Augmented Transition
      Networks; Pattern-Directed Inference Systems, Agendas, Chronological
      Backtracking, Dependency-Directed Backtracking, Data Dependencies,
      Non-Monotonic Logic, and Truth-Maintenance Systems.

    LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
      Frames and other Knowledge Representation Languages, Discrimination
      Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  SCIENTIFIC AND ETHICAL CONSEQUENCES OF THESE ABILITIES IN HUMANS
  AND IN MACHINES
    The Search for Extra-Terrestrial Intelligence.
      (Would we recognize it if we found it?  Would they recognize us?)
    The Search for Terrestrial Intelligence.
    Are We Unique?  Are we worth saving?  Can we save ourselves?
    Why are we here?  Why is ANYTHING here?  WHAT is here?
    Where ARE we?  ARE we?  Is ANYTHING?


These topics form a cluster of related ideas which we will pursue more-or-
less concurrently; the listing is not meant to imply a particular sequence.

Various course members have expressed interest in the following software
engineering projects.  These (and possibly others yet to be suggested)
will run concurrently throughout the course:

    LISP Implementations:
      For CMS, in PL/I and/or FORTRAN
      In PASCAL, optimized for personal computers (esp HP 9816)
      In Assembly, optimized for Z80 and MC68000
      In 370 BAL, modifications of LISP 1.5

    New "High-Level" Systems Languages:
      Flavor System (based on the MIT Zetalisp system)
      Prolog Interpreter (plus compiler?)
      Full Programming Environment (Enhancements to LISP):
        Compiler, Editor, Workspace Manager, File System, Debug Tools

    Architectures and Languages for Parallel {Sub-}Cognition:
      Software and Hardware Alternatives to the von Neumann Computer
      Concurrent Processing and Message Passing systems

    Machine Learning and Discovery Systems:
      Representation Language for Machine Learning
      Strategy Learning for various Games (GO, CHECKERS, CHESS, BACKGAMMON)

    Perception and Motor Control Systems:
      Vision (implementations of David Marr's theories)
      Robotic Welder control system

    Creativity Systems:
      Poetry Generators (Haiku)
      Short-Story Generators

    Expert Systems (traditional topic, but including novel features):
      Euclidean Plane Geometry Teaching and Theorem-Proving system
      Welding Advisor
      Meteorological Analysis Teaching system


READINGS -- the following books will be very helpful:

    1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1984.

    2.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
    Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Vols 1, 2, 3.

    3.  MACHINE LEARNING, Michalski, Carbonell, and Mitchell; Tioga, 1983.

    4.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
    Basic Books, 1979.

    5.  THE MIND'S I, Douglas R. Hofstadter and Daniel C. Dennett;
    Basic Books, 1981.

    6.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.

    7.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.

    8.  ARTIFICIAL INTELLIGENCE PROGRAMMING, Eugene Charniak, Christopher K.
    Riesbeck, and Drew V. McDermott; Lawrence Erlbaum Associates, 1980.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  4-Jan-84 17:33:00
Date: Wed  4 Jan 1984 17:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #3
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 3

Today's Topics:
  Course - Penn State's First Undergrad AI Course
----------------------------------------------------------------------

Date: 31 Dec 83 15:18:20-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Penn State's First Undergrad AI Course
Article-I.D.: psuvax.380

Last fall I taught Penn State's first ever undergrad AI course.  It
attracted 150 students, including about 20 faculty auditors.  I've gotten
requests from several people initiating AI courses elsewhere, and I'm
posting this and the next 6 items in hopes they may help others.

  1.  General Information
  2.  Syllabus (slightly more detailed topic outline)
  3.  First exam
  4.  Second exam
  5.  Third exam
  6.  Overview of how it went.

I'll be giving this course again, and I hate to do anything exactly the
same twice.  I welcome comments and suggestions from all net buddies!

        -- Bob

  [Due to the length of Bob's submission, I will send the three
  exams as a separate digest.  Bob's proposal for a network AI course
  associated with his spring semester curriculum was published in
  the previous AIList issue; that was entirely separate from the
  following material.  -- Ken Laws]

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

Date: 31 Dec 83 15:19:52-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course, Part 1/6
Article-I.D.: psuvax.381

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE

An introduction to the theory, research paradigms, implementation techniques,
and philosophies of Artificial Intelligence considered both as a science of
natural intelligence and as the engineering of mechanical intelligence.


OBJECTIVES  --  To provide:

   1.  An understanding of the principles of Artificial Intelligence;
   2.  An appreciation for the power and complexity of Natural Intelligence;
   3.  A viewpoint on programming different from and complementary to the
       viewpoints engendered by other languages in common use;
   4.  The motivation and tools for developing good programming style;
   5.  An appreciation for the power of abstraction at all levels of program
       design, especially via embedded compilers and interpreters;
   6.  A sense of the excitement at the forefront of AI research; and
   7.  An appreciation for the tremendous impact the field has had and will
       continue to have on our perception of our place in the Universe.


TOPIC SUMMARY:

  INTRODUCTION:  What is "Intelligence"?
    Computer modeling of "intelligent" human performance.  The Turing Test.
    Brief history of AI.  Relation of AI to psychology, computer science,
    management, engineering, mathematics.

  PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":
    "What is a Brain that it may possess Intelligence, and Intelligence that
    it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
    Systems, and Multilevel Interpreters.  Necessity and Sufficiency of
    Physical Symbol Systems as the basis for intelligence.

  REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:
    State Space, Predicate Calculus, Production Systems, Procedural
    Representations, Semantic Networks, Frames and Scripts.

  THE "PROBLEM-SOLVING" PARADIGM AND TECHNIQUES:
    Generate and Test, Heuristic Search (Search WITH Heuristics,
    Search FOR Heuristics), Game Trees, Minimax, Problem Decomposition,
    Means-Ends Analysis, The General Problem Solver (GPS).

  LISP PROGRAMMING:
    Symbolic Expressions and Symbol Manipulation, Data Structures,
    Evaluation and Quotation, Predicates, Input/Output, Recursion.
    Declarative and Procedural knowledge representation in LISP.

  LISP DETAILS:
    Storage Mapping, the Free List, and Garbage Collection,
    Binding strategies and the concept of the "Environment", Data-Driven
    Programming, Message-Passing, The MIT Lisp Machine "Flavor" system.

  LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
    Frames and other Knowledge Representation Languages, Discrimination
    Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  LOGIC, RULE-BASED SYSTEMS, AND INFERENCE:
    Logic: Axioms, Rules of Inference, Theorems, Truth, Provability.
    Production Systems: Rule Interpreters, Forward/Backward Chaining.
    Expert Systems: Applied Knowledge Representation and Inference.
    Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems,
    Theorem Proving, Question Answering, and Planning systems.

  THE UNDERSTANDING OF NATURAL LANGUAGE:
    Formal Linguistics: Grammars and Machines, the Chomsky Hierarchy.
    Syntactic Representation: Augmented Transition Networks (ATNs).
    Semantic Representation: Conceptual Dependency, Story Understanding.
    Spoken Language Understanding.

  ROBOTICS: Machine Vision, Manipulator and Locomotion Control.

  MACHINE LEARNING:
    The Spectrum of Learning: Learning by Adaptation, Learning by Being
      Told, Learning from Examples, Learning by Analogy, Learning by
      Experimentation, Learning by Observation and Discovery.
    Model Induction via Generate-and-Test, Automatic Theory Formation.
    A Model for Intellectual Evolution.

  RECAPITULATION AND CODA:
    The knowledge representation and problem-solving paradigms of AI.
    The key ideas and viewpoints in the modeling and creation of intelligence.
    Is there more (or less) to Intelligence, Consciousness, the Soul?
    Prospectus for the future.


Handouts for the course include:

1.  Computer Science as Empirical Inquiry: Symbols and Search.  1975 Turing
Award Lecture by Allen Newell and Herb Simon; Communications of the ACM,
Vol. 19, No. 3, March 1976.

2.  Steps Toward Artificial Intelligence.  Marvin Minsky; Proceedings of the
IRE, Jan. 1961.

3.  Computing Machinery and Intelligence.  Alan Turing; Mind, 1950
(Turing's original proposal for the "Turing Test").

4.  Exploring the Labyrinth of the Mind.  James Gleick; New York Times
Magazine, August 21, 1983 (article about Doug Hofstadter's recent work).


TEXTBOOKS:

1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1983.
Will be available from publisher in early 1984.  I will distribute a
copy printed from Patrick's computer-typeset manuscript.

2.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.
Excellent introductory programming text, illustrating many AI implementation
techniques at a level accessible to novice programmers.

3.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
Basic Books, 1979.  One of the most entertaining books on the subject of AI,
formal systems, and symbolic modeling of intelligence.

4.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Comes as a three
volume set.  Excellent (the best available), but the full set costs over $100.

5.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.  Excellent text on the
definition and implementation of LISP, sufficient to enable one to write a
complete LISP interpreter.

------------------------------

Date: 31 Dec 83 15:21:46-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 2/6  (Topic Outline)
Article-I.D.: psuvax.382

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE


TOPIC OUTLINE:

   INTRODUCTION:  What is "Intelligence"?

   Computer modeling of "intelligent" human performance.  Turing Test.
   Brief history of AI.  Examples of "intelligent" programs:  Evans's Geometric
   Analogies, the Logic Theorist, General Problem Solver, Winograd's English
   language conversing blocks world program (SHRDLU), MACSYMA, MYCIN, DENDRAL.

   PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":

   "What is a Brain that it may possess Intelligence, and Intelligence that
   it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
   Systems, and Multilevel Interpreters.

   REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:

   State Space problem formulations.  Predicate Calculus.  Semantic Networks.
   Production Systems.  Frames and Scripts.

   SEARCH:

   Representation of problem-solving as graph search.
   "Blind" graph search:
      Depth-first, Breadth-first.
   Heuristic graph search:
      Best-first, Branch and Bound, Hill-Climbing.
   Representation of game-playing as tree search:
      Static Evaluation, Minimax, Alpha-Beta.
   Heuristic Search as a General Paradigm:
      Search WITH Heuristics, Search FOR Heuristics

   THE GENERAL PROBLEM SOLVER (GPS) AS A MODEL OF INTELLIGENCE:

   Goals and Subgoals -- problem decomposition
   Difference-Operator Tables -- the solution to subproblems
   Does the model fit?  Does GPS work?

   EXPERT SYSTEMS AND KNOWLEDGE ENGINEERING:

   Representation of Knowledge:  The "Production System" Movement
   The components:
      Knowledge Base
      Inference Engine
   Examples of famous systems:
      MYCIN, TEIRESIAS, DENDRAL, MACSYMA, PROSPECTOR

   INTRODUCTION TO LISP PROGRAMMING:

   Symbolic expressions and symbol manipulation:
      Basic data types
         Symbols
            The special symbols T and NIL
         Numbers
         Functions
      Assignment of Values to Symbols (SETQ)
      Objects constructed from basic types
         Constructor functions:  CONS, LIST, and APPEND
         Accessor functions:  CAR, CDR
   Evaluation and Quotation
   Predicates
   Definition of Functions (DEFUN)
   Flow of Control (COND, PROG, DO)
   Input and Output (READ, PRINT, TYI, TYO, and friends)

   REPRESENTATION OF DECLARATIVE KNOWLEDGE IN LISP:

   Built-in representation mechanisms
      Property lists
      Arrays
   User-definable data structures
      Data-structure generating macros (DEFSTRUCT)
   Manipulation of List Structure
      "Pure" operations (CONS, LIST, APPEND, REVERSE)
      "Impure" operations (RPLACA and RPLACD, NCONC, NREVERSE)
   Storage Mapping, the Free List, and Garbage Collection

   REPRESENTATION OF PROCEDURAL KNOWLEDGE IN LISP:

   Types of Functions
      Expr:  Call by Value
      Fexpr:  Call by Name
      Macros and macro-expansion
   Functions as Values
      APPLY, FUNCALL, LAMBDA expressions
      Mapping operators (MAPCAR and friends)
      Functional Arguments (FUNARGS)
      Functional Returned Values (FUNVALS)

   THE MEANING OF "VALUE":

   Assignment of values to symbols
   Binding of values to symbols
      "Local" vs "Global" variables
      "Dynamic" vs "Lexical" binding
      "Shallow" vs "Deep" binding
   The concept of the "Environment"

   "VALUES" AND THE OBJECT-CENTERED VIEW OF PROGRAMMING:

   Data-Driven programming
   Message-passing
   Information Hiding
   Safety through Modularity
   The MIT Lisp Machine "Flavor" system

   LISP'S TALENTS IN REPRESENTATION AND SEARCH:

   Representation of symbolic structures in LISP
      Predicate Calculus
      Rule-Based Expert Systems (the Knowledge Base examined)
      Frames
   Search Strategies in LISP
      Breadth-first, Depth-first, Best-first search
      Tree search and the simplicity of recursion
   Interpretation of symbolic structures in LISP
      Rule-Based Expert Systems (the Inference Engine examined)
      Symbolic Mathematical Manipulation
         Differentiation and Integration
      Symbolic Pattern Matching
         The DOCTOR program (ELIZA)

   LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS

   Frames and other Knowledge Representation Languages
   Discrimination Nets
   Augmented Transition Networks (ATNs) as a specification of English syntax
   Interpretation of ATNs
   Compilation of ATNs
   Alternative Control Structures
      Pattern-Directed Inference Systems (production system interpreters)
      Agendas (best-first search)
      Chronological Backtracking (depth-first search)
      Dependency-Directed Backtracking
   Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems
   "Higher" High-Level Languages:  PLANNER, CONNIVER

   PROBLEM SOLVING AND PLANNING:

   Hierarchical models of planning
      GPS, STRIPS, ABSTRIPS

   Non-Hierarchical models of planning
      NOAH, MOLGEN

   THE UNDERSTANDING OF NATURAL LANGUAGE:

   The History of "Machine Translation" -- a seemingly simple task
   The Failure of "Machine Translation" -- the need for deeper understanding
   The Syntactic Approach
      Grammars and Machines -- the Chomsky Hierarchy
      RTNs, ATNs, and the work of Terry Winograd
   The Semantic Approach
      Conceptual Dependency and the work of Roger Schank
   Spoken Language Understanding
      HEARSAY
      HARPY

   ROBOTICS:

   Machine Vision
      Early visual processing (a signal processing approach)
      Scene Analysis and Image Understanding (a symbolic processing approach)
   Manipulator and Locomotion Control
      Statics, Dynamics, and Control issues
      Symbolic planning of movements

   MACHINE LEARNING:

   Rote Learning and Learning by Adaptation
      Samuel's Checker player
   Learning from Examples
      Winston's ARCH system
      Mitchell's Version Space approach
   Learning by Planning and Experimentation
      Samuel's program revisited
      Sussman's HACKER
      Mitchell's LEX
   Learning by Heuristically Guided Discovery
      Lenat's AM (Automated Mathematician)
      Extending the Heuristics:  EURISKO
   Model Induction via Generate-and-Test
      The META-DENDRAL project
   Automatic Formation of Scientific Theories
      Langley's BACON project
   A Model for Intellectual Evolution (my own work)

   RECAP ON THE PRELUDE AND FUGUE:

   Formal Systems, Physical Symbol Systems, and Multilevel Interpreters
   revisited -- are they NECESSARY?  are they SUFFICIENT?  Is there more
   (or less) to Intelligence, Consciousness, the Soul?

   SUMMARY, CONCLUSIONS, AND FORECASTS:

   The representation of knowledge in Artificial Intelligence
   The problem-solving paradigms of Artificial Intelligence
   The key ideas and viewpoints in the modeling and creation of intelligence
   The results to date of the noble effort
   Prospectus for the future


------------------------------

Date: 31 Dec 83 15:28:32-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 6/6  (Overview)
Article-I.D.: psuvax.386

A couple of notes about how the course went.  Interest was high, but the
main problem I found is that Penn State students are VERY strongly
conditioned to work for grades and little else.  Most teachers bore them,
expect them to memorize lectures and regurgitate on exams, and students
then get drunk (over 50 frats here) and promptly forget all.  Initially
I tried to teach, but I soon realized that PEOPLE CAN LEARN (if they
really want to) BUT NOBODY CAN TEACH (students who don't want to learn).
As the course evolved my role became less "information courier" and more
"imagination provoker".  I designed exams NOT to measure learning but to
provoke thinking (and thereby learning).  The first exam (on semantic
nets) was given just BEFORE covering that topic in lecture -- students
had a hell of a hard time on the exam, but they sure sat up and paid
attention to the next week's lectures!

For the second exam I announced that TWO exams were being given: an easy
one (if they sat on one side of the room) and a hard one (on other side).
Actually the exams were identical.  (This explains the first question.)
The winning question submitted from the audience related to the chapter
in GODEL, ESCHER, BACH on the MU system: I gave a few axioms and inference
rules and then asked whether a given wff was a theorem.

The third exam was intended ENTIRELY to provoke discussion and NOT AT ALL
to measure anything.  It started with deadly seriousness, then (about 20
minutes into the exam) a few "audience plants" started acting out a
prearranged script which included discussing some of the questions and
writing some answers on the blackboard.  The attempt was to puncture the
"exam mentality" and generate some hot-blooded debate (you'll see what I
mean when you see the questions).  Even the Teaching Assistants were kept
in the dark about this "script"!  Overall, the attempt failed, but many
people did at least tell me that taking the exams was the most fun part
of the course!

With this lead-in, you probably have a clearer picture of some of the
motivations behind the spring term course.  To put it bluntly: I CANNOT
TEACH AI.  I CAN ONLY HOPE TO INSPIRE INTERESTED STUDENTS TO WANT TO LEARN
AI.  I'LL DO ANYTHING I CAN THINK OF WHICH INCREASES THAT INSPIRATION.

The motivational factors also explain my somewhat unusual grading system.
I graded on creativity, imagination, inspiration, desire, energy, enthusiasm,
and gusto.  These were partly measured by the exams, partly by the energy
expended on several optional projects (and term paper topics), and partly
by my seat-of-the-pants estimate of how determined a student was to DO real
AI.  This school prefers strict objective measures of student performance.
Tough.

This may all be of absolutely no relevance to others teaching AI.  Maybe
I'm just weird.  I try to cultivate that image, for it seems to attract
the best and brightest students!

					-- Bob Giansiracusa

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  5-Jan-84 11:30:29
Date: Thu  5 Jan 1984 11:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #4
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 4

Today's Topics:
  Course - PSU's First AI Course (continued)
----------------------------------------------------------------------

Date: 31 Dec 83 15:23:38-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 3/6  (First Exam)
Article-I.D.: psuvax.383

[The intent and application of the following three exams was described
in the previous digest issue.  The exams were intended to look difficult
but to be fun to take.  -- KIL]


********        ARTIFICIAL INTELLIGENCE  --  First Exam        ********

The field of Artificial Intelligence studies the modeling of human
intelligence in the hope of constructing artificial devices that display
similar behavior.  This exam is designed to study your ability to model
artificial intelligence in the hope of improving natural devices that
display similar behavior.  Please read ALL the questions first, introspect
on how an AI system might solve these problems, then simulate that system.
(Please do all work on separate sheets of paper.)


EASY PROBLEM:

The rules for differentiating polynomials can be expressed as follows:

IF the input is:  (A * X ^ 3) + (B * X ^ 2) + (C * X ^ 1) + (D * X ^ 0)

THEN the output is:
 (3 * A * X ^ 2) + (2 * B * X ^ 1) + (1 * C * X ^ 0) + (0 * D * X ^ -1)

(where "*" indicates multiplication and "^" indicates exponentiation).

Note that all letters here indicate SYMBOLIC VARIABLES (as in algebra),
not NUMERICAL VALUES (as in FORTRAN).


1.  Can you induce from this sample the general rule for polynomial
differentiation?  Express that rule in English or Mathematical notation.
(The mathematicians in the group may have some difficulty here.)

2.  Can you translate your "informal" specification of the differentiation
rule into a precise statement of an inference rule in a Physical Symbol
System?  That is, define a set of objects and relations, a notation for
expressing them (hint: it doesn't hurt for the notation to look somewhat
like a familiar programming language which was invented to do mathematical
notation), and a symbolic transformation rule that encodes the rule of
inference representing differentiation.

3.  Can you now IMPLEMENT your Physical Symbol System using some familiar
programming language?  That is, write a program which takes as input a
data structure encoding your symbolic representation of a polynomial and
returns a data structure encoding the representation of its derivative.
(Hint as a check on infinite loops:  this program can be done in six
or fewer lines of code.  Don't be afraid to define a utility function
or two if it helps.)
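
For concreteness, question 3 admits an answer of roughly this shape,
sketched here in Python rather than the course's Lisp; the
(coefficient, exponent) pair encoding of a term is my own assumption,
not part of the exam.

```python
# One possible encoding (an assumption): a polynomial is a list of
# (coefficient, exponent) pairs, where a coefficient may be a symbol
# (a string), so multiplication is kept symbolic as a tuple.

def d_term(term):
    """Differentiate one term: (c, n) -> (n * c, n - 1), symbolically."""
    coef, expt = term
    return ((expt, '*', coef), expt - 1)

def differentiate(poly):
    """Apply the single-term rule across the whole polynomial."""
    return [d_term(t) for t in poly]
```

For example, differentiate([('A', 3), ('B', 2), ('C', 1), ('D', 0)])
returns [((3, '*', 'A'), 2), ((2, '*', 'B'), 1), ((1, '*', 'C'), 0),
((0, '*', 'D'), -1)], matching the sample transformation above.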


SLIGHTLY HARDER PROBLEM:

Consider a world consisting of one block (a small wooden cubical block)
standing on the floor in the middle of a room.  A fly is perched on the
South wall, looking North at the block.  We want to represent the world
as seen by the fly.  In the fly's world the only thing that matters is
the position of that block.  Let's represent the world by a graph
consisting of a single node and no links to any other nodes.  Easy enough.

4.  Now consider a more complicated world.  There are TWO blocks, placed
apart from each other along an East/West line.  From the fly's point of
view, Block A (the western block) is TO-THE-LEFT-OF Block B (the eastern
block), and Block B has a similar relationship (TO-THE-RIGHT-OF) to
Block A.  Draw your symbolic representation of the situation as a graph
with nodes for the blocks and labeled links for the two relationships
which hold between the blocks.  (Believe it or not, you have just invented
the representation mechanism called a "semantic network".)

5.  Now the fly moves to the northern wall, looking south.  Draw the new
semantic network which represents the way the blocks look to him from his
new vantage point.

6.  What you have diagrammed in the above two steps is a Physical Symbol
System: a symbolic representation of a situation coupled with a process
for making changes in the representation which correspond homomorphically
with changes in the real world represented by the symbol system.
Unfortunately, your symbol system does not yet have a concrete
representation for this changing process.  To make things more concrete,
let's transform to another Physical Symbol System which can encode
EXPLICITLY the representation both of the WORLD (as seen by the fly)
and of HOW THE WORLD CHANGES when the fly moves.

Invent a representation for your semantic network using some familiar
programming language.  Remember what is being modeled are OBJECTS (the
blocks) and RELATIONS between the objects.  Hint: you might like to
use property lists, but please feel no obligations to do so.

7.  Now the clincher which demonstrates the power of the idea that a
physical symbol system can represent PROCESSES as well as OBJECTS and
RELATIONS.  Write a program which transforms the WORLD-DESCRIPTION for
FLY-ON-SOUTH-WALL to WORLD-DESCRIPTION for FLY-ON-NORTH-WALL.  The
program should be a single function (with auxiliaries if you like)
which takes two arguments, the symbol SOUTH for the initial wall and
NORTH for target wall, uses a global symbol whose value is your semantic
network representing the world seen from the south wall, and returns
T if successful and NIL if not.  As a side effect, the function should
CHANGE the symbolic structure representing the world so that afterward
it represents the blocks as seen by the fly from the north wall.
You might care to do this in two steps: first describing in English or
diagrams what is going on and then writing code to do it.
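
As one hedged illustration of what question 7 is after (in Python
rather than Lisp), a dict of dicts can stand in for the property-list
network; the relation names and the single SOUTH-to-NORTH case handled
here are assumptions, and True/False stand in for T/NIL.

```python
# Each block maps to its relations as seen by the fly (an assumption).
OPPOSITE = {'LEFT-OF': 'RIGHT-OF', 'RIGHT-OF': 'LEFT-OF',
            'IN-FRONT-OF': 'IN-BACK-OF', 'IN-BACK-OF': 'IN-FRONT-OF',
            'ON-TOP-OF': 'ON-TOP-OF', 'UNDER': 'UNDER'}

WORLD = {'A': {'LEFT-OF': 'B'}, 'B': {'RIGHT-OF': 'A'}}

def move_fly(initial, target, world=WORLD):
    """Destructively re-express the world for the fly's new wall."""
    if (initial, target) != ('SOUTH', 'NORTH'):
        return False              # only the exam's one move is handled
    for block, rels in world.items():
        # Viewed from the opposite wall, every relation flips.
        world[block] = {OPPOSITE[r]: other for r, other in rels.items()}
    return True
```

After move_fly('SOUTH', 'NORTH'), block A (the western block) is
TO-THE-RIGHT-OF block B, as the fly on the north wall sees it.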

8.  The world is getting slightly more complex.  Now there are four
blocks, A and B as before (spread apart along an East/West line), C
which is ON-TOP-OF B, and D which is just to the north of (ie, in back
of when seen from the south) B.  Let's see your semantic network in
both graphical and Lisp forms.  The fly is on South wall, looking North.
(Note that we mean "directly left-of" and so on.  A is LEFT-OF B but has
NO relation to D.)

9.  Generalize the code you wrote for question 7 (if you haven't already)
so that it correctly transforms the world seen by the fly from ANY of
the four walls (NORTH, EAST, SOUTH, and WEST) to that seen from any other
(including the same) wall.  What I mean by "generalize" is don't write
code that works only for the two-block or four-block worlds; code it so
it will work for ANY semantic network representing a world consisting of
ANY number of blocks with arbitrary relations between them chosen from
the set {LEFT-OF, RIGHT-OF, IN-FRONT-OF, IN-BACK-OF, ON-TOP-OF, UNDER}.
(Hint: if you are into group theory you might find a way to do this with
only ONE canonical transformation; otherwise just try a few examples
until you catch on.)

10.  Up to now we have been assuming the fly is always right-side-up.
Can you do question 9 under the assumption that the fly sometimes perches
on the wall upside-down?  Have your function take two extra arguments
(whose values are RIGHT-SIDE-UP or UPSIDE-DOWN) to specify the fly's
vertical orientation on the initial and final walls.

11.  Up to now we have been modeling the WORLD AS SEEN BY THE FLY.  If
the fly moves, the world changes.  Why is this approach no good when
we allow more flies into the room and wish to model the situation from
ANY of their perspectives?

12.  What can be done to fix the problem you pointed out above?  That is,
redefine the "axioms" of your representation so it works in the "multiple
conscious agent" case.  (Hint: new axioms might include new names for
the relations.)

13.  In your new representation, the WORLD is a static object, while we
have functions called "projectors" which given the WORLD and a vantage
point (a symbol from the set {NORTH, EAST, SOUTH, WEST} and another from
the set {RIGHT-SIDE-UP, UPSIDE-DOWN}) return a symbolic description (a
"projection") of the world as seen from that vantage point.  For the
reasons you gave in answer to question 11, the projectors CANNOT HAVE
SIDE EFFECTS.  Write the projector function.
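
One hedged sketch of such a projector, again in Python: the static
world uses compass relations, and each vantage point maps them to
viewer-relative ones.  The relation names and the table entries shown
are my own assumptions (only two right-side-up vantage points appear).

```python
# Vantage point -> how compass relations project to the fly's view.
VIEW = {('SOUTH', 'RIGHT-SIDE-UP'): {'WEST-OF': 'LEFT-OF',
                                     'EAST-OF': 'RIGHT-OF'},
        ('NORTH', 'RIGHT-SIDE-UP'): {'WEST-OF': 'RIGHT-OF',
                                     'EAST-OF': 'LEFT-OF'}}

def project(world, wall, orientation):
    """Pure function: builds a fresh projection, never mutates world."""
    table = VIEW[(wall, orientation)]
    return {block: {table[r]: other for r, other in rels.items()}
            for block, rels in world.items()}
```

Because the result is a new structure, any number of flies can hold
simultaneous projections of the one static WORLD.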

14.  Now let's implement a perceptual cognitive model builder, a program
that takes as input a sensory description (a symbolic structure which
represents the world as seen from a particular vantage point) and a
description of the vantage point and returns a "static world descriptor"
which is invariant with respect to vantage point.  Code up such a model
builder, using for input a semantic network of the type you used in
questions 6 through 10 and for output a semantic network of the type
used in questions 12 and 13.  (Note that this function is nothing more
than the inverse of the projector from question 13.)


********    THAT'S IT !!!    THAT'S IT !!!    THAT'S IT !!!    ********


SOME HELPFUL LISP FUNCTIONS
You may use these plus anything else discussed in class.

Function      Argument description          Return value     Side effect

PUTPROP <symbol> <value> <property-name> ==>  <value>       adds property
GET <symbol> <property-name>             ==>  <value>
REMPROP <symbol> <property-name>         ==>  <value>    removes property


***********************************************************************

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:25:34-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 4/6  (Second Exam)
Article-I.D.: psuvax.384

1.  (20) Why are you now sitting on this side of the room?  Can you cite
an AI system which used a similar strategy in deciding what to do?

2.  (10) Explain the difference between CHRONOLOGICAL and
DEPENDENCY-DIRECTED backtracking.

3.  (10) Compare and contrast PRODUCTION SYSTEMS and SEMANTIC NETWORKS:
how they work, what they can represent, and what types of problems are
well suited to solution with each type of knowledge representation.

4.  (20) Describe the following searches in detail.  In detail means:
 1) How do they work??           2) How are they related to each other??
 3) What are their advantages??  4) What are their disadvantages??
      Candidate methods:
         1) Depth-first                 2) Breadth-first
         3) Hill-climbing               4) Beam search
         5) Best-first                  6) Branch-and-bound
         7) Dynamic Programming         8) A*
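
One way to see how several of these relate: depth-first and
breadth-first share a single skeleton and differ only in which
frontier node is expanded next.  A toy sketch (the graph here is
invented purely for illustration):

```python
from collections import deque

GRAPH = {'S': ['A', 'B'], 'A': ['G'], 'B': ['A'], 'G': []}

def search(start, goal, depth_first):
    """Return the order of expansion up to the goal, or None."""
    frontier, seen, order = deque([start]), set(), []
    while frontier:
        # LIFO gives depth-first, FIFO gives breadth-first.
        node = frontier.pop() if depth_first else frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        if node == goal:
            return order
        frontier.extend(GRAPH[node])
    return None
```

Replacing the frontier with a priority queue ordered by an evaluation
function turns the same skeleton into best-first search; adding a
bound test on partial costs gives branch-and-bound.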

5.  (10) What are the characteristics of good generators for
the GENERATE and TEST problem-solving method?

6.  (10) Describe the ideas behind Mini-Max.  Describe the ideas behind
Alpha-Beta.  How do you use the two of them together and why would you
want to??
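
A compact sketch of the combination question 6 asks about, with
alpha-beta folded into minimax; the nested-list game tree is an
invented toy encoding, not anything from the course.

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    """Minimax value of a tree of nested lists with numeric leaves."""
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    best = float('-inf') if maximizing else float('inf')
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                 # prune: result cannot change
            break
    return best
```

With the cutoff line removed this is plain minimax; alpha-beta returns
the same value while skipping subtrees that cannot affect it, which is
why the two are used together.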

7.  (50) Godel's Incompleteness Theorem states that any consistent and
sufficiently complex formal system MUST express truths which cannot be
proved within the formal system.  Assume that THIS theorem is true.
  1.  If UNPROVABLE, how did Godel prove it?
  2.  If PROVABLE, provide an example of a true but unprovable statement.

8.  (40) Prove that this exam cannot be finished correctly; that is,
prove that this question is unsolvable.

9.  (50) Is human behavior governed by PREDESTINATION or FREE-WILL?  How
could you design a formal system to solve problems like that (that is, to
reason about "non-logical" concepts)?

10.  (40) Assume only ONE question on this exam were to be graded -- the
question that is answered by the FEWEST number of people.  How would you
decide what to do?  Show the productions such a system might use.

11.  (100) You will be given extra credit (up to 100 points) if by 12:10
pm today you bring to the staff a question.  If YOUR question is chosen,
it will be asked and everybody else given 10 points for a correct answer.
YOU will be given 100 points for a correct answer MINUS ONE POINT FOR EACH
CORRECT ANSWER GIVEN BY ANOTHER CLASS MEMBER.  What is your question?

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:27:19-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 5/6  (Third Exam)
Article-I.D.: psuvax.385

1.  What is the sum of the first N positive integers?  That is, what is:

         [put here the sigma-sign notation for the sum]

2.  Prove that your answer works for any N > 0.

3.  What is the sum of the squares of the first N positive integers:

         [put here the sigma-sign notation for the sum]

4.  Again, prove it.
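
For reference, the closed forms being asked for are N(N+1)/2 and
N(N+1)(2N+1)/6.  A quick brute-force check in Python (a sanity check
only, not the induction proof the exam wants):

```python
# Closed forms for questions 1 and 3, checked against direct summation.

def sum_first(n):
    """1 + 2 + ... + n = n(n+1)/2."""
    return n * (n + 1) // 2

def sum_squares(n):
    """1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

# Agreement for the first fifty cases:
assert all(sum_first(n) == sum(range(1, n + 1)) for n in range(1, 51))
assert all(sum_squares(n) == sum(k * k for k in range(1, n + 1))
           for n in range(1, 51))
```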

5.  The proofs you gave (at least if you are drawing on a "traditional"
mathematical background) are based on "mathematical induction".
Briefly state this principle and explain why it works.

6.  If you are like most people, your definition will work only over the
domain of NATURAL NUMBERS (positive integers).  Can this definition be
extended to work over ANY countable domain?

7.  Consider the lattice of points in N-dimensional space having integer
valued coordinates.  Is this space countable?

8.  Write a program (or express an algorithm in pseudocode) which returns
the number of points in this space (the one in #7) inside an N-sphere of
radius R (R is a real number > 0).
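
One hedged sketch of an answer to question 8, recursing over one
coordinate at a time; it counts boundary points as inside, a choice
the question leaves open.

```python
import math

def lattice_points(n, r):
    """Count points of Z^n whose coordinate squares sum to <= r^2."""
    def count(dims, r2):              # r2: remaining squared radius
        if dims == 0:
            return 1
        limit = math.isqrt(int(r2))   # |x| can be at most floor(r)
        return sum(count(dims - 1, r2 - x * x)
                   for x in range(-limit, limit + 1))
    return count(n, r * r)
```

For instance, in the plane a circle of radius 1 encloses the five
points (0,0), (1,0), (-1,0), (0,1), and (0,-1).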

9.  The domains you have considered so far are all countable.  The problem
solving methods you have used (if you're "normal") are based on
mathematical induction.  Is it possible to extend the principle of
mathematical induction (and recursive programming) to NON-COUNTABLE
domains?

10.  If you answered #9 NO, why not?  If you answered it YES, how?

11.  Problems #1 and #3 require you to perform INDUCTIVE REASONING
(a related but different use of the term "induction").  Discuss some of
the issues involved in getting a computer to perform this process
automatically.  (I mean the process of generating a finite symbolic
representation which when evaluated will return the partial sum for
an infinite sequence.)

12.  Consider the "sequence extrapolation" task: given a finite sequence
of symbols, predict the next few terms of the sequence or give a rule
which can generate ALL the terms of the sequence.  Is this problem
uniquely solvable?  Why or why not?

13.  If you answered #12 YES, how would you build a computer program to
do so?

14.  If you answered #12 NO, how could you constrain the problem to make
it uniquely solvable?  How would you build a program to solve the
constrained problem?

15.  Mankind is faced with the threat of nuclear annihilation.  Is there
anything the field of AI has to offer which might help avert that threat?
(Don't just say "yes" or "no"; come up with something real.)

16.  Assuming mankind survives the nuclear age, it is very likely that
ethical issues relating to AI and the use of computers will have very
much to do with the view the "person on the street" has of the human
purpose and role in the Universe.  In what way can AI researchers plan
NOW so that these ethical issues are resolved to the benefit of the
greatest number of people?

17.  Could it be that our (humankind's) purpose on earth is to invent
and build the species which will be the next in the evolutionary path?
Should we do so?  How?  Why?  Why not?

18.  Suppose you have just discovered the "secret" of Artificial
Intelligence; that is, you (working alone and in secret) have figured
out a way (new hardware, new programming methodology, whatever) to build
an artificial device which is MORE INTELLIGENT, BY ANY DEFINITION, BY
ANY TEST WHATSOEVER, than any human being.  What do you do with this
knowledge?  Explain the pros and cons of several choices.

19.  Question #9 indicates that SO FAR all physical symbol systems have
dealt ONLY with discrete domains.  Is it possible to generalize the
idea to continuous domains?  Since many aspects of the human nervous
system function on a continuous (as opposed to discrete) basis, is it
possible that the invention of CONTINUOUS PHYSICAL SYMBOL SYSTEMS might
provide part of the key to the "secret of intelligence"?

20.  What grade do you feel you DESERVE in this course?  Why?  What
grade do you WANT?  Why?  If the two differ, is there anything you
want to do to reduce the difference?  Why or Why Not?  What is it?
Why is it (or is it not) worth doing?

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************
Date: Mon  9 Jan 1984 14:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #5
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 5

Today's Topics:
  AI and Weather Forecasting - Request,
  Expert Systems - Request,
  Pattern Recognition & Cognition,
  Courses - Reaction to PSU's AI Course,
  Programming Languages - LISP Advantages
----------------------------------------------------------------------

Date: Mon 9 Jan 84 14:15:13-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI and Weather Forecasting

I have been talking with people interested in AI techniques for
weather prediction and meteorological analysis.  I would appreciate
pointers to any literature or current work on this subject, especially

    * knowledge representations for spatial/temporal reasoning;
    * symbolic description of weather patterns;
    * capture of forecasting expertise;
    * inference methods for estimating meteorological variables
      from (spatially and temporally) sparse data;
    * methods of interfacing symbolic knowledge and heuristic
      reasoning with numerical simulation models;
    * any weather-related expert systems.

I am aware of some recent work by Gaffney and Racer (NBS Trends and
Applications, 1983) and by Taniguchi et al. (6th Pat. Rec., 1982),
but I have not been following this field.  A bibliography or guide
to relevant literature would be welcome.

                                        -- Ken Laws

------------------------------

Date: 5 January 1984 13:47 est
From: RTaylor.5581i27TK at RADC-MULTICS
Subject: Expert Systems Info Request


Hi, y'all...I have the names (hopefully, correct) of four expert
systems/tools/environments (?).  I am interested in the "usual":  that
is, general info, who to contact, feedback from users, how to acquire
(if we want it), etc.  The four names I have are:  RUS, ALX, FRL, and
FRED.

Thanks.  Also, thanks to those who provided info previously...I have
info (similar to that requested above) on about 15 other
systems/tools/environments...some of the info is a little sketchy!

             Roz  (aka:  rtaylor at radc-multics)

------------------------------

Date: 3 Jan 84 20:38:52-PST (Tue)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: Re: Loop detection and classical psychology
Article-I.D.: mit-eddi.1114

One of the truly amazing things about the human brain is that its pattern
recognition capabilities seem limitless (in extreme cases).  We don't even
have a satisfactory way to describe pattern recognition as it occurs in
our brains.  (Well, maybe we have something acceptable at a minimum level.
I'm always impressed by how well dollar-bill changers seem to work.)  As
a friend of mine put it, "the brain immediately rejects an infinite number
of wrong answers," when working on a problem.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: Fri 6 Jan 84 10:11:01-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: PSU's First AI Course

Wow!  I actually think it's kind of neat (but, of course, very wacko).  I
particularly like making people think about the ethical and philosophical
considerations at the same time as their thinking about minimax, etc.

------------------------------

Date: Wed 4 Jan 84 17:23:38-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: AIList Digest   V2 #1

[in response to Herb Lin's questions]

Well, 2 more or less answers 1.   One of the main reasons why Lisp and not C
is the language of many people's choice for AI work is that you can easily cons
up at run time a piece of data which "is" the next action you are going to
take.   In most languages you are restricted to choosing from pre-written
actions, unless you include some kind of interpreter right there in your AI
program.   Another reason is that Lisp has all sorts of extensibility.

As for 3, the obvious response is that in Pascal control has to be routed to an
IF statement before it can do any good, whereas in a production system, control
automatically "goes" to any production that is applicable.   This is highly
over-simplified and may not be the answer you were looking for.
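
The contrast can be sketched with a toy production system in Common Lisp
(a hypothetical miniature invented here, not any of the systems named in
this digest); whichever rule's test matches the working memory receives
control, without an explicit IF chain routing it there:

```lisp
;; Toy production system: each rule is a (test . action) pair over a
;; working-memory list.  Control "goes" to a matching rule directly.
(defparameter *rules*
  (list (cons (lambda (wm) (member 'hungry wm))
              (lambda (wm) (cons 'eating (remove 'hungry wm))))
        (cons (lambda (wm) (member 'tired wm))
              (lambda (wm) (cons 'sleeping (remove 'tired wm))))))

(defun run-once (wm)
  ;; Fire the first applicable rule; a real system would apply a
  ;; conflict-resolution strategy here instead of first-match.
  (let ((rule (find-if (lambda (r) (funcall (car r) wm)) *rules*)))
    (if rule (funcall (cdr rule) wm) wm)))

(run-once '(tired))   ; => (SLEEPING)
```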

                                                - Richard

------------------------------

Date: Friday,  6 Jan 1984 13:10-PST
From: narain@rand-unix
Subject: Reply to Herb Lin: Why is Lisp good for AI?


A central issue in AI is knowledge representation.  Experimentation with  a
new  KR  scheme  often involves defining a new language. Often, definitions
and meanings of new  languages  are  conceived  of  naturally  in  terms of
recursive (hierarchical) structures.  For instance, many grammars of
English-like front ends are recursive; so are production-system
definitions and theorem provers.

The abstract machinery  underlying  Lisp,  the  Lambda  Calculus,  is  also
inherently recursive, yet very simple and powerful.  It involves the notion
of function application to symbolic expressions.  Functions can  themselves
be  symbolic  expressions.  Symbolic expressions provide a basis for SIMPLE
implementation   and   manipulation   of   complex   data/knowledge/program
structures.

It is therefore possible to easily interpret  new  language  primitives  in
terms of Lisp's already very high level primitives.  Thus, Lisp is a  great
"machine language" for AI.
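
A standard small example of this (a sketch only; the WHILE here is
deliberately not a built-in primitive): a new control construct is
defined directly in terms of Lisp's existing primitives.

```lisp
;; Interpreting a new language primitive in terms of Lisp's own:
;; WHILE is not standard, so we define it with a macro that expands
;; into LOOP, UNLESS, and RETURN.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

(let ((n 0))
  (while (< n 5) (incf n))
  n)   ; => 5
```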

The usefulness of a well understood, powerful, abstract  machinery  of  the
implementation language is probably more obvious when we  consider  Prolog.
The  logical  interpretation of Prolog programs helps considerably in their
development and verification.  Logic is a convenient specification language
for  a  lot  of  AI, and it is far easier to 'compile' those specifications
into a logic language like Prolog than  into  Pascal.  For  instance,  take
natural  language  front ends implemented in DCGs or database/expert-system
integrity and redundancy constraints.

The fact that programs can be considered as data is not true only of  Lisp.
Even in Pascal you can analyze a Pascal program.  The nice thing  in  Lisp,
however,  is  that  because  of  its  few  (but  very powerful) primitives,
programs tend to be simply structured and concise  (cf.  claims  in  recent
issues  of  this  bulletin that Lisp programs were much shorter than Pascal
programs).  So naturally it is simpler to analyze  Lisp  programs  in  Lisp
than it is to analyze Pascal programs in Pascal.
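
For example (a toy analyzer invented here for illustration), a few
lines of Lisp suffice to ask whether a Lisp program, viewed as data,
contains a call to a given function:

```lisp
;; A Lisp program is itself a list, so analyzing it is ordinary
;; list processing.  CALLS-P walks FORM as data, looking for FN in
;; operator position anywhere in the nested structure.
(defun calls-p (fn form)
  (and (consp form)
       (or (eq (first form) fn)
           (some (lambda (sub) (calls-p fn sub)) (rest form)))))

(calls-p '* '(defun square (x) (* x x)))   ; => T
```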

Of course,  Lisp  environments  have  evolved  for  over  two  decades  and
contribute  no  less to its desirability for AI.  Some of the nice features
include screen-oriented editors, interactiveness, debugging facilities, and
an extremely simple syntax.

I would greatly appreciate any comments on the above.

Sanjai Narain
Rand.

------------------------------

Date: 6 Jan 84 13:20:29-PST (Fri)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Herb Lin's questons on LISP etc.
Article-I.D.: mit-eddi.1129

One of the problems with LISP, however, is that it does not force one
to subscribe to a code of good programming practices.  I've found
that the things I have written for my bridge-playing program (over
the last 18 months or so) have gotten incredibly crufty, with
some real brain-damaged patches.  Yeah, I realize it's my fault;
I'm not complaining about it because I love LISP, I just wanted
to mention some of the pitfalls for people to think about.  Right
now, I'm in the process of weeding out the cruft, trying to make
it more clearly modular, decrease the number of similar functions
and so on.  Sigh.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 7 January 1984 15:08 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: my questions of last Digest on differences between PASCAL
         and LISP

So many people replied that I send my thanks to all via the list.  I
very much appreciate the time and effort people put into their
comments.

------------------------------

End of AIList Digest
********************
Date: Tue 10 Jan 1984 09:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #6
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 6

Today's Topics:
  Humor,
  Seminars - Programming Styles & ALICE & 5th Generation,
  Courses - Geometric Data Structures & Programming Techniques & Linguistics
----------------------------------------------------------------------

Date: Mon, 9 Jan 84 08:45 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: An AI Joke

Last week a cartoon appeared in our local (Rochester NY) paper.  It was
by a fellow named Toles, a really excellent editorial cartoonist who
works out of, of all places, Buffalo:

Panel 1:

[medium view of the Duckburg Computer School building.  A word balloon
extends from one of the windows]
"A lot of you wonder why we have to spend so much time studying these
things."

Panel 2:

[same as panel 1]
"It so happens that they represent a lot of power.  And if we want to
understand and control that power, we have to study them."

Panel 3:

[interior view of a classroom full of personal computers.  At right,
several persons are entering.  At left, a PC speaks]
". . .so work hard and no talking.  Here they come."

Tickler (a mini-cartoon down in the corner):

[a lone PC speaks to the cartoonist]
"But I just HATE it when they touch me like that. . ."


Mark

------------------------------

Date: Sat, 7 Jan 84 20:02 PST
From: Vaughan Pratt <pratt@navajo>
Subject: Imminent garbage collection of Peter Coutts.  :=)

  [Here's another one, reprinted from the SU-SCORE bboard.  -- KIL]

Les Goldschlager is visiting us on sabbatical from Sydney University, and
stayed with us while looking for a place to stay.  We belatedly pointed him
at Peter Coutts, which he immediately investigated and found a place to
stay right away.  His comment was that no pointer to Peter Coutts existed
in any of the housing assistance services provided by Stanford, and that
therefore it seemed likely that it would be garbage collected soon.
-v

------------------------------

Date: 6 January 1984 23:48 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Seminar on Programming Styles in AI

                     DATE:      Thursday, January 12, 1984
                     TIME:      3.45 p.m.  Refreshments
                                4.00 p.m.  Lecture
                     PLACE:     NE43-8th Floor, AI Playroom


               PROGRAMMING STYLES IN ARTIFICIAL INTELLIGENCE

                              Herbert Stoyan
                   University of Erlangen, West Germany

                               ABSTRACT

Not much is clear about the scientific methods used in AI research.
Scientific methods are sets of rules used to collect knowledge about the
subject being researched.  AI is an experimental branch of computer science
which does not seem to use established programming methods.  In several
works on AI we can find the following method:

    1.  develop a new convenient programming style

    2.  invent a new programming language which supports the new style
        (or embed some appropriate elements into an existing AI language,
        such as LISP)

    3.  implement the language (interpretation as a first step is
        typically less efficient than compilation)

    4.  use the new programming style to make things easier.

A programming style is a way of programming guided by a speculative view of
a machine which works according to the programs.  A programming style is
not a programming method.  It may be detected by analyzing the text of a
completed program.  In general, it is possible to program in one
programming language according to the principles of various styles.  This
is true in spite of the fact that programming languages are usually
designed with some machine model (and therefore with some programming
style) in mind.  We discuss some of the AI programming styles.  These
include operator-oriented, logic-oriented, function-oriented, rule-
oriented, goal-oriented, event-oriented, state-oriented, constraint-
oriented, and object-oriented. (We shall not however discuss the common
instruction-oriented programming style).  We shall also give a more detailed
discussion of how an object-oriented programming style may be used in
conventional programming languages.

HOST:  Professor Ramesh Patil

------------------------------

Date: Mon 9 Jan 84 14:09:07-PST
From: Laws@SRI-AI
Subject: SRI Talk on ALICE, 1/23, 4:30pm, EK242


ALICE:  A parallel graph-reduction machine for declarative and other
languages.

SPEAKER -  John Darlington, Department of Computing, Imperial College,
           London
WHEN    -  Monday, January 23, 4:30pm
WHERE   -  AIC Conference Room, EK242

     [This is an SRI AI Center talk.  Contact Margaret Olender at
     MOLENDER@SRI-AI or 859-5923 if you would like to attend.  -- KIL]

                           ABSTRACT

ALICE is a highly parallel graph-reduction machine being designed and
built at Imperial College.  Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.

This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed.  A 24-processor prototype is planned for early
1985.  This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages. ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.

------------------------------

Date: 9 Jan 1984 1556-PST
From: OAKLEY at SRI-CSL
Subject: SRI 5th Generation Talk


  Japan's 5th Generation Computer Project: Past, Present, and Future
      -- personal observations by a researcher of
         ETL (ElectroTechnical Laboratory)

                          Kokichi FUTATSUGI
                    Senior Research Scientist, ETL
                    International Fellow, SRI-CSL


    Talk on January 24, 1984, in conference room EL369 at 10:00am.
    [This is an SRI Computer Science Laboratory talk.  Contact Mary Oakley
    at OAKLEY@SRI-AI or 859-5924 if you would like to attend.  -- KIL]


1 Introduction
  * general overview of Japan's research activities in
    computer science and technology
  * a personal view

2 Past -- pre-history of ICOT (the Institute of New Generation
  Computer Technology)
  * ETL's PIPS project
  * preliminary research and study activities
  * the establishment of ICOT

3 Present -- present activities
  * the organization of ICOT
  * research activities inside ICOT
  * research activities outside ICOT

4 Future -- ICOT's plans and general overview
  * ICOT's plans
  * relations to other research activities
  * some comments

------------------------------

Date: Thu 5 Jan 84 16:41:57-PST
From: Martti Mantyla <MANTYLA@SU-SIERRA.ARPA>
Subject: Data Structures & Algorithms for Geometric Problems

                    [Reprinted from the SU-SCORE bboard.]

                                  NEW COURSE:
                     EE392 DATA STRUCTURES AND ALGORITHMS
                            FOR GEOMETRIC PROBLEMS


Many problems arising in science and engineering deal with geometric
information.  Engineering design is most often a spatial activity, in
which a physical shape with certain desired properties must be created.
Engineering analysis also relies heavily on information about the
geometric form of the object.

The seminar Data Structures and Algorithms for Geometric  Problems  deals  with
problems  related to representing and processing data on the geometric shape of
an object in a computer.    It  will  concentrate  on  practically  interesting
solutions to tasks such as

   - representation of digital images,
   - representation of line figures,
   - representation of three-dimensional solid objects, and
   - representation of VLSI circuits.

The  point  of  view  taken  is  hence  slightly  different  from a "hard-core"
Computational Geometry view that  puts  emphasis  on  asymptotic  computational
complexity.    In  practice,  one  needs solutions that can be implemented in a
reasonable  time,  are  efficient  and  robust  enough,  and  can  support   an
interesting scope of applications.  It is of growing importance to find
representations and algorithms for geometry that are appropriate for
implementation in special hardware, and in VLSI in particular.

The seminar will be headed by

    Dr. Martti Mantyla (MaM)
    Visiting Scholar
    CSL/ERL 405
    7-9310
    MANTYLA@SU-SIERRA.ARPA

who will give introductory talks.  Guest speakers of the seminar include
well-known scientists and practitioners of the field such as Dr. Leo Guibas and
Dr. John Ousterhout.  Classes are held on

                             Tuesdays, 2:30 - 3:30
                                      in
                                    ERL 126

First class will be on 1/10.

The seminar should be of interest to  CS/EE  graduate  students  with  research
interests   in   computer   graphics,   computational   geometry,  or  computer
applications in engineering.

------------------------------

Date: 6 Jan 1984 1350-EST
From: KANT at CMU-CS-C.ARPA
Subject: AI Programming Techniques Course

                  [Reprinted from the CMUC bboard.]


           Announcing another action-packed AI mini-course!
                 Starting soon in the 5409 near you.

This course covers a variety of AI programming techniques and languages.
The lectures will assume a background equivalent to an introductory AI course
(such as the undergraduate course 15-380/381 or the graduate core course
15-780.)  They also assume that you have had at least a brief introduction to
LISP and a production-system language such as OPS5.

       15-880 A,  Artificial Intelligence Programming Techniques
                         MW 2:30-3:50, WeH 5409


T Jan 10        (Brief organizational meeting only)
W Jan 11        LISP: Basic Pattern Matching (Carbonell)
M Jan 16        LISP: Deductive Data Bases (Steele)
W Jan 18        LISP: Basic Control: backtracking, demons (Steele)
M Jan 23        LISP: Non-Standard Control Mechanisms (Carbonell)
W Jan 25        LISP: Semantic Grammar Interpreter (Carbonell)
M Jan 30        LISP: Case-Frame interpreter (Hayes)
W Feb 1         PROLOG I (Steele)
M Feb 6         PROLOG II (Steele)
W Feb 8         Reason Maintenance and Comparison with PROLOG (Steele)
M Feb 13        AI Programming Environments and Hardware I (Fahlman)
W Feb 15        AI Programming Environments and Hardware II (Fahlman)
M Feb 20        Schema Representation Languages I (Fox)
W Feb 22        Schema Representation Languages II (Fox)
W Feb 29        User-Interface Issues in AI (Hayes)
M Mar 5         Efficient Game Playing and Searching (Berliner)
W Mar 7         Production Systems: Basic Programming Techniques (Kant)
M Mar 12        Production Systems: OPS5 Programming (Kant)
W Mar 14        Efficiency and Measurement in Production Systems (Forgy)
M Mar 16        Implementing Diagnostic Systems as Production Systems (Kahn)
M Mar 26        Intelligent Tutoring Systems: GRAPES and ACT Implementations
                     (Anderson)
W Mar 28        Explanation and Knowledge Acquisition in Expert Systems
                     (McDermott)
M Apr 2         A Production System for Problem Solving: SOAR2 (Laird)
W Apr 4         Integrating Expert-System Tools with SRL (KAS, PSRL, PDS)
                     (Rychener)
M Apr 9         Additional Expert System Tools: EMYCIN, HEARSAY-III, ROSIE,
                   LOOPS, KEE (Rosenbloom)
W Apr 11        A Modifiable Production-System Architecture: PRISM (Langley)
M Apr 16        (additional topics open to negotiation)

------------------------------

Date: 9 Jan 1984 1238:48-EST
From: Lori Levin <LEVIN@CMU-CS-C.ARPA>
Subject: Linguistics Course

                  [Reprinted from the CMUC bboard.]

NATURAL LANGUAGE SYNTAX FOR COMPUTER SCIENTISTS

FRIDAYS  10:00 AM - 12:00
4605 Wean Hall

Lori Levin
Richmond Thomason
Department of Linguistics
University of Pittsburgh

This is an introduction to recent work in generative syntax.  The
course will deal with the formalism of some of the leading syntactic
theories as well as with methodological issues.  Computer scientists
find the formalism used by syntacticians easy to learn, and so the
course will begin at a fairly advanced level, though no special
knowledge of syntax will be presupposed.

We will begin with a sketch of the "Standard Theory," Chomsky's
approach of the mid-60's from which most of the current theories have
evolved.  Then we will examine Government-Binding Theory, the
transformational approach now favored at M.I.T.  Finally, we will
discuss in more detail two nontransformational theories that are more
computationally tractable and have figured in joint research projects
involving linguists, psychologists, and computer scientists:
Lexical-Functional Grammar and Generalized Context-Free Phrase
Structure Grammar.

------------------------------

End of AIList Digest
********************
Date: Mon 16 Jan 1984 21:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #7
To: AIList@SRI-AI


AIList Digest            Tuesday, 17 Jan 1984       Volume 2 : Issue 7

Today's Topics:
  Production Systems - Requests,
  Expert Systems - Software Debugging Aid,
  Logic Programming - Prolog Textbooks & Disjunction Problem,
  Alert - Fermat's Last Theorem Proven?,
  Seminars - Multiprocessing Lisp & Lisp History,
  Conferences - Logic Programming Discount & POPL'84,
  Courses - PSU's First AI Course & Net AI Course
----------------------------------------------------------------------

Date: 11 Jan 1984 1151-PST
From: Jay <JAY@USC-ECLC>
Subject: Request for production systems

  I would like pointers  to free or  public domain production  systems
(running on Tops-20, Vax-Unix, or Vax-Vms) both interpreters (such  as
ross) and systems built up on them (such as emycin).  I am  especially
interested in Rosie, Ross, Ops5, and Emycin.  Please reply directly to
me.
j'

ARPA: jay@eclc

------------------------------

Date: Thu 12 Jan 84 12:13:20-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Taxonomy of Production Systems

I'm looking for info on a formal taxonomy of production rule systems,
sufficiently precise that it can distinguish OPS5 from YAPS, but also say
that they're more similar than either of them is to Prolog.  The only
relevant material I've seen is the paper by Davis & King in MI 8, which
characterizes PSs in terms of syntax, complexity of LHS and RHS, control
structure, and "programmability" (seems to mean meta-rules).  This is
a start, but too vague to be implemented.  A formal taxonomy should
indicate where "holes" exist, that is, strange designs that nobody has
built.  Also, how would Georgeff's (Stanford STAN-CS-79-716) notion of
"controlled production systems" fit in?  He showed that CPSs are more
general than PSs, but then one can also show that any CPS can be represented
by some ordinary PS.  I'm particularly interested in formalization of
the different control strategies - are text order selection (as in Prolog)
and conflict resolution (as in OPS5) mutually exclusive, or can they be
intermixed (perhaps using text order to find 5 potential rules, then
conflict resolution to choose among the 5).  Presumably a sufficiently
precise taxonomy could answer these sorts of questions.  Has anyone
looked at these questions?

                                                        stan shebs

------------------------------

Date: 16 Jan 84 19:13:21 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Expert systems for software debugging?

Debugging is a black art, not at all algorithmic, but almost totally
heuristic.  There is a lot of expert knowledge around about how to debug
faulty programs, but it is rarely written down or systematized.  Usually
it seems to reside solely in the minds of a few "debugging whizzes".

Does anyone know of an expert system that assists in software debugging?
Or any attempts (now or in the past) to produce such an expert?

/Ron

------------------------------

Date: 12 Jan 84 20:43:31-PST (Thu)
From: harpo!floyd!clyde!akgua!sb1!mb2c!uofm-cv!lah @ Ucb-Vax
Subject: prolog reference
Article-I.D.: uofm-cv.457

Could anybody give some references to a good introductory book
on Prolog?

------------------------------

Date: 14 Jan 84 14:50:57-PST (Sat)
From: decvax!duke!mcnc!unc!bts @ Ucb-Vax
Subject: Re: prolog reference
Article-I.D.: unc.6594

There's only one introductory book I know of, that's Clocksin
and Mellish's "Programming in Prolog", Springer-Verlag, 1981.
It's a silver paperback, probably still under $20.00.

For more information on the language, try Clark and Tarnlund's
"Logic Programming", Academic Press, 1982.  It's a white hardback
with an elephant on the cover.  The papers by Bruynooghe
and by Mellish tell a lot about Prolog implementation.

Bruce Smith, UNC-Chapel Hill
decvax!duke!unc!bts     (USENET)
bts.unc@CSnet-Relay (lesser NETworks)

------------------------------

Date: 13 Jan 84 8:11:49-PST (Fri)
From: hplabs!hao!seismo!philabs!sbcs!debray @ Ucb-Vax
Subject: re: trivial reasoning problem?
Article-I.D.: sbcs.572

Re: Marcel Schoppers' problem: given two lamps A and B, such that:

        condition 1) at least one of them is on at any time; and
        condition 2) if A is on then B is off,

        we are to enumerate the possible configurations without an exhaustive
        generate-and-test strategy.

The following "pure" Prolog program will generate the various
configurations without exhaustively generating all possible combinations:


  config(A, B) :- cond1(A, B), cond2(A, B).   /* both conditions must hold */

  cond1(1, _).    /* at least one is on at any time ... condition 1 above */
  cond1(_, 1).

  cond2(1, 0).    /* if A is on then B is off */
  cond2(0, _).    /* if A is off, B's value is a don't care */

executing Prolog gives:

| ?- config(A, B).

A = 1
B = 0 ;

A = 0
B = 1 ;

no
| ?- halt.
[ Prolog execution halted ]

Tracing the program shows that the configuration "A=0, B=0" is not generated.
This satisfies the "no-exhaustive-listing" criterion.  Note that encoding
the second condition above using "not" would both (1) fall outside pure
Horn clauses and (2) amount to exhaustive generation and filtering.

Saumya Debray
Dept. of Computer Science
SUNY at Stony Brook

                {floyd, bunker, cbosgd, mcvax, cmcl2}!philabs!
                                                              \
        Usenet:                                                sbcs!debray
                                                              /
                   {allegra, teklabs, hp-pcd, metheus}!ogcvax!
        CSNet: debray@suny-sbcs@CSNet-Relay


[Several other messages discussing this problem and suggesting Prolog
code were printed in the Prolog Digest.  Different writers suggested
very different ways of structuring the problem.  -- KIL]


------------------------------

Date: Fri 13 Jan 84 11:16:21-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Fermat's Last Theorem Proven?

                [Reprinted from the UTEXAS-20 bboard.]

There was a report last night on National Public Radio's All Things Considered
about a British mathematician named Arnold Arnold who claims to have
developed a new technique for dealing with multi-variable, high-dimensional
spaces.  The method apparently makes generation of large prime numbers
very easy, and has applications in genetics, the many-body problem, orbital
mechanics, etc.  Oh yeah, the proof to Fermat's Last Theorem falls out of
this as well!  The guy apparently has no academic credentials, and refuses
to publish in the journals because he's interested in selling his technique.
There was another mathematician named Jeffrey Colby who had been allowed
to examine Arnold's work on the condition he didn't disclose anything.
He claims the technique is all it's claimed to be, and shows what can
be done when somebody starts from pure ignorance not clouded with some
of the preconceptions of a formal mathematical education.

If anybody hears more about this, please pass it along.

Clive

------------------------------

Date: 12 Jan 84  2350 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Next week's CSD Colloquium.

                [Reprinted from the SU-SCORE bboard.]

  Dr. Richard P. Gabriel, Stanford CSD
  ``Queue-based Multi-processing Lisp''
  4:30pm Terman Auditorium, Jan 17th.

As the need for high-speed computers increases, the need for
multi-processors will be become more apparent. One of the major stumbling
blocks to the development of useful multi-processors has been the lack of
a good multi-processing language---one which is both powerful and
understandable to programmers.

Among the most compute-intensive programs are artificial intelligence (AI)
programs, and researchers hope that the potential degree of parallelism in
AI programs is higher than in many other applications.  In this talk I
will propose a version of Lisp which is multi-processed.  Unlike other
proposed multi-processing Lisps, this one will provide only a few very
powerful and intuitive primitives rather than a number of parallel
variants of familiar constructs.

The talk will introduce the language informally, and many examples along
with performance results will be shown.

------------------------------

Date: 13 January 1984 07:36 EST
From: Kent M Pitman <KMP @ MIT-MC>
Subject: What is Lisp today and how did it get that way?

                 [Reprinted from the MIT-MC bboard.]

                        Modern Day Lisp

        Time:   3:00pm
        Date:   Wednesdays and Fridays, 18-27 January
        Place:  8th Floor Playroom

The Lisp language has changed significantly in the past 5 years. Modern
Lisp dialects bear only a superficial resemblance to each other and to
their common parent dialects.

Why did these changes come about? Has progress been made? What have we
learned in 5 hectic years of rapid change? Where is Lisp going?

In a series of four lectures, we'll be surveying a number of the key
features that characterize modern day Lisps. The current plan is to touch
on at least the following topics:


        Scoping. The move away from dynamic scoping.
        Namespaces. Closures, Locales, Obarrays, Packages.
        Objects. Actors, Capsules, Flavors, and Structures.
        Signals. Errors and other unusual conditions.
        Input/Output. From streams to window systems.


The discussions will be more philosophical than technical. We'll be
looking at several Lisp dialects, not just one. These lectures are not
just something for hackers. They're aimed at just about anyone who uses
Lisp and wants an enhanced appreciation of the issues that have shaped
its design and evolution.

As it stands now, I'll be giving all of these talks, though there
is some chance there will be some guest lecturers on selected
topics. If you have questions or suggestions about the topics to be
discussed, feel free to contact me about them.

                        Kent Pitman (KMP@MC)
                        NE43-826, x5953

------------------------------

Date: Wed 11 Jan 84 16:55:02-PST
From: PEREIRA@SRI-AI.ARPA
Subject: IEEE Logic Programming Symposium (update)

              1984 International Symposium on
                      Logic Programming

                 Student Registration Rates


In our original symposium announcements, we failed to offer a student
registration rate. We would like to correct that situation now.
Officially enrolled students may attend the symposium for the reduced
rate of $75.00.

This rate includes the symposium itself (all three days) and one copy
of the symposium proceedings. It does not include the tutorial, the
banquet, or cocktail parties.  It does, however, include the Casino
entertainment show.

Questions and requests for registration forms by US mail to:

   Doug DeGroot                           Fernando Pereira
   Program Chairman                       SRI International
   IBM Research                    or     333 Ravenswood Ave.
   P.O. Box 218                           Menlo Park, CA 94025
   Yorktown Heights, NY 10598             (415) 859-5494
   (914) 945-3497

or by net mail to:

                  PEREIRA@SRI-AI (ARPANET)
                  ...!ucbvax!PEREIRA@SRI-AI (UUCP)

------------------------------

Date: Tue 10 Jan 84 15:54:09-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: *** P O P L 1984 --- Announcement ***

*******************************  POPL 1984 *********************************

                              ELEVENTH ANNUAL

                            ACM SIGACT/SIGPLAN

                               SYMPOSIUM ON

                               PRINCIPLES OF

                           PROGRAMMING LANGUAGES


    *** POPL 1984 will be held in Salt Lake City, Utah January 15-18. ****
  (The skiing is excellent, and the technical program threatens to match it!)

For additional details, please contact

        Prof. P. A. Subrahmanyam
        Department of Computer Science
        University of Utah
        Salt Lake City, Utah 84112.

        Phone: (801)-581-8224

ARPANET: Subrahmanyam@UTAH-20 (or Subra@UTAH-20)


------------------------------

Date: 12 Jan 84 4:51:51-PST (Thu)
From: 
Subject: Re: PSU's First AI Course - Comment
Article-I.D.: sjuvax.108

I would rather NOT get into social issues of AI: there are millions of
forums for that (and I myself have all kinds of feelings and reservations
on the issue, including Vedantic interpretations), so let us keep this
one technical, please.

------------------------------

Date: 13 Jan 84 11:42:21-PST (Fri)
From: 
Subject: Net AI course -- the communications channel
Article-I.D.: psuvax.413

Responses so far have strongly favored my creating a moderated newsgroup
as a sub to net.ai for this course.  Most were along these lines:

    From: ukc!srlm (S.R.L.Meira)

    I think you should act as the moderator, otherwise there would be too
    much noise - in the sense of unordered information and discussions -
    and it could finish looking like just another AI newsgroup argument.
    Anybody is of course free to post whatever they want if they feel
    the thing is not coming out like they want.

Also, if the course leads to large volume, many net.ai readers (busy AI
professionals rather than students) might drop out of net.ai.

For a contrasting position:

    From: cornell!nbires!stcvax!lat

    I think the course should be kept as a newsgroup.  I don't think
    it will increase the nation-wide phone bills appreciably beyond
    what already occurs due to net.politics, net.flame, net.religion
    and net.jokes.

So HERE's how I'll try to keep EVERYBODY happy ...    :-)

... a "three-level" communication channel.  1: a "free-for-all" via mail
(or possibly another newsgroup), 2: a moderated newsgroup sub to net.ai,
3: occasional abstracts, summaries, pointers posted to net.ai and AIList.

People can then choose the extent of their involvement and set their own
"bull-rejection threshold".  (1) allows extensive involvement and flaming,
(2) would be the equivalent of attending a class, and (3) makes whatever
"good stuff" evolves from the course available to all others.

The only remaining question: should (1) be done via a newsgroup or mail?

Please send in your votes -- I'll make the final decision next week.

Now down to the REALLY BIG decisions: names.  I suggest "net.ai.cse"
for level (2).  The "cse" can EITHER mean "Computer Science Education"
or abbreviate "course".  For level (1), how about "net.ai.ffa" for
"free-for-all", or .raw, or .disc, or .bull, or whatever.

Whatever I create gets zapped at end of course (June), unless by then it
has taken on a life of its own.

        -- Bob

[PS to those NOT ON USENET: please mail me your address for private
mailings -- and indicate which of the three "participation levels"
best suits your tastes.]

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 17-Jan-84 22:52:40
Date: Tue 17 Jan 1984 22:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #8
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 Jan 1984      Volume 2 : Issue 8

Today's Topics:
  Programming Languages - Lisp for IBM,
  Intelligence - Subcognition,
  Seminar - Knowledge-Based Design Environment
----------------------------------------------------------------------

Date: Thu 12 Jan 84 15:07:55-PST
From: Jeffrey Mogul <MOGUL@SU-SCORE.ARPA>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

        Does anyone know of LISP implementations for IBM 370--3033--308x?

Reminds me of an old joke:
        How many IBM machines does it take to run LISP?

        Answer: two -- one to send the input to the PDP-10, one
                to get the output back.

------------------------------

Date: Thursday, 12 Jan 1984 21:28-PST
From: Steven Tepper <greep@SU-DSN>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

Well, I used Lisp on a 360 once, but I certainly wouldn't recommend
that version (I don't remember where it came from anyway -- the authors
were probably so embarrassed they wanted to remain anonymous).  It
was, of course, a batch system, and its only output mode was "uglyprint" --
no matter what the input looked like, the output would just be printed
120 columns to a line.

------------------------------

Date: Fri 13 Jan 84 06:55:00-PST
From: Ethan Bradford <JLH.BRADFORD@SU-SIERRA.ARPA>
Subject: LISP (INTERLISP) for IBM

                [Reprinted from the SU-SCORE bboard.]

Chris Ryland (CPR@MIT-XX) sent out a query on this before and he got back
many good responses (he gave me copies).  The main thing most people said
is that a version was developed at Uppsala in Sweden in the '70s.  One
person gave an address to write to, which I transcribe here with no
guarantees of currency:
    Klaus Appel
    UDAC
    Box 2103
    750 02 Uppsala
    Sweden
    Phone: 018-11 13 30

------------------------------

Date: 13 Jan 84  0922 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Lisp for IBM machines

                [Reprinted from the SU-SCORE bboard.]

Standard Lisp runs quite well on the IBM machines.
The folks over at IMSSS on campus know all about it --
they have written several large theorem proving/CAI programs for
that environment.

------------------------------

Date: 11 January 1984 06:27 EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: intelligence and genius

I should have thought that if you can make a machine more or
less intelligent, and make another machine ABLE TO RECOGNIZE
GENIUS (it need not itself be able to "be" or "have" genius),
then the "genius machine" problem is probably solved: have the
somewhat intelligent one generate lots of ideas, with random
factors thrown in, and have the second "recognizing" machine
judge the products.
        Obviously they could be combined into one machine.
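[A minimal sketch of this generate-and-judge loop in Python.  The random
word-combiner and the variety score below are purely hypothetical stand-ins
for the two machines, invented for illustration:]

```python
# Generate-and-judge: one component proposes candidates with random
# variation, another scores them, and the best survivor is kept.
# The generator and the judge here are illustrative assumptions only.
import random

random.seed(0)

WORDS = ["machine", "genius", "random", "judge", "idea"]

def generate():
    """The 'somewhat intelligent' component: propose a random combination."""
    return tuple(random.choice(WORDS) for _ in range(3))

def judge(candidate):
    """The 'recognizing' component: score a candidate (here, word variety)."""
    return len(set(candidate))

# Combine the two into one loop: generate many ideas, keep the best-judged.
best = max((generate() for _ in range(100)), key=judge)
print(best, judge(best))
```

The point of the sketch is only the division of labor: the generator need
not be clever so long as the judge can recognize a good product.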

------------------------------

Date: Sunday, 15 January 1984, 00:18-EST
From: Marek W. Lugowski <MAREK%MIT-OZ@MIT-MC.ARPA>
Subject: Addressing DRogers' questions (at last) + on subcognition

    DROGERS (c. November '83):
      I have a few questions I would like to ask, some (perhaps most)
    essentially unanswerable at this time.

Apologies in advance for rashly attempting to answer at this time.

      - Should the initially constructed subcognitive systems be
    "learning" systems, or should they be "knowledge-rich" systems? That
    is, are the subcognitive structures implanted with their knowledge
    of the domain by the programmer, or is the domain presented to the
    system in some "pure" initial state?  Is the approach to
    subcognitive systems without learning advisable, or even possible?

I would go out on a limb and claim that attempting wholesale "learning"
first (whatever that means these days) is silly.  I would think one
would first want to spike the system with a hell of a lot of knowledge
(e.g., Dughof's "Slipnet" of related concepts, whose links are subject to
cumulative, partial activation which eventually makes the nodes so
connected highly relevant and therefore taken into consideration by the
system).  To repeat Minsky (and probably, most of the AI folk): one can
only learn if one already almost knows it.
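[A toy sketch of the cumulative, partial activation idea in Python.  The
network, the one-hop spreading rule, and the relevance threshold below are
invented for illustration and are not Hofstadter's actual Slipnet:]

```python
# Concepts are nodes; each activation adds to a node and passes a fraction
# one hop along its links.  Repeated small activations accumulate until a
# node crosses the threshold and counts as "highly relevant".
SPREAD = 0.5       # fraction of each activation passed to every neighbour
THRESHOLD = 1.0    # activation at which a node becomes highly relevant

links = {          # hypothetical related-concept links
    "letter": ["successor", "alphabet"],
    "successor": ["alphabet"],
    "alphabet": [],
}

activation = {node: 0.0 for node in links}

def activate(node, amount):
    """Add activation to a node and spread a fraction to its neighbours."""
    activation[node] += amount
    for neighbour in links[node]:
        activation[neighbour] += amount * SPREAD

# Four partial activations of "letter" accumulate; its neighbours become
# partially active but do not yet cross the threshold.
for _ in range(4):
    activate("letter", 0.4)

relevant = [n for n, a in activation.items() if a >= THRESHOLD]
print(relevant)    # -> ['letter']
```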

      - Assuming human brains are embodiments of subcognitive systems,
    then we know how they were constructed: a very specific DNA
    blueprint controlling the paths of development possible at various
    times, with large assumptions as to the state of the intellectual
    environment.  This grand process was created by trial-and-error
    through the process of evolution, that is, essentially random
    chance. How much (if any) of the subcognitive system must be created
    essentially by random processes? If essentially all, then there are
    strict limits as to how the problem should be approached.

This is an empirical question.  If my now-attempted implementation of
the Copycat Project (which uses the Slipnet described above)
[forthcoming MIT AIM #755 by Doug Hofstadter] will converge nicely, with
trivial tweaking, I'll be inclined to hold that random processes can
indeed do most of the work.  Such is my current, unfounded, belief.  On
the other hand, a failure will not debunk my position--I could always
have messed up implementationally and made bad guesses which "threw"
the system out of its potential convergence.

      - Which processes of the human brain are essentially subcognitive
    in construction, and which use other techniques? Is this balance
    optimal?  Which structures in a computational intelligence would be
    best approached subcognitively, and which by other methods?

Won't even touch the "optimal" question.  I would guess any process
involving a great deal of fan-in would need to be subcognitive in
nature.  This is argued from efficiency.  For now, and for want of
better theories, I'd approach ALL brain functions using subcognitive
models.  The alternative to this at present means von Neumannizing the
brain, an altogether quaint thing to do...

      - How are we to judge the success of a subcognitive system? The
    problems inherent in judging the "ability" of the so-called expert
    systems will be many times worse in this area. Without specific goal
    criteria, any results will be unsatisfying and potentially illusory
    to the watching world.

Performance and plausibility (in that order) ought to be our criteria.
Judging performance accurately, however, will continue to be difficult
as long as we are forced to use current computer architectures.
Still, if a subcognitive system converges at all on a LispM, there's no
reason to damn its performance.  Plausibility is easier to demonstrate;
one needs to keep in touch with the neurosciences to do that.

      - Where will thinking systems REALLY be more useful than (much
   refined) expert systems? I would guess that for many (most?)
   applications, expertise might be preferable to intelligence. Any
   suggestions about fields for which intelligent systems would have a
   real edge over (much improved) expert systems?

It's too early (or, too late?!) to draw such clean lines.  Perhaps REAL
thinking and expertise are much more intertwined than is currently
thought.  Anyway, there is nothing to be gained by pursuing that line of
questioning before WE learn how to explicitly organize knowledge better.


Overall, I defend pursuing things subcognitively for these reasons:

  -- Not expecting thinking to be a cleanly organized, top-down driven
  activity is minimizing one's expectations.  Compare thinking with such
  activities as cellular automata (e.g., The Game of Life) or The Iterated
  Pairwise Prisoner's Dilemma Game to convince yourself of the futility of
  top-down modeling where local rules and their iterated interactions are
  very successful at concisely describing the problem at hand.  No reason
  to expect the brain's top-level behavior to be any easier to explain
  away.

  -- AI has been spending a lot of itself on forcing a von Neumannian
  interpretation on the mind.  At CMU they have it down to an art, with
  Simon's "symbolic information processing" the nowadays proverbial Holy
  Grail.  With all due respect, I'd like to see more research devoted to
  modeling various alleged brain activities with a high degree of
  parallelism and probabilistic interaction, systems where "symbols" are
  not givens but intricately involved intermediates of computation.

  -- It has not been done carefully before and I want at least a thesis
  out of it.

                                -- Marek

------------------------------

Date: Mon, 16 Jan 1984  12:40 EST
From: GLD%MIT-OZ@MIT-MC.ARPA
Subject: minority report


     From: MAREK
     To repeat Minsky (and probably, most of the AI folk: one can
     only learn if one already almost knows it).

By "can only learn if..." do you mean "can't >soon< learn unless...", or
do you mean "can't >ever< learn unless..."?

If you mean "can't ever learn unless...", then the statement has the Platonic
implication that a person at infancy must "already almost know" everything she
is ever to learn.  This can't be true for any reasonable sense of "almost
know".

If you mean "can't soon learn unless...", then by "almost knows X", do you
intend:

 o a narrow interpretation, by which a person almost knows X only if she
   already has knowledge which is a good approximation to understanding X--
   eg, she can already answer simpler questions about X, or can answer
   questions about X, but with some confusion and error; or
 o a broader interpretation, which, in addition to the above, counts as
   "almost knowing X" a situation where a person might be completely in the
   dark about X-- say, unable to answer any questions about X-- but is on the
   verge of becoming an instant expert on X, say by discovering (or by being
   told of) some easy-to-perform mapping which reduces X to some other,
   already-well-understood domain.

If you intend the narrow interpretation, then the claim is false, since people
can (sometimes) soon learn X in the manner described in the broad-
interpretation example.  But if you intend the broad interpretation, then the
statement expands to "one can't soon learn X unless one's current knowledge
state is quickly transformable to include X"-- which is just a tautology.

So, if this analysis is right, the statement is either false, or empty.

------------------------------

Date: Mon, 16 Jan 1984  20:09 EST
From: MAREK%MIT-OZ@MIT-MC.ARPA
Subject: minority report

         From: MAREK
         To repeat Minsky (and probably, most of the AI folk): one can
         only learn if one already almost knows it.

    From: GLD
    By "can only learn if..." do you mean..."can't >ever< learn unless..."?

    If you mean "can't ever learn unless...", then the statement has
    the Platonic implication that a person at infancy must "already almost
    know" everything she is ever to learn.  This can't be true for any
    reasonable sense of "almost know".

I suppose I DO mean "can't ever learn unless".  However, I disagree
with your analysis.  The "Platonic implication" need not be what you
stated it to be if one cares to observe that some of the things an
entity can learn are...how to learn better and how to learn more.  My
original statement presupposes an existence of a category system--a
capacity to pigeonhole, if you will.  Surely you won't take issue with
the hypothesis that an infant's category system is poorer than that of
an adult.  Yet, faced with the fact that many infants do become
adults, we have to explain how the category system manages to grow
up, as well.

In order to do so, I propose that human learning
is a process in which, say, in order to assimilate a chunk of information
one has to have a hundred-, nay, a thousand-fold store of SIMILAR
chunks.  This is by direct analogy with physical growing up--it
happens very slowly, gradually, incrementally--and yet it happens.

If you recall, my original statement was made against attempting
"wholesale learning" as opposed to "knowledge-rich" systems when
building subcognitive systems.  Admittedly, the complexity of a human
being is many orders of magnitude beyond what AI will attempt
for decades to come, yet by observing the physical development of a
child we can arrive at some sobering tips for how to successfully
build complex systems.  Abandoning the utopia of having complex
systems just "self-organize" and pop out of simple interactions of a
few even simpler pieces is one such tip.

                                -- Marek

------------------------------

Date: Tue 17 Jan 84 11:56:01-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - JANUARY 20, 1984

         [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 20, 1984   12:05

LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry

SPEAKER:  Harold Brown
          Stanford University

TOPIC:    Palladio:  An Exploratory Environment for Circuit Design

Palladio is an environment for experimenting with design methodologies
and  knowledge-based  design   aids.   It  provides   the  means   for
constructing, testing  and incrementally  modifying design  tools  and
languages.  Palladio  is  a  testbed  for  investigating  elements  of
design including  specification,  simulation, refinement  and  use  of
previous designs.

For  the  designer,   Palladio  supports  the   construction  of   new
specification languages  particular to  the design  task at  hand  and
augmentation of  the  system's  expert knowledge  to  reflect  current
design goals  and constraints.   For the  design environment  builder,
Palladio provides several  programming paradigms:  rule based,  object
oriented,  data   oriented  and   logical  reasoning   based.    These
capabilities are largely provided by two of the programming systems in
which Palladio is implemented: LOOPS and MRS.

In this talk,  we will  describe the  basic design  concepts on  which
Palladio is  based,  give  examples  of  knowledge-based  design  aids
developed   within   the   environment,   and   describe    Palladio's
implementation.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 22-Jan-84 15:25:44
Date: Sun 22 Jan 1984 15:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #9
To: AIList@SRI-AI


AIList Digest            Monday, 23 Jan 1984        Volume 2 : Issue 9

Today's Topics:
  AI Culture - Survey Results Available,
  Digests - Vision-List Request,
  Expert Systems - Software Debugging,
  Seminars - Logic Programming & Bagel Architecture,
  Conferences - Principles of Distributed Computing
----------------------------------------------------------------------

Date: 18 Jan 84 14:50:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: How AI People Think - Cultural Premises of the AI Community...

                 [Reprinted from the Rutgers bboard.]

How AI People Think - Cultural Premises of the AI Community...
is the name of a report by sociologists at the University of Genoa, Italy,
based on a survey of AI researchers attending the International AI conference
(IJCAI-8) this past summer.  [...]

Smadar.

------------------------------

Date: Wed, 18 Jan 84 13:08:34 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: TO THOSE INTERESTED IN COMPUTER VISION, IMAGE PROCESSING, ETC

        This is the second notice directed to all of those interested
in IMAGE PROCESSING, COMPUTER VISION, etc.  There has been a great need,
and interest, in compiling a VISION list that caters to the specialized
needs and interests of those involved in image/vision processing/theory/
implementation.  I broadcast a message to this effect over this BBOARD
about three weeks ago asking for all those that are interested to
respond.  Again, I reiterate the substance of that message:

        1)  If you are interested in participating in a VISION list,
            and have not already expressed your interest to me,
            please do so!  NOW is the time to express that interest,
            since NOW is when the need for such a list is being
            evaluated.
        2)  I cannot moderate the list (due to a lack of the proper type
            of resources to deal with the increased mail traffic).  A
            moderator is DESPERATELY NEEDED!  I will assist you in
            establishing the list, and I am presently in contact with
            the moderator of AILIST (Ken LAWS@SRI-AI) to establish what
            needs to be done.  The job of moderator involves the
            following:
                i)   All mail for the list is sent to you
                ii)  You screen (perhaps, format or edit, depending upon
                     the time and effort you wish to expend) all
                     incoming messages, then redistribute them to the
                     participants on the list at regular intervals.
                iii) You maintain/update the distribution list.
           Needless to say, the job of moderator is extremely rewarding
           and involves a great deal of high visibility.  In addition,
           you get to GREATLY AID in the dissemination and sharing of
           ideas and information in this growing field.  Enough said...
        3) If you know of ANYONE that might be interested in such a
           list, PLEASE LET THEM KNOW and have them express that interest
           to me by sending mail to KAHN@UCLA-CS.ARPA

                                Now's the time to let me know!
                                Philip Kahn

                        send mail to:  KAHN@UCLA-CS.ARPA

------------------------------

Date: 19 Jan 84 15:14:04 EST
From: Lou <STEINBERG@RUTGERS.ARPA>
Subject: Re: Expert systems for software debugging

I don't know of any serious work in AI on software debugging since
HACKER.  HACKER was a part of the planning work done at MIT some years
ago - it was an approach to planning/automatic programming where
planning was done with a simple planner that, e.g., ignored
interactions between plan steps.  Then HACKER ran the plan/program and
had a bunch of mini-experts that detected various kinds of bugs.  See
Sussman, A Computer Model of Skill Acquisition, MIT Press, 1975.

Also, there is some related work in hardware debugging.  Are you aware
of the work by Randy Davis at MIT and by Mike Genesereth at Stanford on
hardware trouble shooting?  This is the problem where you have a piece
of hardware (e.g. a VAX) that used to work but is now broken, and you
want to isolate the component (board, chip, etc.) that needs to be
replaced.  Of course this is a bit different from program debugging,
since you are looking for a broken component rather than a mis-design.
E.g. for trouble shooting you can usually assume a single thing is
broken, but you often have multiple bugs in a program.

Here at Rutgers, we're working on an aid for design debugging for
VLSI.  Design debugging is much more like software debugging.  Our
basic approach is to use a signal constraint propagation method to
generate a set of possible places where the bug might be, and then use
various sorts of heuristics to prune the set (e.g.  a sub-circuit
that's been used often before is less likely to have a bug than a
brand new one).
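[The pruning step might be sketched as follows; the candidate format and the
heuristic score below are assumptions invented for illustration, not the
actual Rutgers system:]

```python
# Rank candidate fault locations with a heuristic that trusts often-reused
# sub-circuits more than brand-new ones: the fewer prior designs a
# sub-circuit has appeared in, the more suspect it is.

# (sub-circuit name, number of prior designs it appeared in) -- hypothetical
candidates = [("alu_carry", 40), ("new_decoder", 0), ("mux4", 12)]

def suspicion(candidate):
    """Heuristic score: a brand-new sub-circuit is more suspect."""
    _name, prior_uses = candidate
    return 1.0 / (1 + prior_uses)

ranked = sorted(candidates, key=suspicion, reverse=True)
print([name for name, _ in ranked])   # -> ['new_decoder', 'mux4', 'alu_carry']
```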

------------------------------

Date: Fri, 20 Jan 84 8:39:38 EST
From: Paul Broome <broome@brl-bmd>
Subject: Re:  Expert systems for software debugging?


        Debugging is a black art, not at all algorithmic, but almost totally
        heuristic.  There is a lot of expert knowledge around about how
        to debug faulty programs, but it is rarely written down or
        systematized.  Usually it seems to reside solely in the minds of
        a few "debugging whizzes".

        Does anyone know of an expert system that assists in software
        debugging? Or any attempts (now or in the past) to produce such
        an expert?

There are some good ideas and a Prolog implementation in Ehud Shapiro's
Algorithmic Program Debugging, which is published as an ACM distinguished
dissertation by MIT Press, 1983.  One of his ideas is "divide-and-query:
a query-optimal diagnosis algorithm," which is essentially a simple binary
bug search.  If the program is incorrect on some input, then its
computation tree is divided into two roughly equal subtrees and the
computation backtracks to the midpoint.  If this intermediate result is
correct, then the first subtree is ignored and the bug search is repeated
on the second subtree.  If the intermediate result is incorrect, then the
search continues instead on the first subtree.
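[The bisection at the heart of divide-and-query can be sketched in Python.
The linear trace of intermediate results and the correctness oracle below
are illustrative stand-ins for Shapiro's Prolog computation tree, not his
actual implementation:]

```python
# Binary bug search: given an ordered trace of intermediate results and an
# oracle that judges each one, find the earliest incorrect step.

def first_bad_step(trace, is_correct):
    """Return the index of the earliest incorrect intermediate result.

    Assumes correctness is monotone: once the computation goes wrong,
    every later intermediate result is also judged incorrect.
    """
    lo, hi = 0, len(trace) - 1          # invariant: bug lies in trace[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_correct(trace[mid]):      # midpoint fine: discard first half
            lo = mid + 1
        else:                           # midpoint wrong: bug here or earlier
            hi = mid
    return lo

# Toy example: a "computation" whose results go wrong from step 6 onward.
trace = list(range(10))
print(first_bad_step(trace, lambda step: step < 6))   # -> 6
```

Each query halves the suspect region, so the bug is localized with only
logarithmically many questions to the oracle.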

------------------------------

Date: 20 Jan 84 19:25:30-PST (Fri)
From: pur-ee!uiucdcs!nielsen @ Ucb-Vax
Subject: Re: Expert systems for software debugging - (nf)
Article-I.D.: uiucdcs.4980

The Knowledge Based Programming Assistant Project here at the University of
Illinois was founded as a result of a very similar proposal.
A thesis you may be interested in which explains some of our work is
"GPSI : An Expert System to Aid in Program Debugging" by Andrew Laursen
which should be available through the university.

I would be very interested in corresponding with anyone who is considering
the use of expert systems in program debugging.

                                        Paul Nielsen
                                        {pur-ee, ihnp4}!uiucdcs!nielsen
                                        nielsen@uiucdcs

------------------------------

Date: 01/19/84 22:25:55
From: PLUKEL
Subject: January Monthly Meeting, Greater Boston Chapter/ACM

                 [Forwarded from MIT by SASW@MIT-MC.]


        On behalf of GBC/ACM,  J. Elliott Smith, the Lecture Chairman, is
        pleased to present a discussion on the topic of

                                LOGIC PROGRAMMING

                              Henryk Jan Komorowski
                          Division of Applied Sciences
                               Harvard University
                            Cambridge, Massachusetts

             Dr. Komorowski is an Assistant Professor of Computer Science,
        who  received  his MS from  Warsaw University  and  his Phd  from
        Linkoeping University, Linkoeping, Sweden, in 1981.   His current
        research interests include applications of logic programming  to:
        rapid  prototyping,  programming/specification development envir-
        onments, expert systems, and databases.

             Dr.  Komorowski's  articles have appeared in proceedings  of
        the  IXth  POPL,  the 1980 Logic Programming Workshop  (Debrecen,
        Hungary),  and the book "Logic Programming",  edited by Clark and
        Taernlund.   He  acted  as Program Chairman for the  recent  IEEE
        Prolog tutorial at Brandeis University, is serving on the Program
        Committee  of  the  1984 Logic  Programming  Symposium  (Atlantic
        City),  and is a member of the Editorial Board of THE JOURNAL  OF
        LOGIC PROGRAMMING.

             Prolog  has been selected as the programming language of the
        Japanese  Fifth  Generation Computer Project.   It is  the  first
        realization of logic programming ideas,  and implements a theorem
        prover  based  on a design attributed  to  J.A.  Robinson,  which
        limits resolution to a Horn clause subset of assertions.

             A  Prolog program is a collection of true statements in  the
        form  of RULES.   A computation is a proof from these assertions.
        Numerous   implementations  of  Prolog  have   elaborated   Alain
        Colmerauer's original, including Dr. Komorowski's own Qlog, which
        operates in LISP environments.

             Dr.  Komorowski  will present an introduction to  elementary
        logic  programming  concepts  and an overview  of  more  advanced
        topics,    including   metalevel   inference,    expert   systems
        programming, databases, and natural language processing.

                                 DATE:     Thursday, 26 January 1984
                                 TIME:     8:00 PM
                                 PLACE:    Intermetrics Atrium
                                           733 Concord Avenue
                                           Cambridge, MA
                                         (near Fresh Pond Circle)

                COMPUTER MOVIE and REFRESHMENTS before the talk.
                 Lecture dinner at 6pm open to all GBC members.
                   Call (617) 444-5222 for additional details.

------------------------------

Date: 20 Jan 84  1006 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Shapiro Seminars at Stanford and Berkeley

      [Adapted from the SU-SCORE bboard and the Prolog Digest.]


  Ehud Shapiro, The Weizmann Institute of Science
  The Bagel: A Systolic Concurrent Prolog Machine

  4:30pm, Terman Auditorium, Tues, Jan 24th, Stanford CSD Colloq.
  1:30pm, Evans 597, Wed., Jan 25th, Berkeley Prolog Seminar



It is argued that explicit mapping of processes to processors is
essential to effectively program a general-purpose parallel computer,
and, as a consequence, that the kernel language of such a computer
should include a process-to-processor mapping notation.

The Bagel is a parallel architecture that combines concepts of
dataflow, graph-reduction and systolic arrays. The Bagel's kernel
language is Concurrent Prolog, augmented with Turtle programs as a
mapping notation.

Concurrent Prolog, combined with Turtle programs, can easily implement
systolic systems on the Bagel. Several systolic process structures are
explored via programming examples, including linear pipes (sieve of
Eratosthenes, merge sort, natural-language interface to a database),
rectangular arrays (rectangular matrix multiplication, band-matrix
multiplication, dynamic programming, array relaxation), static and
dynamic H-trees (divide-and-conquer, distributed database), and
chaotic structures (a herd of Turtles).

All programs shown have been debugged using the Turtle graphics Bagel
simulator, which is implemented in Prolog.
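[As a rough sequential analogy, and not the actual Concurrent Prolog, the
linear-pipe sieve of Eratosthenes can be imitated with Python generators:
each prime discovered at the head of the stream adds one more filter stage
to the pipe, just as each systolic process would on the Bagel:]

```python
# Sieve of Eratosthenes as a growing pipeline of filter stages.
from itertools import count, islice

def filter_stage(stream, prime):
    """One pipeline stage: drop every multiple of its prime."""
    return (n for n in stream if n % prime)

def sieve():
    stream = count(2)
    while True:
        prime = next(stream)                   # head of the pipe is prime
        yield prime
        stream = filter_stage(stream, prime)   # extend the pipe by one stage

print(list(islice(sieve(), 8)))     # -> [2, 3, 5, 7, 11, 13, 17, 19]
```

In the parallel setting each `filter_stage` would be a separate process
passing numbers along the pipe; here the stages merely nest lazily.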

------------------------------

Date: Fri 20 Jan 84 14:56:58-PST
From: Jayadev Misra <MISRA@SU-SIERRA.ARPA>
Subject: call for Papers- Principles of Distributed Computing


                         CALL FOR PAPERS
3rd ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC)

                        Vancouver, Canada
                      August 27 - 29, 1984

This conference will address fundamental issues in the theory  and
practice   of   concurrent  and  distributed  systems.   Original
research papers describing theoretical or  practical  aspects  of
specification, design or implementation of such systems are
sought.  Topics of interest include, but are not limited to,  the
following aspects of concurrent and distributed systems.

  . Algorithms
  . Formal models of computation
  . Methodologies for program development
  . Issues in specification, semantics and verification
  . Complexity results
  . Languages
  . Fundamental results in application areas such as
                distributed databases, communication protocols, distributed
                operating systems, distributed transaction processing systems,
                real time systems.

Please send eleven copies of a detailed abstract (not a  complete
paper) not exceeding 10 double spaced typewritten pages, by MARCH
8, 1984, to the Program Chairman:

  Prof. J. Misra
  Computer Science Department
  University of Texas
  Austin, Texas 78712

The abstract must include a clear description of the problem  be-
ing  addressed, comparisons with extant work and a section on ma-
jor original contributions of this work.  The abstract must  pro-
vide  sufficient detail for the program committee to make a deci-
sion.  Papers will be chosen on the basis  of  scientific  merit,
originality, clarity and appropriateness for this conference.

Authors will be notified of acceptance by April  30,  1984.   Ac-
cepted  papers,  typed on special forms, are due at the above ad-
dress by June 1, 1984.  Authors of accepted papers will be  asked
to sign ACM Copyright forms.

The Conference Chairman is Professor  Tiko  Kameda  (Simon  Fraser
University).   The Publicity Chairman is Professor Nicola Santoro
(Carleton University).  The Local Arrangements Chairman is Profes-
sor Joseph Peters (Simon Fraser University).  The Program Commit-
tee consists of Ed Clarke (C.M.U.), Greg  N.  Frederickson  (Pur-
due),  Simon Lam (U of Texas, Austin), Leslie Lamport (SRI Inter-
national), Michael Malcolm (U of Waterloo), J. Misra, Program
Chairman (U of Texas, Austin), Hector Garcia-Molina (Princeton), Su-
san Owicki (Stanford), Fred Schneider (Cornell),  H.  Ray  Strong
(I.B.M. San Jose), and Howard Sturgis (Xerox Parc).

------------------------------

End of AIList Digest
********************
26-Jan-84 14:42:19-PST,17480;000000000001
Mail-From: LAWS created at 26-Jan-84 14:40:48
Date: Thu 26 Jan 1984 14:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #10
To: AIList@SRI-AI


AIList Digest            Friday, 27 Jan 1984       Volume 2 : Issue 10

Today's Topics:
  AI Culture - IJCAI Survey,
  Cognition - Parallel Processing Query,
  Programming Languages - Symbolics Support & PROLOG/ZOG Request,
  AI Software - KEE Knowledge Representation System,
  Review - Rivest Forsythe Lecture on Learning,
  Seminars - Learning with Constraints & Semantics of PROLOG,
  Courses - CMU Graduate Program in Human-Computer Interaction
----------------------------------------------------------------------

Date: 24 Jan 84 12:19:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Report on "How AI People Think..."

I received a free copy because I attended IJCAI.  I have an address
here, but I don't know if it is the appropriate one for ordering this
report:

Re: the report "How AI People Think - Cultural Premises of the AI community"
Commission of the European Communities
Rue de la Loi, 200
B-1049 Brussels, Belgium

(The report was compiled by Massimo Negrotti, Chair of Sociology of
 Knowledge, University of Genoa, Italy)

Smadar (KEDAR-CABELLI@RUTGERS).

------------------------------

Date: Wed 18 Jan 84 11:05:26-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: brain, a parallel processor ?

What is the evidence that the brain is a parallel processor?  My own
introspection seems to indicate that mine is doing time-sharing.  That is,
I can follow only one idea at a time, but with a lot of switching
between reasoning paths (often more undirected than controlled
switching).  Do different people have different processors?  Or is the brain
able to function in more than one way (parallel, serial, time-sharing)?

Rene (bach@sumex)

------------------------------

Date: Wed, 25 Jan 84 15:37:39 CST
From: Mike Caplinger <mike@rice>
Subject: Symbolics support for non-Lisp languages

[This is neither an AI nor a graphics question per se, but I thought
these lists had the best chance of reaching Symbolics users...]

What kind of support do the Symbolics machines provide for languages
other than Lisp?  Specifically, are there interactive debugging
facilities for Fortran, Pascal, etc.?  It's my understanding that the
compilers generate Lisp output.  Is this true, and if so, is the
interactive nature of Lisp exploited, or are the languages just
provided as batch compilers?  Finally, does anyone have anything to say
about efficiency?

Answers to me, and I'll summarize if there's any interest.  Thanks.

------------------------------

Date: Wed 25 Jan 84 09:38:25-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: KEE Representation System

The Jan. issue of IEEE Computer Graphics reports the following:

IntelliGenetics has introduced the Knowledge Engineering Environment
(KEE), an AI software development system for AI professionals, computer
scientists, and domain specialists.  The system is graphics oriented
and interactive, permitting use of a mouse, keyboard, command-option
menus, display-screen windows, and graphic symbols.

KEE is a frame-based representation system that provides support
for descriptive and procedural knowledge representation, and a
declarative, extendable formalism for controlling inheritance of
attributes and attribute values between related units of
knowledge.  The system provides support for multiple inheritance
hierarchies; the use of user-extendable data types to promote
knowledge-base integrity; object-oriented programming; multiple-
inference engines/rule systems; and a modular system design through
multiple knowledge bases.

The first copy of KEE sells for $60,000; the second for $20,000.
Twenty copies cost $5000 each.
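
[The announcement gives no notation, but the core frame idea it
describes -- units whose attribute values are inherited from related
units -- can be sketched as follows.  This is a generic Python
illustration with invented names, not KEE's actual interface:

```python
# Minimal frame system: units have slots; a unit inherits slot values
# from its parent units (searched left to right, giving a simple form
# of multiple inheritance) unless it overrides them locally.

class Unit:
    def __init__(self, name, parents=(), **slots):
        self.name = name
        self.parents = list(parents)
        self.slots = dict(slots)

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]        # local value overrides parents
        for parent in self.parents:
            try:
                return parent.get(slot)    # inherit from first parent that knows
            except KeyError:
                continue
        raise KeyError(slot)

animal = Unit("Animal", legs=4, alive=True)
bird   = Unit("Bird", parents=[animal], legs=2, flies=True)
tweety = Unit("Tweety", parents=[bird])

print(tweety.get("legs"))   # 2: Bird overrides Animal
print(tweety.get("alive"))  # True: inherited from Animal
```

A real system such as KEE adds much more (declarative control over
which slots inherit, data-type checks on slot values, attached rules),
but the inheritance lookup above is the skeleton being described.]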

------------------------------

Date: 01/24/84 12:08:36
From: JAWS@MIT-MC
Subject: PROLOG and/or ZOG for TOPS-10

Does anyone out there know where I can get a version of Prolog and/or
ZOG that will run on a DEC-10 (7.01)?  The installation is owned by the
US government, albeit benign (DOT).

                                THANX JAWS@MC

------------------------------

Date: Tue 24 Jan 84 11:26:14-PST
From: Armar Archbold <ARCHBOLD@SRI-AI.ARPA>
Subject: Rivest Forsythe Lecture on Learning

[The following is a review of a Stanford talk, "Reflections on AI", by
Dr. Ron Rivest of MIT.  I have edited the original slightly after getting
Armar's permission to pass it along.  -- KIL]

Dr. Rivest's talk emphasized the value of small-scale studies of
learning through experience (a "critter" with a few sensing and
effecting operations building up a world model of a blocks environment).
He stressed such familiar themes as

   - "the evolutionary function and value of world  models  is  predicting
     the  future,  and  consequently  knowledge is composed principally of
     expectations, possibilities, hypotheses -  testable  action-sensation
     sequences, at the lowest level of sophistication",

   - "the  field  of  AI  has  focussed  more  on 'backdoor AI', where you
     directly  program  in   data   structures   representing   high-level
     knowledge,  than  on  'front-door' AI, which studies how knowledge is
     built up from non-verbal experience, or 'side door AI', which studies
     how knowledge might be gained through teaching and instruction  using
     language;

   - such a study of simple learning systems in a simple environment -- in
     which an agent with a given  vocabulary  but  little  or  no  initial
     knowledge  ("tabula  rasa")  investigates  the  world (either through
     active experimentation or through changes imposed by perturbations
     in  the  surroundings)  and  attempts  to  construct a useful body of
     knowledge   through   recognition   of   identities,    equivalences,
     symmetries,  homomorphisms,  etc.,  and  eventually  metapatterns, in
     action-sensation chains (represented perhaps in dynamic logic) --  is
     of considerable interest.

Such concepts are not new.  There have been many mathematical studies,
psychological simulations, and AI explorations along these lines since the
50s.  At SRI, Stan Rosenschein was playing around with a simplified learning
critter about a year ago; Peter Cheeseman shares Rivest's interest in
Jaynes' use of entropy calculations to induce safe hypotheses in an
overwhelmingly profuse space of possibilities.  Even so, these concerns
were worth having reactivated by a talk.  The issues raised by some of the
questions from the audience were also interesting, albeit familiar:

   - The critter which starts out with a tabula rasa  will  only  make  it
     through the enormous space of possible patterns inducible from
     experience if it initially "knows" an awful lot about how  to  learn,
     at  whatever  level  of  procedural  abstraction  and/or  "primitive"
     feature selection (such as that done at the level of the eye itself).

   - Do we call intelligence the procedures that permit one to gain useful
     knowledge (rapidly), or the knowledge thus gained, or what mixture of
     both?

   - In addition, there is the question  of  what  motivational  structure
     best furthers the critter's education.  If the critter attaches value
     to  minimum  surprise (various statistical/entropy measures thereof),
     it can sit in a corner and do nothing, in which case it may  one  day
     suddenly  be very surprised and very dead.  If it attaches tremendous
     value to surprise, it could just flip a coin and always  be  somewhat
     surprised.    The  mix  between repetition (non-surprise/confirmatory
     testing) and exploration which produces the best cognitive system  is
     a  fundamental  problem.   And there is the notion of "best" - "best"
     given the critter's values other than curiosity, or "best"  in  terms
     of  survivability,  or  "best"  in  a  kind  of  Occam's  razor sense
     vis-a-vis truth (here it was commented you could rank Carnapian world
     models based on the  simple  primitive  predicates  using  Kolmogorov
     complexity measures, if one could only calculate the latter...)

   - The  success  or  failure  of the critter to acquire useful knowledge
     depends very much on the particular world it is placed in.    Certain
     sequences  of  stimuli will produce learning and others won't, with a
     reasonable, simple learning procedure.  In simple artificial  worlds,
     it  is possible to form some kind of measure of the complexity of the
     environment by seeing what the minimum length action-sensation chains
     are which are true regularities.  Here there is  another  traditional
     but  fascinating question: what are the best worlds for learning with
     respect to  critters  of  a  given  type  -  if  the  world  is  very
     stochastic,  nothing  can  be learned in time; if the world is almost
     unchanging, there is little motivation to learn and  precious  little
     data about regular covariances to learn from.

     Indeed,  in  psychological studies, there are certain sequences which
     will bolster reliance on certain conclusions to such an  extent  that
     those    conclusions    become    (illegitimately)   protected   from
     disconfirmation.  Could one recreate this phenomenon  with  a  simple
     learning  critter  with a certain motivational structure in a certain
     kind of world?

Although these issues seemed familiar, the talk certainly could stimulate
the general public.

                                                                 Cheers - Armar

------------------------------

Date: Tue 24 Jan 84 15:45:06-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FRIDAY, January 27, 1984

           [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 27, 1984
Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:  Tom Dietterich, HPP
          Stanford University

TOPIC:    Learning with Constraints

In attempting to construct a program  that can learn the semantics  of
UNIX commands, several shortcomings of existing AI learning techniques
have been  uncovered.  Virtually  all  existing learning  systems  are
unable to (a)  perform data  interpretation in a  principled way,  (b)
form theories about systems that contain substantial amounts of  state
information, (c) learn from  partial data, and (d)  learn in a  highly
incremental fashion.  This talk  will describe these shortcomings  and
present techniques  for overcoming  them.  The  basic approach  is  to
employ a vocabulary of constraints to represent partial knowledge  and
to apply  constraint-propagation techniques  to draw  inferences  from
this partial knowledge.  These techniques  are being implemented in  a
system called EG, whose task is to learn the semantics of 13 UNIX
commands (ls, cp,  mv, ln, rm,  cd, pwd, chmod,  umask, type,  create,
mkdir, rmdir) by watching "over-the-shoulder" of a teacher.
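
[The abstract's constraint vocabulary is not spelled out, but one
common shape of constraint propagation -- interval bounds narrowed by
repeated local inference until a fixpoint -- can be sketched as
follows.  This is a generic illustration in Python, not the EG system:

```python
# Partial knowledge about an integer quantity is held as an interval
# (low, high).  Constraints narrow intervals; propagation reruns the
# constraints until no interval changes, drawing inferences from
# partial data exactly as the fixpoint is reached.

def propagate(intervals, constraints):
    changed = True
    while changed:
        changed = False
        for c in constraints:
            if c(intervals):
                changed = True
    return intervals

def sum_constraint(x, y, z):
    """Constraint z = x + y, propagated in all three directions."""
    def apply(iv):
        (xl, xh), (yl, yh), (zl, zh) = iv[x], iv[y], iv[z]
        new = {
            z: (max(zl, xl + yl), min(zh, xh + yh)),
            x: (max(xl, zl - yh), min(xh, zh - yl)),
            y: (max(yl, zl - xh), min(yh, zh - xl)),
        }
        updated = False
        for var, bounds in new.items():
            if bounds != iv[var]:
                iv[var] = bounds
                updated = True
        return updated
    return apply

iv = {"x": (0, 10), "y": (0, 10), "z": (5, 5)}
propagate(iv, [sum_constraint("x", "y", "z")])
print(iv)  # x and y are each narrowed to (0, 5)
```

Knowing only that z = 5, the propagator infers bounds on x and y it
was never told directly -- the kind of inference from partial
knowledge the abstract describes.]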

------------------------------

Date: 01/25/84 17:07:14
From: AH
Subject: Theory of Computation Seminar

                       [Forwarded from MIT-MC by SASW.]


                           DATE:  February 2nd, 1984
                           TIME:  3:45PM  Refreshments
                                  4:00PM  Lecture
                          PLACE:  NE43-512A

           "OPERATIONAL AND DENOTATIONAL SEMANTICS FOR P R O L O G"

                                      by

                                 Neil D. Jones
                              Datalogisk Institut
                             Copenhagen University

                                   Abstract

  A PROLOG program can go into an infinite loop even when there exists a
refutation of its clauses by resolution theorem proving methods.  Consequently
one cannot identify resolution of Horn clauses in first-order logic with
PROLOG as it is actually used, namely, as a deterministic programming
language.  In this talk two "computational" semantics of PROLOG will be given.
One is operational and is expressed as an SECD-style interpreter which is
suitable for computer implementation.  The other is a Scott-Strachey style
denotational semantics.  Both were developed from the SLD-refutation procedure
of Kowalski, and of Apt and van Emden, and both handle "cut".
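
[The non-termination the abstract refers to is easy to reproduce in
miniature: a deterministic depth-first, first-clause-first search can
diverge even though a refutation exists, while a fair search finds it.
A toy propositional sketch in Python -- not the talk's SECD or
denotational construction:

```python
# Two clauses for the atom 'p': the looping rule 'p :- p.' listed
# before the fact 'p.'.  A refutation exists (use the fact), but a
# Prolog-style depth-first, first-clause-first search never reaches it.

from collections import deque

clauses = [("p", ["p"]),   # p :- p.
           ("p", [])]      # p.

def depth_first(goals, depth=0, limit=25):
    """Prolog's strategy; the depth limit stands in for 'runs forever'."""
    if not goals:
        return True
    if depth > limit:
        return None                       # diverged: no answer is ever returned
    goal, rest = goals[0], goals[1:]
    for head, body in clauses:
        if head == goal:
            result = depth_first(body + rest, depth + 1, limit)
            if result is not False:
                return result             # success, or a diverging subtree
    return False

def breadth_first(goals):
    """A fair search: finds the refutation the depth-first search misses."""
    frontier = deque([list(goals)])
    while frontier:
        gs = frontier.popleft()
        if not gs:
            return True                   # all goals discharged: proof found
        goal, rest = gs[0], gs[1:]
        for head, body in clauses:
            if head == goal:
                frontier.append(body + rest)
    return False

print(depth_first(["p"]))    # None: the deterministic strategy loops
print(breadth_first(["p"]))  # True: a refutation exists
```

This gap between the logic (a refutation exists) and the deterministic
strategy (it loops) is precisely why PROLOG needs its own semantics.]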

HOST:  Professor Albert R. Meyer

------------------------------

Date:     Wednesday, 25 Jan 84 23:47:29 EST
From:     reiser (brian reiser) @ cmu-psy-a
Reply-to: <Reiser%CMU-PSY-A@CMU-CS-PT>
Subject:  Human-Computer Interaction Program at CMU

                         ***** ANNOUNCEMENT *****

              Graduate Program in Human-Computer Interaction
                       at Carnegie-Mellon University

The  field  of  human-computer  interaction  brings  to  bear  theories and
methodologies from cognitive psychology and computer science to the  design
of   computer   systems,   to   instruction   about   computers,   and   to
computer-assisted instruction.  The new Human-Computer Interaction  program
at  CMU is geared toward the development of cognitive models of the complex
interaction between learning, memory, and language mechanisms  involved  in
using  computers.    Students  in  the  program  apply their psychology and
computer science  training  to  research  in  both  academic  and  industry
settings.

Students in the Human-Computer Interaction program design their educational
curricula  with  the  advice  of  three  faculty  members  who serve as the
student's committee.  The intent  of  the  program  is  to  guarantee  that
students   have  the  right  combination  of  basic  and  applied  research
experience and coursework so that they  can  do  leading  research  in  the
rapidly developing field of human-computer interaction.  Students typically
take  one  psychology  course and one computer science course each semester
for the first two years.  In addition, students participate in a seminar on
human-computer interaction held during the summer  of  the  first  year  in
which  leading  industry  researchers are invited to describe their current
projects.

Students are also actively involved in research throughout  their  graduate
career.    Research  training  begins  with  a collaborative and apprentice
relationship with a faculty member in laboratory research for the first one
or two years of the program.  Such involvement allows the  student  several
repeated   exposures  to  the  whole  sequence  of  research  in  cognitive
psychology and computer science, including conceptualization of a  problem,
design   and   execution   of   experiments,  analyzing  data,  design  and
implementation of computer systems, and writing scientific reports.

In the second half  of  their  graduate  career,  students  participate  in
seminars,  teaching,  and  an  extensive  research project culminating in a
dissertation.  In addition, an important component  of  students'  training
involves  an  internship working on an applied project outside the academic
setting.  Students and faculty in the  Human-Computer  Interaction  program
are  currently studying many different cognitive tasks involving computers,
including: construction of algorithms, design of instruction  for  computer
users,  design of user-friendly systems, and the application of theories of
learning and problem solving to the design of systems for computer-assisted
instruction.

Carnegie-Mellon University is exceptionally well suited for  a  program  in
human-computer   interaction.    It  combines  a  strong  computer  science
department with a strong  psychology  department  and  has  many  lines  of
communication  between  them.   There are many shared seminars and research
projects.  They also share in a computational community defined by a  large
network  of  computers.  In addition, CMU and IBM have committed to a major
effort to integrate personal computers into college education.    By  1986,
every  student  on  campus  will  have a powerful state-of-the-art personal
computer.  It is anticipated that members of the Human-Computer Interaction
program will be involved in various aspects of this effort.

The  following  faculty  from  the  CMU  Psychology  and  Computer  Science
departments  are  participating  in the Human-Computer Interaction Program:
John R. Anderson, Jaime G. Carbonell, John  R. Hayes,  Elaine  Kant,  David
Klahr, Jill H. Larkin, Philip L. Miller, Allen Newell, Lynne M. Reder, and
Brian J. Reiser.

Our   deadline   for   receiving   applications,   including   letters   of
recommendation,  is  March  1st.  Further information about our program and
application materials may be obtained from:

     John R. Anderson
     Department of Psychology
     Carnegie-Mellon University
     Pittsburgh, PA  15213

------------------------------

End of AIList Digest
********************
31-Jan-84 10:19:37-PST,15850;000000000001
Mail-From: LAWS created at 31-Jan-84 10:14:56
Date: Tue 31 Jan 1984 10:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #11
To: AIList@SRI-AI


AIList Digest            Tuesday, 31 Jan 1984      Volume 2 : Issue 11

Today's Topics:
  Techniques - Beam Search Request,
  Expert Systems - Expert Debuggers,
  Mathematics - Arnold Arnold Story,
  Courses - PSU Spring AI Mailing Lists,
  Awards - Fredkin Prize for Computer Math Discovery,
  Brain Theory - Parallel Processing,
  Intelligence - Psychological Definition,
  Seminars - Self-Organizing Knowledge Base, Learning, Task Models
----------------------------------------------------------------------

Date: 26 Jan 1984 21:44:11-EST
From: Peng.Si.Ow@CMU-RI-ISL1
Subject: Beam Search

I would be most grateful for any information/references to studies and/or
applications of Beam Search, the search procedure used in HARPY.

                                                        Peng Si Ow
                                                      pso@CMU-RI-ISL1
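
[For readers who have not met the technique: beam search expands a
search level by level but keeps only the k best-scoring partial paths
at each step, trading completeness for bounded memory -- this is how
HARPY searched its pronunciation network.  A generic sketch in Python,
not HARPY's implementation:

```python
# Beam search over a layered graph: at each layer, score every
# extension of every surviving path, then prune to the beam_width best.

def beam_search(layers, score, beam_width=2):
    # layers: list of lists of candidate items, one list per step.
    beam = [([], 0.0)]                      # (path, cumulative score)
    for layer in layers:
        candidates = [(path + [item], s + score(path, item))
                      for path, s in beam
                      for item in layer]
        candidates.sort(key=lambda ps: ps[1], reverse=True)
        beam = candidates[:beam_width]      # prune to the beam
    return beam[0]                          # best surviving (path, score)

# Toy scoring: prefer items close to the value 5.
layers = [[1, 4, 9], [2, 5, 7], [5, 6]]
best_path, best_score = beam_search(layers, lambda p, x: -abs(x - 5))
print(best_path)  # -> [4, 5, 5]
```

Because pruning is greedy, a path that scores poorly early is lost
even if it would have won later; that incompleteness is the price of
the bounded memory that made HARPY's large network searchable.]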

------------------------------

Date: 25 Jan 84 7:51:06-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: Expert debuggers
Article-I.D.: uvacs.1148

        See also "Sniffer: a system that understands bugs", Daniel G. Shapiro,
MIT AI Lab Memo AIM-638, June 1981
        (The debugging knowledge of Sniffer is organized as a bunch of tiny
experts, each understanding a specific type of error.  The program has an in-
depth understanding of a (very) limited class of errors.  It consists of
a cliche-finder and a "time rover".  Master's thesis.)

------------------------------

Date: Thursday, 26-Jan-84  19:11:37-GMT
From: BILL (on ERCC DEC-10) <Clocksin%edxa@ucl-cs.arpa>
Reply-to: Clocksin <Clocksin%edxa@ucl-cs.arpa>
Subject: AIList entry

In reference to a previous AIList correspondent wishing to know more about
Arnold Arnold's "proof" of Fermat's Last Theorem, last week's issue of
New Scientist explains all.  The "proof" is faulty, as expected.
Mr Arnold is a self-styled "cybernetician" who has a history of grabbing
headlines with announcements of revolutionary results which are later
proven faulty on trivial grounds.  I suppose A.I. has to put up with
its share of circle squarers and angle trisectors.

------------------------------

Date: 28 Jan 84 18:23:09-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psu
      vax!bobgian@Ucb-Vax
Subject: PSU Spring AI mailing lists
Article-I.D.: psuvax.433

I will be using net.ai for occasionally reporting "interesting" items
relating to the PSU Spring AI course.

If anybody would also like "administrivia" mailings (which could get
humorous at times!), please let me know.

Also, if you want to be included on the "free-for-all" discussion list,
which will include flames and other assorted idiocies, let me know that
too.  Otherwise you'll get only "important" items.

The "official Netwide course" (ie, net.ai.cse) will start up in a month
or so.  Meanwhile, you are welcome to join the fun via mail!

Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 26 Jan 84 19:39:53 EST
From: AMAREL@RUTGERS.ARPA
Subject: Fredkin Prize for Computer Math Discovery

                 [Reprinted from the RUTGERS bboard.]

Fredkin Prize to be Awarded for Computer Math Discovery

LOUISVILLE,  Ky.--The  Fredkin  Foundation  will award a $100,000 prize for the
first computer to make a major mathematical discovery, it was  announced  today
(Jan. 26).

Carnegie-Mellon  University  has  been  named trustee of the "Fredkin Prize for
Computer Discovery in Mathematics", according to Raj  Reddy,  director  of  the
university's  Robotics  Institute,  and a trustee of IJCAI (International Joint
Conferences on Artificial Intelligence) responsible for AI prizes.  Reddy said the
prize  will be awarded "for a mathematical work of distinction in which some of
the pivotal ideas have been found automatically by a computer program in  which
they were not initially implicit."

"The criteria for awarding this prize will be widely publicized and reviewed by
the  artificial  intelligence  and  mathematics  communities to determine their
adequacy," Reddy said.

Dr. Woody Bledsoe of the University of Texas at Austin will head a committee of
experts  who  will  define  the  rules  of  the  competition.      Bledsoe   is
president-elect of the American Association for Artificial Intelligence.

"It  is  hoped,"  said  Bledsoe,  "that  this  prize  will stimulate the use of
computers in mathematical research and have a good long-range effect on all  of
science."

The  committee  of mathematicians and computer scientists which will define the
rules of the competition includes:  William Eaton of the University of Texas at
Austin, Daniel  Gorenstein  of  Rutgers  University,  Paul  Halmos  of  Indiana
University,  Ken  Kunen  of  the  University of Wisconsin, Dan Mauldin of North
Texas State University and John McCarthy of Stanford University.

Also, Hugh Montgomery of the University of Michigan, Jack Schwartz of New  York
University,  Michael  Starbird  of  the  University  of  Texas  at  Austin, Ken
Stolarsky of  the  University  of  Illinois  and  Francois  Treves  of  Rutgers
University.

The  Fredkin Foundation has a similar prize for a world champion computer chess
system.  Recently, $5,000 was awarded to Ken Thompson and Joseph  Condon,  Bell
Laboratories  researchers  who developed the first computer system to achieve a
Master rating in tournament chess.

------------------------------

Date: 26 Jan 84 15:34:50 PST (Thu)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Re: Rene Bach's query on parallel processing in the brain

What happens when something is "on the tip of your tongue" but is beyond
recall?  Often (for me at least) if the effort to recall is displaced
by some other cognitive activity, the searched-for information "pops up"
at a later time.  To me, this suggests at least one background process.

                                -Mike (mab@AIDS-UNIX)

------------------------------

Date: Thu, 26 Jan 84 17:19:30 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: How my brain works

I find that most of what my brain does is pattern interpretation.  I receive
various sensory input in the form of various kinds of vibrations (e.g.
electromagnetic and acoustic) and my brain perceives patterns in this muck.
Then it attaches meanings to the patterns.  Within limits, I can attach these
meanings at will.  The process of logical deduction a la Socrates takes up
a negligible time-slice in the CPU.

  --Charlie

------------------------------

Date: Fri, 27 Jan 84 15:35:21 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: How my brain works

I see what you mean about the question of whether the brain is a parallel
processor in conscious reasoning or not.  I also feel as though a little
daemon sits and pays attention to different lines of thought at different times.

An interesting counterexample is the aha! phenomenon.  The mathematician
Henri Poincare, among others, has written an essay about his experience of
being interrupted from his conscious attention somehow and becoming instantly
aware of the solution to a problem he had "given up" on some days before.
It was as though some part of his brain had been working on the problem all
along even though he had not been aware of it.  When it had gotten the solution
an interrupt occurred and his conscious mind was triggered into the awareness
of  the solution.

  --Charlie

------------------------------

Date: Mon 30 Jan 84 09:47:49-EST
From: Alexander Sen Yeh <AY@MIT-XX.ARPA>
Subject: Request for Information

I am getting started on a project which combines symbolic artificial
intelligence and image enhancement techniques.  Any leads on past and
present attempts at doing this (or at combining symbolic a.i. with
signal processing or even numerical methods in general) would be
greatly appreciated.  I will send a summary of replies to AILIST and
VISION LIST in the future.  Thanks.

--Alex Yeh
--electronic mail: AY@MIT-XX.ARPA
--US mail: Rm. 222, 545 Technology Square, Cambridge, MA 02139

------------------------------

Date: 30 January 1984 1554-est
From: RTaylor.5581i27TK @ RADC-MULTICS
Subject: RE:  brain, a parallel processor ?

I agree that based on my own observations, my brain appears to be
working more like a time-sharing unit...complete with slow downs,
crashes, etc., due to overloading the inputs by fatigue, poor maintenance,
and numerous inputs coming too fast to be covered by the
time-sharing/switching mechanism!
                              Roz

------------------------------

Date: Monday, 30 Jan 84 14:33:07 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Psychological Definition of (human) Intelligence

Recommended reading for persons interested in a psychological view of
(human) intelligence:

Sternberg, R.J. (1984) "What should intelligence tests test?  Implications
 of a triarchic theory of intelligence for intelligence testing."
 Educational Researcher, Jan 1984, Vol. 13, No. 1.

This easily read article (written for educational researchers) reviews
Sternberg's current view of what makes intelligent persons intelligent:

"The triarchic theory accounts for why IQ tests work as well as they do
 and suggests ways in which they might be improved...."

Although the readership of this list is probably not interested in IQ tests
per se, Sternberg is the foremost cognitive psychologist concerned directly
with intelligence, so his view of "What is intelligence?" will be of interest.
This is reviewed quite nicely in the cited paper:

"The triarchic theory of human intelligence comprises three subtheories.  The
first relates intelligence to the internal world of the individual,
specifying the mental mechanisms that lead to more and less intelligent
behavior.  This subtheory specifies three kinds of information processing
components that are instrumental in (a) learning how to do things, (b)
planning what to do and how to do them, and in (c) actually doing them. ...
The second subtheory specifies those points along the continuum of one's
experience with tasks or situations that most critically involve the use of
intelligence.  In particular, the account emphasizes the roles of novelty
(...) and of automatization (...) in intelligence.  The third subtheory
relates intelligence to the external world of the individual, specifying
three classes of acts -- environmental adaptation, selection, and shaping --
that characterize intelligent behavior in the everyday world."

There is more detail in the cited article.

(Robert J. Sternberg is professor of Psychology at Yale University.  See
also, his paper in Behavioral and Brain Sciences (1980, 3, 573-584): "Sketch of
a componential subtheory of human intelligence." and his book (in press with
Cambridge Univ. Press): "Beyond IQ: A triarchic theory of human
intelligence.")

------------------------------

Date: Thu 26 Jan 84 14:11:55-CST
From: CS.BUCKLEY@UTEXAS-20.ARPA
Subject: Database Seminar

                [Reprinted from the UTEXAS-20 bboard.]

    4-5 Wed afternoon in Pai 5.60 [...]

    Mail-From: CS.LEVINSON created at 23-Jan-84 15:47:25

    I am developing a system which will serve as a self-organizing
    knowledge base for an expert system. The knowledge base is currently
    being developed to store and retrieve Organic Chemical reactions. As
    the fundamental structures of the system are merely graphs and sets,
    I am interested in finding other domains in which the system could be used.

    Expert systems require a large amount of knowledge in order to perform
    their tasks successfully. In order for knowledge to be useful for the
    expert task it must be characterized accurately. Data characterization
    is usually the responsibility of the system designer and the
    consulting experts. It is my belief that the computer itself can be
    used to help characterize and classify its knowledge. The system's
    design is based on the assumption that the key to knowledge
    characterization is pattern recognition.

------------------------------

Date: 28 Jan 84 21:25:17 EST
From: MSIMS@RUTGERS.ARPA
Subject: Machine Learning Seminar Talk by R. Banerji

                 [Reprinted from the RUTGERS bboard.]

                MACHINE LEARNING SEMINAR

Speaker:        Ranan Banerji
                St. Joseph's University, Philadelphia, Pa. 19130

Subject:        An explanation of 'The Induction of Theories from
                Facts' and its relation to LEX and MARVIN


In Ehud Shapiro's Yale thesis work he presented a framework for
inductive inference in logic, called the incremental inductive
inference algorithm.  His Model Inference System was able to infer
axiomatizations of concrete models from a small number of facts in a
practical amount of time.  Dr. Banerji will relate Shapiro's work to
the kind of inductive work going on with the LEX project using the
version space concept of Tom Mitchell, and the positive focusing work
represented by Claude Sammut's MARVIN.

Date:           Monday, January 30, 1984
Time:           2:00-3:30
Place:          Hill 7th floor lounge (alcove)

------------------------------

Date: 30 Jan 84  1653 PST
From: Terry Winograd <TW@SU-AI>
Subject: Talkware seminar Mon Feb 6, Tom Moran (PARC)

                [Reprinted from the SU-SCORE bboard.]

Talkware Seminar (CS 377)

Date: Feb 6
Speaker: Thomas P. Moran, Xerox PARC
Topic: Command Language Systems, Conceptual Models, and Tasks
Time: 2:15-4
Place: 200-205

Perhaps the most important property for the usability of command language
systems is consistency.  This notion usually refers to the internal
(self-) consistency of the language.  But I would like to reorient the
notion of consistency to focus on the task domain for which the system
is designed.  I will introduce a task analysis technique, called
External-Internal Task (ETIT) analysis.  It is based on the idea that
tasks in the external world must be reformulated into the internal
concepts of a computer system before the system can be used.  The
analysis is in the form of a mapping between sets of external tasks and
internal tasks.  The mapping can be either direct (in the form of rules)
or "mediated" by a conceptual model of how the system works.  The direct
mapping shows how a user can appear to understand a system, yet have no
idea how it "really" works.  Example analyses of several text editing
systems and, for contrast, copiers will be presented; and various
properties of the systems will be derived from the analysis.  Further,
it is shown how this analysis can be used to assess the potential
transfer of knowledge from one system to another, i.e., how much knowing
one system helps with learning another.  Exploration of this kind of
analysis is preliminary, and several issues will be raised for
discussion.
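
The "direct mapping" idea above can be sketched in a few lines.  This is a
hypothetical illustration only; the task names and rules are invented for
the sketch and are not taken from Moran's analysis:

```python
# ETIT-style "direct" mapping: each external (everyday) task is rewritten
# as a sequence of internal editor commands by rote rules, with no model
# of how the editor works inside.  All names here are invented.
rules = {
    "remove a word":    ["select word", "press DELETE"],
    "move a paragraph": ["select paragraph", "cut", "move cursor", "paste"],
}

def internal_steps(external_task):
    """Map an external task to internal system operations via a rule."""
    return rules[external_task]

def supported(external_task):
    """External tasks with no rule mark the limits of the system's coverage."""
    return external_task in rules

print(internal_steps("remove a word"))   # -> ['select word', 'press DELETE']
print(supported("dial a phone number"))  # -> False
```

A user holding only the `rules` table can operate the system fluently yet,
as the abstract notes, have no idea how it "really" works.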

------------------------------

End of AIList Digest
********************
Date: Fri  3 Feb 1984 22:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #12
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Feb 1984      Volume 2 : Issue 12

Today's Topics:
  Hardware - Lisp Machine Benchmark Request,
  Machine Translation - Request,
  Mathematics - Fermat's Last Theorem & Four Color Request,
  Alert - AI Handbooks & Constraint Theory Book,
  Expert Systems - Software Debugging Correction,
  Course - PSU's Netwide AI Course,
  Conferences -  LISP Conference Deadline & Cybernetics Congress
----------------------------------------------------------------------

Date: Wed, 1 Feb 84 16:37:00 cst
From: dyer@wisc-ai (Chuck Dyer)
Subject: Lisp Machines

Does anyone have any reliable benchmarks comparing Lisp
machines, including Symbolics, Dandelion, Dolphin, Dorado,
LMI, VAX 780, etc?

Other features for comparison are also of interest.  In particular,
what capabilities are available for integrating a color display
(at least 8 bits/pixel)?

------------------------------

Date: Thu 2 Feb 84 01:54:07-EST
From: Andrew Y. Chu <AYCHU@MIT-XX.ARPA>
Subject: language translator

                     [Forwarded by SASW@MIT-ML.]

Hi, I am looking for some information on language translation
(No, not fortran->pascal, like english->french).
Does anyone at MIT work in this field? If not, anyone at other
schools? Someone in industry? A commercial product?
Pointers to articles, magazines, journals, etc. will be greatly appreciated.

Please reply to aychu@mit-xx. I want this message to reach as
many people as possible; are there other bboards I can send it to?
Thanx.

------------------------------

Date: Thu, 2 Feb 84 09:48:48 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Fermat's Last Theorem

Fortunately (or unfortunately) puzzles like Fermat's Last Theorem, Goldbach's
conjecture, the 4-color theorem, and others are not in the same class as
the geometric trisection of an angle or the squaring of a circle.  The former
class may be undecidable propositions (a la Goedel) and the latter are merely
impossible.  Since one of the annoying things about undecidable propositions
is that it cannot be decided whether or not they are decidable, (Where are
you, Doug Hofstadter, now that we need you?) people seriously interested in
these candidates for undecidability should not dismiss so-called theorem
provers like A. Arnold without looking at their work.

I have heard that the ugly computer proof(?) of the 4-color theorem that
appeared in Scientific American is incorrect, i.e. not a proof.  I also
have heard that one G. Spencer-Brown has proved the 4-color theorem.  I
do not know whether either of these things is true and it's bugging me!
Is the 4-color theorem undecidable or not?

  --Charlie

------------------------------

Date: 30 Jan 84 19:48:36-PST (Mon)
From: pur-ee!uiucdcs!uicsl!keller @ Ucb-Vax
Subject: AI Handbooks only $3.95
Article-I.D.: uiucdcs.5251

        Several people here have joined "The Library of Computer and
Information Sciences Book Club" because they have an offer of the complete
AI Handbook set (3 vols) for $3.95 instead of the normal $100.00. I got mine
and they are the same production as the non-book-club versions. You must buy
three more books during the coming year, and it will probably be easy to
find ones that you want. Here are the details:

Send to: The Library of Computer and Information Sciences
         Riverside NJ 08075

Copy of Ad:
Please accept my application for trial membership in the Library of Computer
and Information Sciences and send me the 3-volume HANDBOOK OF ARTIFICIAL
INTELLIGENCE (10079) billing me only $3.95. I agree to purchase at least
three additional Selections or Alternates over the next 12 months. Savings
may range up to 30% and occasionally even more. My membership is cancelable
any time after I buy these three books. A shipping and handling charge is
added to all shipments.

No-Risk Guarantee: If you are not satisfied--for any reason--you may return
the HANDBOOK OF ARTIFICIAL INTELLIGENCE within 10 days and your membership
will be canceled and you will owe nothing.

Name ________
Name of Firm ____ (if you want subscription to your office)
Address _____________
City ________
State _______ Zip ______

(Offer good in Continental U.S. and Canada only. Prices slightly higher in
Canada.)

Scientific American 8/83    7-BV8

-Shaun ...uiucdcs!uicsl!keller

[I have been a member for several years, and have found this club's
service satisfactory (and improving).  The selection leans towards
data processing and networking, but there have been a fair number
of books on AI, graphics and vision, robotics, etc.  After buying
several books you get enough bonus points for a very substantial
discount on a selection of books that you passed up when they were
first offered.  I do get tired, though, of the monthly brochures that
use the phrase "For every computer professional, ..." in the blurb for
nearly every book.  If you aren't interested in the AI Handbook,
find a current club member for a list of other books you can get
when you enroll.  The current member will also get a book for signing
you up.  -- KIL]

------------------------------

Date: 31 Jan 84 19:55:24-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Constraint Theory - (nf)
Article-I.D.: uiucdcs.5285


*********************BOOK ANNOUNCEMENT*******************************

                     CONSTRAINT THEORY
                 An Approach to Policy-Level
                         Modelling
                             by
                     Laurence D. Richards

The cybernetic concepts of variety, constraint, circularity, and
process provide the foundations for a theoretical framework for the
design of policy support systems.  The theoretical framework consists
of a modelling language and a modelling mathematics.  An approach to
building models for policy support systems is detailed; two case
studies that demonstrate the approach are described.  The modelling
approach focuses on the structure of mental models and the
subjectivity of knowledge.  Consideration is given to ideas immanent
in second-order cybernetics, including paradox, self-reference, and
autonomy.  Central themes of the book are "complexity", "negative
reasoning", and "robust" or "value-rich" policy.

424 pages; 23 tables; 56 illustrations
Hardback: ISBN 0-8191-3512-7 $28.75
Paperback:ISBN 0-8191-3513-5 $16.75

order from:
                          University Press of America
                                4720 Boston Way
                           Lanham, Maryland 20706 USA

------------------------------

Date: 28 Jan 84 0:25:20-PST (Sat)
From: pur-ee!uiucdcs!renner @ Ucb-Vax
Subject: Re: Expert systems for software debugging
Article-I.D.: uiucdcs.5217

Ehud Shapiro's error diagnosis system is not an expert system.  It doesn't
depend on a heuristic approach at all.  Shapiro tries to find the faulty part
of a bad program by executing part of the program, then asking an "oracle" to
decide if that part worked correctly.  I am very impressed with Shapiro's
work, but it doesn't have anything to do with "expert knowledge."
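The divide-and-query idea can be sketched as follows.  This is my own
illustrative Python, not Shapiro's Prolog implementation, and the example
program trace is invented:

```python
# Sketch of algorithmic debugging: given the call tree of a computation
# known to produce a wrong result, ask an oracle whether each subcall's
# result is correct, and recurse into the first incorrect one.  The
# faulty procedure is the one whose own result is wrong while all of
# its children's results are right.

def find_faulty_call(node, oracle):
    """node = (name, args, result, children); assumes node's result is wrong."""
    name, args, result, children = node
    for child in children:
        if not oracle(child):              # this subcall went wrong first
            return find_faulty_call(child, oracle)
    return node                            # all subcalls correct: fault is here

# Invented example: square(3) wrongly returns 6, though add(3, 3) = 6 is fine.
trace = ("square", (3,), 6, [("add", (3, 3), 6, [])])

# The oracle encodes the *intended* behavior of each procedure.
intended = {"square": lambda x: x * x, "add": lambda a, b: a + b}
def oracle(node):
    name, args, result, _ = node
    return intended[name](*args) == result

print(find_faulty_call(trace, oracle)[0])   # -> square
```

The point of the original system is that the oracle's answers, not any
heuristic expertise, drive the search.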

Scott Renner
{ihnp4,pur-ee}!uiucdcs!renner

------------------------------

Date: 28 Jan 84 12:25:56-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psuvax!bobgian @ Ucb-Vax
Subject: PSU's Netwide AI course
Article-I.D.: psuvax.432

The PSU ("in person") component of the course has started up, but things
are a bit slow and confused regarding the "netwide" component.

For one thing, I am too busy finishing a thesis and teaching full-time to
handle the administrative duties, and we don't (yet, at least) have the
resources to hire others to do it.

For another, my plans presupposed a level of intellectual maturity and
drive that is VERY rare in Penn State students.  I believe the BEST that
PSU can offer are in my course right now, but only 30 percent of them are
ready for what I wanted to do (and most of THEM are FACULTY!!).

I'm forced to backtrack and run a slightly more traditional "mini" course
to build a common foundation.  That course essentially will read STRUCTURE
AND INTERPRETATION OF COMPUTER PROGRAMS by Hal Abelson and Gerry Sussman.
[This book was developed for the freshman CS course (6.001) at MIT and will
be published in April.  It is now available as an MIT LCS tech report by
writing Abelson at 545 Technology Square, Cambridge, MA 02139.]

The "netwide" version of the course WILL continue in SOME (albeit perhaps
delayed) form.  My "mini" course should take about 6 weeks.  After that
the "AI and Mysticism" course can be restarted.

For now, I won't create net.ai.cse but rather will use net.ai for
occasional announcements.  I'll also keep addresses of all who wrote
expressing interest (and lack of a USENET connection).  Course
distributions will go (low volume) to that list and to net.ai until
things start to pick up.  When it becomes necessary we will "fork off"
into a net.ai subgroup.

So keep the faith, all you excited people!  This course is yet to be!!

        Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: Fri 3 Feb 84 00:24:28-EST
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: 1984 LISP Conference submissions deadline moved back

Because of delays that occurred in getting out the call for papers,
the deadline for submissions to the 1984 ACM Symposium on LISP and
Functional Programming (to be held August 5-8, 1984) has been moved
back from February 6 to February 15.  The date for notification of
acceptance or rejection of papers is now March 20 (was March 12).
The date for return of camera-ready copy is now May 20 (was May 15).

Please forward this message to anyone who may find it of interest.
--Thanks,
        Guy L. Steele Jr.
        Program Chairman, 1984 ACM S. on L. and F.P.
        Tartan Laboratories Incorporated
        477 Melwood Avenue
        Pittsburgh, Pennsylvania 15213
        (412)621-2210

------------------------------

Date: 31 Jan 84 19:54:56-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Cybernetics Congress - (nf)
Article-I.D.: uiucdcs.5284

6th International Congress of the World Organisation
        of General Systems and Cybernetics
        10--14 September 1984
        Paris, France
This transdisciplinary congress will present the contemporary aspects
of cybernetics and of systems, and examine their different currents.
The proposed topics include both methods and domains of cybernetics
and systems:
  1) foundations, epistemology, analogy, modelling, general methods
     of systems, history of cybernetics and systems science ideas.
  2) information, organisation, morphogenesis, self-reference, autonomy.
  3) dynamic systems, complex systems, fuzzy systems.
  4) physico-chemical systems.
  5) technical systems: automatics, simulation, robotics, artificial
     intelligence, learning.
  6) biological systems: ontogenesis, physiology, systemic therapy,
     neurocybernetics, ethology, ecology.
  7) human and social systems: economics, development, anthropology,
     management, education, planning.

For further information:
                                     WOGSC
                               Comite de lecture
                                     AFCET
                               156, Bld. Pereire
                             F 75017 Paris, France
Those who want to attend the congress are urged to register by writing
to AFCET, at the above address, as soon as possible.

------------------------------

End of AIList Digest
********************
Date: Sat  4 Feb 1984 23:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #13
To: AIList@SRI-AI


AIList Digest             Sunday, 5 Feb 1984       Volume 2 : Issue 13

Today's Topics:
  Brain Theory - Parallelism,
  Seminars - Neural Networks & Automatic Programming
----------------------------------------------------------------------

Date: 31 Jan 84 09:15:02 EST  (Tue)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: parallel processing in the brain

       From: Rene Bach <BACH@SUMEX-AIM.ARPA>
       What are the evidences that the brain is a parallel processor?  My own
       introspection seem to indicate that mine is doing time-sharing.  That is
       I can follow only one idea at a time, but with a lot of switching
       between reasoning paths (often more non directed than controlled
       switching).

Does that mean you hold your breath and stop thinking while you're
walking, and stop walking in order to breathe or think?

More pointedly, I think it's incorrect to consider only
consciously-controlled processes when we talk about whether or not
the brain is doing parallel processing.  Perhaps the conscious part
of your mind can keep track of only one thing at a time, but most
(probably >90%) of the processing done by the brain is subconscious.

For example, most of us have to think a LOT about what we're doing
when we're first learning to drive.  But after a while, it becomes
largely automatic, and the conscious part of our mind is freed to
think about other things while we're driving.

As another example, have you ever had the experience of trying
unsuccessfully to remember something, and later remembering
whatever-it-was while you were thinking about something else?
SOME kind of processing was going on in the interim, or you
wouldn't have remembered whatever-it-was.

------------------------------

Date: 30 Jan 84 20:18:33-PST (Mon)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: intelligence and genius - (nf)
Article-I.D.: uiucdcs.5259

Sorry, js@psuvax, but I DO know something about what I spoke, even if I do
have trouble typing.

I am aware that theorem-proving machines are impossible. It's also fairly
obvious that they would use lots of time and space.

However, I didn't even MENTION them. I talked about two flavors of machine.
One generated well-formed strings, and the other said whether they were
true or not. I didn't say either machine proved them. My point was that the
second of these machines is also impossible, and is closely related to
Jerry's genius finding machines. [I assume that any statement containing
genius is true.]

        Down with replying without reading!
        <mike

------------------------------

Date: Wed, 1 Feb 84 13:54:21 PST
From: Richard Foy <foy@AEROSPACE>
Subject: Brain Processing

The Feb Scientific American has an article entitled "The
Skill of Typing" which can help one to form insights into
mechanisms of the brains processing.
richard

------------------------------

Date: Thu, 2 Feb 84 08:24:35 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: AIList Digest   V2 #10

Re: Parallel Processing in the Brain

  There are several instances of people experiencing what can most easily
be explained as "tasking" in the brain. (an essay by Henri Poincare in "The
World of Mathematics", "The Seamless Web" by Stanley Burnshaw)  It appears
that the conscious mind is rather clumsy at creative work and in large measure
assigns tasks (in parallel) to the subconscious mind which operates in the
background.  When the background task is finished, an interrupt is generated
and the conscious mind becomes aware of the solution without knowing how the
problem was solved.

  --Charlie

------------------------------

Date: Thu 2 Feb 84 10:17:08-PST
From: Kenji Sugiyama <SUGIYAMA@SRI-AI.ARPA>
Subject: Re: Parallel brain?

I had a strange experience when I had practiced abacus in Japan.
An abacus is used for adding, subtracting, multiplying, and dividing
numbers.  The practice consisted of a set of calculations in a definite
amount of time, say, 15 minutes.  During that time, I began to think
of something other than the problem at hand.  Then I noticed that
fact ("Aha, I thought of this and that!"), and grinned at myself in
my mind.  In spite of these detours, I continued my calculations without
interruption.  This kind of experience was repeated several times.

It seems to me that my brain might be parallel, at least, in simple tasks.

------------------------------

Date: 2 Feb 1984 8:16-PST
From: fc%USC-CSE@ECLA.ECLnet
Subject: Re: AIList Digest   V2 #10

parallelism in the brain:
        Can you walk and chew gum at the same time?
                        Fred

------------------------------

Date: Sat, 4 Feb 84 15:06:09 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: The brain is parallel, yet data flow can be serial...

        In response to Rene Bach's question whether "the brain is a parallel
processor."  There is no response other than an emphatic YES!  The
brain is comprised of about 10E9 neurons.  Each one of those neurons is
making locally autonomous calculations; it's hard to get more parallel than
that!  The lower brain functions (e.g., sensory preprocessing, lower motor
control, etc.) are highly distributed and locally autonomous processors (i.e.,
pure parallel data flow).  At the higher thought processing levels, however,
it has been shown (can't cite anything, but I can get sources if someone
wants me to dig them out) that logic tends to run in a serial fashion.
That is, the brain is parallel (a hardware structure), yet higher logic
processes apply the timing of thought in a serial nature (a "software"
structure).
        It is generally agreed that the brain is an associational
machine; it processes based upon the timing of diffuse stimuli and the
resulting changes in the "action potential" of its member neurons.
"Context" helps to define the strength and structure of those associational
links.  Higher thinking is generally a cognitive process where the context
of situations is manipulated.  Changing context (and some associational
links) will often result in a "conclusion" significantly different than
previously arrived upon.  Higher thought may be viewed as a three-process
cycle:  decision (evaluation of an associational network), reasonability
testing (i.e., is the present decision using a new "context" no different
from the decision arrived upon utilizing the previous "context"?), and
context alteration (i.e., "if my 'decision' is not 'reasonable' what
'contextual association' may be omitted or in error?").  This cycle is
continued until the second step -- 'reasonability testing' -- has concluded
that the result of this 'thinking' process is at least plausible.  Although the
implementation (assuming the trichotomy is correct) in the brain is
via parallel neural structures, the movement of information through those
structures is serial in nature.  An interesting note on the above trichotomy;
note what occurs when the input to the associational network is changed.
If the new input is not consistent with the previously existing 'context'
then the 'reasonability tester' will cause an automatic readjustment of
the 'context'.
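As a toy rendering of that cycle (my own sketch; the decision procedure,
reasonability test, and context revision below are invented placeholders,
not part of the model itself):

```python
# Toy version of the proposed cycle: decide from the current context,
# test whether the decision is reasonable, and if not, alter the context
# and decide again, stopping once a plausible conclusion is reached.

def think(stimulus, context, decide, reasonable, alter_context, limit=10):
    for _ in range(limit):
        decision = decide(stimulus, context)
        if reasonable(decision):
            return decision, context       # plausible: stop cycling
        context = alter_context(context)   # revise a suspect association
    return decision, context

def decide(stimulus, context):
    return context.get(stimulus, "unknown")      # associational lookup

def reasonable(decision):
    return decision != "unknown"

def alter_context(context):
    revised = dict(context)
    revised["noise"] = "ignore"                  # add a missing association
    return revised

decision, _ = think("noise", {}, decide, reasonable, alter_context)
print(decision)   # -> ignore
```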
        Needless to say, this is not a rigorously proven theory of mine,
but I feel it is quite plausible and that there are profuse psychophysical
and phychological studies that reinforce the above model.  As of now, I
use it as a general guiding light in my work with vision systems, but it
seems equally appplicable to general AI.

                        Philip Kahn
                        KAHN@UCLA-CS.ARPA

------------------------------

Date: 02/01/84 16:09:21
From: STORY at MIT-MC
Re:   Neural networks

                     [Forwarded by SASW@MIT-ML.]

DATE:   Friday, February 3, 1984
TITLE:  "NEURAL NETWORKS: A DISCUSSION OF VARIOUS MATHEMATICAL MODELS"
SPEAKER:        Margaret Lepley, MIT

Neural networks are of interest to researchers in artificial intelligence,
neurobiology, and even statistical mechanics.  Because of their random parallel
structure it is difficult to study the transient behavior of the networks.  We
will discuss various mathematical models for neural networks and show how the
behaviors of these models differ.  In particular we will investigate
asynchronous vs. synchronous models with undirected vs. directed edges of
various weights.

HOST:   Professor Silvio Micali

------------------------------

Date: 01 Feb 84  1832 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Feb 7th CSD Colloquium - Stanford

                  [Reprinted from the SU-SCORE bboard.]

                  A Perspective on Automatic Programming
                             David R. Barstow
                        Schlumberger-Doll Research
                    4:30pm, Terman Aud., Tues Feb 7th

Most work in automatic programming has focused primarily on the roles of
deduction and programming knowledge. However, the role played by knowledge
of the task domain seems to be at least as important, both for the usability
of an automatic programming system and for the feasibility of building one
which works on non-trivial problems. This perspective has evolved during
the course of a variety of studies over the last several years, including
detailed examination of existing software for a particular domain
(quantitative interpretation of oil well logs) and the implementation
of an experimental automatic programming system for that domain. The
importance of domain knowledge has two important implications: a primary goal
of automatic programming research should be to characterize the programming
process for specific domains; and a crucial issue to be addressed
in these characterizations is the interaction of domain and programming
knowledge during program synthesis.

------------------------------

End of AIList Digest
********************
Date: Fri 10 Feb 1984 22:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #14
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 14

Today's Topics:
  Requests - SHRDLU & Spencer-Brown & Programming Tests & UNITS,
  Replies - R1/XCON & AI Text & Lisp Machine Comparisons,
  Seminars - Symbolic Supercomputer & Expert Systems & Multiagent Planning
----------------------------------------------------------------------

Date: Sun, 29 Jan 84 16:30:36 PST
From: Rutenberg.pa@PARC-MAXC.ARPA
Reply-to: Rutenberg.pa@PARC-MAXC.ARPA
Subject: does anyone have SHRDLU?

I'm looking for a copy of SHRDLU, ideally in
machine readable form although a listing
would also be fine.

If you have a copy or know of somebody
who does, please send me a message!

Thanks,
        Mike

------------------------------

Date: Mon, 6 Feb 84 14:48:37 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: AIList Digest   V2 #12

I would dearly like to get in contact with G. Spencer-Brown.  Can anyone
give me any kind of lead?  I have tried his publisher, Bantam, and got
no results.

Thanks.

  --Charlie

------------------------------

Date: Wed,  8 Feb 84 19:26:38 CST
From: Stan Barber <sob@rice>
Subject: Testing Programming Aptitude or Competence

I am interested in information on the following tests that have been or are
currently administered to determine Programming Aptitude or Competence.

1. Aptitude Assessment Battery: Programming (AABP), created by Jack M. Wolfe
and made available to employers only from Programming Specialists, Inc.,
Brooklyn, NY.

2. Programmer Aptitude/Competence Test System sold by Haverly Systems,
Inc. (Introduced in 1970)

3. Computer Programmer Aptitude Battery by SRA (Science Research Associates),
Inc. (Examined by F.L. Schmidt et al. in Journal of Applied Psychology,
Volume 65 [1980] p 643-661)

4. CLEP Exam on Computers and Data Processing. The College Board and the
Educational Testing Service.

5. Graduate Record Exam Advanced Test in Computer Science by the Educational
Testing Service.

Please send the answers to the following questions if you have taken or
had experience with any of these tests:

1. How many scores and what titles did they use for the version of the
exam that you took?

2. Did you feel the test actually measured your ability to learn to
program or your current programming competence (that is, did you feel it
asked relevant questions)?

3. What are your general impressions about testing, and more specifically
about testing special abilities or skills (like programming, writing, etc.)?

I will package up the results and send them to Human-nets.

My thanks.


                        Stan Barber
                        Department of Psychology
                        Rice University
                        Houston TX 77251

                        sob@rice                        (arpanet,csnet)
                        sob.rice@rand-relay             (broken arpa mailers)
                        ...!{parsec,lbl-csam}!rice!sob  (uucp)
                        (713) 660-9252                  (bulletin board)

------------------------------

Date: 6 Feb 84 8:10:41-PST (Mon)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: UNITS request: Second Posting
Article-I.D.: vaxine.182

Good morning!

   I am looking for a pointer to someone (or something) who is knowledgeable
about the features and the workings of the UNITS package, developed at
Stanford HPP.  If you know something, or someone, and could drop me a note
(through mail) I would greatly appreciate it.

   Thanks in advance.


                                Charlie Berg
                             ...allegra!linus!vaxine!chb

------------------------------

Date: 5 Feb 84 20:28:09-PST (Sun)
From: hplabs!hpda!fortune!amd70!decwrl!daemon @ Ucb-Vax
Subject: DEC's expert system for configuring VAXen
Article-I.D.: decwrl.5447

[This is in response to an unpublished request about R1. -- KIL]

Just for the record - we changed the name from "R1" to "XCON" about a year
ago I think.   It's a very useful system and is part of a family of expert
systems which assist us in the operation of various corporate divisions
(sales, service, manufacturing, installation).

Mark Palmer
Digital

        (UUCP)  {decvax, ucbvax, allegra}!decwrl!rhea!nacho!mpalmer

        (ARPA)  decwrl!rhea!nacho!mpalmer@Berkeley
                decwrl!rhea!nacho!mpalmer@SU-Shasta

------------------------------

Date: 6 Feb 84 7:15:33-PST (Mon)
From: harpo!utah-cs!hansen @ Ucb-Vax
Subject: Re: AI made easy??
Article-I.D.: utah-cs.2473

I'd try Artificial Intelligence by Elaine Rich (McGraw-Hill).  It's easy
reading, not too technical but gives a good overview to the novice.

Chuck Hansen {...!utah-cs}

------------------------------

Date: 5 Feb 84 8:48:26-PST (Sun)
From: hplabs!sdcrdcf!darrelj @ Ucb-Vax
Subject: Re: Lisp Machines
Article-I.D.: sdcrdcf.813

There really are no such things as reasonable benchmarks for systems as
different as the various Lisp machines and VAXen.  Each machine has
different strengths and weaknesses.  Here is a rough ranking of machines:
VAX 780 running Fortran/C standalone
Dorado (5 to 10X dolphin)
LMI Lambda, Symbolics 3600, KL-10 Maclisp (2 to 3X dolphin)
Dolphin, dandelion, 780 VAX Interlisp, KL-10 Interlisp

Relative speeds are very rough, and dependent on application.

Notes:  Dandelion and Dolphin have 16-bit ALUs; as a result most arithmetic
is pretty slow (and things like transcendental functions are even worse
because there is no way to do floating-point arithmetic without boxing each
intermediate result).  There is quite a wide range of I/O bandwidth among
these machines -- up to 530 Mbits/sec on a Dorado, 130 Mbits/sec on a Dolphin.

Strong points of various systems:
Xerox: a family of machines fully compatible at the core-image level,
spanning a wide range of price and performance (as low as $26k for a minimum
Dandelion, to $150k for a heavily expanded Dorado).  Further, with the
exception of some of the networking and all the graphics, it is very highly
compatible with both Interlisp-10 and Interlisp-VAX (it's reasonable to have
a single set of sources with just a bit of conditional compilation).
Because of the use of a relatively old dialect, they have a large and well
debugged manual as well.

LMI and Symbolics (these are really fairly similar, as both are licensed from
the MIT lisp machine work, and the principals are rival factions of the MIT
group that developed it):  these have fairly large microcode stores, and as
a result more things are fast (e.g., many of the graphics primitives are
microcoded), so these are probably the machines for moby amounts of image
processing and graphics.  There are also tools for compiling directly to
microcode for extra speed.  These machines also contain a secondary bus, such
as Unibus or Multibus, so there is considerable flexibility in attaching
exotic hardware.

Weak points:  Xerox machines have a proprietary bus, so there are very few
options (the philosophy is to hook it to something else on the Ethernet).  MIT
machines speak a new dialect of lisp that is only partially compatible with
MACLISP (though this did allow adding many nice features), and their cost is
too high to give everyone a machine.

The news item to which this is a response also asked about color displays.
Dolphin:  480x640x4 bits.  The 4 bits go thru a color map to 24 bits.
Dorado:  480x640x(4 or 8 or 24 bits).  The 4 or 8 bits go thru a color map to
         24 bits.  Lisp software does not currently support the 24 bit mode.
3600:  they have one or two (the LM-2 had 512x512x?) around 1Kx1Kx(8 or 16
or 24) with a color map to 30 bits.
Dandelion:  probably too little I/O bandwidth
Lambda:  current brochure makes passing mention of optional standard and
         high-res color displays.

Disclaimer:  I probably have some bias toward Xerox, as SDC has several of
their machines (in part because we already had an application in Interlisp).

Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdccsu3,trw-unix}!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: 6 Feb 84 16:40 PDT
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: Lisp Machines

I have seen several benchmarks as a former Symbolics and current Xerox
employee.  These benchmarks have typically compared the LM-2 with the
1100; they have even included actual or estimated(?) 3600, 1108, or 1132
performances.  These benchmarks, however, have seldom been very
informative, because neither the actual test code nor a detailed
discussion of the implementation is provided.  For example, is the test on the
Symbolics machine coded in Zetalisp or with the Interlisp compatibility
package?  Or, in Interlisp, were fast functions used (FRPLACA vs.
RPLACA)?  (Zetalisp's RPLACA is equivalent to Interlisp's FRPLACA so
that if this transformation was not performed the benchmark would favor
the Symbolics machine.)  What about efficiency issues such as block
compiling, compiler optimizers, or explicitly declaring variables?
There are also many other issues, such as what happens when the data set
gets very large in a real application instead of a toy benchmark or, in
Zetalisp, whether you should turn the garbage collector on (it's not
normally on) and, if you do, what impact it has on performance.  In summary, be
cautious about claims without thorough supportive evidence.  Also
realize that each machine has its own strengths and weaknesses; there is
no definitive answer.  Caveat emptor!

------------------------------

Date: Sat, 4 Feb 84 19:24 EST
From: Thomas Knight <tk@MIT-MC.ARPA>
Subject: Concurrent Symbolic Supercomputer

                      [Forwarded by SASW@MIT-MC]


                                FAIM-1

                       Fairchild AI Machine #1

              An Ultra-Concurrent Symbolic Supercomputer

                                  by


                           Dr. A. L. Davis
      Fairchild Laboratory for Artificial Intelligence Research

                       Friday, February 10, 1984


Presently, AI researchers are hampered in the development of large-scale
symbolic applications, such as expert systems, by the lack of sufficient
machine horsepower to execute application programs rapidly enough to make
the applications viable.  The intent of the FAIM-1 project is to provide a
machine capable of a 3 to 4 orders of magnitude performance improvement over
that currently available on today's large main-frame machines.  The
main source of performance increase is in the exploitation of concurrency at
the program, system, and architectural levels.

In addition to the normal ancillary support activities, the work is being
carried on in 3 areas:

        1.  Language Design - a frame based, object oriented language is being
            designed which allows the programmer to express highly concurrent
            symbolic algorithms.  The mechanism permits both logical and
            procedural programming styles within a unified message-based
            semantics.  In addition, the programmer may provide strategic
            information which aids the system in managing the concurrency
            structure on the physical resource components of the machine.

        2.  Machine Architecture - the machine derives its power from the
            homogeneous replication of a medium grain processor element.
            The element consists of a processor, message delivery subsystem,
            and a parallel pattern based memory subsystem known as the CxAM
            (Context Addressable Memory).  Two variants of a CxAM design are
            being developed at this time and are targeted for fabrication on a
            sub-2-micron CMOS line.  The connection topology for the
            replicated elements is a 3 axis, single twist, Hex plane which
            has the advantages of planar wiring, easy extensibility, variable
            off surface bandwidth, and permits a variety of fault tolerant
            designs.  The Hex plane topology also permits nice hierarchical
            process growth without creating excess communication congestion
            which would cause false synchronization in otherwise concurrent
            activities.  In addition the machine is being designed in hopes
            of an eventual wafer-scale integrated implementation.

        3.  Resource Allocation - with any concurrent system which does not
            require machine dependent programming styles, there is a generic
            problem in mapping the concurrent activities extant in the program
            efficiently onto the multi-resource ensemble.  The strategy
            employed in the FAIM-1 system is to analyze the static structure of
            the source program, transform it into a graph, and then via a
            series of function preserving graph transforms produce a loadable
            version of the program which attempts to minimize communication
            cost while preserving the inherent concurrency structure.
            A certain level of dynamic compensation is guided by programmer
            supplied strategy information.

The talk will present an overview of the work we have done in these areas.

Host: Prof. Thomas Knight

------------------------------

Date: 8 Feb 84 15:59:49 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems this coming Tuesday...

                    [Reprinted from the Rutgers bboard.]

                                 I I I SEMINAR


          Title:    Automation of Modeling, Simulation and Experimental
                    Design - An Expert System in Enzyme Kinetics

          Speaker:  Von-Wun Soo

          Date:     Tuesday, February 14, 1984, 1:30-2:30 PM

          Location: Hill Center, Seventh floor lounge


  Von-Wun Soo, a Ph.D. student in our department, will give an informal talk on
the thesis research he is proposing.  This is his abstract:

       We  are proposing to develop a general knowledge engineering tool to
    aid biomedical researchers in developing biological models and  running
    simulation experiments. Without such powerful tools, these tasks can be
    tedious  and  costly.  Our aim is to integrate these techniques used in
    modeling, simulation, optimization, and experimental design by using an
    expert system approach. In addition we propose to carry out experiments
    on the processes of theory formation used by the scientists.

    Enzyme kinetics is the domain where we are concentrating  our  efforts.
    However, our research goal is not restricted to this particular domain.
    We  will attempt to demonstrate with this special case, how several new
    ideas  in  expert  problem  solving  including  automation  of   theory
    formation,  scientific  discovery,  experimental  design, and knowledge
    acquisition can be further developed.

    Four modules have been designed in parallel:  PROKINAL, EPX, CED, DISC.

    PROKINAL is a model generator which simulates the qualitative reasoning
    of the kineticists who conceptualize and postulate a reaction mechanism
    for a set of experimental data.  By using a general procedure known as
    the King-Altman procedure to convert a mechanism topology into a rate
    law function, and symbolic manipulation techniques to factor rate
    constant terms into kinetic constant terms, PROKINAL yields a
    corresponding FORTRAN function which computes the reaction rate.

    EPX is a model simulation aid which is designed by combining EXPERT and
    PENNZYME. It is supposed to guide the novice user in  using  simulation
    tools  and  interpreting  the  results.  It  will take the data and the
    candidate model that has been generated from PROKINAL and estimate  the
    parameters by a nonlinear least square fit.

    CED is an experimental design consultant which uses EXPERT to guide the
    computation of experimental conditions.  Knowledge  of  optimal  design
    from  the  statistical  analysis  has  been taken into consideration by
    EXPERT in order to give advice  on  the  appropriate  measurements  and
    reduce the cost of experimentation.

    DISC  is  a  discovery  module which is now at the stage of theoretical
    development. We wish to explore and simulate the behavior of scientific
    discovery in enzyme kinetics research and use the results in automating
    theory formation tasks.

------------------------------

Date: 09 Feb 84  2146 PST
From: Rod Brooks <ROD@SU-AI>
Subject: CSD Colloquium

                [Reprinted from the Stanford bboard.]

CSD Colloquium
Tuesday 14th, 4:30pm Terman Aud
Michael P. Georgeff, SRI International
"Synthesizing Plans for Co-operating Agents"

Intelligent agents need to be able to plan their activities so that
they can assist one another with some tasks and avoid harmful
interactions on others.  In most cases, this is best achieved by
communication between agents at execution time. This talk will discuss
a method for synthesizing a synchronized multi-agent plan to achieve
such cooperation between agents.  The idea is first to form
independent plans for each individual agent, and then to insert
communication acts into these plans to synchronize the activities of
the agents.  Conditions for freedom from interference and cooperative
behaviour are established.  An efficient method of interaction and
safety analysis is then developed and used to identify critical
regions and points of synchronization in the plans.  Finally,
communication primitives are inserted into the plans and a supervisor
process created to handle synchronization.
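The two-phase scheme in the abstract (form independent single-agent plans first, then insert communication acts at the identified synchronization points) can be sketched in miniature. The plan and conflict representations below are hypothetical simplifications for illustration, not Georgeff's actual formalism:

```python
# Toy sketch of the synthesis idea: start from two independently formed
# single-agent plans, then insert signal/wait communication acts so that
# step pairs identified as conflicting cannot overlap at execution time.
# The representation here is a hypothetical simplification.

def synchronize(plan_a, plan_b, conflicts):
    """conflicts: set of (a_step, b_step) pairs; agent B must wait
    until agent A has finished a_step before running b_step."""
    signalled = {a for a, _ in conflicts}
    out_a = []
    for step in plan_a:
        out_a.append(("do", step))
        if step in signalled:            # A announces completion
            out_a.append(("signal", step))
    out_b = []
    for step in plan_b:
        for a, b in conflicts:
            if b == step:                # B blocks until A is done
                out_b.append(("wait", a))
        out_b.append(("do", step))
    return out_a, out_b

plan_a = ["fetch-tool", "drill-hole"]
plan_b = ["move-arm", "drill-hole"]
a, b = synchronize(plan_a, plan_b, {("drill-hole", "drill-hole")})
print(a)  # [('do', 'fetch-tool'), ('do', 'drill-hole'), ('signal', 'drill-hole')]
print(b)  # [('do', 'move-arm'), ('wait', 'drill-hole'), ('do', 'drill-hole')]
```

In the real method, the conflict set would come from the interaction and safety analysis of critical regions described in the abstract, and a supervisor process would mediate the signal/wait primitives.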

------------------------------

End of AIList Digest
********************
-------
10-Feb-84 22:57:33-PST,13563;000000000001
Mail-From: LAWS created at 10-Feb-84 22:56:02
Date: Fri 10 Feb 1984 22:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #15
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 15

Today's Topics:
  Proofs - Fermat's Theorem & 4-Color Theorem,
  Brain Theory - Parallelism
----------------------------------------------------------------------

Date: 04 Feb 84  0927 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Fermat and decidability

From the logical point of view, Fermat's last theorem is a Pi-1
statement. It follows that it is decidable. Whether it is valid
or not is another matter.

------------------------------

Date: Sat 4 Feb 84 13:13:14-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Spencer-Brown's Proof

I don't know anything about the current status of the computer proof of the
4-colour theorem, though the last I heard (five years ago) was that it was
"probably OK".   That's why I use the word "theorem".   However, I can shed
some light on Spencer-Brown's alleged proof -- I was present at a lecture in
Cambridge where he supposedly gave the outline of the proof, and  I applauded
politely, but was later fairly authoritatively informed that it disintegrated
under closer scrutiny.   This doesn't *necessarily* mean that the man is a
total flake, since other such proofs by highly reputable mathematicians have
done the same (we are told that one proof was believed for twelve whole years,
late in the 19th century, before its flaw was discovered).
                                                                - Richard

------------------------------

Date: Mon, 6 Feb 84 14:46:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Scientific Method

Isn't it interesting that most of what we think about proofs is belief!
I guess that until one actually retraces the steps of a proof and their
justifications, one can only express one's belief in its truth or falseness.

  --Charlie

------------------------------

Date: 3 Feb 84 8:48:01-PST (Fri)
From: harpo!eagle!allegra!alan @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: allegra.2254

I've been reading things like:

        My own introspection seem to indicate that ...
        I find, upon introspection, that ...
        I find that most of what my brain does is ...
        I also feel like ...
        I agree that based on my own observations, my brain appears to
          be ...

Is this what passes for scientific method in AI these days?

        Alan S. Driscoll
        AT&T Bell Laboratories

------------------------------

Date: 2 Feb 84 14:40:23-PST (Thu)
From: decvax!genrad!grkermit!masscomp!clyde!floyd!cmcl2!rocky2!cucard!
      aecom!alex @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: aecom.358

        If the brain were a serial processor, the limiting processing speed
would be the speed at which neurons conduct signals.  Humans, however, do
very complex processing in real time! The other possibility is that the
data structures of the brain are HIGHLY optimized.
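The timing argument above can be made rough-and-ready quantitative. The figures below are illustrative assumptions (not measurements from the message), but they convey the classic point:

```python
# Back-of-envelope version of the serial-speed argument: if each
# neuron-to-neuron step takes about 1 ms, a strictly serial brain could
# execute only a few hundred primitive steps in the ~0.5 s it takes a
# person to recognize a familiar pattern.  All figures are illustrative
# assumptions, not data from the original message.

synaptic_delay_s = 1e-3      # ~1 ms per serial step (assumed)
recognition_time_s = 0.5     # typical human recognition latency (assumed)

max_serial_steps = recognition_time_s / synaptic_delay_s
print(f"Serial budget: about {max_serial_steps:.0f} steps")
# Serial budget: about 500 steps

# No known serial algorithm recognizes complex patterns in a few hundred
# steps, which is the usual argument for massive parallelism.
```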


                                Alex S. Fuss
                        {philabs, esquire, cucard}!aecom!alex

------------------------------

Date: Tue, 7 Feb 84 13:09:25 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: I can think in parale||,

but most of time I'm ---sequential. For example, a lot of * I can
talk with (:-{) and at the same time I can be thinking on s.m.t.i.g
else. I also do this when ai-list gets too boring: I keep browsing
until I find something intere sting, and then I do read, with a better
level of under-standing. In the u-time, I can daydream...

However, If I really want to get s.m.t.i.g done, then I cannot think
on anything else! In this cases, I just have one main-stream idea in
my mind. When I'm looking for a solution, I seldom use depth first,
or bread first search. Most of the time I use a convynatium of all
these tricks I know to search, until one 'works'.

To + up, I think we @|-< can do lots of things in lots of ways. And
until we furnish computers with all this tools, they won't be able to
be as intelligent as us. Just parale|| is not the ?^-1.

        Adolfo
              ///

------------------------------

Date: 7 Feb 1984 1433-PST
From: EISELT%UCI-20A@Rand-Relay
Subject: More on Philip Kahn's reply to Rene Bach

I recently asked Philip Kahn (via personal net mail) to elaborate on his three
cycle model of thought, which he described briefly in his reply to Rene Bach's
question.  Here is my request, and his reply:

                      -------------------------

  In your recent submission to AIList, you describe a three-process cycle
model of higher-level brain function.  Your model has some similarities to
a model of text understanding we are working on here at UC Irvine.  You say,
though, that there are "profuse psychophysical and psychological studies that
reinforce the ... model."  I haven't seen any of these studies and would
be very interested in reading them.  Could you possibly send me references
to these studies?  Thank you very much.

Kurt Eiselt
eiselt@uci-20a


                       ------------------------

Kurt,

        I said "profuse" because I have come across many psychological
and physiological studies that have reinforced my belief.  Unfortunately,
I have very few specific references on this, but I'll tell you as much as
I can....

        I claim there are three stages: associational, reasonability, and
context.  I'll tell you what I've found to support each.  Associational
nets, also called "computational" or "parameter" nets, have been getting
a lot of attention lately.  Especially interesting are the papers coming out
of Rochester (in New York state).  I suggest the paper by Feldman called
"Parameter Nets."  Also, McCullough in "Embodiments of Mind" introduced a
logical calculus that he proposes neural mechanisms use to form assocational
networks.  Since then, a considerable amount of work has been done on
logical calculus, and these works are directly applicable to the analysis
of associational networks.  One definitive "associational network" found
in nature that has been exhaustively defined by Ratliff is the lateral
inhibition that occurs in the linear image sensor of the Limulus crab.
Each element of the network inhibits its neighbors based upon its value,
and the result is the second spatial derivative of the image brightness.
Most of the works you will find to support associational nets are directly
culled from neurophysiological studies.  Yet, classical conditioning
psychology defines the effects of association in its studies on forward and
backward conditioning.  Personally, I feel the biological proof of
associational nets is more concrete.
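The Limulus network described above can be sketched in a few lines: with symmetric nearest-neighbor inhibition at strength 0.5, each interior element's response is exactly minus one-half of the discrete second spatial derivative of brightness. This is a minimal illustrative sketch, not Ratliff's quantitative model:

```python
# Minimal sketch of symmetric nearest-neighbor lateral inhibition on a
# 1-D "retina".  Each element subtracts a fraction k of its neighbors'
# brightness; with k = 0.5 each interior response equals minus one-half
# of the discrete second spatial derivative of the brightness profile.
# Illustrative only -- not Ratliff's quantitative Limulus model.

def lateral_inhibition(brightness, k=0.5):
    n = len(brightness)
    out = []
    for i in range(n):
        left = brightness[i - 1] if i > 0 else 0.0
        right = brightness[i + 1] if i < n - 1 else 0.0
        out.append(brightness[i] - k * (left + right))
    return out

# A brightness edge: the response swings on either side of the step,
# the classic edge enhancement produced by lateral inhibition.
edge = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
print(lateral_inhibition(edge))
# [0.5, 0.0, -2.0, 2.0, 0.0, 2.5]
```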
        The support for a "reasonability" level of processing has more
psychological support, because it is generally a cognitive process.
For example, learning is facilitated by subject matter that is most
consistent with past knowledge; that is, knowledge is most facilitated by
a subject that is most "reasonable" in light of past knowledge.
Some studies have shown, though I can't cite them, that the less
"reasonable" a learning task, the lesser is the learned performance.
I remember having seen at least a paper (I believe it was by a natural
language processing researcher) that claimed that the facility of language
is a metaphorical process.  By definition, a metaphor is the comparison
of alike traits in dissimilar things; it seems to me this is a very good
way to look at the question of reasonability.  Again, though, no specific
references.  In neurophysiology there are found "feedback loops" that
may be considered "reasonability" testers in so far that they take action
only when certain conditions are not met.  You might want to look at work
done on the cerebellum to document this.
        "Context" has been getting a lot of attention lately.  Again,
psychology is the major source of supporting evidence, yet neurophysiology
has its examples also.  Hormones are a prime example of "contextual"
determinants.  Their presence or absence affects the processing that
occurs in the neurons that are exposed to them.  But on a more AI level,
the importance of context has been repeatedly demonstrated by psychologists.
I believe that context is a learned phenomenon.  Children have no construct
of context, and thus, they are often able to make conclusions that may be
associationally feasible, yet clearly contrary to the context of presentation.
Context in developmental psychology has been approached from a more
motivational point of view.  Maslow's hierarchies and the extensive work
into "values" are all defining different levels of context.  Whereas an
associational network may (at least in my book) involve excitatory
nodal influences, context involves inhibitory control over the nodes in
the associational network.  In my view, associational networks only know
(always associated), (often associated), and (weak association).
(Never associated) dictates that no association exists by default.  A
contextual network knows only that the following states can occur between
concepts: (never can occur) and (rarely occurs).  These can be defined using
logical calculus and learning theory.  The associational links are solely
determined by event pairing and are more dynamic.  Contextual
networks are more stable and can be the result of learning as well as
by introspective analysis of the associational links.
        As you can see, I have few specific references on "context," and rely
upon my own theory of context.  I hope I've been of some help, and I would
like to be kept apprised of your work.  I suggest that if you want research
evidence of some of the above, that you examine indices on the subjects I
mentioned.  Again,

                Good luck,
                Philip Kahn

------------------------------

Date: 6 Feb 84 7:18:25-PST (Mon)
From: harpo!ulysses!mhuxl!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: hou5d.809

See the Feb. Scientific American for an article on typists and speed.  There
is indeed evidence for a high degree of parallelism even in SIMILAR tasks.

                                                Mark Terribile

------------------------------

Date: Wed,  8 Feb 84 18:19:09 CST
From: Doug Monk <bro@rice>
Subject: Re: AIList Digest   V2 #11

Subject : Mike Brzustowicz's 'tip of the tongue' as parallel process

Rather than being an example of parallel processing, the 'tip of the
tongue' phenomenon is probably more an example of context switch, where
the attempt to recall the information displaces it temporarily, due to
too much pressure being brought to bear. ( Perhaps a form of performance
anxiety ? ) Later, when the pressure is off, and the processor has a spare
moment, a smaller recall routine can be used without displacing the
information. This model assumes that concentrating on the problem causes
more of the physical brain to be involved in the effort, thus perhaps
'overlaying' the data desired. Once a smaller recall routine is used,
the recall can actually be performed.

        Doug Monk       ( bro.rice@RAND-RELAY )

------------------------------

Date: 6 Feb 84 19:58:33-PST (Mon)
From: ihnp4!ihopa!dap @ Ucb-Vax
Subject: Re: parallel processing in the brain
Article-I.D.: ihopa.153

If you consider pattern recognition in humans when constrained to strictly
sequential processing, I think we are MUCH slower than computers.

In other words, how long do you think it would take a person to recognize
a letter if he could only inquire as to the grayness levels in different
pixels?  Of course, he would not be allowed to "fill in" a grid and then
recognize the letter on the grid.  Only a strictly algorithmic process
would be allowed.

The difference here, as I see it, is that the human mind DOES work in
parallel.  If we were forced to think sequentially about each pixel in our
field of vision, we would become hopelessly bogged down.  It seems to me
that the most likely way to simulate such a process is to have a HUGE
number of VERY dumb processors in a hierarchy of "meshes" such that some
small number of processors in common localities in a low level mesh would
report their findings to a single processor in the next higher level mesh.
This processor would do some very quick, very simple calculations and pass
its findings on to the next higher level mesh.  At the top level, the
accumulated information would serve to recognize the pattern.  I'm really
speaking off the top of my head since I'm no AI expert.  Does anybody know if
such a thing exists or am I way off?

Darrell Plank
BTL-IH
ihopa!dap

[Researchers at the University of Maryland and at the University of
Massachusetts, among others, have done considerable work on "pyramid"
and "processing cone" vision models.  The multilayer approach was
also common in perceptron-based pattern recognition, although very
little could be proven about multilayer networks.  -- KIL]
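The mesh hierarchy Plank describes, and the pyramid models mentioned in the moderator's note, can be illustrated with a toy reduction in which each parent cell summarizes a 2x2 block of children. The averaging operation and grid shape below are illustrative assumptions, not any particular published model:

```python
# Toy "processing pyramid": each level halves the grid by having one
# parent cell summarize a 2x2 block of children (here, by averaging).
# Repeating the reduction collapses a whole image to one top-level
# value using only local computations at every level.
# Illustrative sketch only, not a specific published pyramid model.

def reduce_level(grid):
    """Collapse each 2x2 block of an even-sided square grid to its average."""
    n = len(grid)
    return [[(grid[2*r][2*c] + grid[2*r][2*c+1] +
              grid[2*r+1][2*c] + grid[2*r+1][2*c+1]) / 4.0
             for c in range(n // 2)]
            for r in range(n // 2)]

def pyramid(grid):
    """Return all levels, from the full image up to a 1x1 summary."""
    levels = [grid]
    while len(levels[-1]) > 1:
        levels.append(reduce_level(levels[-1]))
    return levels

image = [[0, 0, 4, 4],
         [0, 0, 4, 4],
         [2, 2, 2, 2],
         [2, 2, 2, 2]]
top = pyramid(image)[-1][0][0]
print(top)  # 2.0 -- the global average, computed by local steps only
```

A real recognition pyramid would pass up features rather than averages, but the communication pattern -- many dumb local processors reporting to fewer processors above -- is the same.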

------------------------------

End of AIList Digest
********************
11-Feb-84 01:04:13-PST,21483;000000000001
Mail-From: LAWS created at 10-Feb-84 23:11:50
Date: Fri 10 Feb 1984 23:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #16
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 16

Today's Topics:
  Lab Description - New UCLA AI Lab,
  Report - National Computing Environment for Academic Research,
  AI Journal - New Developments in the Assoc. for Computational Linguistics,
  Course - Organization Design,
  Conference - Natural Language and Logic Programming & Systems Science
----------------------------------------------------------------------

Date: Fri, 3 Feb 84 22:57:55 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: New UCLA AI Lab

              Announcing the creation of a new Lab for
              Artificial Intelligence Research at UCLA.


Just recently, the UCLA CS  department  received  a  private  foundation
grant  of  $450,000  with  $250,000  matching  funds  from the School of
Engineering and Applied Sciences to create a Laboratory  for  Artificial
Intelligence  Research.   The  departmental chairman as well as the dean
strongly support this effort and are both committed to the growth of  AI
at UCLA.

In  addition, UCLA has been chosen as the site of the next International
Joint Conference on Artificial Intelligence (IJCAI-85) in August, 1985.

UCLA is second in the nation among public research universities  and  in
the  top  six overall in quality of faculty, according to a new national
survey of 5,000 faculty and 228  universities.   In  a  two  year  study
(conducted  by the Conference Board of the Associated Research Councils,
consisting of the American Council of Learned  Societies,  the  American
Council  on  Education,  the  National  Research  Council and the Social
Science Research Council) the UCLA  Computer  Science  Dept.   tied  for
sixth place with U.  of Ill., after Stanford, MIT, CMU, UC Berkeley, and
Cornell.

The UCLA CS department is the recipient (in  1982)  of  a  $3.6  million
five-year  NSF  Coordinated  Experimental Research grant, augmented by a
$1.5 million award from DARPA.

Right  now  the  AI lab includes a dozen Apollo DN300 workstations on an
Apollo Domain ring network.  This ring is attached via an ethernet  gate
to the CS department LOCUS network of 20 Vax 750s and a 780.  UCLA is on
the Arpanet and CSNet.  Languages include Prolog and T  (a  Scheme-based
dialect of lisp).  A number of DN320s, DN460s and a color Apollo (DN660)
are on order and will be  housed  in  a  new  area  being  reserved  for
graduate  AI research.  One Vax 750 on the LOCUS net and 10 Apollos will
be  reserved for graduate AI instruction.  Robotics and vision equipment
is also being acquired.  The  CS  dept  is  seeking  an  assist.   prof.
(tenure  track) in the area of AI, with preference for vision, robotics,
problem-solving, expert systems, learning, and simulation  of  cognitive
processes.  The new AI faculty member will be able to direct expenditure
of a portion of available funds.  (Interested AI PhDs, reply to  Michael
Dyer, CS dept, UCLA, Los Angeles, CA 90024.  Arpanet:  dyer@ucla-cs).

Our AI effort is new, but growing, and includes the following faculty:

     Michael Dyer: natural language processing, cognitive modeling.
     Margot Flowers: reasoning, argumentation, belief systems.
     Judea Pearl: theory of heuristics, search, expert systems.
     Alan Klinger: signal processing, pattern recognition, vision.
     Michel Melkanoff: CAD/CAM, robotics.
     Stott Parker: logic programming, databases.

------------------------------

Date: 26 Jan 84 14:22:30-EDT (Thu)
From: Kent Curtis <curtis%nsf-cs@CSNet-Relay>
Subject: A National Computing Environment for Academic Research

The National Science Foundation has released a report entitled "A National
Computing Environment for Academic Research" prepared by an NSF Working Group
on Computers for Research, Kent Curtis, Chairman. The table of contents is:

Executive Summary

I. The Role of Modern Computing in Scientific and Engineering Research
        with Special Concern for Large Scale Computation

        Background

        A. Summary of Current Uses and Support of Large Scale Computing for
           Research

        B. Critique of Current Facilities and Support Programs

        C. Unfilled Needs for Computer Support of Research

II. The Role and Responsibilities of NSF with Respect to Modern Scientific
    Computing

III. A Plan of Action for the NSF: Recommendations

IV. A Plan of Action for the NSF: Funding Implications

Bibliography

Appendix
        Large-scale Computing Facilities

If you are interested in receiving a copy of this report contact
Kent Curtis, (202) 357-9747; curtis.nsf-cs@csnet-relay;
or write Kent K. Curtis
         Div. of Computer Research
         NSF
         Washington, D.C.  20550

------------------------------

Date: 10 Feb 84 09:35:51 EST (Fri)
From: Journal Duties  <acl@Rochester.ARPA>
Subject: ~New Developments in the Assoc. for Computational Linguistics


The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS -- Some New Developments

    The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is the major
international journal devoted entirely to computational approaches to
natural language research.  With the 1984 volume, its name is being changed
to COMPUTATIONAL LINGUISTICS to reflect its growing international coverage.
There is now a European chapter of the ASSOCIATION FOR COMPUTATIONAL
LINGUISTICS and a growing interest in forming one in Asia.

The journal also has many new people on its Editorial Staff.  James Allen,
of the University of Rochester, has taken over as Editor.  The FINITE STRING
Editor is now Ralph Weischedel of the University of Delaware.  Lyn Bates of
Bolt Beranek and Newman is the Book Review Editor.  Michael McCord, now at
IBM, remains as Associate Editor.

With these major changes in editorial staffing, the journal has fallen
behind schedule.  In order to catch up this year, we will be publishing
close to double the regular number of issues.  The first issue for 1983,
which was just mailed out, contains papers on "Paraphrasing Questions Using
Given and New Information" by Kathleen McKeown and "Denotational Semantics
for 'Natural' Language Question-Answering Programs" by Michael Main and
David Benson.  There is a lengthy review of Winograd's new book by Sergei
Nirenburg and a comprehensive description of the new Center for the Study
of Language and Information at Stanford University.

Highlights of the forthcoming 1983 AJCL issues:

   - Volume 9, No. 2 (expected March '84) will contain papers on
"Natural Language Access to Databases: Interpreting Update Requests" by
Jim Davidson and Jerry Kaplan and "Treating Coordination in Logic
Grammars" by Veronica Dahl and Michael McCord, and will be accompanied
by a supplement: a Directory of Graduate Programs in Computational Linguistics.
The directory is the result of two years of surveys, and provides a fairly
complete listing of programs available internationally.

   - Volume 9, Nos. 3 and 4 (expected June '84) will be a special double
issue on Ill-Formed Input.  The issue will cover many aspects of processing
ill-formed sentences from syntactic ungrammaticality to dealing with inaccurate
reference.  It will contain papers from many of the research groups that
are working on such problems.

    We will begin publishing Volume 10 later in the summer.  In addition
to the regular contributions, we are planning a special issue on the
mathematical properties of grammatical formalisms.  Ray Perrault (now at
SRI) will be guest editor for the issue, which will contain papers addressing
most of the recent developments in grammatical formalisms (e.g., GPSG,
Lexical-Function Grammars, etc).  Also in the planning stage is a special
issue on Machine Translation that Jonathan Slocum is guest editing.

    With its increased publication activity in 1984, COMPUTATIONAL
LINGUISTICS can provide authors with an unusual opportunity to have their
results published in the international community with very little delay.
A paper submitted now (early spring '84) could actually be in print by the
end of the year, provided that major revisions need not be made.  Five
copies of submissions should be sent to:

                 James Allen, CL Editor
                 Dept. of Computer Science
                 The University of Rochester
                 Rochester, NY 14627, USA

    Subscriptions to COMPUTATIONAL LINGUISTICS come with membership in the
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, which still is only $15 per year.
As a special bonus to new members, those who join the ACL for 1984 before
August will receive the special issue on Ill-Formed Input, even though it is
formally part of the volume for 1983.

To become a member, simply send your name, address and a check made out to
the Association for Computational Linguistics to:

                  Don Walker, ACL membership
                  SRI International
                  333 Ravenswood Avenue
                  Menlo Park, CA 94025, USA

People in Europe or with Swiss accounts can pay an equivalent value in Swiss
francs, by personal check in their own currency, or by a banker's draft that
credits account number 141.880.LAV at the Union Bank of Switzerland, 8 rue
de Rhone, CH-1211 Geneva 11, SWITZERLAND; send the statement with payment or
with a copy of the bank draft to:

                  Mike Rosner, ACL
                  ISSCO
                  54, route des Acacias
                  CH-1227 Geneva, SWITZERLAND

------------------------------

Date: Wednesday, 8 February 1984, 14:28-EST
From: Gerald R. Barber <JERRYB at MIT-OZ>
Subject: Course Announcement: Organization Design

                     [Forwarded by SASW@MIT-MC.]

The following is an announcement for a course that Tom Malone and I are
organizing for this spring term.  Anyone who is interested can come to
the course or contact:

        Tom Malone
        Malone@XX
        E53-307, x6843,
        or
        Jerry Barber
        Jerryb@OZ
        NE43-809, x5871



                          Course Announcement
                       15.963 Organization Design

                  Wednesdays, 2:30 - 5:30 p.m., E51-016
                          Prof. Thomas Malone

In this graduate seminar we will review research from a number of
fields, identifying general principles of organization design that apply
to many kinds of information processing systems, including human
organizations and computer systems.  This novel approach will integrate
examples and theories from computer science, artificial intelligence,
organization theory and economics.  The seminar will also include
discussion of several special issues that arise when these general
principles are applied to designing organizations that include both
people and computers.

A partial list of topics includes:

I.  Introduction
        A. What is an organization?
                Scott, March & Simon, Etzioni, etc
        B. What is design?
                Simon: Science of Design

II. Alternative Organizational Designs
        A. Markets
                Computer Systems: Contract Nets, Enterprise
                Organizational Theories: Simon, Arrow, Hurwicz
        B.  Hierarchies
                Computer Systems: Structured programming, inheritance
                  hierarchies
                Organizational Theories: Simon, March, Cyert, Galbraith,
                  Williamson
        C. Cooperating experts (or teams)
                Computer Systems: Hearsay, Ether, Actors, Smalltalk, Omega
                Organizational Theories: Marschak & Radner, Minsky & Papert

III. Integrating Computer Systems and Human Organizations
        A. Techniques for analyzing organizational needs
                Office Analysis Methodology, Critical Success Factors,
                Information Control Networks, Sociotechnical systems
        B. Possible technologies for supporting organizational problem-solving
                Computer conferencing, Knowledge-based systems

------------------------------

Date: Thu 2 Feb 84 20:35:47-PST
From: Pereira@SRI-AI
Subject: Natural Language and Logic Programming


                           Call for Papers

                      International Workshop On
                     Natural Language Understanding
                        and Logic Programming

                Rennes, France - September 18-20, 1984

The workshop will consider fundamental principles and important
innovations in the design, definition, uses and extensions of logic
programming for natural language understanding and, conversely, the
adequacy of logic programming to express natural language grammar
formalisms. The topics of interest are:

* Formal representations of natural language
* Logic grammar formalisms
* Linguistic aspects (anaphora, coordination,...)
* Analysis methods
* Natural language generation
* Uses of techniques for logic grammars (unification)
  in other grammar formalisms
* Compilers and interpreters for grammar formalisms
* Text comprehension
* Applications: natural-language front ends (database
  interrogation, dialogues with expert systems...)

Conference Chairperson

Veronica Dahl  Simon Fraser University,
               Burnaby B.C. V5A 1S6
               Canada

Program Committee

H. Abrahamson (UBC, Canada)        F. Pereira (SRI, USA)
A. Colmerauer (GIA, France)        L. Pereira (UNL, Portugal)
V. Dahl (Simon Fraser U., Canada)  P. Sabatier (CNRS, France)
P. Deransart (INRIA, France)       P. Saint-Dizier (IRISA, France)
M. Gross (LADL, France)            C. Sedogbo (Bull, France)
M. McCord (IBM, USA)

Sponsored by: IRISA, Groupe BULL, INRIA

Deadlines:

        April 15:       Submission of papers in final form
        June 10:        Notification of acceptance to authors
        July 10:        Registration for the Workshop

Submission of papers:

Papers should contain the following items: abstract and title of
paper, author name, country, affiliation, mailing address and
phone (or telex) number, one program area and the following
signed statement: ``The paper will be presented at the Workshop
by one of the authors''.

Summaries should explain what is new or interesting about
the work and what has been accomplished. Papers must report
recent and not yet published work.

Please send 7 copies of a 5 to 10 page single spaced manuscript,
including a 150 to 200 word abstract to:

-- Patrick Saint-Dizier
   Local Organizing Committee
   IRISA - Campus de Beaulieu
   F-35042 Rennes CEDEX - France
   Tel: (99)362000 Telex: 950473 F

------------------------------

Date: Sat, 4 Feb 84 10:18 cst
From: Bruce Shriver <ShriverBD.usl@Rand-Relay>
Subject: call for papers announcement

                              Eighteenth Annual
                       HAWAII INTERNATIONAL CONFERENCE
                                      ON
                               SYSTEM SCIENCES
                     JANUARY 2-4, 1985 / HONOLULU, HAWAII

This is the eighteenth in a series  of  conferences  devoted  to  advances  in
information  and  system sciences.  The conference will encompass developments
in theory or practice in the areas of  COMPUTER  HARDWARE  and  SOFTWARE,  and
advanced  computer  systems  applications in selected areas.  Special emphasis
will be devoted to MEDICAL  INFORMATION  PROCESSING,  computer-based  DECISION
SUPPORT SYSTEMS for upper-level managers in organizations, and KNOWLEDGE-BASED
SYSTEMS.

                               CALL FOR PAPERS

Papers are invited in the preceding and related areas and may be theoretical,
conceptual,  tutorial  or descriptive in nature.  The papers submitted will be
refereed and those selected for conference presentation will be printed in the
CONFERENCE PROCEEDINGS; therefore, papers submitted for presentation must  not
have  been  previously presented or published.  Authors of selected papers are
expected to attend the conference to  present  and  discuss  the  papers  with
attendees.

Relevant topics include:

HARDWARE
* Distributed Processing
* Mini-Micro Systems
* Interactive Systems
* Personal Computing
* Data Communication
* Graphics
* User-Interface Technologies

SOFTWARE
* Software Design Tools & Techniques
* Specification Techniques
* Testing and Validation
* Performance Measurement & Modeling
* Formal Verification
* Management of Software Development

APPLICATIONS
* Medical Information Processing Systems
* Computer-Based Decision Support Systems
* Management Information Systems
* Data-Base Systems for Decision Support
* Knowledge-Based Systems

Deadlines

* Abstracts may be submitted to track chairpersons for guidance and
  indication of appropriate content by MAY 1, 1984.  (An abstract is
  required for the Medical Information Processing track.)
* Full papers must be mailed to the appropriate track chairperson by
  JULY 6, 1984.
* Notification of accepted papers will be mailed to the authors on or
  before SEPTEMBER 7, 1984.
* Final papers in camera-ready form will be due by OCTOBER 19, 1984.

Instructions for Submitting Papers

1. Submit three copies of the full paper, not to exceed 20 double-spaced
   pages, including diagrams, directly to the appropriate track
   chairperson listed below, or if in doubt, to the conference
   co-chairpersons.
2. Each paper should have a title page which includes the title of the
   paper, full name of its author(s), affiliation(s), complete
   address(es), and telephone number(s).
3. The first page should include the title and a 200-word abstract of
   the paper.

                                   SPONSORS
The  Eighteenth  Annual  Hawaii  International Conference on System Science is
sponsored by the University of  Hawaii  and  the  University  of  Southwestern
Louisiana, in cooperation with the ACM and the IEEE Computer Society.

HARDWARE
Edmond L. Gallizzi
HICSS-18 Track Chairperson
Eckerd College
St. Petersburg, FL 33733
(813) 867-1166

SOFTWARE
Bruce D. Shriver
HICSS-18 Track Chairperson
Computer Science Dept.
U. of Southwestern Louisiana
P. O. Box 44330
Lafayette, LA 70504
(318) 231-6284

DECISION SUPPORT SYSTEMS &
KNOWLEDGE-BASED SYSTEMS
Joyce Elam
HICSS-18 Track Chairperson
Dept. of General Business
BEB 600
U. of Texas at Austin
Austin, TX 78712
(512) 471-3322

MEDICAL INFORMATION PROCESSING
Terry M. Walker
HICSS-18 Track Chairperson
Computer Science Dept.
U. of Southwestern Louisiana
P. O. Box 44330
Lafayette, LA 70504
(318) 231-6284

ALL OTHER PAPERS
Papers not clearly within one of the aforementioned tracks should be
mailed to:
Ralph H. Sprague, Jr.
HICSS-18 Conference Co-chairperson
College of Business Administration
University of Hawaii
2404 Maile Way, E-303
Honolulu, HI 96822
(808) 948-7430

Conference Co-Chairpersons
RALPH H. SPRAGUE, JR.
BRUCE D. SHRIVER

Contributing Sponsor Coordinator
RALPH R. GRAMS
College of Medicine
Department of Pathology
University of Florida
Box J-275
Gainesville, FL 32610
(904) 392-4571

FOR FURTHER INFORMATION
Concerning Conference Logistics
NEM B. LAU
HICSS-18 Conference Coordinator
Center for Executive Development
College of Business Administration
University of Hawaii
2404 Maile Way, C-202
Honolulu, HI 96822
(808) 948-7396
Telex: RCA 8216 UHCED    Cable: UNIHAW

The HICSS conference is a non-profit activity organized to provide a forum for
the  interchange of ideas, techniques, and applications among practitioners of
the system sciences.  It maintains objectivity toward the system sciences without
obligation to any commercial  enterprise.   All  attendees  and  speakers  are
expected  to  have  their  respective companies, organizations or universities
bear the costs of their expenses and registration fees.

------------------------------

End of AIList Digest
********************
11-Feb-84 21:33:00-PST,19687;000000000001
Mail-From: LAWS created at 11-Feb-84 21:27:53
Date: Sat 11 Feb 1984 20:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #17
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 17

Today's Topics:
  Jargon - Glossary of NASA Terminology,
  Humor - Programming Languages
----------------------------------------------------------------------

Date: 23 Jan 84 7:41:17-PST (Mon)
From: hplabs!hao!seismo!flinn @ Ucb-Vax
Subject: Glossary of NASA Terminology

[Reprinted from the Space Digest by permission of the author.
This strikes me as an interesting example of a "natural sublanguage."
It does not reflect the growth and change of NASA jargon, however:
subsequent discussion on the Space Digest indicates that many of the
terms date back eight years and many newer terms are missing.  The
author and others are continuing to add to the list. -- KIL]


        I've been collecting examples of the jargon in common use by
people at NASA Headquarters.  Here is the collection so far:
I have not made any of these up.  I'd be glad to hear of worthy
additions to the collection.

        The 'standard NASA noun modifiers' are nouns used as
adjectives in phrases like 'science community' or 'planetary area.'
Definitions have been omitted for entries whose meaning ought to be
clear.

        -- Ted Flinn

Action Item
Actors in the Program
Ancillary
Ankle: 'Get your ankles bitten' = running into unexpected trouble.
Ant: 'Which ant is steering this log?' = which office is in charge
        of a project.
Appendice (pronounced ap-pen-di-see):  some people, never having
        seen a document with only one appendix, think that this
        is the singular of 'appendices.'
Area:  Always as 'X Area,' where X is one of the standard NASA
        noun modifiers.
Asterick:  pronounced this way more often than not.
Back Burner
Bag It: 'It's in the bag' = it's finished.
Ball of Wax
Baseline: verb or noun.
Basis:  Always as 'X Basis,' where X is one of the standard NASA
         noun modifiers.
Bean Counters:  financial management people.
Bed: 'Completely out of bed' = said of people whose opinions
        are probably incorrect.
Belly Buttons: employees.
Bench Scientists
Bend Metal:  verb, to construct hardware.
Bending Your Pick:  unrewarding activity.
Bent Out of Shape:  disturbed or upset, of a person.
Big Picture
Big-Picture Purposes
Bite the Bullet
Big-Ticket Item: one of the expensive parts.
Black-belt Bureaucrat:  an experienced and knowledgeable government
        employee.
Bless: verb, to approve at a high level of management.
Blow One's Skirts Up:  usually negative: 'that didn't blow
        their skirts up' = that didn't upset them.
Blow Smoke:  verb, to obfuscate.
Blown Out of the Water
Bottom Line
Bounce Off: to discuss an idea with someone else.
Brassboard (see Breadboard).
Breadboard (see Brassboard).
Bullet: one of the paragraphs or lines on a viewgraph, which are
         *never* numbered, but always labelled with a bullet.
Bulletize:  to make an outline suitable for a viewgraph.
Bureaucratic Hurdles
Burn:  verb, to score points off a competitor.
Burning Factor:  one of the critical elements.
Calibrate:  verb, to judge the capabilities of people or
              organizations.
Camel's Nose in the Tent
Can of Worms
Canned:  finished, as 'it's in the can.'
Can't Get There From Here.
Capture a Mission:  verb, to construct a launch vehicle for
                        a space flight.
Carve Up the Turkey
Caveat:  usually a noun.
Centers:  'on N-week centers' = at N-week intervals.
Choir, Preaching to the
Clock is Ticking = time is getting short.
Code:  Every section at NASA centers or Headquarters has a label
        consisting of one or more letters or numbers, and in
        conversations or less formal memos, sections are always
        referred to by the code rather than the name:
        Code LI, Code 931, Code EE, etc.
Commonality
Community:  'X Community,' where X is one of the standard NASA
                noun modifiers.
Concept:  'X Concept,' where X is one of the standard NASA
                noun modifiers.
Concur: verb, to agree.
Configure:  verb.
Constant Dollars:  cost without taking inflation into account
        (see Real-Year Dollars).
Contract Out
Core X:  The more important parts of X, where X is one of the
          nouns used as modifiers.
Correlative
Cost-Benefit Tradeoff
Cross-Cut:  verb, to look at something a different way.
Crump:  transitive verb, to cause to collapse.
Crutch: flimsy argument.
Cut Orders:  to fill out a travel order form; left over from the
                days when this was done with mimeograph stencils.
Cutting Edge
Data Base
Data Dump:  a report made to others, usually one's own group.
Data Point:  an item of information.
Debrief:  transitive verb, to report to one's own staff after
            an outside meeting.
Deep Yoghurt:  bad trouble.
Definitize:  verb, to make precise or definite.
De-integrate:  verb, to take apart (not dis-).
De-lid:  verb, to take the top off an instrument.
Delta:  an increment to cost or content.
Descope:  verb, to redesign a project as a result of budget
           cuts (not the opposite of scope, q.v.).
Development Concept
Dialog:  transitive verb.
Disadvantage:  transitive verb.
Disgruntee:  non-NASA person unhappy with program decisions.
Dog's Breakfast
Dollar-Limited
Driver:  an item making up a significant part of cost or
           schedule: 'X is the cost driver.'
Drop-Dead Date:  the real deadline; see 'hard deadline.'
Ducks in a Row
Egg on One's Face
End Item:  product.
End-Run the System
End to End
Extent to Which
Extramural
Facilitize:  verb, to make a facility out of something.
Factor in:  verb.
Feedback:  reaction of another section or organization to
             a proposition.
Fill This Square
Finalize
Finesse The System
First Cut:  preliminary estimate.
Fiscal Constraints
Flag:  verb, to make note of something for future reference.
Flagship Program
Flex the Parameters
Flux and Change
What Will Fly:  'see if it will fly.'
Folded In:  taken into account.
Forest: miss the f. for the trees.
Forgiving, unforgiving:  of a physical system.
Front Office
Full-Up:  at peak level.
Future:  promise or potential, as, 'a lot of potential future.'
Futuristic
Gangbusters
Glitch
Grease the Skids
Green Door:  'behind the green door' = in the Administrator's offices.
Go to Bat For
Goal:  contrasted to 'objective,' q.v.
Grabber
Gross Outline:  approximation.
Ground Floor
Group Shoot = brainstorming session.
Guidelines:  always desirable to have.
Guy:  an inanimate object such as a data point.
Hack:  'get a hack on X' = make some kind of estimate.
Hard Copy:  paper, as contrasted to viewgraphs.
Hard Deadline:  supposed deadline; never met.
Hard Over:  intransigent.
Head Counters:  personnel office staff.
Hit X Hard:  concentrate on X.
Hoop:  a step in realizing a program:  'yet to go through this hoop.'
Humanoid
Hypergolic:  of a person: intransigent or upset in general.
Impact:  verb.
Implement:  verb.
In-House
Initialize
Innovative
Intensive:  always as X-intensive.
Intercompare:  always used instead of 'compare.'
Issue:  always used instead of 'problem.'
Key:  adj., of issues:  'key issue; not particularly key'.
Knickers:  'get into their knickers' = to interfere with them.
Laicize: verb, to describe in terms comprehensible to lay people.
Lashup = rackup.
Lay Track:  to make an impression on management ('we laid a lot
                of track with the Administrator').
Learning Curve
Liaise:  verb.
Limited:  always as X-limited.
Line Item
Link Calculation
Liberate Resources:  to divert funds from something else.
Looked At:  'the X area is being looked at' = being studied.
Loop:  to be in the loop = to be informed.
Love It!   exclamation of approval.
Low-Cost
Machine = spacecraft.
Man-Attended Experiment
Marching Orders
Matrix
Micromanagement = a tendency to get involved in management of
                        affairs two or more levels down from
                        one's own area of responsibility.
Milestone
Mission Definition
Mode:  'in an X mode.'
Model-Dependent
Muscle:  'get all the muscle into X'
Music:  'let's all read from the same sheet of music.'
Necessitate
Nominal:  according to expectation.
Nominative:  adj., meaning unknown.
Nonconcur:  verb, to disagree.
Numb Nut:  unskilled or incapable person.
Objective:  as contrasted with 'goal' (q.v.)
Overarching Objective
Oblectation
Off-Load:  verb.
On Board:  'Y is on board' = the participation of Y is assured.
On-Boards:  employees or participants.
On Leave:  on vacation.
On the Part Of
On Travel:  out of town.
Open Loop
Out-of-House
Over Guidelines
Ox:  'depends on whose ox is gored.'
Package
Paradigm
Parking Orbit:  temporary assignment or employment.
Pathfinder Studies
Pedigree:  history of accumulation of non-NASA support for a mission.
Peg to Hang X On
Pie:  'another slice through this same pie is...'
Piece of the Action
Ping On:  verb, to remind someone of something they were
           supposed to do.
Pitch:  a presentation to management.
Placekeeper
Planning Exercise
Pony in This Pile of Manure Somewhere = some part of this mess
        may be salvageable.
Posture
Pre-Posthumous
Prioritize
Priority Listing
Problem Being Worked:  'we're working that problem.'
Problem Areas
Product = end item.
Programmatic
Pucker Factor:  degree of apprehension.
Pull One's Tongue Through One's Nose:  give someone a hard time.
Pulse:  verb, as, 'pulse the system.'
Quick Look
Rackup = lashup.
Rainmaker:  an employee able to get approval for budget increases
                or new missions.
Rapee: a person on the receiving end of an unfavorable decision.
Rattle the Cage:  'that will rattle their cage.'
Real-Year Dollars: cost taking inflation into account, as
        contrasted with 'constant dollars.'
Reclama
Refugee:  a person transferred from another program.
Report Out:  verb, used for 'report.'
Resources = money.
Resource-Intensive = expensive.
ROM: 'rough order of magnitude,' of estimates.
Rubric
Runout
Sales Pitch
Scenario
Scope:  verb, to attempt to understand something.
Scoped Out:  pp., understood.
Secular = non-scientific or non-technological.
Self-Serving
Sense:  noun, used instead of 'consensus.'
Shopping List
Show Stopper
Sign Off On something = approve.
Space Cadets:  NASA employees.
Space Winnies or Wieners:  ditto, but even more derogatory.
X-Specific
Speak to X:  to comment on X, where X is a subject, not a person.
Specificity
Speed, Up To
Spinning One's Wheels
Spooks:  DOD or similar people from other agencies.
Staff:  verb.
Standpoint:  'from an X standpoint'
Statussed:  adj., as, 'that has been statussed.'
Strap On:  verb, to try out:  'strap on this idea...'
Strawman
String to One's Bow
Street, On The:  distributed outside one's own office.
Stroking
Structure: verb.
Subsume
Success-Oriented:  no provision for possible trouble.
Surface:  verb, to bring up a problem.
Surveille: verb.
Suspense Date:  the mildest form of imaginary deadline.
Tail:  to have one's tail in a crack = to be upset or in trouble.
Tall Pole in the Tent:  data anomaly.
Tar With the Same Brush
On Target
Task Force
Team All Set Up
Tickler = reminder.
Tiger Team
Time-Critical:  something likely to cause schedule trouble.
Time Frame
Torque the System
Total X, where X is one of the standard NASA noun modifiers.
Total X Picture
Truth Model
Unique
Update:  noun or verb.
Up-Front:  adj.
Upscale
Upper Management
Vector:  verb.
Vector a Program:  to direct it toward some objective.
Ventilate the Issues:  to discuss problems.
Versatilify:  verb, to make something more versatile.
Viable: adj., something that might work or might be acceptable.
Viewgraph:  always mandatory in any presentation.
Viz-a-Viz
WAG = wild-assed guess.
Wall to Wall:  adj., pervasive.
Watch:  'didn't happen on my watch...'
Water Off a Duck's Back
Waterfall Chart:  one way of presenting costs vs. time.
I'm Not Waving, I'm Drowning
Wedge; Planning Wedge:  available future-year money.
Been to the Well
Where Coming From
Whole Nine Yards
X-Wide
X-wise
Workaround:  way to overcome a problem.
Wrapped Around the Axle:  disturbed or upset.

------------------------------

Date: Wed 8 Feb 84 07:14:34-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: The Best Languages in Town!!! (forwarded from USENET)

                [Reprinted from the UTexas-20 bboard.]

From: bradley!brad    Feb  6 16:56:00 1984

                               Laidback with (a) Fifth
                               By  John Unger Zussman
                            From Info World, Oct 4, 1982


              Basic, Fortran, Cobol... These programming languages are well
          known  and (more or less)  well loved throughout the computer in-
          dustry.  There are numerous other languages,  however,  that  are
          less  well  known yet still have ardent devotees.  In fact, these
          little-known languages generally have the most fanatic  admirers.
          For  those  who wish to know more about these obscure languages -
          and why they are obscure - I present the following catalog.

              SIMPLE ... SIMPLE is an acronym for Sheer Idiot's Mono
          Purpose Programming Linguistic Environment.  This language,
          developed at the Hanover College for Technological  Misfits,  was
          designed  to  make it impossible to write code with errors in it.
          The statements are, therefore, confined to BEGIN, END, and STOP.
          No matter how you arrange the statements, you can't make a syntax
          error.

              Programs written in  SIMPLE  do  nothing  useful.  Thus  they
          achieve  the  results  of  programs  written  in  other languages
          without the tedious, frustrating process of  testing  and  debug-
          ging.

              SLOBOL ... SLOBOL is best known for the speed, or lack of it,
          of  its  compiler.   Although  many compilers allow you to take a
          coffee break while they compile, SLOBOL compilers  allow  you  to
          take  a  trip to Bolivia to pick up the coffee.  Forty-three pro-
          grammers are known to have died of boredom sitting at their  ter-
          minals while waiting for a SLOBOL program to compile.  Weary SLO-
          BOL programmers often turn to a related (but  infinitely  faster)
          language, COCAINE.

              VALGOL ... (With special thanks to Dan and Betsy "Moon  Unit"
          Pfau)  -  From its modest beginnings in southern California's San
          Fernando Valley, VALGOL is enjoying a dramatic surge of populari-
          ty across the industry.

              VALGOL commands include REALLY, LIKE, WELL and Y$KNOW.  Vari-
          ables are assigned with the  =LIKE and =TOTALLY operators.  Other
          operators include the "CALIFORNIA BOOLEANS," FERSURE and NOWAY.
          Repetitions of code are handled in FOR-SURE loops. Here is a sam-
          ple VALGOL program:
                    14 LIKE, Y$KNOW (I MEAN) START
                    %% IF
                    PI A =LIKE BITCHEN AND
                    01 B =LIKE TUBULAR AND
                    9  C =LIKE GRODY**MAX
                    4K (FERSURE)**2
                    18 THEN
                    4I FOR I=LIKE 1 TO OH MAYBE 100
                    86 DO WAH + (DITTY**2)
                    9  BARF(I) =TOTALLY GROSS(OUT)
                    -17 SURE
                    1F LIKE BAG THIS PROGRAM
                    ?  REALLY
                    $$ LIKE TOTALLY (Y*KNOW)

              VALGOL is characterized by  its  unfriendly  error  messages.
          For  example, when the user makes a syntax error, the interpreter
          displays the message, GAG ME WITH A SPOON!

              LAIDBACK ... Historically, VALGOL is a  derivative  of  LAID-
          BACK,  which  was  developed  at  the  (now defunct) Marin County
          Center for T'ai Chi, Mellowness, and Computer Programming, as  an
          alternative to the more intense atmosphere in nearby Silicon
          Valley.

              The center was ideal for programmers who liked to soak in hot
          tubs  while  they  worked.   Unfortunately, few programmers could
          survive there for long, since the center outlawed  pizza  and  RC
          Cola in favor of bean curd and Perrier.

              Many mourn the demise of LAIDBACK because of  its  reputation
          as  a  gentle  and nonthreatening language. For Example, LAIDBACK
          responded to syntax errors with the message, SORRY MAN,  I  CAN'T
          DEAL WITH THAT.

              SARTRE ... Named after the late existentialist philosopher,
          SARTRE  is an extremely unstructured language. Statements in SAR-
          TRE have no purpose; they just are there. Thus,  SARTRE  programs
          are  left to define their own functions.  SARTRE programmers tend
          to be boring and depressed and are no fun at parties.

              FIFTH ... FIFTH is a precision mathematical language in which
          the  data types refer to quantity.  The data types range from CC,
          OUNCE,  SHOT,  and  JIGGER  to  FIFTH  (hence  the  name  of  the
          language),  LITER,  MAGNUM,  and  BLOTTO.   Commands refer to in-
          gredients such as CHABLIS, CHARDONNAY, CABERNET,  GIN,  VERMOUTH,
          VODKA, SCOTCH and WHATEVERSAROUND.

              The many versions of the FIFTH language reflect the sophisti-
          cation  and financial status of its users.  Commands in the ELITE
          dialect include VSOP and LAFITE, while commands in the GUTTER di-
          alect  include  HOOTCH  and  RIPPLE.  The latter is a favorite of
          frustrated FORTH programmers who end up using the language.

              C- ... This language was named for the grade received by  its
          creator  when  he  submitted  it as a class project in a graduate
          programming class.  C- is best described as  a  "Low-Level"  pro-
          gramming language.  In fact, the language generally requires more
          C- statements than machine-code statements  to  execute  a  given
          task.  In this respect, it is very similar to COBOL.

              LITHP  ...  This  otherwise  unremarkable  language  is   dis-
          tinguished  by  the absence of an "s" in its character set.  Pro-
          grammers and users must substitute "TH".  LITHP is said to be
          useful in prothething lithtth.

              DOGO ... Developed at the Massachusetts Institute of  Obedi-
          ence Training.  DOGO heralds a new era of computer-literate pets.
          DOGO commands include SIT, STAY, HEEL and ROLL OVER.  An  innova-
          tive feature of DOGO is "PUPPY GRAPHICS", in which a small cocker
          spaniel occasionally leaves a deposit as he  travels  across  the
          screen.

                              Submitted By Ian and Tony Goldsmith

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 11-Feb-84 21:35:53
Date: Sat 11 Feb 1984 21:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #18
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 18

Today's Topics:
  AI and Meteorology -  Summary of Responses
----------------------------------------------------------------------

Date: 11 Jan 84 16:07:00-PST (Wed)
From: ihnp4!fortune!rpw3 @ Ucb-Vax
Subject: Re: AI and Weather Forecasting - (nf)
Article-I.D.: fortune.2249

As for the desirability of using AI on the weather, it seems a bit
out of place, when there is rumoured to be a fairly straightforward
(if INCREDIBLY cpu-hungry) thermodynamic relaxation calculation that
gives very good results for 24 hr prediction. It uses as input the
various temperature, wind, and pressure readings from all of the U.S.
weather stations, including the ones cleverly hidden away aboard most
domestic DC-10's and L-1011's. Starting with those values as boundary
conditions, an iterative relaxation is done to fill in the cells of
the continental atmospheric model.
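The iterative scheme described above can be sketched in miniature. Below is a toy Jacobi relaxation in Python: boundary cells hold fixed "station readings" and interior cells are repeatedly averaged until the field settles. This is an illustration of the numerical idea only; a real atmospheric model solves coupled thermodynamic equations on a 3-D grid, not a toy Laplace problem, and all names and values here are invented.

```python
# Toy sketch of iterative relaxation: boundary values are fixed
# "observations"; interior cells are repeatedly replaced by the
# average of their four neighbors until the field stops changing.
def relax(grid, tol=1e-6, max_iters=10_000):
    """Fill interior cells given fixed boundary values."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(max_iters):
        delta = 0.0
        new = [row[:] for row in grid]      # copy; boundaries stay fixed
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                    grid[i][j - 1] + grid[i][j + 1])
                delta = max(delta, abs(new[i][j] - grid[i][j]))
        grid = new
        if delta < tol:                     # converged
            break
    return grid

# 5x5 field: top edge "warm" (1.0), the other edges "cool" (0.0).
field = [[0.0] * 5 for _ in range(5)]
field[0] = [1.0] * 5
result = relax(field)
```

Halving the cell size doubles each grid dimension and shrinks the stable time step, which is where the cubic (and worse) growth in runtime comes from.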

The joke, of course (no joke!), is that it takes 26 hrs to run on an Illiac IV
(somebody from Ames or NOAS or somewhere correct me, please). The accuracy
goes up as the cell size in the model goes down, but the runtime goes up as
the cube! So you can look out the window, wait 2 hours, and say, "Yup,
the model was right."

My cynical prediction is that either (1) by the time we develop an
AI system that does as well, the deterministic systems will have
obsoleted it, or more likely (2) by the time we get an AI model with
the same accuracy, it will take 72 hours to run a 24 hour forecast!

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 19 Jan 84 21:52:42-EST (Thu)
From: ucbtopaz!finnca1 @ Ucb-Vax
Subject: Re: "You cant go home again"
Article-I.D.: ucbtopaz.370

It seems to me (a phrase that is always a copout for the ill-informed;
nonetheless, I proceed) that the real payoff in expert systems for weather
forecasting would be to capture the knowledge of those pre-computer experts who,
with limited data and even fewer dollars, managed to develop their
pattern-recognition facilities to the point that they could FEEL what was
happening and forecast accordingly.

I was privileged to take some meteorology courses from such an oldster many
years ago, and it was, alas,  my short-sightedness about the computer revolution
in meteorology that prevented me from capturing some of his expertise, to
buzz a word or two.

Surely not ALL of these veterans have retired yet...what a service to science
someone would perform if only this expertise could be captured before it dies
off.

        ...ucbvax!lbl-csam!ra!daven    or
        whatever is on the header THIS time.

------------------------------

Date: 15 Jan 84 5:06:29-PST (Sun)
From: hplabs!zehntel!tektronix!ucbcad!ucbesvax.turner @ Ucb-Vax
Subject: Re: Re: You cant go home again - (nf)
Article-I.D.: ucbcad.1315

Re: finnca1@topaz's comments on weather forecasting

Replacing expertise with raw computer power has its shortcomings--the
"joke" of predicting the weather 24 hours from now in 26 hours of cpu
time is a case in point.  Less accurate but more timely forecasts used
to be made by people with slide-rules--and where are these people now?

It wouldn't surprise me if the 20th century had its share of "lost arts".
Archaeologists still dig up things that we don't know quite how to make,
and the technological historians of the next century might well be faced
with the same sorts of puzzles when reading about how people got by
without computers.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: Wed 8 Feb 84 15:29:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Summary of Responses

The following is a summary of the responses to my AIList request for
information on AI and meteorology, spatial and temporal reasoning, and
related matters.  I have tried to summarize the net messages accurately,
but I may have made some unwarranted inferences about affiliations,
gender, or other matters that were not explicit in the messages.

The citations below should certainly not be considered comprehensive,
either for the scientific literature as a whole or for the AI literature.
There has been relevant work in pattern recognition and image understanding
(e.g., the work at SRI on tracking clouds in satellite images), mapping,
database systems, etc.  I have not had time to scan even my own collection
of literature (PRIP, CVPR, PR, PAMI, IJCAI, AAAI, etc.) for relevant
articles, and I have not sought out bibliographies or done online searches
in the traditional meteorological literature.  Still, I hope these
comments will be of use.

                        ------------------

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
reports that he and Alistair Frazer (Penn State Meteo Dept.) are advising
two meteorology/CS students who want to do senior/masters theses in AI.
They have submitted a proposal and expect to hear from NSF in a few months.


Capt. Roslyn (Roz) J. Taylor, Applied AI Project Officer, USAF, @RADC,
has read two of the Gaffney/Racer papers entitled "A Learning Interpretive
Decision Algorithm for Severe Storm Forecasting."  She found the algorithm
to be a "fuzzy math"-based fine-tuning algorithm in much the same spirit
as a Kalman filter.  The algorithm might be useful as the numerical
predictor in an expert system.


Jay Glicksman of the Texas Instruments Computer Science Lab suggests
that we check out

  Kawaguchi, E. et al. (1979)
  An Understanding System of Natural Language and Pictorial Pattern in
  the World of Weather Reports
  IJCAI-6 Tokyo, pp. 469-474

It does not provide many details and he has not seen a follow up, but
the paper may give some leads.  This paper is evidently related to the
Taniguchi et al. paper in the 6th Pat. Rec. proceedings that I mentioned
in my query.

Dr. John Tsotsos and his students at the Univ. of Toronto Laboratory for
Computational Medicine have been working for several years on the ALVEN
system to interpret heart images in X-ray films.  Dr. Tsotsos feels that the
spatial and temporal reasoning capabilities of the system would be of use in
meteorology.  The temporal reasoning includes intervals, points,
hierarchies, and temporal sampling considerations.  He has sent me the
following reports:

  R. Gershon, Y. Ali, and M. Jenkin, An Explanation System for Frame-based
  Knowledge Organized Along Multiple Dimensions, LCM-TR83-2, Dec. 1983.

  J.K. Tsotsos, Knowledge Organization: Its Role in Representation,
  Decision-making and Explanation Schemes for Expert Systems, LCM-TR83-3,
  Dec. 1983.

  J.K. Tsotsos, Representational Axes and Temporal Cooperative Processes,
  Preliminary Draft.

I regret that I have found time for only a cursory examination of these papers,
and so cannot say whether they will be useful in themselves for meteorology
or only as a source of further references in spatial and temporal reasoning.
Someone else in my group is now taking a look at them.  Other papers from
Dr. Tsotsos' group may be found in IJCAI 77, 79, and 81, PRIP 81, ICPR 82,
PAMI Nov. 80, and IEEE Computer Oct. 83.


Stuart C. Shapiro at the Univ. of Buffalo (SUNY) CS Dept. added the
following reference on temporal reasoning:

  Almeida, M. J., and Shapiro, S. C., Reasoning about the temporal
  structure of narrative texts.  Proceedings of the Fifth Annual Meeting
  of the Cognitive Science Society, Rochester, NY, 1983.


Fanya S. Montalvo at MIT echoed my interest in

  * knowledge representations for spatial/temporal reasoning;
  * inference methods for estimating meteorological variables
    from (spatially and temporally) sparse data;
  * methods of interfacing symbolic knowledge and heuristic
    reasoning with numerical simulation models;
  * a bibliography or guide to relevant literature.

She reports that good research along these lines is very scarce, but
suggests the following:

  As far as interfacing symbolic knowledge and heuristic reasoning with
  numerical simulation, Weyhrauch's FOL system is the best formalism I've
  seen/worked-with to do that.  Unfortunately there are few references to it.
  One is Filman, Lamping, & Montalvo in IJCAI'83.  Unfortunately it was too
  short.  There's a reference to Weyhrauch's Prolegomena paper in there.  Also
  there is Wood's, Greenfeld's, and Zdybel's work at BBN with KLONE and a ship
  location database; they're no longer there.  There's also Mark Friedell's
  Thesis from Case Western Reserve; see his SIGGRAPH'83 article, also
  references to Greenfeld & Yonke there.  Oh, yes, there's also Reid Simmons,
  here at MIT, on a system connecting diagrams in geologic histories with
  symbolic descriptions, AAAI'83.  The work is really in bits and pieces and
  hasn't really been put together as a whole working formalism yet.  The
  issues are hard.


Jim Hendler at Brown reports that Drew McDermott has recently written
several papers about temporal and spatial reasoning.  The best one on
temporal reasoning was published in Cognitive Science about a year ago.
Also, one of Drew's students at Yale recently did a thesis on spatial
reasoning.


David M. Axler, MSCF Applications Manager at Univ. of Pennsylvania, suggests:

  A great deal of info about weather already exists in a densely-encoded form,
  namely proverbs and traditional maxims.  Is there a way that this system can
  be converted to an expert system, if for no other reason than potential
  comparison between the analysis it provides with that gained from more
  formal meteorological approaches?

  If this is of interest, I can provide leads to collections of weather lore,
  proverbs, and the like.  If you're actually based at SRI, you're near
  several of the major folklore libraries and should have relatively easy
  access (California is the only state in the union with two grad programs in
  the field, one at Berkeley (under the anthro dept.), and one at UCLA) to the
  material, as both schools have decent collections.

I replied:

  The use of folklore maxims is a good idea, and one fairly easy to build
  into an expert system for prediction of weather at a single site.  (The
  user would have to enter observations such as "red sky at night" since
  pattern recognition couldn't be used.  Given that, I suspect that a
  Prospector-style inference net could be built that would simultaneously
  evaluate hypotheses of "rain", "fog", etc., for multiple time windows.)
  Construction of the system and evaluation of the individual rules would
  make an excellent thesis project.

  Unfortunately, I doubt that the National Weather Service or other such
  organization would be interested in having SRI build such a "toy"
  system.  They would be more interested in methods for tracking storm
  fronts and either automating or improving on the map products they
  currently produce.

  As a compromise, one project we have been considering is to automate
  a book of weather forecasting rules for professional forecasters.
  Such rule books do exist, but the pressures of daily forecasting are
  such that the books are rarely consulted.  Perhaps some pattern
  recognition combined with some man-machine dialog could trigger the
  expert system rules that would remind the user of relevant passages.
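The Prospector-style combination mentioned above can be sketched briefly. Prospector propagates evidence by multiplying a hypothesis's prior odds by a likelihood ratio per observation (LS when present, LN when absent). All rule names and strengths below are invented for illustration, not drawn from any real forecasting rule book.

```python
# Hedged sketch of Prospector-style evidence combination for a single
# hypothesis ("rain tonight").  LS = odds multiplier when the
# observation is present, LN = multiplier when it is absent.
# All strengths are invented for illustration.
def update_odds(prior_prob, rules, observations):
    """Multiply prior odds by LS or LN per rule; return a probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for obs, (ls, ln) in rules.items():
        odds *= ls if observations.get(obs) else ln
    return odds / (1.0 + odds)

rules = {
    "red sky at night":  (0.3, 1.2),   # "sailor's delight": argues against rain
    "halo around moon":  (2.5, 0.9),
    "falling barometer": (4.0, 0.5),
}
p_rain = update_odds(0.3, rules, {"halo around moon": True,
                                  "falling barometer": True})
```

Running several such hypotheses ("rain", "fog", ...) over multiple time windows gives the simultaneous evaluation described in the reply.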

Dave liked the project, and suggested that there may be additional unofficial
rule sources such as those used by the Farmer's Almanac publishers.


Philip Kahn at UCLA is interested in pattern recognition, and recommends
the book

  REMOTE SENSING: Optics and Optical Systems by Philip N. Slater
  Addison-Wesley Publ. Co., Reading, MA, 1980

for information on atmospherics, optics, films, testing/reliability, etc.


Alex Pang at UCLA is doing some non-AI image processing to aid weather
prediction.  He is interested in hearing about AI and meteorology.
Bill Havens at the University of British Columbia expressed interest,
particularly in methods that could be implemented on a personal computer.
Mike Uschold at Edinburgh and Noel Kropf at Columbia University (Seismology
Lab?) have also expressed interest.

                        ------------------

My thanks to all who replied.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 15-Feb-84 00:12:06
Date: Tue 14 Feb 1984 17:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #19
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Feb 1984     Volume 2 : Issue 19

Today's Topics:
  Requests - OPS5 & IBM LISP,
  LISP - Timings,
  Bindings - G. Spencer-Brown,
  Knowledge Acquisition - Regrets,
  Alert - 4-Color Problem,
  Brain Theory - Definition,
  Seminars - Analogy & Causal Reasoning & Tutorial Discourse
----------------------------------------------------------------------

Date: Mon 13 Feb 84 10:06:53-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: OPS5 query

I'd like to find out some information on acquiring a copy of
the OPS5 system. Is there a purchase price, is it free-of-charge,
etc. Please send replies to

        G.TJM@SU-SCORE

Thanks.

--ted

------------------------------

Date: 1 Feb 1984 15:14:48 EST
From: Robert M. Simmons <simmons@EDN-UNIX>
Subject: lisp on ibm

Can anyone give me pointers to LISP systems that run on
IBM 370's under MVS?  Direct and indirect pointers are
welcome.

Bob Simmons
simmons@edn-unix

------------------------------

Date: 11 Feb 84 17:54:24 EST
From: John <Roach@RUTGERS.ARPA>
Subject: Timings of LISPs and Machines


I dug up these timings; they are a little bit out of date but seem a little
more informative.  They were done by Dick Gabriel at SU-AI in 1982 and passed
along by Chuck Hedrick at Rutgers.  Some of the times have been updated to
reflect current machines by myself.  These have been marked with the
date of 1984.  All machines were measured using the function -

an almost Takeuchi function as defined by John McCarthy

(defun tak (x y z)
       (cond ((not (< y x))
              z)
             (t (tak (tak (1- x) y z)
                     (tak (1- y) z x)
                     (tak (1- z) x y)))))

------------------------------------------

(tak 18. 12. 6.)

On 11/750 in Franz ordinary arith     19.9   seconds compiled
On 11/780 in Franz with (nfc)(TAKF)   15.8   seconds compiled   (GJC time)
On Rutgers-20 in Interlisp/1984       13.8   seconds compiled
On 11/780 in Franz (nfc)               8.4   seconds compiled   (KIM time)
On 11/780 in Franz (nfc)               8.35  seconds compiled   (GJC time)
On 11/780 in Franz with (ffc)(TAKF)    7.5   seconds compiled   (GJC time)
On 11/750 in PSL, generic arith        7.1   seconds compiled
On MC (KL) in MacLisp (TAKF)           5.9   seconds compiled   (GJC time)
On Dolphin in InterLisp/1984           4.81  seconds compiled
On Vax 11/780 in InterLisp (load = 0)  4.24  seconds compiled
On Foonly F2 in MacLisp                4.1   seconds compiled
On Apollo (MC68000) PASCAL             3.8   seconds            (extra waits?)
On 11/750 in Franz, Fixnum arith       3.6   seconds compiled
On MIT CADR in ZetaLisp                3.16  seconds compiled   (GJC time)
On MIT CADR in ZetaLisp                3.1   seconds compiled   (ROD time)
On MIT CADR in ZetaLisp (TAKF)         3.1   seconds compiled   (GJC time)
On Apollo (MC68000) PSL SYSLISP        2.93  seconds compiled
On 11/780 in NIL (TAKF)                2.8   seconds compiled   (GJC time)
On 11/780 in NIL                       2.7   seconds compiled   (GJC time)
On 11/750 in C                         2.4   seconds
On Rutgers-20 in Interlisp/Block/84    2.225 seconds compiled
On 11/780 in Franz (ffc)               2.13  seconds compiled   (KIM time)
On 11/780 (Diablo) in Franz (ffc)      2.1   seconds compiled   (VRP time)
On 11/780 in Franz (ffc)               2.1   seconds compiled   (GJC time)
On 68000 in C                          1.9   seconds
On Utah-20 in PSL Generic arith        1.672 seconds compiled
On Dandelion in Interlisp/1984         1.65  seconds compiled
On 11/750 in PSL INUM arith            1.4   seconds compiled
On 11/780 (Diablo) in C                1.35  seconds
On 11/780 in Franz (lfc)               1.13  seconds compiled   (KIM time)
On UTAH-20 in Lisp 1.6                 1.1   seconds compiled
On UTAH-20 in PSL Inum arith           1.077 seconds compiled
On Rutgers-20 in Elisp                 1.063 seconds compiled
On Rutgers-20 in R/UCI lisp             .969 seconds compiled
On SAIL (KL) in MacLisp                 .832 seconds compiled
On SAIL in bummed MacLisp               .795 seconds compiled
On MC (KL) in MacLisp (TAKF,dcl)        .789 seconds compiled
On 68000 in machine language            .7   seconds
On MC (KL) in MacLisp (dcl)             .677 seconds compiled
On SAIL in bummed MacLisp (dcl)         .616 seconds compiled
On SAIL (KL) in MacLisp (dcl)           .564 seconds compiled
On Dorado in InterLisp Jan 1982 (tr)    .53  seconds compiled
On UTAH-20 in SYSLISP arith             .526 seconds compiled
On SAIL in machine language             .255 seconds (wholine)
On SAIL in machine language             .184 seconds (ebox-doesn't include mem)
On SCORE (2060) in machine language     .162 seconds (ebox)
On S-1 Mark I in machine language       .114 seconds (ebox & ibox)

I would be interested if people who had these machines/languages available
could update some of the timings.  There also aren't any timings for Symbolics
or LMI.
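For anyone who wants to rerun the test on a current machine, here is a direct transcription of the TAK function in Python. Python is obviously not one of the systems measured above; the timing is for curiosity only.

```python
# Direct transcription of the "almost Takeuchi" benchmark function
# above, so the test can be re-run on machines without a Lisp.
import time

def tak(x, y, z):
    if not (y < x):
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

start = time.perf_counter()
result = tak(18, 12, 6)
elapsed = time.perf_counter() - start
print(result, elapsed)   # result is 7
```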

John.

------------------------------

Date: Sun, 12 Feb 1984  01:14 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: AIList Digest   V2 #14

In regard to G. Spencer Brown: if you are referring to the author of
the Laws of Form, if that's what it was called, I believe he was
a friend of Bertrand Russell and that he logged out
quite a number of years ago.

------------------------------

Date: Sun, 12 Feb 84 14:18:04 EST
From: Brint <abc@brl-bmd>
Subject: Re:  "You cant go home again"

I couldn't agree more (with your feelings of regret at not
capturing the expertise of the "oldster" in meterological
lore).

My dad was one of the best automotive diagnosticians in
Baltimore until his death six years ago.  His uncanny
ability to pinpoint a problem's cause from external
symptoms was locally legendary.  Had I known then what I'm
beginning to learn now about the promise of expert systems,
I'd have spent many happy hours "picking his brain" with
the (unfilled) promise of making us both rich!

------------------------------

Date: Mon 13 Feb 84 22:15:08-EST
From: Jonathan Intner <INTNER@COLUMBIA-20.ARPA>
Subject: The 4-Color Problem

To Whom It May Concern:

        The computer proof of the 4-color problem can be found in
Appel, K., and W. Haken, "Every planar map is 4-colorable, Part 1:
Discharging" and "Every planar map is 4-colorable, Part 2: Reducibility",
Illinois Journal of Mathematics, 21, 429-567 (1977).  I haven't looked
at this myself, but I understand from Mike Townsend (a Prof here at
Columbia) that the proof is a real mess and involves thousands of
special cases.

        Jonathan Intner
        INTNER@COLUMBIA-20.ARPA

------------------------------

Date: 11 Feb 1984 13:50-PST
From: Andy Cromarty <andy@AIDS-Unix>
Subject: Re: Brain, a parallel processor?

        What evidence is there that the brain is a parallel processor?
        My own introspection seems to indicate that mine is doing time-sharing.
                        -- Rene Bach <BACH@SUMEX-AIM.ARPA>

You are confusing "brain" with "mind".

------------------------------

Date: 10 Feb 1984  15:23 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

                     [Forwarded by SASW@MIT-MC.]

Wednesday, February 15, 4:00pm 8th floor playroom

Structure-Mapping: A Theoretical Framework for Analogy
Dedre Gentner

The structure-mapping theory of analogy describes a set of
principles by which the interpretation of an analogy is derived
from the meanings of its terms.  These principles are
characterized as implicit rules for mapping knowledge about a
base domain into a target domain.  Two important features of the
theory are (1) the rules depend only on syntactic properties of
the knowledge representation, and not on the specific content of
the domains; and (2) the theoretical framework allows analogies
to be distinguished cleanly from literal similarity statements,
applications of general laws, and other kinds of comparisons.

Two mapping principles are described: (1) Relations between
objects, rather than attributes of objects, are mapped from base
to target; and (2) The particular relations mapped are determined
by @u(systematicity), as defined by the existence of higher-order
relations.  Psychological experiments supporting the theory are
described, and implications for theories of learning are
discussed.
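The first mapping principle can be rendered as a toy program: relations between objects are carried from base to target, attributes of single objects are dropped. The solar-system/atom predicates below are illustrative inventions, not code from the talk.

```python
# Toy rendition of structure-mapping's first principle: map relations
# (predicates over two or more objects) from base to target; drop
# attributes (predicates over one object).  Facts are tuples of
# (predicate, object, ...); all predicates here are invented examples.
base_facts = [
    ("attracts", "sun", "planet"),         # relation -> mapped
    ("revolves-around", "planet", "sun"),  # relation -> mapped
    ("yellow", "sun"),                     # attribute -> dropped
    ("hot", "sun"),                        # attribute -> dropped
]
correspondence = {"sun": "nucleus", "planet": "electron"}

def map_to_target(facts, corr):
    """Keep only multi-argument predicates; rename objects via corr."""
    mapped = []
    for pred, *args in facts:
        if len(args) >= 2:                 # a relation, not an attribute
            mapped.append((pred, *(corr[a] for a in args)))
    return mapped

target_facts = map_to_target(base_facts, correspondence)
```

The second principle (systematicity) would further prefer relations that participate in higher-order relational structure, which this sketch omits.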


COMING SOON: Tomas Lozano-Perez, Jerry Barber, Dan Carnese, Bob Berwick, ...

------------------------------

Date: Mon 13 Feb 84 09:15:36-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FEBRUARY 24, 1984

[Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   February 24, 1984
LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:   Ben Kuipers, Department of Mathematics
           Tufts University

TOPIC:     Studying Experts to Learn About Qualitative
                       Causal Reasoning


By analyzing a  verbatim protocol  of an expert's  explanation we  can
derive constraints on the conceptual  framework used by human  experts
for causal reasoning  in medicine.   We use  these constraints,  along
with  textbook  descriptions  of  physiological  mechanisms  and   the
computational requirements  of successful  performance, to  propose  a
model of qualitative causal reasoning.  One important design  decision
in the model is the selection of the "envisionment" version of  causal
reasoning  rather  than  a  version  based  on  "causal  links."   The
envisionment process performs a qualitative simulation, starting  with
a description  of the  structure  of a  mechanism and  predicting  its
behavior.  The qualitative causal reasoning algorithm is a step toward
second-generation medical diagnosis programs  that understand how  the
mechanisms of  the  body work.   The  protocol analysis  method  is  a
knowledge  acquisition  technique   for  determining  the   conceptual
framework of new  types of  knowledge in  an expert  system, prior  to
acquiring large amounts of domain-specific knowledge.  The qualitative
causal reasoning algorithm has been implemented and tested on  medical
and non-medical examples.  It will be the core of RENAL, a new  expert
system for diagnosis in nephrology, that we are now developing.

------------------------------

Date: 12 Feb 84 0943 EST (Sunday)
From: Alan.Lesgold@CMU-CS-A (N981AL60)
Subject: colloquium announcement

          [Forwarded from the CMU-C bboard by Laws@SRI-AI.]


                 THE INTELLIGENT TUTORING SYSTEM GROUP
                LEARNING RESEARCH AND DEVELOPMENT CENTER
                        UNIVERSITY OF PITTSBURGH

                          AN ARCHITECTURE FOR
                           TUTORIAL DISCOURSE

                            BEVERLY P. WOOLF
              COMPUTER AND INFORMATION SCIENCE DEPARTMENT
                      UNIVERSITY OF MASSACHUSETTS

                        WEDNESDAY, FEBRUARY 15,
              2:00 - 3:00, LRDC AUDITORIUM (SECOND FLOOR)

    Human  discourse is quite complex compared to the present ability of
machines to handle communication.  Sophisticated research into discourse
is needed before we can construct intelligent interactive systems.  This
talk presents recent research in the areas of discourse generation, with
emphasis on teaching and tutoring dialogues.
    This talk describes MENO, a system where hand-tailored  rules  have
been  used  to  generate  flexible  responses  in  the  face of student
failures.  The  system  demonstrates  the  effectiveness  of separating
tutoring  knowledge  and  tutoring  decisions  from  domain and student
knowledge.  The design of  the  system  suggests  a  machine  theory of
tutoring and uncovers some of the conventions and intuitions of tutoring
discourse.    This  research  is applicable to any intelligent interface
which must reason about the user's knowledge.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 17-Feb-84 09:36:31
Date: Fri 17 Feb 1984 09:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #20
To: AIList@SRI-AI


AIList Digest            Friday, 17 Feb 1984       Volume 2 : Issue 20

Today's Topics:
  Lisp - Timing Data Caveat,
  Bindings - G. Spencer Brown,
  Logic - Nature of Undecidability,
  Brain Theory - Parallelism,
  Expert Systems - Need for Perception,
  AI Culture - Work in Progress,
  Seminars - Learning & Automatic Deduction & Commonsense Reasoning
----------------------------------------------------------------------

Date: 16 Feb 1984 1417-PST
From: VANBUER at USC-ECL.ARPA
Subject: Timing Data Caveat

A warning on the TAK performance testing:  this code only exercises
function calling and small integer arithmetic, and none of the things
most heavily used in "real" Lisp programming: CONSing, garbage collection,
paging (AI stuff is big, after all).
        Darrel J. Van Buer

------------------------------

Date: Wed, 15 Feb 84 11:15:21 EST
From: John McLean <mclean@NRL-CSS>
Subject: G. Spencer-Brown and undecidable propositions


G. Spencer-Brown is very much alive.  He spent several months at NRL a couple
of years ago and presented lectures on his purported proof of the four color
theorem.  Having heard him lecture on several topics previously, I did not feel
motivated to attend his lectures on the four color theorem so I can't comment
on them first hand.  Those who knew him better than I believe that he is
currently at Oxford or Cambridge.  By the way, he was not a friend of Russell's
as far as I know.  Russell merely said something somewhat positive about LAWS
OF FORM.

With respect to undecidability, I can't figure out what Charlie Crummer means
by "undecidable proposition".  The definition I have always seen is that a
proposition is undecidable with respect to a set of axioms if it is
independent, i.e,. neither the proposition nor its negation is provable.
(An undecidable theory is a different kettle of fish altogether.) Examples are
Euclid's 5th postulate with respect to the other 4, Goedel's sentence with
respect to first order number theory, the continuum hypothesis with respect to
set theory, etc.  I can't figure out the claim that one can't decide whether
an undecidable proposition is decidable or not.  Euclid's 5th postulate,
Goedel's sentence, and the continuum hypothesis have been proven to be
undecidable.  For simple theories, such as sentential logic (i.e., no
quantifiers), there are even algorithms for detecting undecidability.
                                                                    John McLean
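John's closing point about sentential logic can be made concrete with the classic truth-table decision procedure: with no quantifiers, a formula over n variables is settled by enumerating all 2**n assignments. In the sketch below (names and representation are my own), formulas are Python functions over an assignment dictionary.

```python
# Brute-force decision procedure for sentential (propositional) logic:
# enumerate every truth assignment and classify the formula.
from itertools import product

def classify(formula, variables):
    """Return 'tautology', 'contradiction', or 'contingent'."""
    results = set()
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        results.add(bool(formula(env)))
    if results == {True}:
        return "tautology"
    if results == {False}:
        return "contradiction"
    return "contingent"

# Excluded middle: p or not p -- provable.
kind1 = classify(lambda e: e["p"] or not e["p"], ["p"])
# p and not p -- its negation is provable.
kind2 = classify(lambda e: e["p"] and not e["p"], ["p"])
# p -> q -- neither it nor its negation is provable on its own.
kind3 = classify(lambda e: (not e["p"]) or e["q"], ["p", "q"])
```

A "contingent" verdict is the propositional analogue of independence: the formula is neither provable nor refutable from the empty theory.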

------------------------------

Date: Wed, 15 Feb 84 11:18:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: G. Spencer-Brown and undecidable propositions

Thanks for the lead to G. S-B.  I think I understand what he is driving at with
THE LAWS OF FORM so I would like to see his alleged 4-color proof.

Re: undecidability... Is it true that all propositions can be proved decidable
or not with respect to a particular axiomatic system from WITHIN that system?
My understanding is that this is not generally possible.  Example (Not a proof
of my understanding):  Is the value of the statement "This statement is false."
decidable from within Boolean logic?  It seems to me that from within Boolean
logic, i.e. 2-valued logic, all that would be seen is that no matter how long
I crank I never seem to be able to settle down to a unique value.  If this
proposition is fed to a 2-valued logic program (written in PROLOG, LISP, or
whatever language one desires) the program just won't halt.  From OUTSIDE the
machine, a human programmer can easily detect the problem but from WITHIN
the Boolean system it's not possible.  This seems to be an example of the
halting problem.

--Charlie
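Charlie's oscillation argument can be made concrete. Treat the liar sentence as the map v -> not v and search for a truth value it settles on; inside two-valued logic the iteration never converges, and only an external cap (the "outside the machine" view) detects this. The sketch below uses invented names.

```python
# "This statement is false" as a function over {True, False}:
# a stable truth value would be a fixed point of liar(), but
# naive iteration just oscillates, so we cap it from outside.
def liar(v):
    return not v

def find_fixed_point(f, start, max_steps=100):
    v = start
    for _ in range(max_steps):
        nxt = f(v)
        if nxt == v:
            return v       # settled on a truth value
        v = nxt
    return None            # never settles: no value inside the system

outcome = find_fixed_point(liar, True)
```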

------------------------------

Date: 16 Feb 1984  12:22 EST (Thu)
From: "Steven C. Bagley" <BAGLEY%MIT-OZ@MIT-MC.ARPA>
Subject: Quite more than you want to know about George Spencer Brown

Yes, Spencer Brown was associated with Russell, but since Lord Russell
died recently (1970), I think it safe to assume that not ALL of his
associates are dead, yet, at least.

There was a brief piece about Spencer Brown in "New Scientist" several
years ago (vol. 73, no. 1033, January 6, 1977, page 6).  Here are two
interesting quotes:

"What sets him apart from the many others who have claimed a proof of
the [four-color] theorem are his technique, and his personal style.
Spencer Brown's technique rests on a book he wrote in 1964 called
`Laws of Form.'  George Allen and Unwin published it in 1969, on the
recommendation of Bertrand Russell.  In the book he develops a new
algebra of logic -- from which the normal Boolean algebra (a means of
representing propositions and arguments with symbols) can be derived.
The book has had a mixed reputation, from `a work of genius' to
`pretentious triviality.'  It is certainly unorthodox, and mixes
metaphysics and mathematics.  Russell himself was taken with the work,
and mentions it in his autobiography....

The style of the man is extravagant -- he stays at the Savoy -- and
all-embracing.  He was in the Royal Navy in the Second World War; has
degrees in philosophy and psychology (but not mathematics); was a
lecturer in logic at Christ Church College, Oxford; wrote a treatise
on probability; a volume of poetry, and a novel; was a chief logic
designer with Mullard Equipment Ltd where his patented design of a
transistorised elevator logic circuit led to `Laws of Form'; has two
world records for gliding; and presently lectures part-time in the
mathematics department at the University of Cambridge while also
managing his publishing business."

I know of two reviews of "Laws of Form": one by Stafford Beer, the
British cyberneticist, which appeared in "Nature," vol. 223, Sept 27,
1969, and the other by Lancelot Law Whyte, which was published in the
British Journal of the Philosophy of Science, vol 23, 1972, pages
291-292.

Spencer Brown's probability work was published in a book called
"Probability and Scientific Inference", in the late 1950's, if my
memory serves me correctly.  There is also an early article in
"Nature" called "Statistical Significance in Psychical Research", vol.
172, July 25, 1953, pp. 154-156.  A comment by Soal, Stratton, and
Thouless on this article appeared in "Nature" vol. 172, Sept 26, 1953,
page 594, and a reply by Spencer Brown immediately follows.  The first
sentence of the initial article reads as follows: "It is proposed to
show that the logical form of the data derived from experiments in
psychical research which depend upon statistical tests is such as to
provide little evidence for telepathy, clairvoyance, precognition,
psychokinesis, etc., but to give some grounds for questioning the
practical validity of the test of significance used."  Careful Spencer
Brown watchers will be interested to note that this article lists his
affiliation as the Department of Zoology and Comparative Anatomy,
Oxford; he really gets around.

His works have had a rather widespread, if unorthodox, impact.
Spencer Brown and "Laws of Form" are mentioned in Adam Smith's Powers
of Mind, a survey of techniques for mind expansion, contraction,
adjustment, etc. (e.g., EST and various flavors of hallucinogens); are
briefly noted in Arthur Koestler's The Roots of Coincidence, which is,
naturally enough, about probability, coincidence, and synchronicity;
and are mentioned again in "The Dyadic Cyclone," by Dr. John C. Lilly,
dolphin aficionado and consciousness expander extraordinaire.

If this isn't an eclectic enough collection of trivia about Spencer
Brown, keep reading.  Here is a quote from his book "Only Two Can Play
This Game", written under the pseudonym of James Keys.  "To put it
bluntly, it looks as if the male is so afraid of the fundamentally
different order of being of the female, so terrified of her huge
magical feminine power of destruction and regeneration, that he
doesn't look at her as she really is, he is afraid to accept the
difference, and so has repressed into his unconscious the whole idea
of her as ANOTHER ORDER OF BEING, from whom he might learn what he
could not know of himself alone, and replaced her with the idea of a
sort of second-class replica of himself who, because she plays the
part of a man so much worse than a man, he can feel safe with because
he can despise her."

There are some notes at the end of this book (which isn't really a
novel, but his reflections, written in the heat of the moment, about
the breakup of a love affair) which resemble parts of "Laws of Form":
"Space is a construct.  In reality there is no space.  Time is also a
construct.  In reality there is no time.  In eternity there is space
but no time.  In the deepest order of eternity there is no space....In
a qualityless order, to make any distinction at all is at once to
construct all things in embryo...."

And last, I have no idea of his present-day whereabouts.  Perhaps try
writing to him c/o Cambridge University.

------------------------------

Date: Thu, 16 Feb 84 13:58:28 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Quite more than you want to know about George Spencer Brown

Thank you for the copious information on G. S-B.  If I can't get in touch
with him now, it will be because he does not want to be found.

After the first reading of the first page of "The Laws of Form" I almost
threw the book away.  I am glad, however, that I didn't.  I have read it
several times and thought carefully about it and I think that there is much
substance to it.

  --Charlie

------------------------------

Date: 15 Feb 84  2302 PST
From: John McCarthy <JMC@SU-AI>
Subject: Serial or parallel

        It seems to me that introspection can tell us that the brain
does many things serially.  For example, a student with 5 problems
on an examination cannot set 5 processes working on them.  Indeed
I can't see that introspection indicates that anything is done
in parallel, although it does indicate that many things are done
subconsciously.  This is non-trivial, because one could imagine
a mind that could set several processes going subconsciously and
then look at them from time to time to see what progress they
were making.

        On the other hand, anatomy suggests and physiological
experiments confirm that the brain does many things in parallel.
These things include low level vision processing and probably
also low level auditory processing and also reflexes.  For example,
the blink reflex seems to proceed without thought, although it
can be observed and in parallel with whatever else is going on.
Indeed one might regard the blink reflex and some well learned
habits as counter-examples to my assertion that one can't set
parallel processes going and then observe them.

        All else seems to be conjecture.  I'll conjecture that
a division of neural activity into serial and parallel activities
developed very early in evolution.  For example, a bee's eye is
a parallel device, but the bee carries out long chains of serial
activities in foraging.  My more adventurous conjecture is that
primate level intelligence involves applying parallel pattern
recognition processes evolved in connection with vision to records
of the serial activities of the organism.  The parallel processes
of recognition are themselves subconscious, but the results have
to take part in the serial activity.  Finally, seriality seems
to be required for coherence.  An animal that seeks food by
locomotion works properly only if it can go in one direction
at a time, whereas a sea anemone can wave all its tentacles at
once and needs only very primitive seriality that can spread
in a wave of activity.

        Perhaps someone who knows more physiology can offer more
information about the division of animal activity into serial
and parallel kinds.

------------------------------

Date: Wed, 15 Feb 84 22:40:48 pst
From: finnca1%ucbtopaz.CC@Berkeley
Subject: Re:  "You cant go home again"
        Date:     Sun, 12 Feb 84 14:18:04 EST
        From: Brint <abc@brl-bmd>

        I couldn't agree more (with your feelings of regret at not
        capturing the expertise of the "oldster" in meteorological
        lore).

        My dad was one of the best automotive diagnosticians in
        Baltimore [...]

Ah yes, the scarcest of experts these days:  a truly competent auto
mechanic!  But don't you still need an expert to PERCEIVE the subtle
auditory cues and translate them into symbolic form?

Living in the world is a full time job, it seems.

                Dave N. (...ucbvax!ucbtopaz!finnca1)

------------------------------

Date: Monday, 13 Feb 1984 18:37:35-PST
From: decwrl!rhea!glivet!zurko@Shasta
Subject: Re: The "world" of CS

        [Forwarded from the Human-Nets digest by Laws@SRI-AI.]

The best place for you to start would be with Sherry Turkle, a
professor at MIT's STS department.  She's been studying both the
official and unofficial members of the computer science world as a
culture/society for a few years now.  In fact, she's supposed to be
putting a book out on her findings, "The Intimate Machine".  Anyone
heard what's up with it?  I thought it was supposed to be out last
Sept, but I haven't been able to find it.
        Mez

------------------------------

Date: 14 Feb 84 21:50:52 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Learning Seminar

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      MACHINE LEARNING BROWN BAG SEMINAR

Title:     When to Learn
Speaker:   Michael Sims
Date:      Wednesday, Feb. 15, 1984 - 12:00-1:30
Location:  Hill Center, Room 254 (note new location)

       In  this  informal  talk I will describe issues which I have broadly
    labeled  'when  to  learn'.    Most  AI  learning  investigations  have
    concentrated  on  the  mechanisms  of  learning.    In  part  this is a
    reasonable consequence of AI's close  relationship  with  the  'general
    process tradition' of psychology [1].  The influences of ecological and
    ethological   (i.e.,  animal  behavior)  investigations  have  recently
    challenged this research methodology in psychology, and I believe  this
    has important ramifications for investigations of machine learning.  In
    particular,  this  influence  would  suggest that learning is something
    which takes place when an appropriate environment  and  an  appropriate
    learning  mechanism  are  present,  and  that  it  is  inappropriate to
    describe learning by describing a learning mechanism without describing
    the environment in which it operates.  The most cogent new issues which
    arise are the description of the environment, and  the  confronting  of
    the  issue  of  'when  to learn in a rich environment'.   By a learning
    system in a 'rich environment' I  mean  a  learning  system  which must
    extract the items to be learned from sensory input which is too rich to
    be  exhaustively stored.  Most present learning systems operate in such
    a restrictive environment that there is no question of what or when  to
    learn.   I will also present a general architecture for such a learning
    system in a rich environment, called a Pattern Directed Learning Model,
    which was motivated by biological learning systems.


                                  References

[1]   Johnston, T. D.
      Contrasting approaches to a theory of learning.
      Behavioral and Brain Sciences 4:125-173, 1981.

------------------------------

Date: Wed 15 Feb 84 13:16:07-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: "Automatic deduction" and other stuff

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A reminder that the seminar on automatic reasoning / theorem proving /logic
programming / mumble mumble mumble  which I advertised earlier is going to
begin shortly, under one title or another.   It will tentatively be on
Wednesdays at 1:30 in MJH301.   If you wish to be on the mailing list for this,
please mail to me or Yoni Malachi (YM@SAIL).   But if you are already on
Carolyn Talcott's mailing list for the MTC seminars, you will probably be
included on the new list unless you ask not to be.

For those interested specifically in the MRS system, we plan to continue MRS
meetings, also on Weds., at 10:30, starting shortly.   I expect to announce
such meetings on the MRSusers distribution list.   To get on this, mail to me
or Milt Grinberg (GRINBERG@SUMEX).   Note that MRSusers will contain other
announcements related to MRS as well.
                                                - Richard

------------------------------

Date: Wed 15 Feb 84
Subject: McCarthy Lectures on Commonsense Knowledge

      [Forwarded from the Stanford CSLI newsletter by Laws@SRI.]


   MCCARTHY LECTURES ON THE FORMALIZATION OF COMMONSENSE KNOWLEDGE

     John McCarthy  will  present  the remaining three lectures of his
series (the first of the four was held January 20) at 3:00 p.m. in the
Ventura Hall Seminar Room on the dates shown below.

Friday, Feb. 17   "The Circumscription Mode of Nonmonotonic Reasoning"

        Applications of circumscription to formalizing commonsense
        facts.  Application to the frame problem, the qualification
        problem, and to the STRIPS assumption.

Friday, March 2   "Formalization of Knowledge and Belief"

        Modal and first-order formalisms.  Formalisms in which possible
        worlds are explicit objects.  Concepts and propositions as
        objects in theories.

Friday, March 9   "Philosophical Conclusions Arising from AI Work"

        Approximate theories, second-order definitions of concepts,
        ascription of mental qualities to machines.

------------------------------

End of AIList Digest
********************
Date: Wed 22 Feb 1984 16:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #21
To: AIList@SRI-AI


AIList Digest           Thursday, 23 Feb 1984      Volume 2 : Issue 21

Today's Topics:
  Waveform Analysis - EEG/EKG Request,
  Laws of Form - Comment,
  Review - Commercial NL Review in High Technology,
  Humor - The Adventures of Joe Lisp,
  Seminars - Computational Discovery & Robotic Planning & Physiological
    Reasoning & Logic Programming & Mathematical Expert System
----------------------------------------------------------------------

Date: Tue, 21 Feb 84 22:29:05 EST
From: G B Reilly <reilly@udel-relay.arpa>
Subject: EEG/EKG Scoring

Has anyone done any work on automatic scoring and interpretation of EEG or
EKG outputs?

Brendan Reilly

[There has been a great deal of work in these areas.  Good sources are
the IEEE pattern recognition or pattern recognition and image processing
conferences, IEEE Trans. on Pattern Analysis and Machine Intelligence,
IEEE Trans. on Computers, and the Pattern Recognition journal.  There
have also been some conferences on medical pattern recognition.  Can
anyone suggest a bibliography, special issue, or book on these subjects?
Have there been any AI (as opposed to PR) approaches to waveform diagnosis?
-- KIL]

------------------------------

Date: 19-Feb-84 02:14 PST
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: G. Spencer-Brown and the Laws of Form

I know of someone who talked with G. on the telephone about six years
ago somewhere in Northern California.  My friend developed a quantum
logic for expressing paradoxes, and some forms of schizophrenia, among
other things.  Puts fuzzy set theory to shame.  Anyway, he wanted to
get together with G. to discuss his own work and what he perceived in
the Laws of Form as very fundamental problems in generality due to
over-simplicity.  G. refused to meet without being paid fifty or so
dollars per hour.

Others say that the LoF's misleading notation masks the absence of any
significant proofs.  They observe that the notation uses whitespace as
an implicit operator, something that becomes obvious in an attempt to
parse it when represented as character strings in a computer.
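[The parsing complaint is easy to make concrete.  A minimal sketch, using
an ASCII parenthesis rendering of the mark (my own stand-in, not Spencer
Brown's notation): the two initial equations of the primary arithmetic
become explicit string-rewriting rules once juxtaposition is spelled out.]

```python
# Illustration only: "Laws of Form" primary arithmetic with the mark
# written as "()" -- an ASCII stand-in for Spencer Brown's cross.
#   calling:   ()()  ->  ()    (a call made again is the call)
#   crossing:  (())  ->        (a crossing made again is not the crossing)

def reduce_lof(expr: str) -> str:
    """Rewrite with the two laws until a fixed point is reached.
    A well-formed expression ends as "()" (marked) or "" (unmarked)."""
    while True:
        nxt = expr.replace("(())", "").replace("()()", "()")
        if nxt == expr:
            return expr
        expr = nxt
```

For example, "((()))" reduces to the mark "()" and "(()())" to the
unmarked state "" -- the juxtaposition that whitespace expressed on the
page has to become explicit adjacency in the string.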

I became interested in the Laws of Form when it first came out as it
promised to be quite an elegant solution to the most obscure proofs of
Whitehead and Russell's Principia Mathematica.  The LoF carried to
perfection a very similar simplification I attempted while studying
the same logical foundations of mathematics.  One does not get too far
into the proofs before getting the distinct feeling that there has GOT
to be a better way.

It would be interesting to see an attempt to express the essence of
Gödel's sentence in the LoF notation.

 -- kirk

------------------------------

Date: Fri 17 Feb 84 10:57:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commercial NL Review in High Technology

The February issue of High Technology has a short article on
natural language interfaces (to databases, mainly).  The article
and business outlook section mention four NL systems currently
on the market, led by AIC's Intellect ($70,000, IBM mainframes),
Frey Associates' Themis ($24,000, DEC VAX-11), and Cognitive
Systems' interface.  (The fourth is not named, but some OEMs and
licensees of the first two are given.)  The article says that
four more systems are expected out this year, and discusses
Symantec's system ($400-$600, IBM PC with 256 Kbytes and hard disk)
and Cal Tech's ASK (HP9836 micro, licensed to HP and DEC).

------------------------------

Date:    Tue, 14 Feb 84 11:21:09 EST
From:    Kris Hammond <Hammond@YALE>
Subject: *AI-LUNCH*

         [Forwarded from a Yale bboard by Shrager@CMU-PSY-A.]

                 THE ADVENTURES OF JOE LISP, T MAN

   Brought to you by:  *AI-LUNCH*, it's hot,  it's  cold,  it's  more  than
   a lunch...
                      This week's episode:

                  The Case of the Bogus Expert
                            Part I

   It was  late  on  a  Tuesday and I was dead in my seat from nearly an
   hour of grueling mail reading and idle chit-chat with random  passers
   by.  The  only  light  in  my  office  was the soft glow from my CRT,
   the only sound was the pain wracked rattle of  an  over-heated  disk.
   It was  raining  out,  but  the  steady staccato rhythm that beat its
   way into the skulls of others was held  back  by  the  cold  concrete
   slabs of  my windowless walls.  I like not having windows, but that's
   another story.

   I didn't hear her come in, but when the  scent  of  her  perfume  hit
   me, my  head swung faster than a Winchester.  She was wearing My-Sin,
   a perfume with the smell of an expert, but that wasn't what impressed
   me.  What  hit  me  was  her  contours.   She had a body with all the
   right variables.  She wore a dress with a single closure that  barely
   hid the  dynamic  scoping  of  what  was  underneath.  Sure I saw her
   as an object, but I guess I'm just object oriented.   It's  the  kind
   of operator I am.

   After she sat down and began to tell her story I  realized  that  her
   sophisticated look  was  just  cover.  She was a green kid, still wet
   behind the ears.  In fact she was wet all over.  As I  said,  it  was
   raining outside.  It's an easy inference.

   It  seems  the  kid's  step-father  had  disappeared.   He had been a
   medical specialist,  diagnosis  and  prescription,  but  one  day  he
   started making  wild  claims  about  knowledge  and planning and then
   he  vanished.   I  had  heard  of  this  kind  before.    Some   were
   specialists.  Some  in  medicine,  some  in geology, but all were the
   same kind of guy.  I looked the girl in the eye  and  asked  the  one
   question she  didn't  want  to  hear,  "He's  rule-based, isn't he?"

   She turned  her  head away and that was all the answer I needed.  His
   kind were cold, unfeeling, unchanging, but she still  loved  him  and
   wanted him back again.

   Once I  got  a  full  picture of the guy I was sure that I knew where
   to find him, California.  It was the haven for his  way  of  thinking
   and acting.   I  was  sure  that he had been swept up by the EXPERTS.
   They were a cult that had grown up in the past few  years,  promising
   fast and  easy  enlightenment.   What  they  didn't tell you was that
   the price was your ability  to  understand  itself.   He  was  there,
   as sure as I was a T Man.

   I knew of at least one operative in California who could be  trusted,
   and I  knew  that  I had to talk to him before I could do any further
   planning.  I reached for the phone and gave him a call.

   The conversation was short and  sweet.   He  had  resource  conflicts
   and couldn't  give  me  a  hand  right now.  I assumed that it had to
   be more complex than that and almost  said  that  resource  conflicts
   aren't  that  easy  to  identify,  but  I  had  no  time  to  waste  on
   infighting while the real enemy was still at  large.   Before  he  hung
   up, he  suggested  that  I pick up a radar detector if I was planning
   on driving out and asked if I could grab a half-gallon  of  milk  for
   him on  the  way.   I agreed to the favor, thanked him for his advice
   and wished him luck on his tan...

    That's all for now, kids.  Tune in next week for part two of:

                  The Case of the Bogus Expert

                            Starring

                        JOE LISP, T MAN

   And remember kids, Wednesdays are *AI-LUNCH* days and  11:45  is  the
   *AI-LUNCH* time.  And kids, if you send in 3 box tops from *AI-LUNCH*
   you can get a JOE LISP magic decoder ring.  This  is  the  same  ring
   that saved  JOE  LISP only two episodes ago and is capable of parsing
   from surface to deep  structure  in  less  than  15  transformations.
   It's part plastic, part metal, and all bogus, so order now.

------------------------------

Date: 17 February 1984 11:55 EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Computational Discovery of Mathematical Laws

          [Forwarded from the MIT-MC bboard by Laws@SRI-AI.]

TITLE:  "The Computational Discovery of Mathematical Laws: Experiments in Bin
           Packing"
SPEAKER:        Dr. Jon Bentley, Bell Laboratories, Murray Hill
DATE:           Wednesday, February 22, 1984
TIME:           3:30pm  Refreshments
                4:15pm  Lecture
PLACE:          Bldg. 2-338


Bin packing is a typical NP-complete problem that arises in many applications.
This talk describes experiments on two simple bin packing heuristics (First Fit
and First Fit Decreasing) which show that they perform extremely well on
randomly generated data.  On some natural classes of inputs, for instance, the
First Fit Decreasing heuristic finds an optimal solution more often than not.
The data leads to several startling conjectures; some have been proved, while
others remain open problems.  Although the details concern the particular
problem of bin packing, the theme of this talk is more general: how should
computer scientists use simulation programs to discover mathematical laws?
(This work was performed jointly with D.S. Johnson, F.T. Leighton and C.A.
McGeoch.  Tom Leighton will give a talk on March 12 describing proofs of some
of the conjectures spawned by this work.)
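[For readers unfamiliar with the two heuristics named in the abstract,
here is a minimal sketch; the integer-size convention and all names are
illustrative, not taken from the talk.  First Fit places each item in the
first open bin with room; First Fit Decreasing sorts largest-first before
running First Fit.]

```python
# Sketch of the two bin-packing heuristics from the abstract.
# Item sizes are integers; each bin has the given capacity.

def first_fit(items, capacity):
    """Place each item into the first bin that still has room,
    opening a new bin when none does.  Returns the bin count."""
    bins = []                          # remaining capacity of each open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] = free - size
                break
        else:                          # no existing bin had room
            bins.append(capacity - size)
    return len(bins)

def first_fit_decreasing(items, capacity):
    """First Fit applied after sorting the items largest-first."""
    return first_fit(sorted(items, reverse=True), capacity)
```

On items [4, 4, 4, 6, 6, 6] with capacity 10, First Fit opens four bins
while First Fit Decreasing finds the optimal three.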

HOST:   Professor Tom Leighton

THIS SEMINAR IS JOINTLY SPONSORED BY THE COMBINATORICS SEMINAR & THE THEORY OF
COMPUTATION SEMINAR

------------------------------

Date: 17 Feb 1984  15:14 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

[Forwarded from the MIT-OZ bboard by SASW@MIT-MC.]

[I am uncertain as to the interest of AIList readers in robotics,
VLSI and CAD/CAM design, graphics, and other CS-related topics.  My
current policy is to pass along material relating to planning and
high-level reasoning.  Readers with strong opinions for or against
such topics should write to AIList-Request@SRI-AI.  -- KIL]


AUTOMATIC SYNTHESIS OF FINE-MOTION STRATEGIES FOR ROBOTS

Tomas Lozano-Perez

The use of force-based compliant motions enables robots to carry out
tasks in the presence of significant sensing and control errors.  It
is quite difficult, however, to discover a strategy of such motions to
achieve a task.  Furthermore, the choice of motions is quite sensitive
to details of geometry and to error characteristics.  As a result,
each new task presents a brand new and difficult problem.  These
factors motivate the need for automatic synthesis for compliant
motions.  In this talk I will describe a formal approach to the
synthesis of compliant motion strategies from geometric description of
assembly operations.

(This is joint work [no pun intended -- KIL] with Matt Mason of CMU
and Russ Taylor of IBM)

------------------------------

Date: Fri 17 Feb 84 09:02:29-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

             [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                                  PH.D. ORAL

                        USE OF ARTIFICIAL INTELLIGENCE
                            AND SIMPLE MATHEMATICS
                       TO ANALYZE A PHYSIOLOGICAL MODEL

                    JOHN C. KUNZ, STANFORD/INTELLIGENETICS

                               23 FEBRUARY 1984

                  MARGARET JACKS HALL, RM. 146, 2:30-3:30 PM


   The objective of this research is to demonstrate a methodology for design
and use of a physiological model in a computer program that suggests medical
decisions.  This methodology uses a physiological model based on first
principles and facts of physiology and anatomy.  The model includes inference
rules for analysis of causal relations between physiological events.  The model
is used to analyze physiological behavior, identify the effects of
abnormalities, identify appropriate therapies, and predict the results of
therapy.  This methodology integrates heuristic knowledge traditionally used in
artificial intelligence programs with mathematical knowledge traditionally used
in mathematical modeling programs.  A vocabulary for representing a
physiological model is proposed.

------------------------------

Date: Tue 21 Feb 84 10:47:50-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: ANNOUNCEMENT

[Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


Thursday, February 23, 1984

Professor Kenneth Kahn
Uppsala University

will give a talk:

"Logic Programming and Partial Evaluation as Steps Toward
 Efficient Generic Programming"

at: Bldg. 200 (History Building), Room 107, 12 NOON

PROLOG and extensions to it embedded in LM PROLOG will be presented as
a means of describing programs that can be used in many ways.  Partial
evaluation  is  a  process  that  automatically  produces   efficient,
specialized versions  of programs.   Two partial  evaluators, one  for
LISP and one for PROLOG, will be presented as a means for winning back
efficiency that  was sacrificed  for generality.   Partial  evaluation
will also be presented as a means of generating compilers.
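[A toy example of the specialization step described above, in Python
rather than LISP or PROLOG and entirely illustrative: partially
evaluating a generic power routine with respect to a statically known
exponent yields a loop-free residual program, "winning back" the
efficiency the generic version gives up.]

```python
# Illustration only: partial evaluation of a generic program with
# respect to its statically known input.

def power(base, exp):
    """The generic program: works for any exponent, pays a loop per call."""
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """A (very small) partial evaluator: with exp known at
    specialization time, unroll the loop into straight-line code
    and compile the residual program."""
    body = " * ".join(["base"] * exp) if exp > 0 else "1"
    src = "def power_spec(base):\n    return %s\n" % body
    namespace = {}
    exec(src, namespace)               # compile the residual program
    return namespace["power_spec"]
```

Here specialize_power(3) returns a function equivalent to
lambda base: base * base * base -- no loop, no test on exp.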

------------------------------

Date: 21 Feb 84 15:27:53 EST
From: DSMITH@RUTGERS.ARPA
Subject: Rutgers University Computer Science Colloquium

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                            COLLOQUIUM

                    Department of Computer Science


         SPEAKER:   John Cannon
                    Dept. of Math
                    University of Sydney
                    Sydney, AUSTRALIA

         TITLE:    "DESIGN AND IMPLEMENTATION OF A PROGRAMMING
                    LANGUAGE/EXPERT SYSTEMS FOR MODERN ALGEBRA"

                                  Abstract

Over the past 25 years a substantial body of algorithms has been
devised for computing structural information about groups.  In order
to make these techniques more generally available, I have undertaken
the development of a system for group theory and related areas of
algebra.  The system consists of a high-level language (having a
Pascal-like syntax) supported by an extensive library.  In that the
system attempts to plan, at a high level, the most economical solution
to a problem, it has some of the attributes of an expert system.  This
talk will concentrate on (a) the problems of designing appropriate
syntax for algebra and (b) the implementation of a language processor
which attempts to construct a model of the mathematical microworld
with which it is dealing.

          DATE:  Friday, February 24, 1984
          TIME:  2:50 p.m.
          PLACE: Hill Center - Room 705
               * Coffee served at 2:30 p.m. *

------------------------------

End of AIList Digest
********************
Date: Wed 29 Feb 1984 13:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #22
To: AIList@SRI-AI


AIList Digest           Wednesday, 29 Feb 1984     Volume 2 : Issue 22

Today's Topics:
  Robotics - Personal Robotics Request,
  Books - Request for Laws of Form Review,
  Expert Systems - EURISKO Information Request,
  Automated Documentation Tools - Request,
  Mathematics - Fermat's Last Theorem & Map Coloring,
  Waveform Analysis - EEG/EKG Interpretation,
  Brain Theory - Parallelism,
  CS Culture - Computing Worlds
----------------------------------------------------------------------

Date: Thu 16 Feb 84 17:59:03-PST
From: PIERRE@SRI-AI.ARPA
Subject: Information about personal robots?

   Do you know anything about domestic robots?  Personal robots?
I'm interested in the names and addresses of companies, societies,
clubs, and universities involved in that field.  Does there exist any
review on the subject?  Any articles?  Do you work, or have you heard
of any projects, in this field?
Please send answers to Pierre@SRI-AI.ARPA.

          Pierre

------------------------------

Date: 23 Feb 84 13:58:28 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Laws of Form


I hope that Charlie Crummer will share some of the substance he finds in
"Laws of Form" with us (ref AIList Digest V2 #20).  I myself am more in the
group that does not understand what LoF has to say that is new, and indeed
doubt that it does say anything unique.

------------------------------

Date: Fri, 24 Feb 84 15:32 MST
From: RNeal@HIS-PHOENIX-MULTICS.ARPA
Subject: EURISKO

I have just begun reading the AI digests (our copy starts Nov 3 1983)
and I am very interested in the one or two transactions dealing with
EURISKO.  Could someone explain what EURISKO does, and maybe give some
background of its development?

On a totally different note, has anyone done any AI work on lower-order
intelligence (i.e., that using instinct), such as in insects, reptiles,
etc.?  It seems they would be easier to model, and I just wondered if
anyone had attempted to make a program which learns the way they do and
does the things they do.  I don't know if this belongs in AI or in some
simulation meeting (is there one?).
                      >RUSTY<

------------------------------

Date: 27 Feb 1984 07:26-PST
From: SAC.LONG@USC-ISIE
Subject: Automated Documentation Tools

Is anyone aware of software packages available that assist in the
creation of documentation of software, such as user manuals and
maintenance manuals?  I am not looking for simple editors which
are used to create text files, but something a little more
sophisticated which would reduce the amount of time one must
invest in creating manuals manually (with the aid of a simple editor).
If anyone has information about such, please send me a message at:

     SAC.LONG@USC-ISIE

or   Steve Long
     1018-1 Ave H
     Plattsmouth NE 68048

or   (402)294-4460 or reply through AIList.

Thank you.

  --  Steve

------------------------------

Date: 16 Feb 84 5:36:12-PST (Thu)
From: decvax!genrad!wjh12!foxvax1!minas @ Ucb-Vax
Subject: Re: Fermat's Last Theorem & Undecidable Propositions
Article-I.D.: foxvax1.317

Could someone please help out an ignorant soul by posting a brief (if that
is, indeed, possible!) explanation of what Fermat's last theorem states as
well as what the four-color theorem is all about.  I'm not looking for an
explanation of the proofs, but, simply, a statement of the propositions.

Thanks!

-phil minasian          decvax!genrad!wjh12!foxvax1!minas

------------------------------

Date: 15 Feb 84 20:15:33-PST (Wed)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: mit-eddi.1290

I had thought that the 4-color theorem for the plane had been proved,
but that the "conjectures" of 5 colors for a sphere and 7 for a torus
were still open.  (Those numbers are right, aren't they?)

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 17 Feb 84 21:33:46-PST (Fri)
From: decvax!dartvax!dalcs!holmes @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: dalcs.610

        The four colour problem is the same for a sphere as it is
for the infinite plane.  The problem for a torus was solved many
years ago.  The torus needs exactly 7 colours to paint it.

                                        Ray

------------------------------

Date: 26 Feb 1984 21:38:16-PST
From: utcsrgv!utai!tsotsos@uw-beaver
Subject: AI approach to ECG analysis

One of my PhD students, Taro Shibahara, has been working on an expert
system for arrhythmia analysis.  The thesis should be finished by early
summer.  A preliminary paper discussing some design issues appeared in
IJCAI-83.  The system is named CAA (Causal Arrhythmia Analyzer).  Its
important contributions: two distinct KBs, one for the signal domain and
the other for the electrophysiological domain; communication via a
"projection" mechanism; causal relations to assist in prediction; and the
use of meta-knowledge within a frame-based representation for statistical
knowledge.  The overall structure is based on the ALVEN expert system for
left ventricular performance assessment, also developed here.

John Tsotsos
Dept. of Computer Science
University of Toronto

[Ray Perrault <RPERRAULT@SRI-AI> also suggested this lead.  -- KIL]

------------------------------

Date: 24 Feb 84 10:07:36-PST (Fri)
From: decvax!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: computer ECG
Article-I.D.: ecsvax.2043

At least three companies currently market computer ECG analysis systems:
Marquette Electronics, IBM, and Hewlett-Packard.  We use the Marquette
system, which works quite well.  Marquette and IBM use variants of the
same program (the "Bonner" program below; original development funded by
IBM).  Apparently because of fierce competition, much current information,
particularly with regard to algorithms, is proprietary.  Worst in this
regard (a purely personal opinion) is HP, which seems to think that nobody
but HP needs to know how they do things and that physicians are too dumb
to understand anyway.  Another way hospitals get computer analysis of
ECGs is through "Telenet", which offers telephone connection to a
time-sharing system (I think located in the Chicago area).  Signals are
digitized and sent via a modem over standard phone lines; the ECGs are
analyzed and the printed interpretation is sent back.  Turn-around time
is a few minutes.  The service offers an advantage to small hospitals by
providing verification of the analysis by a cardiologist (for an extra
fee).  I understand this service has had some financial problems (rumors).

Following is a bibliography gathered for a lecture to medical students on
computer analysis of ECGs.  Because of this it is drawn mainly from the
clinical literature and is oriented toward methods of validation.
(Validation is tough, because the reading of ECGs by cardiologists, like
many clinical decisions, is partly a subjective process.  The major impact
of these systems so far has been to force the medical community to develop
objective criteria for their analysis.)

                                 BIBLIOGRAPHY
                  Computer Analysis of the Electrocardiogram
                               August 29, 1983

BOOK

Pordy L (1977) Computer electrocardiography:  present status and criteria.
Mt. Kisco, New York, Futura

PAPERS

Bonner RE, Crevasse L, Ferrer MI, Greenfield JC Jr (1972) A new computer
program for analysis of scalar electrocardiograms.  Computers and Biomedical
Research 5:629-653

Garcia R, Breneman GM, Goldstein S (1981) Electrocardiogram computer analysis.
Practical value of the IBM Bonner-2 (V2MO) program.  J. Electrocardiology
14:283-288

Rautaharju PM, Ariet M, Pryor TA, et al. (1978)  Task Force III:  Computers in
diagnostic electrocardiography.  Proceedings of the Tenth Bethesda Conference,
Optimal Electrocardiography.  Am. J. Cardiol. 41:158-170

Bailey JJ et al (1974) A method for evaluating computer programs for
electrocardiographic interpretation

I.  Application to the experimental IBM program of 1971.  Circulation 50:73-79

II.  Application to version D of the PHS program and the Mayo Clinic program
of 1968.  Circulation 50:80-87

III.  Reproducibility testing and the sources of program errors.  Circulation
50:88-93

Endou K, Miyahara H, Sato (1980) Clinical usefulness of computer diagnosis in
automated electrocardiography.  Cardiology 66:174-189

Bertrand CA et al (1980) Computer interpretation of electrocardiogram using
portable bedside unit.  New York State Journal of Medicine.  August
1980(?volume):1385-1389

Jack Buchanan
Cardiology and Biomedical Engineering
University of North Carolina at Chapel Hill
(919) 966-5201

decvax!mcnc!ecsvax!jwb

------------------------------

Date: Friday, 24-Feb-84 18:35:44-GMT
From: JOLY G C QMA (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: re: Parallel processing in the brain.

To compare the product of millions of years of evolution
(i.e. the human brain) with the recent invention of parallel
processors seems to me like trying to analyse the relative
properties of chalk and cheese.
Gordon Joly.

------------------------------

Date: Wed, 29 Feb 84 13:17:04 PST
From: Dr. Jacques Vidal <vidal@UCLA-CS>
Subject: Brains: Serial or Parallel?


Is the brain parallel?  Or is the issue a red herring?

Computing and thinking are physical processes, and since all physical
processes unfold in time, they are ultimately SEQUENTIAL, even
"continuous" ones, although the latter are self-timed (free-running,
asynchronous) rather than clocked.

PARALLEL means that there are multiple tracks with similar functions,
like the availability of multiple processors or multiple lanes on a
superhighway.  It is a structural characteristic.

CONCURRENT means simultaneous. It is a temporal characteristic.

REDUNDANT means that there is structure beyond that which is minimally
needed for function, perhaps to ensure integrity of function under
perturbations.

In this context, PARALLELISM, i.e. the deployment of multiple
processors, is the currency with which a system designer may purchase
these two commodities: CONCURRENCY and REDUNDANCY (a necessary but not
sufficient condition).

Turing machines have zero concurrency.  Almost everything else that
computes exhibits some.  Conventional processor architectures and
memories are typically concurrent at the word level.  Microprograms are
sequences of concurrent gate events.
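The distinction can be illustrated with a small sketch (purely
illustrative, not part of the original discussion): two tasks make
simultaneous, interleaved progress on a single track, giving concurrency
with zero parallelism; running them on separate processors would add
parallelism.

```python
# Concurrency without parallelism: two generator "tasks" share one
# processor; a round-robin scheduler interleaves them (one track,
# time-sliced), so both are simultaneously in progress.

def task(name, steps):
    """A task that performs `steps` units of work, yielding after each."""
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(tasks):
    """Interleave the given tasks on a single track; return the event log."""
    log = []
    while tasks:
        still_running = []
        for t in tasks:
            try:
                log.append(next(t))     # one unit of work for this task
                still_running.append(t)
            except StopIteration:       # task finished; drop it
                pass
        tasks = still_running
    return log

log = round_robin([task("A", 2), task("B", 2)])
print(log)  # the two tasks' steps interleave: ['A:0', 'B:0', 'A:1', 'B:1']
```

Replacing the single scheduler with multiple processors, one per task,
would purchase parallelism as well; the temporal pattern of concurrency
is what the log records.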

There exist systems that are completely concurrent and free-running.
Analog computers and combinational logic circuits have these properties.
There, computation progresses in chunks between initial and final
states; a new chunk starts when the system is set to a new initial
state.

Non-von architectures have moved away from single track computing
and  from  the linear organization of memory cells. With cellular
machines another property appears: ADJACENCY. Neighboring proces-
sors use adjacency as a form of addressing.

These concepts are applicable to natural automata: brains certainly
employ myriads of processors and thus exhibit massive parallelism.
From the numerous processes that are simultaneously active (autonomous
as well as deliberate ones), it is clear that brains utilize
unprecedented concurrency.  These processors are free-running.  Control
and data flows are achieved through three-dimensional networks.
Adjacency is a key feature in most of the brain processes that have
been identified.  Long-distance communication is provided by millions
of parallel pathways carrying highly redundant messages.

Now, introspection indicates that conscious thinking is limited to one
stream of thought at any given time.  That is a limitation of the
mechanisms supporting consciousness, and some will claim that it can be
overcome.  Yet even a single stream of thinking is certainly supported
by many concurrent processes, as is obvious when thoughts are spoken,
accompanied by gestures, etc.

Comments?

------------------------------

Date: 18 Feb 1984 2051-PST
From: Rob-Kling <Kling%UCI-20B%UCI-750a@csnet2>
Subject: Computing Worlds

          [Forwarded from Human-Nets Digest by Laws@SRI-AI.]

Sherry Turkle is coming out with a book that may deal in part with the
cultures of computing worlds. It also examines questions about how
children come to see computer applications as alive, animate, etc.

It was to be called "The Intimate Machine."  The title was
appropriated by Neil Frude, who published a rather superficial book
with an outline very similar to the one Turkle had proposed to
some publishers.  Frude's book is published by New American Library.

Sherry Turkle's book promises to be much deeper and more careful.
It is to be published by Simon and Schuster  under a different
title.

Turkle published an interesting article
called "Computer as Rorschach" in Society 17(2) (Jan/Feb 1980).

This article examines the variety of meanings that people
attribute to computers and their applications.

I agree with Greg that computing activities are embedded within rich
social worlds. These vary. There are hacker worlds which differ
considerably from the worlds of business systems analysts who develop
financial applications in COBOL on IBM 4341's.  AI worlds differ from
personal computing worlds, etc.  To date, no one appears to
have developed a good anthropological account of the organizing
themes, ceremonies, beliefs, meeting grounds, etc.  of these various
computing worlds.  I am beginning such a project at UC-Irvine.

Sherry Turkle's book will be the best contribution (that I know of) in
the near future.

One of my colleagues at UC-Irvine, Kathleen Gregory, has just
completed a PhD thesis in which she has studied the work cultures
within a major computer firm.  She plans to transform her thesis into
a book.  Her research is sensitive to the kinds of language
categories Greg mentioned.  (She will be joining the Department of
Information and Computer Science at UC-Irvine in the Spring.)

Also, Les Gasser and Walt Scacchi wrote a paper on personal computing
worlds when they were PhD students at UCI.  It is available for $4
from:

        Public Policy Research Organization
        University of California,  Irvine
        Irvine, CA 92717

(They are now in Computer Science at USC and may provide copies upon
request.)


Several years ago I published two articles which examine some of the
larger structural arrangements in computing worlds:

        "The Social Dynamics of Technical Innovation in the
Computing World" ^&Symbolic Interaction\&,
1(1)(Fall 1977):132-146.


        "Patterns of Segmentation and Intersection in the
Computing World"
^&Symbolic Interaction\& 1(2)(Spring 1978): 24-43.

One section of a more recent article,
        "Value Conflicts in the Deployment of Computing Applications,"
Telecommunications Policy (March 1983):12-34,
examines the way in which certain computer-based technologies
such as automated offices, artificial intelligence,
CAI, etc. are the foci of social movements.


None of my papers examine the kinds of special languages
which Greg mentions. Sherry Turkle's book may.
Kathleen Gregory's thesis does, in the special setting of
one major computing vendor's software culture.

I'll send copies of my articles on request if I receive mailing
addresses.


Rob Kling
University of California, Irvine

------------------------------

End of AIList Digest
********************
Date: Wed 29 Feb 1984 14:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #23
To: AIList@SRI-AI


AIList Digest            Thursday, 1 Mar 1984      Volume 2 : Issue 23

Today's Topics:
  Seminars - VLSI Knowledge Representation
    & Machine Learning
    & Computer as Musical Scratchpad
    & Programming Language for Group Theory
    & Algorithm Animation
  Conference - Very Large Databases Call for Papers
----------------------------------------------------------------------

Date: Wed 22 Feb 84 16:36:20-PST
From: Joseph A. Goguen <GOGUEN@SRI-AI.ARPA>
Subject: Hierarchical Software Processor

                     [Forwarded by Laws@SRI-AI.]

                          An overview of HISP
                           by K. Futatsugi

               Special Lecture at SRI, 27 February 1984


    HISP (hierarchical software processor) is an experimental
language/system, which has been developed at ETL (Electrotechnical
Laboratory, Japan) by the author's group, for hierarchical software
development based on algebraic specification techniques.
    In HISP, software development is simply modeled as the incremental
construction of a set of hierarchically structured clusters of
operators (modules).  Each module is then constructed by applying one
of the specific module-building operations to already existing modules.
This basic feature makes it possible to write inherently hierarchical
and modularized software.
    This talk will introduce HISP informally through simple examples.
The present status of the HISP implementation and future possibilities
will also be sketched.

------------------------------

Date: Thu 23 Feb 84 00:26:45-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: Very High Level Silicon Compilation

    [Forwarded by Laws@SRI-AI.  This talk was presented at the SRI
                    Computer Science Laboratory.]


           VERY HIGH LEVEL SILICON COMPILATION: THEORY AND PRACTICE

                               P.A.Subrahmanyam
                        Department of Computer Science
                              University of Utah

The  possibility  of  implementing  reasonably  complex special purpose systems
directly in silicon using VLSI technologies has served to  underline  the  need
for design methodologies that support the development of systems that have both
hardware  and  software  components.    It  is  important  in  the long run for
automated design aids that support such methodologies to be based on a  uniform
set  of  principles  --  ideally,  on  a  unifying  theoretical basis.  In this
context, I have been investigating a general framework to support the  analytic
and synthetic tasks of integrated system design. Two of the salient features of
this basis are:

   - The formalism allows the various levels of abstraction involved in
     the software/hardware design process to be modelled.  For example,
     functional (behavioral), architectural (system and chip level),
     symbolic layout, and electrical (switch-level) descriptions are
     explicitly modelled, these being typical of the levels of
     abstraction that human "expert designers" work with.

   - The  formalism  allows  for  explicit  reasoning  about   behavioral,
     spatial, temporal and performance criteria.

The  talk  will  motivate  the  general  problem,  outline  the  conceptual and
theoretical basis, and discuss some of our preliminary  empirical  explorations
in building integrated software-hardware systems using these principles.

------------------------------

Date: 22 Feb 84 12:19:09 EST
From: Giovanni <Bresina@RUTGERS.ARPA>
Subject: Machine Learning Seminar

              [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

             *** MACHINE LEARNING SEMINAR AND PIZZA LUNCHEON ***


    Empirical Exploration of Problem Reformulation and Strategy Acquisition

Authors: N.S. Sridharan and J.L. Bresina
Location: Room 254, Hill Center, Busch Campus, Rutgers
Date: Wednesday, February 29, 1984
Time: Noon - 1:30 pm
Speaker: John L. Bresina

The  problem  solving  ability  of an AI program is critically dependent on the
nature of the symbolic  formulation  of  the  problem  given  to  the  program.
Improvement in the performance of a problem solving program can be made by
improving the strategy for controlling and directing search, but more
importantly by shifting the problem formulation to a more appropriate form.

The choice of the initial formulation is critical, since  certain  formulations
are  more  amenable  to  incremental  reformulations than others.  With this in
mind,  an  Extensible  Problem  Reduction  method  is  developed  that   allows
incremental  strategy  construction.    The class of problems of interest to us
requires dealing with interacting subgoals.  A variety  of  reduction  operator
types   are   introduced  corresponding  to  different  ways  of  handling  the
interaction among subgoals.  These reduction  operators  define  a  generalized
And/Or  space including constraints on nodes with a correspondingly generalized
control structure for dealing with constraints and for combining  solutions  to
subgoals.    We  consider a modestly complex class of board puzzle problems and
demonstrate, by example, how reformulation of the problem can be carried out by
the construction and modification of reduction operators.
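As background, a generic sketch of problem reduction over an And/Or
space (illustrative only, not the authors' system, which additionally
handles interactions among subgoals): an OR choice is made among
alternative reduction operators, and each operator succeeds only if ALL
of the subgoals it introduces can be solved.

```python
# Generic And/Or problem reduction.  A problem is either primitive
# (solved directly) or reducible: each reduction operator rewrites it
# into a conjunction of subproblems (AND); the set of applicable
# operators forms an OR choice.  Names here are hypothetical.

def solve(problem, primitives, reductions):
    """Return True if `problem` is solvable in the And/Or space."""
    if problem in primitives:              # leaf: directly solvable
        return True
    # OR over the alternative reduction operators...
    for subgoals in reductions.get(problem, []):
        # ...AND over the subgoals that operator introduces
        if all(solve(g, primitives, reductions) for g in subgoals):
            return True
    return False

primitives = {"a", "b"}
reductions = {
    "goal": [["x"], ["a", "b"]],   # two alternative reductions of "goal"
    "x": [["c"]],                  # dead end: "c" is not primitive
}
print(solve("goal", primitives, reductions))  # True, via ["a", "b"]
```

A naive solver like this treats subgoals as independent; the reduction
operator types described in the abstract exist precisely because
interacting subgoals cannot, in general, be solved in isolation.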

------------------------------

Date: 26 Feb 84 15:16:08 EST
From: BERMAN@RU-BLUE.ARPA
Subject: Seminar: The Computer as Musical Scratchpad

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

SEMINAR: THE COMPUTER AS MUSICAL SCRATCHPAD

Speaker: David Rothenburg, Inductive Inference, Inc.
Date:   Monday, March 5, 1984
Place:  CUNY Graduate Center, 33 West 42nd St., NYC
Room:   732
Time:   6:30 -- 7:30 p.m.

        The composer can use a description language wherein only those
properties and relations (of and between portions of the musical
pattern) which he judges significant need be specified.  Parameters of
these unspecified properties and relations are assigned at random.  It
is intended that this description of the music be refined in response
to iterated auditions.

------------------------------

Date: Sun 26 Feb 84 17:06:23-CST
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: A Programming Language for Group Theory (Dept. of Math)

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

            DEPARTMENT OF MATHEMATICS COLLOQUIUM
          A Programming Language for Group Theory
                        John Cannon
        University of Sydney and Rutgers University
                 Monday, February 27, 4pm

     The past 25 years have seen the emergence of a small but vigorous branch of
group theory which is concerned with the discovery and implementation of
algorithms for computing structural information about both finite and infinite
groups.  These techniques have now reached the stage where they are finding
increasing use both in group theory research and in its applications.  In order
to make these techniques more generally available, I have undertaken the
development of what in effect is an expert system for group theory.

     Major components of the system include a high-level user language (having
a Pascal-like syntax) and an extensive library of group theory algorithms.  The
system breaks new ground in that it permits efficient computation with a range
of different types of algebraic structures, sets, sequences, and mappings.
Although the system has only recently been released, it has already been
applied to problems in topology, algebraic number theory, geometry, graph
theory, mathematical crystallography, solid state physics, numerical
analysis and computational complexity, as well as to problems in group
theory itself.

------------------------------

Date: 27 Feb 1984 2025-PST (Monday)
From: Forest Baskett <decwrl!baskett@Shasta>
Subject: EE380 - Wednesday, Feb. 29 - Sedgewick on Algorithm Animation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

EE380 - Computer Systems Seminar
Wednesday, February 29, 4:15 pm
Terman Auditorium

                        Algorithm Animation
                          Robert Sedgewick
                          Brown University

    The central thesis of this talk is that it is possible to expose
fundamental characteristics of computer programs through the use of
dynamic (real-time) graphic displays, and that such algorithm animation
has the potential to be useful in several contexts.  Recent research in
support of this thesis will be described, including the development of
a conceptual framework for the process of animation, the implementation
of a software environment on high-performance graphics-based
workstations supporting this activity, and the use of the system as a
principal medium of communication in teaching and research.  In
particular, we have animated scores of numerical, sorting, searching,
string processing, geometric, and graph algorithms.  Several examples
will be described in detail.

[Editorial remark: This is great stuff.  - Forest]

------------------------------

Date: 23 Feb 84 16:32:24 PST (Thu)
From: Gerry Wilson <wilson@aids-unix>
Subject: Conference Call for Papers


                        CALL  FOR PAPERS
                        ================

               10th International Conference on

                    Very Large Data Bases


The tenth VLDB conference is dedicated to the identification and
encouragement of research, development, and application of
advanced technologies for management of large data bases.  This
conference series provides an international forum for the promotion
of an understanding of current research; it facilitates the exchange
of experiences gained in the design, construction and use of data
bases; it encourages the discussion of ideas and future research
directions.  In this anniversary year, a special focus is the
reflection upon lessons learned over the past ten years and the
implications for future research and development.  Such lessons
provide the foundation for new work in the management of large
data bases, as well as the merging of data bases, artificial
intelligence, graphics, and software engineering technologies.

TOPICS:

Data Analysis and Design           Intelligent Interfaces
    Multiple Data Types                User Models
    Semantic Models                    Natural Language
    Dictionaries                       Knowledge Bases
                                       Graphics
Performance and Control
    Data Representation            Workstation Data Bases
    Optimization                       Personal Data Management
    Measurement                        Development Environments
    Recovery                           Expert System Applications
                                       Message Passing Designs
Security
    Protection                     Real Time Systems
    Semantic Integrity                 Process Control
    Concurrency                        Manufacturing
                                       Engineering Design
Huge Data Bases
    Data Banks                     Implementation
    Historical Logs                    Languages
                                       Operating Systems
                                       Multi-Technology Systems

Applications                       Distributed Data Bases
    Office Automation                  Distribution Management
    Financial Management               Heterogeneous and Homogeneous
    Crime Control                      Local Area Networks
    CAD/CAM

Hardware
    Data Base Machines
    Associative Memory
    Intelligent Peripherals


LOCATION:  Singapore
DATES:     August 29-31, 1984

TRAVEL SUPPORT: Funds will be available for partial support of most
                participants.

HOW TO SUBMIT:  Original full length (up to 5000 words) and short (up
  to 1000 words) papers are sought on topics such as those above.  Four
  copies of the submission should be sent to the US Program Chairman:

       Dr. Umeshwar Dayal
       Computer Corporation of America
       4 Cambridge Center
       Cambridge, Mass. 02142
       [Dayal@CCA-UNIX]

IMPORTANT DATES:    Papers Due:         March 15, 1984
                    Notification:       May 15, 1984
                    Camera Ready Copy:  June 20, 1984

For additional information contact the US Conference Chairman:

      Gerald A. Wilson
      Advanced Information & Decision Systems
      201 San Antonio Circle
      Suite 286
      Mountain View, California  94040
      [Wilson@AIDS]

------------------------------

End of AIList Digest
********************