 3-Jul-83 12:01:57-PDT,11325;000000000001
Mail-From: LAWS created at  1-Aug-83 17:01:10
Date: Sunday, July 3, 1983 5:01PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #19
To: AIList@SRI-AI


AIList Digest             Monday, 4 Jul 1983       Volume 1 : Issue 19

Today's Topics:
  AI Interfacing
  Computational Linguistics
  Foundations of Perception, AI (2)
  A Simple Logic/Number Theory/AI/Scheduling/Graph Theory Problem
  AISB/GI Tutorials at IJCAI
  Robustness Stories, Program Logs Wanted
  Program Verification Award  [Long Msg]
----------------------------------------------------------------------

Date: Tue 28 Jun 83 12:56:43-PDT
From: W. Wipke <WIPKE@SUMEX-AIM.ARPA>
Subject: AI interfacing

        I have a simple question to which many of you probably have
answers: when one has an existing application program for which one
wants to create an AI front end, should one design the AI part as a
separate task in its own address space, communicating with the
application program via messages, or should one build the AI part
into the same address space as the application program?

        Obviously the former may constrain communication and the
latter may suffer from accidental communication, i.e., global
conflicts.  What is the best wisdom on this question, and where is it
systematically discussed?
                                       Todd Wipke (WIPKE@SUMEX)
                                       Professor of Chemistry
                                       Univ. of Calif, Santa Cruz

------------------------------

Date: Fri 1 Jul 83 13:43:21-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Computational Linguistics

                [Reprinted from the SU-SCORE BBoard.]

Computers and Mathematics with Applications volume 9 number 1 1983 is
a special issue on computational linguistics.  This issue is currently
on the new journals shelf.  HL

------------------------------

Date: Tuesday, 28 June 1983, 21:13-EDT
From: John Batali <Batali@MIT-OZ>
Subject: Foundations of Perception, AI

              [Reprinted from the Phil-Sci discussion.]

[...]

We aren't in the same position in AI as early physicists were.
Physics started out with a more or less common and very roughly
accurate conception of the physical world.  People understood that
things fell, that bigger things hurt more when they fell on you and so
on.  Physics was able to proceed to sharpen up the pre-theoretic
understanding people had of the world until very recently when its
discoveries ceased to be simply sharpenings and began to seem to be
contradictions.

"Mind studies" (AI, psychology, philosophy, and so on) don't seem to 
have such a common, roughly correct, theory to start with.  We don't 
even agree on what it is we are supposed to be explaining, how such 
explanations ought to go, or what constitutes success.

                        [John Batali <Batali@MIT-OZ>]

------------------------------

Date: Wed, 29 Jun 1983  03:13 EDT
From: KDF@MIT-OZ
Subject: Re: Foundations of Perception, AI

            [Reprinted from the Phil-Science discussion.]

[...]

<Aside on Physics: I interpret (not perceive) reports on early studies
of heat and motion as indicating that there WASN'T a "common, roughly 
correct" theory to start with.  Even if there was, it was acquired
somehow.  One way to view what we are doing is building up enough 
experience to construct such theories for computation.>

------------------------------

Date: 30 Jun 1983 1111-CDT
From: CS.CLINE@UTEXAS-20
Subject: a simple logic/number theory/AI/scheduling/graph theory
         problem

                [Reprinted from the UTexas-20 BBoard.]

I have a trivial problem (at least trivial to state) whose solution 
possibly uses elements from many cs/math areas:

 Problem 1: Using pennies, nickels, dimes, quarters, and halves, find
a set of coins for which any amount less than one dollar can be
accumulated and which minimizes the number of coins over those such
sets.

  You can probably solve this problem in the time it takes to read it,
but proving you have a minimal solution is tricky. I'm interested in
elegant solutions. My own uses a little bit of combinatorics.

  Possibly you'd like to take a more general approach:

 Problem 2: Using coins of value v[1],...,v[n] find a set of coins for
which any amount less than M can be accumulated and which minimizes 
the number of coins over those such sets.

 I'd like to see algorithms (with proofs, of course) for this one.
You may notice that the approach you apply to Problem 1 does not
generalize to Problem 2.
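[For those who want to experiment before proving anything, Problem 1
is small enough to settle by brute force.  The sketch below (Python;
the per-denomination search bounds are my own assumption, safe because
amounts 1-4 already force at least four pennies) tests a candidate
multiset with a subset-sum bit trick. -- Ed.]

```python
from itertools import product

COINS = [1, 5, 10, 25, 50]       # penny, nickel, dime, quarter, half
NEED = (1 << 100) - 2            # bits 1..99: every amount under $1

def covers(counts):
    """True if every amount 1..99 cents is a subset sum of the multiset."""
    sums = 1                     # bit k set <=> amount k is reachable
    for coin, n in zip(COINS, counts):
        for _ in range(n):
            sums |= sums << coin
    return sums & NEED == NEED

def minimal_coin_set():
    """Exhaustive search over small per-denomination bounds (assumed)."""
    best = None
    for counts in product(range(5), range(3), range(4), range(4), range(3)):
        if covers(counts) and (best is None or sum(counts) < sum(best)):
            best = counts
    return best
```

The search finds a nine-coin answer (four pennies, a nickel, two
dimes, a quarter, and a half), and a counting argument shows nothing
smaller works: four pennies are forced, and the remaining coins must
reach nineteen multiples of five, which four coins (at most
2^4 - 1 = 15 nonempty subset sums) cannot.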

------------------------------

Date: Friday, 24-Jun-83  16:40:33-BST
From: RITCHIE  HWC (on ERCC DEC-10)  <g.d.ritchie@edxa>
Reply-to: g.d.ritchie%edxa%ucl-cs@isid
Subject: AISB/GI Tutorials at IJCAI



     TUTORIAL ON ARTIFICIAL INTELLIGENCE

        7th-8th August 1983

        Karlsruhe, West Germany

            -------------

    Lectures on:

       Knowledge Representation  (R.Brachman, H.Levesque)

       Computational Vision  (H.Barrow, J.Tenenbaum)

       Robotics  (K.Kempf)

       Expert Systems  (L. Erman)

       Natural Language Processing  (P.Hayes, J.Carbonell)

             _____________


Details in IJCAI brochure, obtainable from:

       G.D.Ritchie (AISB)
       Department of Computer Science,
       Heriot-Watt University,
       Grassmarket,
       Edinburgh EH1 2HJ
       SCOTLAND.

(g.d.ritchie%edxa%ucl-cs%isid)


------------------------------

Date: 27 Jun 83 1117 EDT (Monday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted

Needed: descriptions of robustness features--designs or fixes that
have made programs meet their users' expectations better, beyond bug
fixing.  E.g.:

    - An automatic error recovery routine is a robustness
      feature, since the user (or client) doesn't then have to
      recover by hand.

    - A command language that requires typing more for a
      dangerous command, or supports undoing, is more robust than
      one that has neither feature, since each makes it harder for
      the user to get in trouble.

There are many more possibilities.  Anything where a system doesn't
meet user expectations because of incomplete or ill-advised design is
fair game.

Your stories will be properly credited in my PhD thesis at CMU, which
is an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Tue 28 Jun 83 21:35:57-PDT
From: Karl N. Levitt  <LEVITT@SRI-AI.ARPA>
Subject: Program Verification Award  [Long Msg]

               [Reprinted from the UTexas-20 BBoard.]

        ROBERT S. BOYER AND J STROTHER MOORE: RECIPIENTS OF
        THE 1983 JOHN MCCARTHY PRIZE FOR WORK IN PROGRAM
                       VERIFICATION


An anonymous donor has established the John McCarthy Prize, to be 
awarded every two years for outstanding work in Program Verification.
The prize is intended to recognize outstanding current work -- not
necessarily work of milestone value.  This first award is for work
carried out and published during the past 5 years.

Our committee has decided to give the initial award to Robert S. Boyer
and J Strother Moore for work carried out at the following 
institutions: University of Edinburgh, SRI International and, 
currently, the University of Texas. Their main achievement is the 
development of an elegant logic implemented in a very powerful theorem
prover. Particularly noteworthy about the logic is the use of 
induction to express properties about the objects common to programs.
Their theorem prover is among the most powerful of the current 
mechanical provers, combining heuristics in support of automatic 
theorem proving with a user interface that allows a human to drive 
proofs that cannot be accomplished automatically. They have extended 
their theorem prover with a Verification Condition Generator for 
Fortran that handles most of the features -- even those thought to be 
too "dirty" for verification -- of a "real" programming language. They
have used their system to verify numerous applications, including
programs subtle enough to tax human verifiers, and such real
applications as cryptographic algorithms and simple flight control
systems; their proofs are always very "honest", using "believable" 
specifications and assuming little more than a core set of axioms.  
Their work has led to a constant stream of high quality publications, 
including the book "A Computational Logic", Academic Press, 1979, and 
a comprehensive User's Manual to the theorem prover.

The other individuals nominated by the committee are the following:  
Donald Good: for the language Gypsy which enhances the possibility for
verifying concurrent and real-time systems, for the verification 
system based on Gypsy, and for carrying out the verification of 
numerous "real" systems; Robin Milner: for the Logic of Computable 
Functions which has led to elegant formal definitions of programming 
languages, to elegant specifications of varied applications, and to a 
powerful mechanical theorem prover; Susan Owicki and David Gries: for 
a practical method for the verification of concurrent programs; and
Wolfgang Polak: for the verification of a "real" Pascal compiler,
perhaps the largest and most complicated program verified to date.

The committee would also like to call attention to interesting and 
important work in a number of areas related to program verification.  
Included herein are the following: the formal definition of large and 
complex programming languages; numerous mechanical verification 
systems for a variety of programming languages; the verification of 
systems covering such applications as computer security, compilers, 
operating systems, fault-tolerant computers, and digital logic; 
program testing; and program transformation.  This work indicates
that program verification (and its extensions), besides being a rich
area for research, gives promise of being usable to achieve
reliability when needed for critical applications.

	  Robert Constable -- Cornell
	  Susan Gerhart -- Wang Institute
	  Karl Levitt (Chairman) -- SRI International
	  David Luckham -- Stanford
	  Richard Platek -- Cornell and Odyssey Research Associates
	  Vaughan Pratt -- Stanford
	  Charles Rich -- MIT

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:11:58
Date: Wednesday, July 6, 1983 5:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #20
To: AIList@SRI-AI


AIList Digest            Thursday, 7 Jul 1983      Volume 1 : Issue 20

Today's Topics:
  Coupled Systems
  Re: Foundation of Perception, AI
  AI in the media
  Re: Lunar Rovers
  Solution Found to Coin Problem (2)
  HP Computer Colloquium, 7/7/83
  List-of-Lists Updated
----------------------------------------------------------------------

Date: Mon 4 Jul 83 19:25:23-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: coupled systems

This is in response to the query about when to build an AI "front-end"
to an existing software system as a separate process with its own
address space, as opposed to putting more code in the existing system
to implement the AI component.  At the University of Washington we
have built a very complex graphic simulation system for planning of
radiation therapy treatments for cancer.  We are now starting to work
on a rule based expert system that will model the clinical decision
making part of the process, with the two (separate) systems to
communicate via messages.  We do this as two separate processes
because the simulation system is already a system of multiple 
concurrent processes communicating by messages, and because the
simulation system is written in PASCAL, which seems less suitable
than, for example, INTERLISP, for the AI component.  The kind of
information needed to pass between the systems also affects the
decision.  In our case, the AI system will consult the graphic
treatment planning system for answers to questions that are rather
traditionally compute-intensive, e.g., radiation dose calculation and
geometric calculations, so the messages are simple and well defined.
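[The arrangement described above can be caricatured in a few lines.
The sketch below is entirely hypothetical -- the "dose" operation just
stands in for a compute-heavy query, and a real system would use a
richer transport than pipes -- but it shows why simple, well-defined
messages keep the coupling clean: the front end knows only the message
format, never the other process's internals. -- Ed.]

```python
import json
import subprocess
import sys

# Hypothetical compute "server": one JSON request per line on stdin,
# one JSON reply per line on stdout -- two components in separate
# address spaces, communicating only by messages.
SERVER = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    if req["op"] == "dose":          # stand-in for a heavy calculation
        reply = {"result": sum(req["args"])}
    else:
        reply = {"error": "unknown op"}
    print(json.dumps(reply), flush=True)
'''

def ask(server, op, args):
    """Front-end side: send one request message and wait for the reply."""
    server.stdin.write(json.dumps({"op": op, "args": args}) + "\n")
    server.stdin.flush()
    return json.loads(server.stdout.readline())

if __name__ == "__main__":
    server = subprocess.Popen([sys.executable, "-c", SERVER],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              text=True)
    print(ask(server, "dose", [1.5, 2.5]))   # prints {'result': 4.0}
    server.stdin.close()
    server.wait()
```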

------------------------------

Date: Tue, 5 Jul 83 08:16:13 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Re: Foundation of perception, AI


     The recent assertion on this list that "Mind Sciences" (unlike
physics) do not have a "common, roughly correct, theory to start with"
is just dead wrong.  In fact, the study of "naive psychology" (i.e.,
people's folk theories of how other people behave) constitutes a
sizable subfield within formal psychology.  You don't have to be a
professional psychologist to recognize this: just listen to the
conversations around you and you will find a large proportion of them
are composed of people offering explanations and predictions of other
people's behavior.  The source of these explanations and predictions
are, of course, people's folk or naive theories of human behavior (and
these theories are "roughly correct").  Thus AI and the other "mind
sciences" do seem to be like physics in this regard.

------------------------------

Date: 03 Jul 83  1521 PDT
From: Jim Davidson <JED@SU-AI>
Subject: AI in the media

                [Reprinted from the SU-SCORE BBoard.]

The July issue of Psychology Today contains a letter to the editor, 
which refers to the earlier interview with Roger Schank:

"I was shocked to read Roger Schank's claims of success in building an
English-language front end for a large oil company's geological
mapping system ['Conversation', April].  I was chief programmer of
that system, and it was a dismal failure.  It suffered from the same
disease as all the other "user-friendly" software I have seen.  It is
friendly as long as you play by its rules and tell it what it expects
to hear.  The slightest departure causes apparently random results.

Computers are completely linear in their 'thinking', while the
mind is both linear and at the same time capable of wondrously
spontaneous associations and creative flights into fantasy.  The mind
has an infinite number of scripts, each with hundreds of possible
hooks on which associations with other scripts can be hung.  I don't
think we'll ever duplicate the mind's linguistic ability.
                        Stanley M. Davis
                            Chicago, Ill.  "

------------------------------

Date: 30 Jun 83 9:23:58-PDT (Thu)
From: 
Subject: Re: Lunar Rovers - (nf)
Article-I.D.: ucbcad.188

Another contribution to the growing class of "NOW WAIT A MINUTE"
notes:

        The weight of AI is nearly zero.

Tell me that when you can lift a LISP machine in one hand.

        In addition, the reliability of a system decreases with
        increased quantity of hardware,

Are ECC chips on RAM boards an "increased quantity of hardware"?  
Consider the electrical shielding problems above the atmosphere.

Let's be a little more cautious here...

        Flame Off,
                Michael Turner

------------------------------

Date: 5 Jul 83 10:33:11 EDT  (Tue)
From: Dana S. Nau <dsn.umcp-cs@UDel-Relay>
Subject: Re: a simple logic/number theory/AI/scheduling/graph
         theory problem

    . . .  Using coins of value v[1],...,v[n] find a
    set of coins for which any amount less than M can
    be accumulated and which minimizes the number of
    coins over those such sets.

This problem appears similar (although not identical) to the 0/1 
Knapsack problem, and thus is probably NP-hard.  For approaches to 
solving it, I would recommend Branch and Bound (for example, see 
Fundamentals of Computer Algorithms, by Horowitz and Sahni).
                        Dana S. Nau
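
[To make the knapsack-flavored difficulty concrete, here is a plain
exhaustive sketch in Python.  The names are mine, and this is
iterative deepening rather than the branch and bound recommended
above, but the coverage test is the quantity a bound would be computed
against. -- Ed.]

```python
from itertools import combinations_with_replacement

def covers(coins, M):
    """True if every amount 1..M-1 is a subset sum of `coins`."""
    sums = 1                     # bit k set <=> amount k is reachable
    for c in coins:
        sums |= sums << c
    need = (1 << M) - 2          # bits 1..M-1
    return sums & need == need

def min_covering_set(values, M):
    """Smallest multiset over `values` covering every amount 1..M-1."""
    for k in range(1, M):
        if 2 ** k - 1 < M - 1:
            continue             # k coins give at most 2^k - 1 subset sums
        for coins in combinations_with_replacement(values, k):
            if covers(coins, M):
                return list(coins)
    return None
```

For the original coin instance, min_covering_set([1, 5, 10, 25, 50],
100) returns a nine-coin multiset; a real branch-and-bound solver
would replace the inner enumeration with pruning on partial sets.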

------------------------------

Date: 4 Jul 1983 0825-CDT
From: CS.CLINE@UTEXAS-20
Subject: solution found to coin problem

               [Reprinted from the UTexas-20 BBoard.]

The coin problem suggested in my BBOARD message of 1 July has been 
solved. Rich Cohen developed an algorithm and he, Elaine Rich, and I
proved that it solves the problem. Interested parties should contact 
me.

------------------------------

Date: 6 Jul 83 14:00:26 PDT (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 7/7/83


                Professor Robert Wilensky
                Computer Science Department
                U.C. Berkeley

  Talking to UNIX in English: An Overview of an
             On-Line UNIX Consultant


UC (UNIX Consultant) is an intelligent natural language interface that
allows naive users to communicate with the UNIX operating system in 
ordinary English.  The goal of UC is to provide a natural language
help facility that allows new users to learn operating systems'
conventions in a relatively painless way.

UC exploits Artificial Intelligence developments in common sense
reasoning as well as natural language processing in an attempt to 
provide an interface that is helpful and intelligent, and not merely a
passive repository of facts.  Areas of current research involve 
multi-lingual capabilities, analyzing the user's plan structure via 
natural dialogue, computing possible solutions to a user's problem,
and generating responses in natural language.

        Thursday, July 7, 1983 4:00 pm

        Hewlett-Packard
        Stanford Park Division
        5M conference room
        1501 Page Mill Rd
        Palo Alto, CA 94304

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room.

------------------------------

Date: 1 Jul 1983 0002-PDT
From: Zellich@OFFICE-3 (Rich Zellich)
Subject: List-of-lists updated

OFFICE-3 file <ALMSA>INTEREST-GROUPS.TXT has been updated and is ready
for FTP.  OFFICE-3 supports the net-standard "ANONYMOUS" Login within
FTP, using any password.

INTEREST-GROUPS.TXT is currently 1290 lines (or 52,148 characters).
Please try to limit any weekday FTP jobs to before 0600-CDT and after
1600-CDT if possible, as the system is heavily loaded during most of
the day.

Enjoy, Rich

CHANGES SINCE LAST UPDATE-NOTICE (10 May 83):
   Icon-Group
      Distribution address updated with host name.
   INFO-PRINTERS
      New coordinator.
   PROLOG/PROLOG-HACKERS
      New mailing-lists added.
   SF-LOVERS
      New moderator; Archive references updated for current volume.
   UNIX-WIZARDS
      New host; New coordinator.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:11:59
Date: Saturday, July 9, 1983 4:47PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #21
To: AIList@SRI-AI


AIList Digest            Sunday, 10 Jul 1983       Volume 1 : Issue 21

Today's Topics:
  Prolog Programs [Request]
  Computer Security [Request]
  Re: AI, Perception, and the Media
  AI and Legal Reasoning
  A Statistician's Assistant
  Rovers
  NMODE [LISP-Based Editor] and PSL
----------------------------------------------------------------------

Date: Thu 7 Jul 83 19:37:44-EDT
From: STEVE@COLUMBIA-20.ARPA
Subject: Prolog Programs

I would like to do some statistical analysis on large PROLOG programs.
I am particularly interested in AI programs in the following areas:

                1) Expert Systems,
                2) Data Bases,
                3) Planning or Robotics,
                4) NLP

Can anyone provide sample programs that I can use?  They should be 
large programs that run on Edinburgh Prolog 3.47 (Dec-20) or C-Prolog 
1.2 (Unix 4.1/Vax).  I would like to collect a good variety, so any 
programs will be useful.  I would also appreciate a sample journal of
a session with the program so that it can be exercised quickly and 
effectively.

                Many Thanks... Stephen Taylor

------------------------------

Date: 7 Jul 1983 17:48:15-EDT
From: Ron.Cole at CMU-CS-SPEECH
Subject: Computer Security

                  [Reprinted from the CMUC BBoard.]

ABC Nightly News is doing a feature in response to the movie War Games
to investigate whether the premise of the movie is legitimate: that
there is no totally secure computer.  They want to interview someone
who has broken into a supposedly secure system.  If you want to get
infamous, please call Shelly Diamond or Jean McCormick at 212 887
4995.

------------------------------

Date: Fri 8 Jul 83 15:33:11-PDT
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Re: AI, Perception, and the Media

It is ridiculous to assume that the "naive theories", in this case of 
perception, will get you somewhere.  In fact, it is easy to see that
they are wrong.  Nobody knows, for example, what the "Mexican hat"
operators, the simple cells, etc. in the cortex are for.

It is common, especially within the AI community, not to report the
limitations of the successes achieved.  No wonder one hears about robots
nearly walking around, and cleaning a house, or walking a dog, etc.  
Or "English interfaces" which are user friendly.  I think it is about
time we realized, and frankly said, that such extrapolations are very
far in the future indeed.

------------------------------

Date: Thu 7 Jul 83 09:01:53-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: AI and Legal Reasoning


                                  PH.D. ORAL
                                 JULY 15, 1983
                         ROOM 252, MARGARET JACKS HALL
                                   2:15 P.M.
            AN ARTIFICIAL INTELLIGENCE APPROACH TO LEGAL REASONING

                              Anne v.d.L. Gardner

        The analysis of legal problems is a relatively new domain for 
artificial intelligence.  This thesis describes an AI model of legal
reasoning, giving special attention to the distinctive characteristics
of the domain, and reports on a program based on the model.  Major
features include (1) distinguishing between questions the program has
enough information to resolve and questions that competent
professionals could argue either way; (2) using incompletely defined
("open-textured") technical concepts; (3) combining the use of
knowledge expressed as rules and knowledge expressed as examples; and 
(4) combining the use of professional knowledge and commonsense
knowledge.  All these features are likely to prove important in other
domains besides law.  Previous AI research has left them largely
unexplored.

------------------------------

Date: Tue 5 Jul 83 13:20:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: A Statistician's Assistant

[This talk has already been given at SRI and at Stanford.  Printing
seminar notices seems to be a reasonable way to keep the AIList
community informed about current work in AI, even when readers cannot
be expected to attend.  Anyone with strong feelings about this
practice should contact AIList-Request. -- KIL]


                         BUILDING AN EXPERT INTERFACE

                                William A. Gale
                          Bell Telephone Laboratories
                             Murray Hill, NJ 07974


We are building an expert system for the domain of statistical data
analysis, initially focusing on regression analysis.  Two
characteristics of this domain are the current availability of massive but
'dumb' software, and a need to repeatedly diagnose problems and apply
a treatment.

REX (Regression EXpert) is a Franz Lisp program which is an
intelligent interface for the S Statistical System.  It guides a user
through a regression analysis, interprets intermediate and final
results, and instructs the user in statistical concepts.  It is
designed for interactive use, but a non-interactive mode can be used
with lower quality results.

[A particular feature of REX is the ability to suggest data
transformations such as a log or squared term.  The BACON system at
CMU can also do this using an entirely different heuristic approach.
Another automated statistical system is the RX medical database
analyzer by Dr. R. Blum at Stanford; it forms and then attempts to
verify sophisticated hypotheses based on knowledge of drug and disease
interactions, lag times of observable effects, and the incomplete
nature of patient histories. -- KIL]

------------------------------

Date: 6 Jul 1983 21:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Rovers

First: Thanks to all who have responded to my initial note about
rovers.

Most people seem to have taken what I would regard as the easy (and 
commensurately uninteresting) way out by choosing a lunar environment,
precisely because teleoperation is feasible there, if a nuisance.  But
what about systems operating on more distant heavenly bodies or in
deep space?  Even robotic vehicles on Mars would suffer rather severe 
performance degradation if they had to rely upon an (approximately) 
earth-bound intelligence for control.  (A friend provides the
following simple gedankenexperiment: decide now to start
scratching-your-leg-until-it-stops-itching twenty minutes from now;
now wait twenty minutes before you can start; then, perhaps, wait at
least twenty minutes before you can consider stopping....)

Note that I'm not taking issue with the desirability of teleoperated 
lunar vehicles.  (In fact, there's good reason to believe that a 
planetary or lunar rover is politically unrealistic if NASA has 
anything to say about it, given what I understand to be the prevailing
NASA attitude towards *unmanned* space exploration, but that fact 
doesn't motivate my comments here.)  Rather, I'm suggesting we tackle
a problem domain sufficiently rich in AI problems to (a) keep things 
interesting and (b) allow us to explore what contribution, if any, we 
might be able to make as computer scientists, AI researchers, and 
engineers.

Do we know enough to solve, or even identify, the difficult issues in 
situation assessment, planning, and resource allocation faced by such
a system?  For example, reinterpreting Professor Minsky's desire that 
"anyone with such budgets should aim them at AI education and research
fellowships", let us then assume that these fellowships are provided
by NASA and have a problem domain specified: perhaps, for example, we 
might choose a space station orbiting Mars as our testing grounds,
with robot assembly prior to arrival of humans on-site as the problem.
What problems can we already solve, and where is the research needed?

                                        asc

------------------------------

Date: 5 Jul 1983 0731-MDT
From: William Galway <Galway@UTAH-20>
Subject: NMODE [LISP-Based Editor] and PSL

           [Reprinted from the Editor-People Discussion.]

I thought I'd add a bit more to what JQJ has said about NMODE, and add
a sales pitch, since I'm pretty close to its development.  NMODE was
written by Alan Snyder (and others) at Hewlett Packard Computer
Research Labs in Palo Alto, with some additional work done by folks
here at the University of Utah.  NMODE is written in PSL (Portable
Standard Lisp), a Lisp dialect developed at the University of Utah
under the direction of Martin Griss.  NMODE is distantly related to
EMODE (my not-quite-finished-thesis-project) in that it shares some of
the ideas and algorithms, but it's carried them much further (and more
cleanly).  (In fact, I hope to steal quite a bit from NMODE for my
final version of EMODE.)

We've tried to make PSL and NMODE quite portable, and we currently
have NMODE running on at least 4 different systems--TWENEX, Vax Unix,
and two different flavors of the Motorola 68000, one of them being the
Apollo.  (The Apollo version was just brought up last week.)

NMODE is quite TWENEX EMACS compatible.  Of course it doesn't have
nearly as many "libraries" developed for it yet.  It has quite a nice
Lisp Mode (of course), including the ability to directly execute code
from a buffer, but is weaker in other modes.  It's quite strong on
handling multiple windows (and multiple simultaneous terminals).
NMODE also supports a generalized browser mechanism (similar to Dired,
RMAIL, and the Smalltalk browser) which provides a common user
interface to file directories, source code, electronic mail,
documentation, etc.

There's a library available for the TWENEX version of NMODE that 
provides a hook to processes similar to what's available in Gosling's
EMACS for Unix.  (Unfortunately, nobody's gotten around to porting
that to the other machines--it's fairly easy to write machine specific
code in PSL, as well as machine independent code.)  We also have a
fairly nice "dynamic abbreviation" option (expands an abbreviation by
scanning the buffer for a word with the same prefix), although we
don't yet have the "standard" EMACS abbreviation mode.
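
[The dynamic-abbreviation behavior sketched above fits in a few
lines.  The function below is my own illustration, not NMODE code; I
assume "scanning the buffer" means the most recently seen match wins.
-- Ed.]

```python
import re

def dynamic_expand(buffer_text, prefix):
    """Expand `prefix` to the latest earlier word sharing that prefix."""
    words = re.findall(r"[A-Za-z]+", buffer_text)
    for word in reversed(words):     # scan backwards: recent matches win
        if word.startswith(prefix) and word != prefix:
            return word
    return prefix                    # no match: leave the abbreviation alone

print(dynamic_expand("the verification condition generator", "ver"))
# prints: verification
```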

Of course, one of the nicest features of NMODE is the fact that its
implementation language is Lisp.  New extensions can be added simply
by editing code in a buffer, testing it interactively, and then
compiling it.  (Of course, this gets tricky sometimes--it is possible
to break the editor while adding a new feature.)

NMODE does tend to be a bit slow--it seems to perform quite acceptably
on the DEC-20 and on single-user M68000's with lots of real memory.
It tends to be somewhat painful on loaded Vaxen and Apollo 400s with
only 1 megabyte of real memory.  This could probably be improved by
spending more time on tuning the code (or, preferably, by tuning the
PSL compiler or its machine specific tables).

I'd like to take exception to the claim that "PSL is not a very 
powerful lisp", although it is true that "it is not clear it will 
catch on widely".  I don't have extensive experience with any other
Lisp systems, so I'm not really in a good position to compare them.
There are over 700 functions documented in the current PSL manual.
Perhaps the major feature of "bare" PSL is its ability to let you
write Lisp that compiles to "raw" machine code.  This is VERY
important for getting NMODE to run acceptably fast.  Perhaps the idea
that PSL isn't powerful comes from the belief that there are few big
systems built on top of it.  But that's changed quite a lot over the
last couple of years.  In addition to NMODE, here's a list of some other
applications built on top of PSL:

   - Hearn's REDUCE computer algebra system.
   - Expert systems developed at HP (using a successor to FRL).
   - Ager's VALID logic teaching program.
   - Riesenfeld's ALPHA-1 Computer Aided Geometric Design
     System.
   - Novak's GLISP, an object oriented dialect of LISP.

NMODE is currently available "for internal use" as part of the PSL
distribution.  Future plans for distribution and maintenance of NMODE
are unclear.  (Nobody's very anxious to get tied up with maintaining
it.)

PSL distributions are available from Utah for the following systems:

  VAX, Unix (4.1, 4.1a)     1600 BPI tar format
  DEC-20, Tops-20 V4 & V5   1600 BPI Dumper format
  Apollo, Aegis 5.0         6 floppy disks, RBAK format
  Extended DEC-20,          1600 BPI Dumper format
    Tops-20 V5

We are currently charging a $200 tape or floppy distribution fee for
each system.  To obtain a copy of the license and order form, please
send a NET message or letter with your US MAIL address to:

    Utah Symbolic Computation Group Secretary
    University of Utah - Dept. of Computer Science
    3160 Merrill Engineering Building
    Salt Lake City, Utah 84112

    ARPANET: CRUSE@UTAH-20
    USENET:  utah-cs!cruse

Send a note to me if you're interested in more information on NMODE.

--Will Galway [ GALWAY@UTAH-20 ]

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:11:59
Date: Monday, July 18, 1983 3:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #22
To: AIList@SRI-AI


AIList Digest            Tuesday, 19 Jul 1983      Volume 1 : Issue 22

Today's Topics:
  A Note from the Moderator
  Response to Extensible Editor Request
  How Many Prologs Are There ?
  Grammar Correction
  Machine Learning Workshop Proceedings
  Upcoming Conferences
  Computers in the Media ...
  CSCSI-84 Call for Papers
----------------------------------------------------------------------

Date: Mon 18 Jul 83 09:10:36-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: A Note from the Moderator

This issue of AIList depends heavily on reprints from several BBoards.
Such reporting is important, but should not be the only function of
this "discussion list".  Let's have a little audience participation.

                                        -- Ken Laws

------------------------------

Date: 25 Jun 1983 1247-EDT
From: Chris Ryland <CPR@MIT-XX>
Subject: Response to Extensible Editor Request

         [Reprinted from the Editor-People discussion list.]

Let me point out that T, the Yale Scheme derivative, has been ported 
to the Apollo, VAX/Unix, VAX/VMS, and, soon, the 370 family, from what
I hear.  It appears to be the most efficient and portable Lisp to
appear on the market.  John O'Donnell at Yale (Odonnell@YALE) is the T
project leader.

------------------------------

Date: 2 Jul 83 13:11:36 EDT  (Sat)
From: Bruce T. Smith <BTS.UNC@UDel-Relay>
Subject: How Many Prologs Are There ?

                 [Reprinted from the Prolog Digest.]

        Here's Randy Harr's latest list of Prolog systems.  He's away 
from CWRU for the summer, and he asked me to keep up the list for him.
Since there have been several requests for information on finding a 
Prolog lately, I've recently submitted it to net.lang.prolog.  The 
info on MU-Prolog is the only thing I've added this summer, from a 
recent mailing from the U. of Melbourne.  (Now, if I could only find 
$100, I would like to try it...)

--Bruce T. Smith, UNC-CH
  duke!unc!bts (USENET)
  bts.unc@udel-relay (lesser NETworks)


list compiled by:  Randolph E. Harr
                   Case Western Reserve University
                   decvax!cwruecmp!harr
                   harr.Case@UDEL-RELAY

{ the list can be FTP'd as [SU-SCORE]PS:<PROLOG>Prolog.Availability.
  SU-SCORE observes Anonymous Login convention.  If you cannot FTP,
  I have a limited number of hard copies I could mail.  -ed }

------------------------------

Date: Mon 18 Jul 83 09:14:25-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: Grammar Correction

The July issue of High Technology has an article titled "Software 
Tackles Grammar".  It includes very brief discussions of the Bell Labs
Writer's Workbench and the IBM EPISTLE systems.

------------------------------

Date: 15 Jul 83 09:25:36 EDT
From: GABINELLI@RUTGERS.ARPA
Subject: Machine Learning Workshop Proceedings

                [Reprinted from the Rutgers BBoard.]

Anyone wishing to order the Proceedings from the MLW can do so by
sending a check made out to the University of Illinois, in the amount
of $27.88 ($25 for Proceedings, $2.88 for postage) to:

            Ms. June Wingler
            Department of Computer Science
            1304 W. Springfield
            University of Illinois
            Urbana, Illinois 61801

------------------------------

Date: Fri 15 Jul 83 11:40:41-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Upcoming Conferences

                     [Reprinted from SU-BBoard.]

1983 ACM Sigmetrics Conference on Measurement and Modeling of Computer
Systems, August 29-31, 1983, Minneapolis, Minn.  To register, mail to
Registrar, Nolte Center, 315 Pillsbury Drive S.E., Minneapolis, MN
55455-0118.  For information contact Steven Bruell, CS Dept., Univ. of
Minn., 123a Lind Hall, 612-376-3958.

2nd ACM Sigact-Sigops Symposium on Principles of Distributed Computing
at Le Parc Regent, 3625 Avenue du Parc, Montreal, Quebec, Canada
August 17-19, 1983.  Preregister by July 31: PODC Registration,
c/o Edward G. H. Smith, The Laurier Group, 275 Slater Street, Suite
1404, Ottawa, Ontario K1P 5H9 Canada.

HL

------------------------------

Date: 16 Jul 83  1610 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Computers in the Media ...

                     [Reprinted from SU-BBoard.]

The August issue of Science Digest has an interview with Joseph
Weizenbaum.

He starts off by saying that the current popularity of personal
computers is something of a fad.  He claims that many of the uses of
PC's, such as storing recipes or recording appointments, are tasks
that are better done manually.

Then the discussion turns to AI:

Science Digest: You know, many of the computer's biggest promoters are
university computer scientists themselves, particularly in the more
exotic areas of computer science, like artificial intelligence.  Roger
Schank of Yale has set up a company, Cognitive Systems, that hopes to
market computer investment counselors, computer will-writers,
computers that can actually mimic a human's performance of a job.
[JED--but they have real trouble locating Bibb County.]  What do you
think of artificial intelligence entering the market place?

Joseph Weizenbaum: I suppose first of all that the word "mimicking" is
fairly significant.  These machines are not realizing human thought 
processes; they're mimicking them.  And I think what's being worked on
these days resembles the language understanding and production of
human beings only very superficially.  By the way, who needs a
computer will-maker?

SD: Some people can't afford a lawyer.

JW: The poor will be grateful to Dr. Schank for thinking of them...

...

SD: Yet, you know Dr. Schank's firm is videotaping humans in the hope
that by this means it can create a program which closely models the
expertise of the individual.

JW: That attitude displays such a degree of arrogance, such hubris
and, furthermore, a great deal of contempt for human beings.  To think
that one can take a very wise teacher, for example, and by observing
her capture the essence of that person to any significant degree is
simply absurd.  I'd say people who have that ambition, people who think
that it's going to be that easy or possible at all, are simply
deluded.

...

SD: Does it bother you that other computer scientists are marketing 
artificial intelligence?

JW: Yes, it bothers me.  It bothers me to the extent that these
commercial efforts are characterized at the same time as disinterested
science, the search for knowledge for knowledge's sake.  And it isn't.
It's done for money.  These people are spending the only capital
science has to offer:  its good name.  And once we lose that we've
lost everything.

------------------------------

Date: 14 Jul 83 11:10:07-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!tsotsos @ Ucb-Vax
Subject: CSCSI-84 Call for Papers
Article-I.D.: utcsrgv.1754

                         CALL FOR PAPERS

                         C S C S I - 8 4

                      Canadian Society for
              Computational Studies of Intelligence

                  University of Western Ontario
                         London, Ontario
                         May 18-20, 1984

     The Fifth National Conference of the CSCSI will be held at the
University of Western Ontario in London, Canada.  Papers are requested
in all areas of AI research, particularly those listed below.  The
Program Committee members responsible for these areas are included.

  Knowledge Representation:
    Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
  Learning:
    Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
  Natural Language:
    Bonnie Weber (U of Pennsylvania), Ray Perrault (SRI)
  Computer Vision:
    Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
  Robotics:
    Takeo Kanade (CMU), John Hollerbach (MIT)
  Expert Systems and Applications:
    Harry Pople (U of Pittsburgh), Victor Lesser (U Mass)
  Logic Programming:
    Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
  Cognitive Modelling:
    Zenon Pylyshyn, Ed Stabler (U of Western Ontario)
  Problem Solving and Planning:
    Stan Rosenschein (SRI), Drew McDermott (Yale)

     Authors are requested to prepare Full papers, of no more than
4000 words in length, or Short papers of no more than 2000 words in
length.  A full page of clear diagrams counts as 1000 words.  When
submitting, authors must supply the word count as well as the area in
which they wish their paper reviewed.  (Combinations of the above
areas are acceptable).  The Full paper classification is intended for
well-developed ideas, with significant demonstration of validity,
while the Short paper classification is intended for descriptions of
research in progress.  Authors must ensure that their papers
describe original contributions to or novel applications of
Artificial Intelligence, regardless of length classification, and
that the research is properly compared and contrasted with relevant
literature.
     Three copies of each submitted paper must be in the hands of the
Program Chairman by December 7, 1983.  Papers arriving after that date
will be returned unopened, and papers lacking word count and
classifications will also be returned.  Papers will be fully reviewed
by appropriate members of the program committee.  Notice of acceptance
will be sent on February 28, 1984, and final camera ready versions are
due on March 31, 1984.  All accepted papers will appear in the
conference proceedings.

     Correspondence should be addressed to either the General Chairman
or the Program Chairman, as appropriate.

  General Chairman                  Program Chairman

  Ted Elcock,                       John K. Tsotsos
  Dept. of Computer Science,        Dept. of Computer Science,
  Engineering and Mathematical      10 King's College Rd.,
       Sciences Bldg.,              University of Toronto,
  University of Western Ontario     Toronto, Ontario, Canada,
  London, Ontario, Canada           M5S 1A4
  N6A 5B9                           (416)-978-3619
  (519)-679-3567

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:00
Date: Wednesday, July 20, 1983 3:35PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #23
To: AIList@SRI-AI


AIList Digest           Thursday, 21 Jul 1983      Volume 1 : Issue 23

Today's Topics:
  Reply from Cognitive Systems
  Lisp Portability
  UTILISP
  Hampshire College Summer Studies in Mathematics
  Re: CSCSI-84 Call for Papers
  AI Definitions (3)
  HP Computer Colloquium 7/21
  Next AFLB talk(s)
  Special Seminar--C. Beeri
----------------------------------------------------------------------

Date: Tue, 19 Jul 83 18:18:54 EDT
From: Steven Shwartz <Shwartz@YALE.ARPA>
Subject: Reply from Cognitive Systems

The following is a response to the recent letter to the editor of 
Psychology Today that was circulated on AI-List concerning a natural 
language system developed by Cognitive Systems Inc. for an oil 
company.  It states that "[the Cognitive Systems program] is friendly 
as long as you play by its rules and tell it what it expects to hear."

The system in question was neither designed nor touted to be a general
natural language system.  It was designed to understand and respond to
queries about oil wells and topographical maps, and within its 
specified domain, it performs extremely well.  This system has been 
demonstrated at several conferences, most recently the Applied Natural
Language Conference in Santa Monica (February, 1983), where numerous 
members of the academic community tested the system and were favorably
impressed.

It should be noted that the individual who wrote the letter was not 
employed by either Cognitive Systems or the division of the oil 
company which commissioned this program.  In fact, he was a programmer
of the query language that the natural language front end was designed
to replace.

------------------------------

Date: Tue 19 Jul 83 15:24:00-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Lisp Portability

  [In response to Chris Ryland's message to Editor-People. -- KIL]

        Once again T is Touted as "... the most efficient and portable
Lisp to appear on the market." As one of the people associated with
the development of PSL (Portable Standard LISP) at the University of
Utah, I feel that I must point out that PSL has been ported to the
Apollo, VAX/UNIX, DECSystem-20/TOPS-20, HP9836/???, Wicat/!?!?!?, and
versions are currently being implemented for the CRAY and 370
families.

The predecessor system "Standard LISP" along with the REDUCE symbolic 
algebra system ran on the following machines (as of October 1979):
Amdahl: 470V/6; CDC: 640, 6600, 7600, Cyber 76; Burroughs: B6700,
B7700; DEC: PDP-10, DECsystem-10, DECsystem-20; CEMA: ES 1040;
Fujitsu: FACOM M-190; Hitachi: HITAC M-160, M-180; Honeywell: 66/60;
Honeywell-Bull:  1642; IBM: 360/44, 360/67, 360/75, 360/91, 370/155,
370/158, 370/165, 370/168, 3033, 370/195; ITEL: AS-6; Siemens: 4004;
Telefunken: TR 440; and UNIVAC: 1108, 1110.

  Then experiments began to port the system without having to deal
with a hand-coded LISP system which was slightly or grossly different
for each machine.  This led to a series of P-coded implementations
(for the 20, PDP-11, Z80, and Cray).  This then led via the Portable
LISP Compiler (Hearn and Griss) to the current compiler-based PSL
system.

So let's hear more about the good ideas in T and fewer nebulous 
comments like: "more efficient and portable".

------------------------------

Date: 19 Jul 1983 13:02:23-EDT
From: Ichiro.Ogata at CMU-CS-G
Subject: UTILISP

                  [Reprinted from the CMU BBoard.]

        I came from the Univ. of Tokyo, and brought a magnetic tape
  that contains UTILISP (a Lisp-Machine-Lisp-like Lisp), PROLOG-KR
  (described in UTILISP), and AMUSE (a structured editor).
        It works on IBM 370's (and compatible machines).  If this
interests you, please contact me.
                Ichiro Ogata io@cmu-cs-g


[and, for AIList, ...]

Yes, we are pleased to deliver UTILISP to anyone interested.  UTILISP
is written in assembler, and includes a compiler.  If you want more
information, please contact our colleagues.  Their address is

        Tokyo-To Bunkyo-Ku Hongo
                7chome 3-1
         Tokyo-Daigaku Kogaku-Bu Keisukogaku-Ka
                Wada laboratory

        Ichiro Ogata..

------------------------------

Date: 19 Jul 83 8:59:19-PDT (Tue)
From: ihnp4!houxm!hocda!machaids!pxs @ Ucb-Vax
Subject: Hampshire College Summer Studies in Mathematics
Article-I.D.: machaids.408


(7/17/83):

The 12th Hampshire College Summer Studies in Mathematics for high
ability high school students is now in session until August 19 in
Amherst, MA.  The Summer Studies has initiated a program in cognitive
sciences and is actively seeking foundation and industry support.
(Observers and guest lecturers are invited.)  For more information,
please write David Kelly, Box SS, Hampshire College, Amherst, MA
01002, or call (413) 549-4600 x357 (messages on x371).


Submitted to USENET for David Kelly by Peter Squires, HCSSiM, '77,
                                        ...ihnp4!machaids!pxs

------------------------------

Date: 19 Jul 83 18:43:10 EDT  (Tue)
From: Craig Stanfill <craig.umcp-cs@UDel-Relay>
Subject: Re: CSCSI-84 Call for Papers

    Authors are requested to prepare Full papers, of
    no more than 4000 words in length, or Short papers
    of no more than 2000 words in length.  A full page
    of clear diagrams counts as 1000 words ...

In other words, a picture is worth a thousand words? (ick)

------------------------------

Date: 18 Jul 83 18:13:40 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Defining AI ?

                [Reprinted from the Rutgers BBoard.]

I found the following sample entries in a dictionary and thought that
they were good definitions, esp. for a popular dictionary.  Your
reactions are welcome.

Selected entries from the Dictionary of Information Technology by
Dennis Longley and Michael Shain, John Wiley, 1982.

  Artificial Intelligence
    Research and study into methods for the development of
    systems that can demonstrate some of those attributes
    associated with human intelligence, e.g. the ability to
    recognize a variety of patterns from various viewpoints, the
    ability to form hypotheses from a limited set of
    information, the ability to select relevant information from
    a large set and draw conclusions from it etc.  See Expert
    Systems, Pattern Recognition, Robotics.

  Expert Systems
    In data bases, systems containing a database and associated
    software that enable a user to conduct an apparently
    intelligent dialog with the system in a user oriented
    language.  See Artificial Intelligence.

  Pattern Recognition
    In computing, the automatic recognition of shapes, patterns
    and curves.  The human optical and brain system is much
    superior to the most advanced computer system in matching
    images to those stored in memory.  This area is subject to
    intensive research effort because of its importance in the
    fields of robotics and artificial intelligence, and its
    potential areas of application, e.g.  reading handwritten
    script.  See Artificial Intelligence, Robotics.

  Robotics
    An area of artificial intelligence concerned with robots.

 Robot
    A device that can accept input signals and/or sense
    environmental conditions, process the data so obtained and
    activate a mechanical device to perform a desired action
    relating to the perceived environmental conditions or input
    signal.

------------------------------

Date: 19 Jul 83 09:43:02 EDT
From: Michael <Berman@RUTGERS.ARPA>
Subject: AI Definitions

                [Reprinted from the Rutgers BBoard.]

Speaking as an AI "outsider," I found the definitions pretty good,
except for robotics.  I'm not sure I would classify it as a field of
AI, but rather as one that uses techniques from AI as well as other
areas of computer science and engineering.  Comments?

------------------------------

Date: 19 Jul 83 09:43:10 EDT
From: KELLY@RUTGERS.ARPA
Subject: re: Defining AI?

                [Reprinted from the Rutgers BBoard.]

Those definitions all look pretty good to me, except for the 
content-free entry under EXPERT SYSTEMS.  That is certainly a common
view among implementers of a certain mold (i.e. those coming from an
quasi-N.L. approach, e.g. LUNAR), but I wouldn't say that this is
where the FOCUS of *our* expert systems research has been.  What ever
happened to the reason for calling such beasts "Expert" systems in the
first place?  It certainly wasn't because they were sterling
conversationalists!!

Anyway 4 out of 5 is pretty good.

Sorry to flame on friendly ears.

VK

------------------------------

Date: 18 Jul 83 20:37:04 PDT (Monday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/21


                Guy M. Lohman

                Research Staff Member
                IBM Research Laboratory
                San Jose, CA

                R* Project

The R* project was formed to address the problems of distributed 
databases, with the objective of designing and building an
experimental prototype database management system which would handle
replicated and partitioned data for both query and modification.  The
R* prototype supports a confederation of voluntarily cooperating,
homogeneous, relational database management systems, each with its own
data, sharing data across a communication network.

Two seemingly conflicting goals of distributed databases have been 
resolved efficiently in R*:  single-site image and site autonomy.  To 
make the system easy to use, R* presents a single-site image:  a user
requesting data need not know or specify either the location of the
data or the access path for retrieving it, which requires close
coordination among sites.  On the other hand, to make local data
available even when other sites or communication lines fail, each R*
database site must be highly autonomous.

The talk will discuss how these goals were compatibly achieved in the 
design and implementation of R* without sacrificing system
performance.

        Thursday, July 21, 1983 4:00 pm

        Stanford Park Labs
        Hewlett Packard
        5M Conference room
        1501 Page Mill Road

*** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the conference room.

------------------------------

Date: Tue 19 Jul 83 22:41:51-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)

                     [Reprinted from SU-BBoard.]


                   N E X T A F L B T A L K (S)

Despite the heat of summer AFLB is still alive!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


7/21/83 - Michael Luby (Berkeley):

"Monte Carlo Algorithms to Approximate Solutions for NP-hard 
Enumeration and Reliability Problems"

****** Time and place: July 21, 12:30 pm in MJ352 (Bldg. 460) *****

If you'd like an abstract, you should be on the AFLB mailing list. -
Andrei

------------------------------

Date: Tue 19 Jul 83 15:42:54-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Special Seminar--C. Beeri

                                SPECIAL SEMINAR

                          Thursday - July 21 - 2 P.M.

                  Margaret Jacks Hall (Bldg. 460) - Room 352

              CONCURRENCY CONTROL THEORY FOR NESTED TRANSACTIONS

                                   C. Beeri

Nested transactions occur in many situations, including explicit
nesting in application programs and implicit nesting in computing
systems.  E.g., database systems are usually implemented as multilevel
systems where operations of a high level language are translated in
several stages into programs using low level operations.  This creates
a nested transaction structure.  The same applies to systems that
support atomic data types, or concurrent access to search structures.
Synchronization of concurrent transactions can be performed at one or
more levels.  The existing theory does not provide a framework for
reasoning about concurrency in systems that support nesting.

In the talk, a general nested transaction model will be described.
The model can accommodate most of the nested transaction systems
currently known.  Tools for proving the serializability of
computations, hence the correctness of the algorithms generating
them, will be presented.  In particular, it will be shown that the
p r a c t i c a l theory of CPSR logs can be easily generalized
so that previously known results (e.g., correctness of 2PL) can
be used.  Examples will be presented.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:02
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #24
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 24

Today's Topics:
  Weizenbaum in Science Digest
  AAAI Preliminary Schedule [Pointer]
  Report on Machine Learning Workshop
----------------------------------------------------------------------

Date: 20 July 1983 22:28 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Weizenbaum in Science Digest

How much credence do Professor Weizenbaum's ideas get among the
current A.I. community?  How do these statements relate to his work?

-- Steve

------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

[Interested readers should FTP file <AILIST>V1N25.TXT from SRI-AI.  It
is about 19,000 characters.  -- KIL]

------------------------------

Date: 19 Jul 1983 1535-PDT
From: Jack Mostow <MOSTOW@USC-ISIF>
Subject: Report on Machine Learning Workshop


                 1983 INTERNATIONAL MACHINE LEARNING WORKSHOP:
                              AN INFORMAL REPORT

                                  Jack Mostow
                      USC Information Sciences Institute
                              4676 Admiralty Way
                           Marina del Rey, CA. 90291

                           Version of July 18, 1983

  [NOTE:   This is a draft of a report to appear in the October 1983 SIGART.  I
am circulating it at this time to get comments  before  sending  it  in.    The
report should give the flavor of the work presented at the workshop, but is not
intended  to  be formal, precise, or complete.  With this understanding, please
send  corrections  and  questions  ASAP   (before   the   end   of   July)   to
MOSTOW@USC-ISIF.  Thanks.  - Jack]

  The  first  invitational  Machine  Learning  Workshop was held at C-MU in the
summer of 1980; selected papers were eventually published in Machine  Learning,
edited  by  the  conference organizers, Ryszard Michalski, Jaime Carbonell, and
Tom Mitchell.  The same winning team has now brought us the 1983  International
Machine  Learning Workshop, held June 21-23 in Allerton House, an English manor
on a park-like estate donated to the University  of  Illinois.    The  Workshop
featured 33 papers, two panel discussions, countless bull sessions, very little
sleep, and lots of fun.

  This  totally  subjective report tries to convey one participant's impression
of the event, together with  a  few  random  thoughts  it  inspired.    I  have
classified  the  papers  rather  arbitrarily  under  the  topics  of "Analogy,"
"Knowledge Transformation," and "Induction" (broadly construed), but of  course
33  independent research efforts can hardly be expected to fall neatly into any
simple classification scheme.  The papers are discussed in semi-random order; I
have tried to put related papers next to each other.

1. Analogy
  One notable change from the first  Machine  Learning  workshop  was  the  new
abundance of work on analogy.  In 1980, analogy was a topic that clearly needed
work,  but  for  which ideas were lacking.  In 1983, several papers relevant to
analogical reasoning were presented:

  Pat Winston (MIT) "Learning by Augmenting  Rules  and  Accumulating  Censors"
makes an interesting connection between analogy and non-monotonic reasoning.

  Jaime  Carbonell (CMU) "Derivational Analogy in Problem Solving and Knowledge
Acquisition" argues for the inseparability of learning and problem solving.

  Lindley Darden (U. of Maryland) "Reasoning by Analogy  in  Scientific  Theory
Construction"  shows  different  ways in which analogy was used historically in
scientific discovery, and challenges AI to implement them.

  Mark Burstein (Yale) "Concept Formation by Incremental  Analogical  Reasoning
and  Debugging" models a student learning the semantics of programming language
assignment by combining analogies  given  by  a  teacher  or  textbook.    This
excellent paper is discussed below in more detail.

  Ken  Forbus  and  Dedre  Gentner (BBN) "Learning Physical Domains:  Towards a
Theoretical Framework" describes "qualitative process  theory"  for  describing
naive  physics  and  "structure  mapping"  for analogical reasoning.  They have
tackled the difficult and important problem  of  reasoning  symbolically  about
continuous processes.

  Nachum  Dershowitz  (U.  of Illinois) "Programming by Analogy" suggests how a
program to compute cube roots can be constructed by analogy with a program  for
division, and how both can be abstracted into a common schema.  Nachum wins the
"Presenting  by  Analogy"  award  for  using  abstract  geometrical  figures to
communicate most of his talk.



1.1. Lessons
  It is clear that much progress has been made in analogy.

  In contrast to classical work on abstract analogies of the sort  used  in  IQ
tests,  the  1983 papers emphasized analogy as a knowledge transfer method that
uses knowledge about old problems to help solve new ones.

  The idea of analogies as matching graph structures in semantic  networks  was
already  established,  but  it  has  now  been  refined in some important ways.
First, there is a consensus that causal relations are crucial  to  the  analogy
while  certain  other  parts  of  the  graph  (unary  surface features) are not
[Winston, Burstein, Gentner].

  Related to this is  the  idea  of  analogy  as  a  process  of  inheriting  a
justification  [Carbonell,  Winston].   Carbonell had previously introduced the
idea of "transformational  analogy"  in  problem-solving  --  to  solve  a  new
problem,  find  a  similar old problem, retrieve its solution path (sequence of
problem-solving operators), and perturb it into a solution to the new  problem.
His  new  paper  extends this into "derivational analogy" by adding information
about the goal structure motivating the operator sequence, the choices for  how
to  reduce  each subgoal, and the reasons for choosing one over another.  Via a
truth maintenance mechanism, each goal points to the choices that depend on it.
To solve a new problem, the derivation of the old solution is  replayed  as  in
the  POPART  system  developed  by  David  Wile  at  ISI, but with an important
difference.  At each goal  node,  the  justifications  for  how  the  goal  was
achieved  are  checked.  If the reasons that support them are still true, or if
they can be proven based on new reasons,  the  solution  can  be  used  as  is.
Otherwise,  only  the  steps that depend on the violated justifications need be
modified.  In short, adding explicit justifications gives a clean way to  patch
old  solutions instead of completely replaying their derivations.  I think this
technique should be very useful for making the replay mechanism efficient.
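  The replay-with-justifications idea can be sketched in a few lines.  This is a toy illustration under assumed data structures, not Carbonell's actual implementation: each step of an old derivation carries the justification under which it was chosen, and only the steps whose justifications fail in the new problem get re-derived.

```python
# Hypothetical sketch of derivational-analogy replay.  Each step in an old
# derivation carries the justification under which it was chosen; steps
# whose justifications still hold are reused, and only the rest are patched.

def replay_derivation(old_derivation, still_holds, solve_subgoal):
    """old_derivation: list of (goal, step, justification) triples.
    still_holds(justification) -> True if it remains valid in the new problem.
    solve_subgoal(goal) -> a replacement step when a justification fails."""
    new_solution = []
    for goal, step, justification in old_derivation:
        if still_holds(justification):
            new_solution.append(step)                  # reuse the old choice
        else:
            new_solution.append(solve_subgoal(goal))   # patch just this step
    return new_solution

# Toy usage: one justification is violated, so only that step is re-derived.
old = [("get-tool", "use-hammer", "hammer-available"),
       ("fasten",   "drive-nail", "surface-is-wood")]
patched = replay_derivation(
    old,
    still_holds=lambda j: j != "surface-is-wood",
    solve_subgoal=lambda g: "use-screw")
# patched == ["use-hammer", "use-screw"]
```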

  Winston illustrated the problem of relating function to structure by  asking,
"What  is  a  cup?"  Given a general functional definition ("graspable, stable,
holds liquid"), a structural description of a coffee cup ("handle, flat bottom,
concave upward"), and an explanation of how the latter instantiates the  former
("the  handle  provides  graspability in the case of hot liquids"), he draws an
analogy to a styrofoam cup by repairing the  explanation  (the  handle  is  not
needed because the styrofoam insulates).

  The  problem  of  patching  up  sloppy  or  partial  analogies  received some
much-needed attention [Carbonell, Winston, Burstein, Darden].   In  particular,
Burstein  addressed  the  problem of integrating imperfect analogies.  His CARL
program models the behavior of a  (real)  student  learning  the  semantics  of
assignment  statements  from  a teacher's analogies to putting things in boxes,
algebraic equality, and remembering.   He  refined  the  model  of  analogy  as
finding  a  match  all  at  once  to  the  more  realistic one of incrementally
extending an initial correspondence (suggested by a tutor) into a more detailed
analogy.  This can  involve  selecting  among  alternative  mappings  when  the
initial  analogy  is  ambiguous.    For example, if `X = 6' is like putting the
value 6 in the box named X, does `X = Y' mean putting the box  named  Y  inside
the box named X, or putting the value of Y into X?  CARL infers the answer from
the analogy with algebraic equality.
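  The ambiguity CARL must resolve can be made concrete with a toy model (mine, not Burstein's) of the two candidate mappings for `X = Y' under the "variables are boxes" analogy:

```python
# Toy illustration (not Burstein's CARL) of the two candidate interpretations
# of `X = Y` under the boxes analogy, and the one the algebraic-equality
# analogy selects: copy the value, don't nest the box.

env = {"X": 6, "Y": 3}

def assign_box_nesting(env, dst, src):
    # Rejected mapping: put the box named Y (name and contents) inside X.
    env[dst] = {src: env[src]}

def assign_value_copy(env, dst, src):
    # Mapping consistent with algebraic equality: copy Y's value into X.
    env[dst] = env[src]

assign_value_copy(env, "X", "Y")
# env == {"X": 3, "Y": 3}
```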

2. Knowledge Transformation
  Knowledge  transformation  converts knowledge from an inefficient or unusable
form to a more useful one.

  Doug Lenat (Stanford), Rick Hayes-Roth (Teknowledge), and Phil Klahr (Rand)
"Cognitive Economy in a Fluid Task Environment" updates their 1979 Rand tech
report on caching -- work worth knowing -- but what Doug actually presented was
one of the entertaining (and instructive) two-screen EURISKO talks we have come
to enjoy so much.


  To nobody's surprise but his own,  Doug  won  the  official  workshop  puzzle
contest,  getting  only one word wrong (which I helped him with).  Doug's prize
was a diseased soybean plant, symbolizing U. of Illinois's  favorite  induction
problem.    There  was no second prize, but if there had been it probably would
have been two diseased soybean plants.

  John Anderson (CMU) "Knowledge Compilation:  The General Learning  Mechanism"
uses  production "composition" and "proceduralization" to model the progress of
a student learning how to program in LISP.

  Paul Rosenbloom (CMU) "The Chunking of Goal Hierarchies:  A Generalized Model
of Practice."  Paul's Ph.D. thesis accounts for  the  universal  power  law  of
practice with a chunking model (fast encode, connect, fast decode) that unifies
classical   chunking,   memo   functions   (alias  "caching"),  and  production
composition.

  Jack Mostow (USC-ISI) "Operationalizing Advice:    A  Problem-Solving  Model"
describes  a  problem-solver  called BAR, the successor to the 1981 FOO system.
Given a piece of advice for the card game Hearts, BAR helps find a sequence  of
general program transformations that converts it into a procedure executable by
the learner.

  Tom  Mitchell  and  Rich  Keller (Rutgers) "Goal-Directed Learning" describes
LEX2, which learns heuristics for symbolic integration by analyzing  worked-out
examples.    Unlike  LEX1,  which  performed  empirical induction from multiple
examples  using  the  version  space  method,   LEX2   constructs   justifiable
generalizations from single examples based on an elegant explicit definition in
predicate  calculus  of  what  it  means in LEX to be a heuristic (to lie on an
[optimal] solution path).  This work fits both the  "Knowledge  Transformation"
and  "Induction"  categories  because  it  induces  heuristics  by converting a
precise but inefficient definition of  "heuristic"  into  specialized  patterns
that  can  be  tested inexpensively by matching.  Tom's talk compared LEX2 with
DeJong's explanatory schema acquisition, Winston's  analogical  reasoning,  and
Mostow's   operationalizer,   in   terms  of  a  three-step  process  (generate
explanation;  extract  sufficient  condition  for  satisfying   goal   concept;
optimize).    I  enjoy  seeing attempts to unify and compare different research
projects, especially when one of them is mine.



2.1. Lessons
  In his keynote address at IJCAI79 in Tokyo, Herb Simon suggested that Lenat's
AM and Langley's BACON provided examples of discovery  systems  that  might  be
used  as  the  basis for a theory of discovery, and that such a theory might in
turn serve to guide research in AI.  When he pointed out that both AM and BACON
shared a heavily empirical bent, I realized to my distress that my FOO  program
was  just  the  opposite  -- completely analytic.  At the 1980 Machine Learning
Workshop, Tom Mitchell and I discussed  the  "analytic-to-empirical"  spectrum,
and  wondered  how  the  two approaches might profitably be combined.  The 1983
Workshop gives at least a couple of answers; more should be found.

  An example of a purely empirical approach to knowledge  transformation  would
be   a   program   that   compiles   frequently-used   action   sequences  into
macro-operators without regard to such factors as the goal structure motivating
them; such macro-operators lack flexibility since  they  apply  only  to  cases
where  the exact sequence of operators applies.  At the other extreme, a purely
analytic knowledge transformer (e.g., FOO/BAR) converts  declarative  knowledge
into  an  effective form without regard to such factors as which cases actually
arise in practice; the  failure  to  exploit  realistic  assumptions  leads  to
procedures that are very general but very weak.

  One  way  to combine empirical and analytic techniques is to analyze specific
examples  that  have  arisen  in  actual  practice,  and  generalize  them   by
identifying which properties were actually relevant to the outcomes [LEX2].

  Another  way  takes a general piece of knowledge, an interpreter that applies
it  to  specific  cases, and a caching mechanism that records the results.  The
general knowledge is gradually compiled into streamlined procedures for special
cases [Anderson, Lenat, Rosenbloom].
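  A minimal caching sketch of that second combination (illustrative only, not any of the cited systems): a general but slow interpreter is applied to the cases that actually arise, and a cache gradually turns the general knowledge into fast special-case lookup.

```python
# Minimal sketch of knowledge compilation by caching: general knowledge is
# run interpretively on the cases that actually arise, and the recorded
# results become streamlined procedures for those special cases.

def make_cached(general_procedure):
    cache = {}
    def specialized(case):
        if case not in cache:                 # first time: run the general rule
            cache[case] = general_procedure(case)
        return cache[case]                    # thereafter: compiled special case
    return specialized

# Toy general knowledge: a deliberately naive definition of squaring.
slow_square = make_cached(lambda n: sum(n for _ in range(n)))
slow_square(12)   # computed once by the general procedure
slow_square(12)   # now answered from the specialized cache
```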

3. Induction
  Induction  generalizes  examples  obtained  from   experience,   observation,
experiments, tutors, newspapers, or elsewhere.



3.1. Inducing Rules
  Ryszard Michalski and Robert Stepp (U. Illinois) "How to Structure Structured
Objects"  manages  to  discuss  classification  of  structured  objects without
referring to soybean diseases.

  Tom Dietterich (Stanford) and Ryszard Michalski  (U.  Illinois)  "Discovering
Patterns  in  Sequences  of  Objects" describes an extension of Tom's 1979 M.S.
thesis program for the card game Eleusis, where the  problem  is  to  induce  a
secret rule from positive and negative examples.

  Tom  Dietterich and Bruce Buchanan (Stanford) "The Role of Experimentation in
Theory Formation" reports on Tom's ongoing Ph.D. thesis project, EG, to  induce
the  semantics  of Unix commands by performing experiments to see what they do.
EG ignores the explanations provided by Unix error  messages,  but  it  is  not
clear  that this loses very much information.  Previous work on experimentation
has  focussed  on  internally  formalizable  domains  in  order  to  avoid  the
bottleneck  of  a low-bandwidth interface to the outside world, so this project
is a welcome entry into an area deserving exploration.  I'm eager  to  see  the
results; I'm sure Tom and Bruce are too!

  Pat  Langley,  Jan  Zytkow,  Herb Simon (CMU) "Mechanisms for Qualitative and
Quantitative Discovery" reports on four discovery programs.    BACON.6  extends
previous  BACON.i  by  finding  quantitative  functional relationships in noisy
data.  The other three programs induce qualitative theories from collections of
chemical reactions:  GLAUBER discovers the concepts of acids, bases, and salts;
STAHL  infers  the  composition  of  substances,  recreating   something   like
phlogiston  theory;  and  DALTON  infers the number of atoms per molecule.  The
next step is to integrate these programs.

  Saul Amarel (Rutgers) "Program Synthesis as a Theory Formation Task:  Problem
Representations and Problem Methods"  describes  a  program  that  induces  the
algebraic structure of a relation represented as a set of tuples.

  Donald  Michie (U. of Edinburgh) "Inductive Rule Generation in the Context of
the Fifth Generation" provocatively suggests that to  interface  usefully  with
human experts, induction systems should produce "brain-compatible" results that
are both human-understandable and "mentally executable."

  Paul  Utgoff (Siemens CRS) "Adjusting Bias in Concept Learning" discussed his
Ph.D. work on getting  LEX  to  modify  its  inductive  bias,  defined  as  the
knowledge  that  causes a learner to choose one hypothesis over another.  LEX's
bias  is  determined  by  its  pattern  language  for  describing  classes   of
integration   problems.    Paul's  program  infers  new  terms  like  "odd"  or
"twice-integrable" based on analysis of worked-out examples,  and  figures  out
how to assimilate them into the language.

  Bernard  Silver  (U.  of  Edinburgh)  "Learning Equation Solving Methods from
Worked Examples" describes LP, a program that solves  difficult  algebraic  and
trigonometric  equations better than many of us, and learns new problem-solving
"schemas" from worked-out examples.  LP apparently derives  its  power  from  a
well-chosen  abstraction  function that describes each equation in terms of its
"characteristic 4-tuple" (number of occurrences of unknown;  type  of  function
symbols,  e.g.  trig;  single  equation vs. disjunction; top-level connective).
Essentially, LP performs means-ends  analysis  in  the  abstracted  space:    a
difference  between  two  tuples indexes a collection of operators for reducing
it.  I view LP as learning what order to reduce differences, but  if  you  want
Bernard's view of the matter you should read his paper.
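  The difference-indexing idea can be sketched as follows.  This is my own toy rendering, not Silver's LP; the tuple components and operator names are illustrative assumptions:

```python
# Hypothetical sketch of means-ends analysis in an abstracted space: each
# equation is described by a characteristic 4-tuple, and the positions where
# the current and goal tuples differ index the operators to try next.

OPERATORS = {
    0: ["collect", "attract"],   # reduce occurrences of the unknown
    1: ["apply-identity"],       # change the type of function symbols
    2: ["split-cases"],          # disjunction vs. single equation
    3: ["isolate"],              # fix the top-level connective
}

def applicable_operators(current, goal):
    """current, goal: characteristic 4-tuples describing equations."""
    ops = []
    for i, (c, g) in enumerate(zip(current, goal)):
        if c != g:
            ops.extend(OPERATORS[i])
    return ops

# Two occurrences of the unknown, trig functions, a single equation, '=' on
# top, versus the solved form with one occurrence of the unknown:
applicable_operators((2, "trig", False, "="), (1, "trig", False, "="))
# -> ["collect", "attract"]
```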



3.2. Dealing with Noise
  Ross  Quinlan  (New South Wales Institute of Technology) "Learning from Noisy
Data" reports some interesting empirical results  from  introducing  controlled
amounts  of  noise  into the training and test data for a binary classification
system that induces decision trees.  By  storing  in  each  leaf  node  of  the
decision tree the proportion of positive instances among the objects classified
under  that node, the system identifies which attributes classify the data most
reliably -- i.e., in some sense it learns  about  the  noise.    Among  several
surprising  results:    if  it is known that the test data will be noisy, it is
actually better to use noisy  training  data!    Such  results  have  important
implications:    for  example,  if a medical diagnosis system is to be built by
induction from a medical database and applied to patients  whose  symptoms  are
reported unreliably, it may actually perform better if the database is munged a
bit  first.    Of course further work is needed to analyze why Quinlan's system
behaves this way, and what class of induction systems will behave similarly.
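  The leaf bookkeeping involved can be sketched in a few lines (a hypothetical rendering, not Quinlan's code): each leaf stores the proportion of positive training instances it receives, and leaves near 0.5 expose attributes that split the noisy data unreliably.

```python
# Hypothetical sketch of a decision-tree leaf that stores the proportion of
# positive training instances classified under it.  The prediction is the
# majority class; a proportion near 0.5 flags an unreliable split.

def leaf_stats(training_labels):
    """training_labels: booleans for the training instances reaching this leaf."""
    p = sum(training_labels) / len(training_labels)
    return {"p_positive": p,
            "prediction": p >= 0.5,            # majority class at this leaf
            "reliable": abs(p - 0.5) > 0.2}    # crude reliability flag (assumed)

clean_leaf = leaf_stats([True, True, True, False])    # p = 0.75 -> reliable
noisy_leaf = leaf_stats([True, False, True, False])   # p = 0.50 -> unreliable
```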

  Michael Lebowitz (Columbia) "Concept Learning in a Rich Input Domain" induces
predictive stereotypical patterns from event descriptions such as news stories.
An interesting aspect of his approach is the ability to induce  generalizations
based on noisy data.

  Casimir  Kulikowski  (Rutgers) "Knowledge Acquisition and Learning in EXPERT"
describes how the SEEK system addresses  the  important  practical  problem  of
debugging  a  large collection of expert rules.  SEEK extends the contributions
of Randy Davis' TEIRESIAS  to  "knowledge  acquisition  in  the  context  of  a
shortcoming  in the knowledge base."  SEEK experiments by perturbing rules, and
uses the number and type of resulting errors with respect to  stored  cases  to
suggest   possible  improvements.    It  gathers  statistics  on  the  "missing
components" that prevent rules from firing when  they  should,  using  work  by
Politakis on statistical credit assignment.

  John Holland (U. of Michigan) "Escaping Brittleness" describes recent results
in  his  continuing work on genetic learning algorithms.  These methods exploit
parallelism and ideas from ecology and capitalism, and are  actually  producing
usable application systems for arm-eye coordination tasks.  I enjoy John's work
because  it  is  so  mind-bendingly  different  from what the rest of us do.  I
suspect it  may  serve  as  an  elegant  simplified  model  to  understand  the
computational  aspects  of molecular biology and evolution, by bridging the gap
between our standard computational metaphors (subroutine call, naming, symbolic
processing) and as-yet undeciphered  biological  mechanisms  (enzymes,  codons,
complex feedback systems).



3.3. Logic-based Work
  Clark  Glymour,  Kevin  Kelly,  and  Richard Scheines (U. of Pittsburgh) "Two
Programs for Testing  Hypotheses  of  any  Logical  Form"  implements  Hempel's
confirmation  relation  and  extends  it  to  handle partial confirmation.  The
resulting programs tell whether (or to what extent) a given set of propositions
confirms a given inductive hypothesis.

  Claude Sammut and  Ranan  Banerji  (St.  Joseph's  University)  "Hierarchical
Memories:    An  Aid  to  Concept  Learning" describes a logic-based system for
inserting new categories into is-a hierarchies.

  Y. Kodratoff  and  J.-G.  Ganascia  (Universite  de  Paris)  "Learning  as  a
Non-deterministic   but   Exact   Logical   Process"  describes  a  logic-based
generalization algorithm that I got the impression  extends  previous  work  by
Hayes-Roth and Vere with respect to many-to-one mappings.



3.4. Cognitive Modelling
  Derek  Sleeman  (Stanford) "Inferring (MAL) Rules from Pupil's Protocols" was
an amusing report on  automating  the  induction  of  students'  buggy  algebra
productions  from  their  incorrectly  worked-out  problems.    Apparently  the
students are so eager to achieve the goal -- get the unknown on one side and  a
number on the other -- that they resort to a powerful problem-solving technique
I like to call "ends-justifies-the-means analysis."

  Kurt  VanLehn  (Xerox  PARC) "Validating a Theory of Human Skill Acquisition"
reports on some similar work:  modelling students' subtraction errors in  terms
of  the  hypothesized  induction  methods  whereby  they  infer the subtraction
algorithm from the teachers' examples.  The  theory  posits  several  "felicity
conditions" -- conventions on teacher-student communication that facilitate the
induction  process.    One  such  condition is the "one disjunction per lesson"
rule.    This  work,  in  the  Buggy-Debuggy  tradition,   uses   a   flowgraph
representation in contrast to Sleeman's production system representation.

  Bob  Berwick  (MIT)  "Domain-specific Learning and the Subset Principle" used
certain linguistic data as evidence that human languages conform to constraints
on how much humans induce from each example in a sequence.  Unfortunately, as a
non-linguist I was unable to induce anything from the examples Bob used in  his
talk.    This  datum may actually constitute further evidence in support of his
theory.

  Douglas Medin (U. of Illinois) "Linear Separability and Concept  Naturalness"
presents  evidence  that linearly separable categories are not generally easier
for people to learn.

  Doug Hofstadter (Indiana University) "The Architecture of Jumbo"  models  the
process of permuting a string of letters into a recognizable word.  Doug's talk
started  by  borrowing  the  last name of a Cognitive Modelling panelist (Janet
Kolodner) as an example and suggesting that it had no one-word permutation.   I
immediately  set  about  looking  for one, and by the end of the talk had found
"elk-donor" (one who donates elks)  and  "do-Kloner"  (one  who  implements  or
applies  the representation language KLONE) as well as several two-word phrases
of varying social and orthographic acceptability ("lone dork," "red kolon," "no
kolder," ...).  Doug's talk certainly wins the "Giving the  Audience  Something
to  Do  to  Keep it Amused During Your Talk" award.  Unfortunately I can't tell
you what it was about, except that by analogy with the concept of  "spoonerism"
he introduced such new concepts as "forkerism" and "kniferism."

  Mallory  Selfridge  (U. of Connecticut) "How Can CHILD Learn About Agreement?
Explorations of CHILD's Syntactic  Inadequacies"  was  the  last  talk  of  the
conference.    Mallory  spoke  without slides, allowing worn-out members of the
audience to close their eyes and concentrate better.

  Gerry DeJong (U. of Illinois) "An  Approach  to  Learning  from  Observation"
describes  his  continued  work  on  learning  by  composing schemas to explain
observed  event  sequences.    This  might  be  classified   under   "Knowledge
Transformation"  since  it  consists  of  recognizing  and  naming  specialized
combinations of existing concepts.  Gerry also composed the  official  workshop
puzzle  to  divert  participants  when  not listening to presentations, thereby
winning the "Giving the Audience Something to Keep It Amused  During  Everybody
Else's Talk" award.



3.5. Lessons
  Some  induction  systems  use  user input to help fill in a gap in a chain of
reasoning otherwise derivable by existing rules [Silver, Sleeman, Kulikowski].

  The incremental learning theme evidenced in the work on analogy also appeared
in induction systems that construct and refine hypotheses [Amarel, Dietterich &
Buchanan, Holland, Lebowitz].

  The real-world  problem  of  noisy  data  is  receiving  attention  [Langley,
Quinlan, Lebowitz], and statistical induction is being used in interesting ways
[Holland, Kulikowski].

4. Panel Discussion:  Cognitive Modelling -- Why Bother?
  The  first  day  of  the  workshop  ended with an evening panel on "Cognitive
Modelling of Learning Processes."  Having arisen at 4am  Pacific  Sleepy  Time,
sat through morning and afternoon sessions filled with paper presentations, and
partaken  of  three  meals filled with shop talk, I found my capacity to absorb
the insights of the panelists severely diminished.  I did  find  the  panel  on
cognitive  modelling  a  convincing  argument  for  the  importance of combined
aural-visual input in human learning, insofar as some of the  panelists  didn't
use slides and now I can't remember what they said.  On the other hand, I'm not
doing  too  well  at remembering what any of them said.  In fact, I had trouble
reconstructing who was on the panel.  All of which  illustrates  at  least  one
area  for  applying  cognitive modelling to AI:  investigating, and preventing,
the  process  whereby  researchers  forget  what  they  see  and  hear  at   AI
conferences.

  Fortunately  Jaime  Carbonell,  who  moderated  the panel, was kind enough to
supply a description of what transpired while I was lapsing in and  out  of  my
stupor:

  ``I started the discussion by noting several examples where work in cognitive
modelling  had  inspired  and  influenced  work in machine learning (e.g., Earl
Hunt's  work  on  concept  acquisition  helped  motivate   work   on   symbolic
descriptions  in  learning  over  earlier  neural  net  approaches),  and a bit
vice-versa.  The production systems paradigm emerged from the joint concerns of
both camps. Then I  asked  the  panelists  to  draw  from  their  own  work  to
substantiate or criticize the cross-fertilization hypothesis.

  ``Paul  Kline  (Texas  Instruments) presented the major result in his thesis:
Concept acquisition in humans is clearly not commutative with  respect  to  the
order  of  presentation  of  examples. This is important, as most of the recent
work in machine  learning  no  longer  assumes  commutativity  as  a  requisite
constraint  (e.g. learning by analogy is clearly governed by past knowledge and
experience -- what you  know  structures  what  you  learn  and  what  you  pay
attention to in new information).

  ``Janet  Kolodner  (Georgia  Institute  of  Technology)  argued  in  favor of
case-based reasoning in  expert  systems  design,  where  episodic  traces  and
generalizations   therefrom  may  constitute  the  primary  form  of  expertise
acquisition.  She argued in favor of using human memory structuring  principles
as a guiding criterion for modelling expertise and its acquisition.

  ``John Anderson (CMU) played the role of devil's advocate, saying "Why should
you  machine learning people handcuff yourselves by known restrictions on human
learning?" and sketched a case for separating the lines of research.

  ``Paul Rosenbloom (CMU) and Dedre Gentner (BBN) served as discussants in  the
panel  and  addressed  Anderson's  concerns,  mostly  refuting  the argument by
counterexamples and by suggesting that the only known existence proof of robust
learning behavior is to be found in humans and other  biological  systems,  and
therefore  ought to serve as inspiration for machine learning, rather than as a
constraint.  Anderson quickly  agreed,  since  he  didn't  really  believe  his
straw-man  position  anyway.  The  discussion  went on to conclude that problem
solving, memory organization, and learning are  inextricably  woven  phenomena,
and the study of each impacts strongly upon the others.''

5. Panel Discussion:  "Machine Learning -- Challenges of the 80's"
  At  one  late-night bull session, several of us were trying to figure out how
to spice up the final panel  discussion.    A  panel  whose  members  agree  on
everything  is  boring;  perhaps  a  panel discussion shouldn't be considered a
complete success unless it comes to blows.    What  issue  might  provoke  some
edifying disagreement?

  Pat  Langley  suggested distinguishing between "Darwinian" induction systems,
which generate hypotheses independent  of  the  environment  in  which  they're
tested,  and  "Lysenkoist" systems, where the hypothesis generator is sensitive
to the result of such tests.    Reincarnating  the  Darwin-Lysenko  controversy
would  have  served  to replace the now-passe' "declarative vs.  procedural" and
"neat vs.    scruffy"  controversies  as  a  source  of  much  meaningless  and
entertaining  debate, while adding a classy touch of history.  Unfortunately we
all chickened out, and the panelists found little to disagree about.  But  they
tried....

  Saul  Amarel  suggested  that  the term "analogy" be banned and replaced with
precise terms  denoting  the  processes  of  identifying  a  relevant  analogy,
importing  it  into  a  new  area,  assimilating  it,  and repairing it.  Jaime
Carbonell responded promisingly ("Oh come on, Saul..."),  but  eventually  they
degenerated  into  agreement.  My personal feeling is that a precise definition
is still premature and the field can benefit from looking for more patterns  of
reasoning  that  might  be called "analogy;" I suspect there are some important
ones not used in current analogy systems.

  While pondering  Saul's  provocative  stance  on  this  issue,  I  failed  to
concentrate fully on the research directions he next proposed.  Fortunately Tom
Dietterich filled me in.  Here's one of my favorites: ``Another fruitful avenue
for future research is to develop problem-solving environments in which experts
can be automatically observed while they solve problems.  In this way, programs
might  be able to capture expertise by "watching over the shoulder" of experts.
This is a good area for research on psychology  and  man/machine  interactions,
too.''    Tom Mitchell mentioned to me that he is planning to do something like
this in the VLSI domain.  It's hard to find a hotter combination of topics than
VLSI design and  machine  learning!  Saul  would  also  like  to  see  work  on
real-world  scientific  theory  formation  problems  in  areas like physics and
biology, and a new MetaDendral-like project.

  Donald Michie made a good try at generating  some  dissension  by  suggesting
that  the  next  Machine  Learning workshop should be restricted to papers that
report complete results, but apparently nobody was brave enough to disagree.

  Doug Lenat identified some sources of power for current and future success in
the field:

   - Synergy of learning programs

        * with humans: EURISKO is one example of a  cooperative  discovery
          system.    Learning  systems  will  want  fancy  front ends with
          natural language, visual, and non-verbal I/O; conversely,  fancy
          front ends will need to induce models of a session or user.

        * with  AI:  MetaDendral  illustrates a performance program with a
          learning component for improving itself.

        * with other learning  programs:  One  lesson  of  early  learning
          research is that no single general technique suffices by itself;
          progress requires combining them.

     One  way  to provide such synergy is to package learning methods into
     tools usable by other researchers and their  systems,  somewhat  like
     the  way  certain  program  analysis  methods have been packaged into
     Interlisp's Masterscope.  Doug plans to  package  EURISKO  in  usable
     form  and distribute it to the AI community in a year or so, which is
     great news.

   - Analogy

        * as a paradigm  for  knowledge  acquisition:  Help  automate  the
          find-copy-edit  technique  often  used  to construct new schemas
          manually by adapting existing ones.

        * as a technique for  suggesting  plausible  approaches  based  on
          similar  past problems: This will require a broad knowledge base
          of common sense and facts, along the lines  of  Alan  Kay's  new
          project at Atari to encode an encyclopedia [see IJCAI83].

   - Heuretics: The study of heuristics will both require and help produce
     a broad base of heuristics.

   - Representation:  Only  a  few  basic  representations  are now known;
     automatic  change  of  representation  will  require  the   kind   of
     self-modelling,  -monitoring, and -modifying systems discussed in the
     Cognitive Economy paper.

   - Parallelism: VLSI offers obvious potential.

   - Morphological analysis: There  are  other  natural  learning  systems
     besides  human  cognition, including the immune system and evolution;
     what can they teach us?

  During  the  panel  discussion,  Pat  Winston  observed  that  the   workshop
represented  a  healthy  balance  between  different  types  of  work,  such as
experimental and theoretical, analytic and empirical, basic and  applied,  etc.
To  which  Donald  Michie added "good and bad."  It might be mentioned that the
winner in the "Activities for Keeping Amused Between (and at  the  Expense  of)
Other  People's  Talks"  category  consisted  of exchanging nominations for the
"Worst Talk" category.  The overall quality of the  workshop  was  better  than
most   conferences,   but  there  was  intense  competition  in  this  category
nonetheless.

  Winston foresees a danger of success  in  machine  learning  leading  to  the
"Expert  Systems  Syndrome,"  with  reporters  and  venture capitalists getting
underfoot and interfering with scientific progress by tempting researchers away
from their work to fame and riches.  (Some of us would like to  know  where  we
can sign up for this.)  Pat also sees a great opportunity for supercomputers to
qualitatively  change  how  we think and do research, analogous to the way fast
computers liberated early work on vision from the limitations of running 3-by-3
operators over 256-by-256 arrays.

  Ryszard Michalski, panel moderator and intrepid Workshop Chairman, called for
the development of a body of theory  to  help  identify  isomorphic  ideas  and
establish a uniform terminology for them.  He also emphasized the importance of
general  methods that can be applied to the problem of knowledge acquisition in
expert systems.

  In response to repeated pleas to extend MetaDendral, Bruce  Buchanan  pointed
out  that  he  and his co-workers quit working on Dendral and MetaDendral after
several years largely because they were just plain tired of it.    Having  made
this candid admission, he immediately left town.

  One  question  that  I  raised  in bull sessions and Pat Langley posed to the
panel is whether the time is yet ripe for a large-scale machine learning effort
analogous to the ARPA Speech Understanding Project.  This question does not yet
have a clear answer.  On the one hand, we are running up against limitations on
the kind of learning attainable in a one-Ph.D.-student project.  On  the  other
hand,  integrating multiple learning methods in a single system would appear to
require much tighter coupling than was  necessary,  say,  between  the  various
knowledge  source  modules in Hearsay-II.  In particular, learning methods tend
to be representation-specific.  The development of a learning system  employing
multiple  evolving methods, each with its own evolving representation(s), would
be very difficult to manage.

  Moreover, much thought must be given to the goals of such  a  project.    The
ARPA  Speech  Understanding  Study Group formulated clear goals for the kind of
system to be developed.  What would be appropriate goals for a learning system?
They would have to be defined so as to preclude simply programming in the skill
to be learned.  What would be gained by building a large learning system?    As
Herb  Simon  points  out  in  his  provocative chapter in Machine Learning, the
knowledge transfer function that learning serves for people  can  be  fulfilled
much  more  easily  in  computers by copying code.  If the system learns things
that people already know, why would this be better than programming them in  by
hand?    If the system is supposed to discover things that people don't already
know,  how can one set realistic goals for its performance?  Although many such
devil's-advocate  questions  remain,  I  find  the  problem  of   designing   a
Hearsay-style  learning  system  a  useful  mental  exercise for thinking about
research issues and strategies.

6. A Bit of Perspective
  No overview would be complete without a picture that tries to put  everything
in perspective:


     -------------> generalizations ------------
    |                                           |
    |                                           |
INDUCTION                                  COMPILATION
(Knowledge Discovery)                   (Knowledge Transformation)
    |                                           |
    |                                           v
examples ----------- ANALOGY  --------> specialized solutions
                (Knowledge Transfer)

     Figure 6-1:   The Learning Triangle:  Induction, Analogy, Compilation

  Of  course  the distinction between these three forms of learning breaks down
under close  examination.    For  example,  consider  LEX2:    does  it  induce
heuristics  from  examples, guided by its definition of "heuristic," or does it
compile that definition into special cases, guided by examples?

7. Looking to the Future
  The 1983 International Workshop on Machine Learning felt like history in  the
making.  What could be a more exciting endeavor than getting machines to learn?
As  we  gathered  for  the  official  workshop  photograph, I thought of Pamela
McCorduck's Machines Who Think, and wondered if  twenty  years  from  now  this
gathering  might  not  seem as significant as some of those described there.  I
felt privileged to be part of it.

  In the meantime, there are lessons to be absorbed, and work to be done....

  One lesson of the workshop is the importance of incremental learning methods.
As one speaker observed, you can only learn things  you  already  almost  know.
The  most  robust  learning  can  be  expected  from systems that improve their
knowledge gradually, building on what they have already learned, and using  new
data  to  repair deficiencies and improve performance, whether it be in analogy
[Burstein, Carbonell],  induction  [Amarel,  Dietterich  &  Buchanan,  Holland,
Lebowitz, Mitchell], or knowledge transformation [Rosenbloom, Anderson, Lenat].
This  theme  reflects  the  related  idea  of  learning  and problem-solving as
inherent parts of each other [Carbonell, Mitchell, Rosenbloom].

  Of course not everyone saw things the way I do.  Here's Tom Dietterich again:
``I was surprised that you summarized the workshop in terms of an "incremental"
theme.  I don't think incremental-ness  is  particularly  important--especially
for  expert  system  work.    Quinlan gets his noise tolerance by training on a
whole batch of examples at once.  I  would  have  summarized  the  workshop  by
saying  that the key theme was the move away from syntax.  Hardly anyone talked
about "matching" and syntactic generalization.  The whole concern was with  the
semantic  justifications  for  some  learned concept:  All of the analogy folks
were doing this, as were Mitchell, DeJong, and Dietterich and  Buchanan.    The
most  interesting  point that was made, I thought, was Mitchell's point that we
need to look at cases where we can provide only partial justification  for  the
generalizations.      DeJong's   "causal   completeness"  is  too  stringent  a
requirement.''

  Second, the importance of making knowledge and goals explicit is  illustrated
by  the progress that can be made when a learner has access to a description of
what it is trying to acquire, whether it is a criterion  for  the  form  of  an
inductive hypothesis [Michalski et al] or a formal characterization of the kind
of heuristic to be learned for guiding a search [Mitchell et al].

  Third, as Doug Lenat pointed out, continued progress in learning will require
integrating  multiple methods.  In particular, we need ways to combine analytic
and empirical techniques to escape from their limitations when used alone.

  Finally, I think we can extrapolate from the experience of AI  in  the  '60's
and '70's to set a useful direction for machine learning research in the '80's.
Briefly,  in  AI the '60's taught us that certain general methods exist and can
produce some results, while the '70's  showed  that  large  amounts  of  domain
knowledge  are  required to achieve powerful performance.  The same can be said
for learning.  I consider a primary goal  of  AI  in  the  '80's,  perhaps  the
primary goal, to be the development of general techniques for exploiting domain
knowledge.  One such technique is the ability to learn, which itself has proved
to require large amounts of domain knowledge.  Whether we approach this goal by
building  domain-specific  learners  (e.g.  MetaDendral)  and then generalizing
their methods (e.g. version space induction), or  by  attempting  to  formulate
general methods more directly, we should keep in mind that a general and robust
intelligence  will  require  the ability to learn from its experience and apply
its knowledge and methods to problems in a variety of domains.

  A well-placed source has informed me that plans are already afoot to  produce
a  successor  to  the Machine Learning book, using the 1983 workshop papers and
discussions as raw material.  In the meantime, a small number of extra
copies of the proceedings can be acquired (until they run out) for
$27.88 ($25 + $2.88 postage in the U.S., more elsewhere); make checks
payable to the University of Illinois.
Order from

     June Wingler
     University of Illinois at Urbana-Champaign
     Department of Computer Science
     1304 W. Springfield Avenue
     Urbana, IL 61801

  There are tentative plans for a similar  workshop  next  summer  at  Rutgers.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:03
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #25
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 25

Today's Topics:
  AAAI Preliminary Schedule
----------------------------------------------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

CONFERENCE SCHEDULE

SUNDAY, AUGUST 21
_________________

5:30-7:00 CONFERENCE, TUTORIAL, AND TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION

MONDAY, AUGUST 22 - FRIDAY, AUGUST 26
_____________________________________

9:00-5:00 AAAI-83 R & D EXHIBIT PROGRAM 

WEDNESDAY, AUGUST 24 - FRIDAY, AUGUST 26
--------------------------------------

8:00 p.m.- SMALL GROUP MEETINGS : please sign up for rooms at the information
           desk in the Concourse Lobby.

SUNDAY, AUGUST 21 - THURSDAY, AUGUST 23
----------------------------------------

7:00 p.m. FREDKIN- AAAI COMPUTER CHESS TOURNAMENT

Each night at 7:00 p.m., the Fredkin-AAAI Tournament will demonstrate
a form of the Turing Test: human players will not know whether they are
playing a machine or another human, each being equally likely.  Human
players will be rewarded primarily for winning, but secondarily for
guessing the genus of their opponent.  The audience will also be kept
in the dark, and there should be some fun in guessing who is who as the
game progresses.

There will be three games per night: two will pit a human being against
a computer, and the third will pit two human players against each
other.  The computer systems' names are Belle and Nuchess.

TUTORIAL PROGRAM

MONDAY, AUGUST 22 - TUESDAY, AUGUST 23
______________________________________

8:00-5:00 TUTORIAL REGISTRATION in the CONCOURSE LOBBY, CONCOURSE LEVEL

MONDAY, AUGUST 22
_________________

9:00-1:00- TUTORIAL NUMBER 1: AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE
			Dr. Eugene Charniak, Brown University
			    
9:00-1:00  TUTORIAL NUMBER 2: AN INTRODUCTION TO ROBOTICS
			Dr. Richard Paul, Purdue University

2:00-6:00  TUTORIAL NUMBER 3: NATURAL LANGUAGE PROCESSING
		        Dr. Gary G. Hendrix, SYMANTEC, Inc.

2:00-6:00  TUTORIAL NUMBER 4: EXPERT SYSTEMS - PART 1 - FUNDAMENTALS
			Drs. Randall Davis and Charles Rich, MIT

TUESDAY, AUGUST 23
__________________

9:00-1:00 TUTORIAL NUMBER 5: EXPERT SYSTEMS - PART 2 - APPLICATION AREAS
			Drs. Randall Davis and Charles Rich, MIT

9:00-1:00 TUTORIAL NUMBER 6: AI PROGRAMMING TECHNOLOGY - LANGUAGES AND MACHINES
			Dr. Howard Shrobe, MIT and Symbolics
		        Dr. Larry Masinter, Xerox Palo Alto Research Center
				
MONDAY, AUGUST 22
_________________

8:00-5:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

TUESDAY, AUGUST 23
__________________

8:00-2:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

2:00-9:30 TECHNOLOGY TRANSFER SYMPOSIUM (6-7:30 dinner break)

TECHNICAL WORKSHOPS
___________________

MONDAY, AUGUST 22 AND TUESDAY, AUGUST 23
________________________________________

9:00-5:00 SENSORS AND ALGORITHMS FOR 3-D VISION Dr. Azriel Rosenfeld, Maryland

9:00-5:00 PLANNING organized by Dr. Robert Wilensky, Berkeley

HOSPITALITY
___________

MONDAY, AUGUST 22
_________________

6:00-8:00 RECEPTION (Welcome!) in the CONCOURSE EXHIBIT HALL, CONCOURSE LEVEL

TUESDAY, AUGUST 23
__________________

5:30-7:00 CONFERENCE REGISTRATION RECEPTION; INTERNATIONAL TERRACE

WEDNESDAY, AUGUST 24
____________________

6:00-8:00 MAIN CONFERENCE RECEPTION (NO HOST BAR); INTERNATIONAL TERRACE

THURSDAY, AUGUST 25
___________________

6:00-7:00 BOARDING BUSES FOR GALA at the T STREET ENTRANCE, TERRACE LEVEL
				
7:00-10:30 GALA RECEPTION AND ENTERTAINMENT AT THE CAPITOL CHILDREN'S MUSEUM 
           (NO HOST BAR) *** RESERVATIONS ONLY ***
				
FRIDAY, AUGUST 26
_________________

6:00-8:00 HAIL AND FAREWELL in the INTERNATIONAL BALLROOM EAST

TECHNICAL CONFERENCE SCHEDULE
_____________________________	

* PLEASE NOTE: Depending on the size of attendance, closed circuit T.V.
will be available Wednesday, August 24 thru Friday, August 26, for
particular sessions (that is, those sessions scheduled for the
International Ballroom Center and West).  The closed circuit
T.V. rooms will be the Georgetown Room, Concourse Level, and the
Back Terrace, Terrace Level.

MONDAY, AUGUST 22
_________________

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

TUESDAY, AUGUST 23
__________________

8:00-7:00 TECHNICAL CONFERENCE REGISTRATION 

7:00 p.m. SPECIAL SESSION dedicated to Dr. Victor Lesser, UMASS

WEDNESDAY, AUGUST 24
____________________

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION I
______________________________________________________

9:00-9:20 AN OVERVIEW OF META-LEVEL ARCHITECTURE Michael Genesereth, Stanford 
				
9:20-9:40 FINDING ALL OF THE SOLUTIONS TO A PROBLEM David Smith, Stanford 
				
9:40-10:00 COMMUNICATION & INTERACTION IN MULTI-AGENT PLANNING
           Michael Georgeff, SRI

10:00-10:20 DATA DEPENDENCIES ON INEQUALITIES Drew McDermott, Yale 

10:20-10:40 KRYPTON: INTEGRATING TERMINOLOGY & ASSERTION 
            Ronald Brachman and Hector Levesque, Fairchild AI Laboratory
            Richard Fikes, Xerox PARC

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION I
_____________________________

9:00-9:20 THREE DIMENSIONS OF DESIGN DEVELOPMENT Neil M. Goldman, USC/ISI

9:20-9:40 SIX PROBLEMS FOR STORY UNDERSTANDERS 	Peter Norvig, Berkeley

9:40-10:00 PLANNING AND GOAL INTERACTION: THE USE OF PAST SOLUTIONS IN PRESENT
           SITUATIONS Kristian Hammond, Yale 

10:00-10:20 A MODEL OF INCREMENTAL LEARNING BY INCREMENTAL AND ANALOGICAL 
            REASONING & DEBUGGING Mark Burstein, Yale 

10:20-10:40 MODELLING HUMAN KNOWLEDGE OF ROUTES: PARTIAL KNOWLEDGE AND 
            INDIVIDUAL VARIATION Benjamin Kuipers, Tufts 

in the INTERNATIONAL BALLROOM WEST

VISION AND ROBOTICS SESSION I
_____________________________

9:00-9:20 A VARIATIONAL APPROACH TO EDGE DETECTION John Canny, MIT

9:20-9:40 SURFACE CONSTRAINTS FROM LINEAR EXTENTS John Kender, Columbia

9:40-10:00 AN ITERATIVE METHOD FOR RECONSTRUCTING CONVEX POLYHEDRA FROM 
           EXTENDED GAUSSIAN IMAGES James J. Little, U. of British Columbia

10:00-10:20 TWO RESULTS CONCERNING AMBIGUITY IN SHAPE FROM SHADING
            M.J. Brooks, Flinders University of South Australia

In the INTERNATIONAL BALLROOM EAST


10:40-11:00 BREAK

11:00-12:30 PANEL: LOGIC PROGRAMMING
            Howard Shrobe, Organizer, MIT
            Michael Genesereth, Stanford,
            J. Alan Robinson, David Warren, SRI International

In the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL SIGART BUSINESS MEETING in the HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE: THE STATE OF THE ART IN COMPUTER LEARNING
          Douglas Lenat, Stanford in the INTERNATIONAL BALLROOM CENTER

3:10-3:30 BREAK

NATURAL LANGUAGE SESSION I
__________________________

3:30-3:50 RECURSION IN TEXT AND ITS USE IN LANGUAGE GENERATION 
           Kathleen McKeown, Columbia

3:50-4:10 RELAXATION IN REFERENCE Bradley Goodman, BBN

4:10-4:30 TRACKING USER GOALS IN AN INFORMATION-SEEKING ENVIRONMENT 
          M. Sandra Carberry, Delaware

4:30-4:50 REASONS FOR BELIEFS IN UNDERSTANDING: APPLICATIONS OF NON-MONOTONIC
          DEPENDENCIES TO STORY PROCESSING Paul O'Rorke, Illinois

4:50-5:10 RESEARCHER: AN OVERVIEW Michael Lebowitz, Columbia 
	
in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION I
__________________

3:30-3:50 EPISODIC LEARNING Dennis Kibler and Bruce Porter, California-Irvine

3:50-4:10 HUMAN PROCEDURAL SKILL ACQUISITION: THEORY, MODEL AND PSYCHOLOGICAL
          VALIDATION Kurt VanLehn, Xerox PARC

4:10-4:30 A PRODUCTION SYSTEM FOR LEARNING FROM AN EXPERT
          D. Paul Benjamin and Malcolm Harrison, Courant Institute, NYU

4:30-4:50 OPERATOR DECOMPOSABILITY: A NEW TYPE OF PROBLEM STRUCTURE 
          Richard Korf, CMU

4:50-5:10 SCHEMA SELECTION AND STOCHASTIC INFERENCE IN MODULAR ENVIRONMENTS
          Paul Smolensky, UCSD

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION I
------------------------

3:30-3:50 THE DESIGN OF A LEGAL ANALYSIS PROGRAM Anne v.d.L. Gardner, Stanford

3:50-4:10 THE ADVANTAGES OF ABSTRACT CONTROL KNOWLEDGE IN EXPERT SYSTEM DESIGN
          William J. Clancey, Stanford 

4:10-4:30 THE GIST BEHAVIOR EXPLAINER William Swartout, USC/ISI

4:30-4:50 A COMPARATIVE STUDY OF CONTROL STRATEGIES FOR EXPERT SYSTEMS: AGE 
          IMPLEMENTATION OF THE THREE VARIATIONS OF PUFF 
          Nelleke Aiello, Stanford 

4:50-5:10 A RULE-BASED APPROACH TO INFORMATION RETRIEVAL: SOME RESULTS AND 
          COMMENTS Richard Tong, Daniel Shapiro, Brian McCune & Jeffrey Dean,
          Advanced Information & Decision Systems

5:10-5:30 EXPERT SYSTEM CONSULTATION CONTROL STRATEGY James Slagle and Michael
          Gaynor, Naval Research Laboratory

in the INTERNATIONAL BALLROOM CENTER

7:00 P.M. AAAI EXECUTIVE COMMITTEE MEETING 

THURSDAY, AUGUST 25
___________________

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION in the CONCOURSE LOBBY

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION II
_______________________________________________________

9:00-9:20 THE DENOTATIONAL SEMANTICS OF HORN CLAUSES AS A PRODUCTION SYSTEM
          J-L. Lassez and M. Maher, University of Melbourne

9:20-9:40 THEORY RESOLUTION: BUILDING IN NONEQUATIONAL THEORIES
          Mark Stickel, SRI International

9:40-10:00 IMPROVING THE EXPRESSIVENESS OF MANY SORTED LOGIC 
           Anthony Cohn, University of Warwick

10:00-10:20 THE BAYESIAN BASIS OF COMMON SENSE MEDICAL DIAGNOSIS 
            Eugene Charniak, Brown

10:20-10:40 ANALYZING THE ROLES OF DESCRIPTIONS AND ACTIONS IN OPEN SYSTEMS
            Carl Hewitt and Peter DeJong, MIT

in the INTERNATIONAL BALLROOM CENTER

NATURAL LANGUAGE SESSION II
___________________________

9:00-9:20 PHONOTACTIC AND LEXICAL CONSTRAINTS IN SPEECH RECOGNITION 
          Daniel P. Huttenlocher and Victor W. Zue, MIT

9:20-9:40 DETERMINISTIC AND BOTTOM-UP PARSING IN PROLOG 
          Edward Stabler, Jr., University of Western Ontario

9:40-10:00 MCHART: A FLEXIBLE, MODULAR CHART PARSING SYSTEM 
           Henry Thompson, Edinburgh

10:00-10:20 INFERENCE-DRIVEN SEMANTIC ANALYSIS Martha Stone Palmer, Penn & SDC

10:20-10:40 MAPPING BETWEEN SEMANTIC REPRESENTATIONS USING HORN CLAUSES
	    Ralph M. Weischedel, Delaware

in the INTERNATIONAL BALLROOM WEST

SEARCH SESSION I
________________

9:00-9:20 A THEORY OF GAME TREES Chun-Hung Tzeng, Paul Purdom, Jr., Indiana

9:20-9:40 OPTIMALITY OF A* REVISITED Rina Dechter and Judea Pearl, UCLA

9:40-10:00 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT SATISFACTION)
           PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES 
           Bernard Nudel, Rutgers

10:00-10:20 THE COMPOSITE DECISION PROCESS: A UNIFYING FORMULATION FOR 
            HEURISTIC SEARCH, DYNAMIC PROGRAMMING AND BRANCH & BOUND PROCEDURES
            Vipin Kumar, Texas & Laveen Kanal, Maryland

10:20-10:40 NON-MINIMAX SEARCH STRATEGIES FOR USE AGAINST FALLIBLE OPPONENTS
            Andrew Louis Reibman and Bruce Ballard, Duke 

in the INTERNATIONAL BALLROOM EAST

10:40-11:00 BREAK

11:00-12:30 AAAI PRESIDENTIAL ADDRESS Nils Nilsson, SRI International
            ANNOUNCEMENT OF THE PUBLISHER'S PRIZE
            AAAI COMMENDATION FOR EXCELLENCE to MARVIN DENICOFF, Office of 
            Naval Research

in the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL AAAI BUSINESS MEETING in the INTERNATIONAL BALLROOM CENTER

2:00-3:10 THE GREAT DEBATE: METHODOLOGIES FOR AI RESEARCH 
          John McCarthy, Stanford vs. Roger Schank, Yale 
				 	
in the INTERNATIONAL BALLROOM CENTER


3:10-3:30 BREAK

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION III
-------------------------------------------------------

3:30-3:50 PROVING THE CORRECTNESS OF DIGITAL HARDWARE DESIGNS
          Harry G. Barrow, Fairchild AI Laboratory

3:50-4:10 A CHESS PROGRAM THAT CHUNKS Murray Campbell & Hans Berliner, CMU

4:10-4:30 THE DECOMPOSITION OF A LARGE DOMAIN: REASONING ABOUT MACHINES
          Craig Stanfill, Maryland

4:30-4:50 REASONING ABOUT STATE FROM CAUSATION AND TIME IN A MEDICAL DOMAIN
          William Long, MIT

4:50-5:10 THE USE OF QUALITATIVE AND QUANTITATIVE SIMULATIONS Reid Simmons, MIT

5:10-5:30 AN AUTOMATIC ALGORITHM DESIGNER: AN INITIAL IMPLEMENTATION
          Elaine Kant and Allen Newell, CMU

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION II
___________________

3:30-3:50 WHY AM AND EURISKO APPEAR TO WORK
          Douglas Lenat, Stanford, and John Seely Brown, Xerox PARC

3:50-4:10 LEARNING PHYSICAL DESCRIPTIONS FROM FUNCTIONAL DEFINITIONS, EXAMPLES,
          AND PRECEDENTS Patrick Winston & Boris Katz, MIT, Thomas Binford & 
          Michael Lowry, Stanford 

4:10-4:30 A PROBLEM-SOLVER FOR MAKING ADVICE OPERATIONAL Jack Mostow, USC/ISI

4:30-4:50 GENERATING HYPOTHESES TO EXPLAIN PREDICTION FAILURES 
          Steven Salzberg, Yale 

4:50-5:10 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT RECOGNITION
          Richard Keller, Rutgers 

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION II
_________________________

3:30-3:50 DIAGNOSIS VIA CAUSAL REASONING: PATHS OF INTERACTION AND THE 
          LOCALITY PRINCIPLE Randall Davis, MIT

3:50-4:10 A NEW INFERENCE METHOD FOR FRAME-BASED EXPERT SYSTEMS 
          James Reggia, Dana Nau, Pearl Wang, Maryland

4:10-4:30 ANALYSIS OF PHYSIOLOGICAL BEHAVIOR USING A CAUSAL MODEL BASED ON 
          FIRST PRINCIPLES John C. Kunz, Stanford 

4:30-4:50 AN INTELLIGENT AID FOR CIRCUIT REDESIGN Tom Mitchell, Louis 
          Steinberg, Smadar Kedar-Cabelli, Van Kelly, Jeffrey Shulman, 
          Timothy Weinrich, Rutgers 

4:50-5:10 TALIB: AN IC LAYOUT DESIGN ASSISTANT Jin Kim and John McDermott, CMU

in the INTERNATIONAL BALLROOM CENTER

FRIDAY, AUGUST 26
_________________

KNOWLEDGE REPRESENTATION & PROBLEM SOLVING SESSION IV
------------------------------------------------------

9:00-9:20 ON INHERITANCE HIERARCHIES WITH EXCEPTIONS David W. Etherington, 
          University of British Columbia, Raymond Reiter, UBC and Rutgers

9:20-9:40 DEFAULT REASONING AS LIKELIHOOD REASONING Elaine Rich, Texas

9:40-10:00 DEFAULT REASONING USING MONOTONIC LOGIC: A MODEST PROPOSAL
           Jane Terry Nutter, Tulane 

10:00-10:20 A THEOREM-PROVER FOR A DECIDABLE SUBSET OF DEFAULT LOGIC
            Philippe Besnard, Rene Quiniou, & Patrice Quinton, IRISA-INRIA Rennes

10:20-10:40 DERIVATIONAL ANALOGY AND ITS ROLE IN PROBLEM SOLVING
            Jaime Carbonell, CMU

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION II
------------------------------

9:00-9:20 STRATEGIST: A PROGRAM THAT MODELS STRATEGY-DRIVEN AND CONTENT-DRIVEN
          INFERENCE BEHAVIOR Richard Granger, Jennifer Holbrook, and
          Kurt Eiselt, California-Irvine

9:20-9:40 LEARNING OPERATOR SEMANTICS BY ANALOGY
Sarah Douglas, Stanford & Xerox PARC, Thomas Moran, Xerox PARC

9:40-10:00 AN ANALYSIS OF A WELFARE ELIGIBILITY DETERMINATION INTERVIEW: 
           A PLANNING APPROACH 	Eswaran Subrahmanian, CMU

in the INTERNATIONAL BALLROOM EAST

VISION AND ROBOTICS SESSION II
------------------------------

9:00-9:20 PERCEPTUAL ORGANIZATION AS A BASIS FOR VISUAL RECOGNITION
          David Lowe and Thomas Binford, Stanford

9:20-9:40 MODEL BASED INTERPRETATION OF RANGE IMAGERY
          Darwin Kuan and Robert Drazovich, AI&DS		

9:40-10:00 A DESIGN METHOD FOR RELAXATION LABELING APPLICATIONS
           Robert Hummel, Courant Institute, NYU

10:00-10:20 APPROPRIATE LENGTHS BETWEEN PHALANGES OF MULTI JOINTED FINGERS FOR
            STABLE GRASPING Tokuji Okada and Takeo Kanade, CMU

10:20-10:40 FIND-PATH FOR A PUMA-CLASS ROBOT Rodney Brooks, MIT

in the INTERNATIONAL BALLROOM WEST

10:40-11:00 BREAK

11:00-12:30 PANEL: ADVANCED HARDWARE ARCHITECTURES FOR ARTIFICIAL INTELLIGENCE
	    Allen Newell, Organizer, CMU

in the INTERNATIONAL BALLROOM 

12:30-2:00 LUNCH BREAK
           AAAI SUBGROUP: AI IN MEDICINE MEMBERSHIP MEETING in HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE - THE STATE OF THE ART IN ROBOTICS Michael Brady, MIT

in the INTERNATIONAL BALLROOM

3:10-3:30 BREAK

SEARCH SESSION II 
-----------------

3:30-3:50 INTELLIGENT CONTROL USING INTEGRITY CONSTRAINTS 
          Madhur Kohli and Jack Minker, Maryland

3:50-4:10 PREDICTING THE PERFORMANCE OF DISTRIBUTED KNOWLEDGE-BASED SYSTEMS:
          MODELLING APPROACH Jasmina Pavlin, UMASS

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION III 
--------------------

3:30-3:50 LEARNING: THE CONSTRUCTION OF A POSTERIORI KNOWLEDGE STRUCTURES
          Paul Scott, University of Michigan

3:50-4:10 A DOUBLY LAYERED, GENETIC PENETRANCE LEARNING SYSTEM
          Larry Rendell, University of Guelph

4:10-4:30 AN ANALYSIS OF GENETIC-BASED PATTERN TRACKING AND COGNITIVE-BASED 
          COMPONENT TRACKING MODELS OF ADAPTATION
          Elaine Pettit and Kathleen Swigger, North Texas State University

in the INTERNATIONAL BALLROOM CENTER

SUPPORT HARDWARE AND SOFTWARE SESSION
-------------------------------------

3:30-3:50 MASSIVELY PARALLEL ARCHITECTURES FOR AI: NETL, THISTLE, AND BOLTZMANN
          MACHINES Scott Fahlman, Geoffrey Hinton, CMU, Terrence Sejnowski, JHU

3:50-4:10 YAPS: A PRODUCTION RULE SYSTEM MEETS OBJECTS 
          Elizabeth Allen, Maryland

4:10-4:30 SPECIFICATION-BASED COMPUTING ENVIRONMENTS Robert Balzer, David Dyer,
          Mathew Morgenstern, and Robert Neches, USC/ISI

in the INTERNATIONAL BALLROOM WEST

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:04
Date: Monday, July 25, 1983 10:15PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #26
To: AIList@SRI-AI


AIList Digest            Tuesday, 26 Jul 1983      Volume 1 : Issue 26

Today's Topics:
  AAAI-83 Schedule on USENet
  Roommates Wanted for AAAI
  Artificial Intelligence Info for kids
  Preparing Govmt Report on Canadian AI Research
  Definitions (2)
  Expectations of Expert System Technology
  Portable and More Efficient Lisps (3)
----------------------------------------------------------------------

Date: 24 Jul 83 20:20:09-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: AAAI-83 sched. avail. on USENet
Article-I.D.: utcsrgv.1828

I have a somewhat compressed, but still large (18052 ch.), on-line
version of the AAAI-83 schedule that I'm willing to mail to USENet
people on request.
   peter rowley, U. Toronto CSRG
   {cornell,watmath,ihnp4,floyd,allegra,utzoo,uw-beaver}!utcsrgv!peterr
 or {cwruecmp,duke,linus,lsuc,research}!utzoo!utcsrgv!peterr

------------------------------

Date: 22 Jul 83 10:34:11-PDT (Fri)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: Room-mates wanted for AAAI
Article-I.D.: hcr.451

A friend (Mike Rutenberg) and I are going to AAAI at the end of
August.  We'd like to find a couple of people to share a room with --
both to meet interesting people and to save some money.  If you're
interested, please let me know by mail.

Also, if you have any other useful hints (like cheap transportation
from Ontario or better places to stay than the Hilton), please drop me
a line.

Thanks for your help.
        --ravi

------------------------------

Date: 24 Jul 1983 0727-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20>
Subject: Artificial Intelligence Info for kids

               [Reprinted from the UTexas-20 BBoard.]

I received a letter from an 8th grader in Houston who wants to do a
science fair project on Artificial Intelligence.

        "...I plan to explain and demonstrate this topic with
         my computer and a program I made on it concerning this
         topic.  Any information you could send for my research
         would be appreciated."

If anybody knows of any source of AI information suitable for Jr. High
School level (good magazine articles written for the layman, etc.)
please let me know.  I have come across such stuff every so often, but
I'm having trouble remembering where.

Thanks,

Clive

------------------------------

Date: 23 Jul 83 16:30:27-PDT (Sat)
From: decvax!linus!utzoo!utcsrgv!zenon @ Ucb-Vax
Subject: Preparing Govmt report on Canadian AI research
Article-I.D.: utcsrgv.1823

A consortium of 4 groups has been awarded a contract by the Secretary
of State to prepare a report on what Canada ought to be doing to
support R & D in artificial intelligence in the next 5-10 years.  The
groups are Quasar Systems of Ottawa, Nordicity Group of Toronto,
Socioscope of Ottawa, and a group of academic AI people (Pylyshyn,
Mackworth, Skuce, Kittredge, Isabel, with consultants Tsotsos,
Mylopoulos, Zucker, Cercone).  Because the client's primary interest
is in language (esp. translation) the report will concentrate on that
aspect, though we plan to cover all of AI on the grounds that it's all
of a piece.  The contract period is July-Dec 1983.  I am coordinating
the technical part of the report.

We are seeking input from all interested parties.  I will be touring
Canada, probably in September, and would like to talk to anyone who
has an AI lab and some ideas about where Canada ought to focus.  I am
especially eager to receive input from, and information about,
what's happening in Canadian industry.

I welcome all suggestions and invitations.  This is the first AI study
commissioned by a federal agency, and we should take it as an
opportunity to give them a good cross-section of views.

Zenon Pylyshyn, Centre for Cognitive Science, University of Western
Ontario, London, Ontario, N6A 5C2.  (519)-679-2461

utcsrgv!zenon or on the ARPANET Pylyshyn@CMU-CS-C

------------------------------

Date: Fri 22 Jul 83 09:32:16-EDT
From: MASON@CMU-CS-C.ARPA
Subject: Re: definition of robot

I think the definition of robot is a little too broad.  I've long been
reconciled to definitions which include, for instance,
cam-programmable sewing machines, but this new definition even
includes pistols.  (An input signal, trigger pressure, is processed
mechanically to actuate a mechanical device, the bullet.)  Of course,
if the NRA decided to lobby for robots ...

------------------------------

Date: Fri 22 Jul 83 09:22:54-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definitions

Here are a few definitions taken from a Teknowledge/CEI ad:

  Artificial Intelligence
    That subfield of Computer Science which is concerned with
    symbolic reasoning and problem solving by computer.

  Knowledge Engineering
    The engineering discipline whereby knowledge is integrated
    into computer systems in order to solve complex problems
    normally required [sic] in a high level of human expertise.

  Knowledge/Expert Systems
    Computer systems that embody knowledge including inexact,
    heuristic and subjective knowledge; the results of knowledge
    engineering.

  Knowledge Representation
    A formalism for representing facts and rules about a subject
    or specialty.

  Knowledge Base
    A base of information encoded in a knowledge representation
    for a particular application.

  Inference Technique
    A methodology for reasoning about information in knowledge
    representation [sic] and drawing conclusions from that knowledge.

  Task Domains
    Application areas for knowledge systems such as analysis of
    oil well drilling problems or identification of computer
    system failures.

  Heuristics
    The informal, judgmental knowledge of an application area
    that constitutes the ``rules of good judgement'' in the field.
    Heuristics also encompass the knowledge of how to solve problems
    efficiently and effectively, how to plan steps in solving
    a complex problem, how to improve performance, and so forth.

  Production Rules
    A widely-used knowledge representation in which knowledge
    is formalized into ``rules'' containing an ``IF'' part and
    a ``THEN'' part (also called a condition and an action).
    The knowledge represented by the production rule is applicable
    to a line of reasoning if the IF part of the rule is satisfied:
    consequently, the THEN part can be concluded or its
    problem-solving action taken.
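
  [The production-rule scheme just described can be sketched in a few
  lines of code.  What follows is a hypothetical forward-chaining
  interpreter written purely for illustration; the function name, the
  rules, and the facts are invented, and do not come from any system
  mentioned in this digest. -- Ed.]

```python
# A minimal forward-chaining production-rule interpreter, illustrating
# the IF/THEN ("condition"/"action") scheme described above.  The rule
# base and facts below are invented for the example.

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose IF part is satisfied by the
    current facts, concluding its THEN part, until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            # IF part: every condition fact must already be known.
            if condition <= facts and action not in facts:
                facts.add(action)   # THEN part: conclude the action.
                changed = True
    return facts

# Illustrative knowledge base: two production rules for a toy diagnosis.
rules = [
    (frozenset({"engine cranks", "engine will not start"}),
     "fuel or spark fault"),
    (frozenset({"fuel or spark fault", "fuel gauge reads empty"}),
     "out of fuel"),
]

print(forward_chain({"engine cranks", "engine will not start",
                     "fuel gauge reads empty"}, rules))
```

  The first rule's conclusion satisfies the second rule's IF part, so
  both fire in turn: this chaining of satisfied conditions is the "line
  of reasoning" the definition refers to.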

                                        -- Ken Laws

------------------------------

Date: 24 Jul 83 1:41:35-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: Expectations of expert system technology
Article-I.D.: utcsrgv.1824

>From a recent headhunting flyer sent to some AAAI members:

"We have been retained by a major Financial Institution, located in
New York City.  They are interested in building the support staff for
their money market traders and are looking for qualified candidates
for the following positions:

    A Senior AI Researcher who has experience in knowledge rep'n
    and expert systems.  The ideal candidate would have a
    graduate degree in CS - AI with a Psychology (particularly
    cognitive processes), Cultural Anthropology, or comparable
    background.  This person will start by being a consultant in
    Human Factors and would interact between the Traders and the
    Systems they use.  Two new Xerox 1100 computers have been
    purchased and experience in LISP programming is necessary
    (with INTERLISP-D preferred).  This person will have their
    own personal LISP machine.  The goal of this position will
    be to analyze how Traders think and to build trading support
    (expert) systems geared to the individual Trader's style."

Two other job descriptions are given for the same project, for an
economist and an MBA with CS (database, communications, and systems)
and Operations Research background.

The fact that the co. would buy the 1100's without consulting their
future user, and the tone of the description, prompt me to wonder if
the co. is treating expert system technology as an engineering
discipline which can produce results in relatively short order
rather than the experimental field it appears to be.  Particularly
troubling is the problem domain for this system--I would expect such
traders to make extensive use of knowledge about politics and economic
policy on a number of levels, not easy knowledge to represent.

I'm not an expert systems builder by any means and may be
underestimating the technology...  does anyone think this co. is not
expecting too much?  (Replies to the net, please)

[The company should definitely get copies of

  J.L. Stansfield, COMEX: A Support System for a Commodities Analyst,
  MIT AIM-423, July 1977.

  J.L. Stansfield, Conclusions from the Commodity Expert Project,
  MIT AIM-601, (AD-A097-854), Nov. 1980.

The latter, I hear, documents the author's experience with large,
incomplete databases of unreliable facts about a complex world.
It must be one of the few examples of an academic research project
that could not claim success.  -- KIL]

------------------------------

Date: Mon 25 Jul 83 02:45:51-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Re: Portable and More Efficient Lisps

        What I wish to generate is a discussion of the features of
the various LISPs which provide a nice/efficient/(other standard
virtues) environment for computer-aided intellectual tasks (such as
AI, CAD, etc.).
        For example, quite a lot of the work that I have been involved
with recently required that, from within a LISP environment, I
generate line drawings to represent data structures, binding
environments for a multi-processor simulator, or even a graphical
syntax for programming.  Thus, I would like to have 1) reasonable
support (in terms of packages of routines) for textual labels and line
drawings; and 2) this same package available irrespective of which
machine I happen to be using at the time [within the limits of the
hardware available].

        What other examples of common utilities are emerging as
"expected" `primitives'?  Chip

------------------------------

Date: Sat, 23 Jul 83 15:58:24 EDT
From: Stephen Slade <Slade@YALE.ARPA>
Subject: Portable and More Efficient Lisps

Chip Maguire took violent exception to the claim that T, a version of 
Scheme implemented at Yale, is "more efficient and portable" compared
to other Lisp implementations.  He then listed the numerous machines
on which PSL, developed at Utah, now runs.

The problem in this case is one of operator precedence:  "more" has
higher precedence than "and".  Thus, T is both portable AND more
efficient.  These two features are intertwined in the language design
and implementation through the use of lexical scoping and an
optimizing compiler which performs numerous source-to-source
optimizations.  Many of the compiler operations that depend on the
specific target machine are table driven.  For example, the register
allocation scheme clearly depends on the number and type of registers
available.  The actual code generator is certainly machine dependent,
but does not comprise a large portion of the compiler.  The compiler
is written largely in T, simplifying the task of porting the compiler
itself.

For PSL, portability was a major implementation goal.  For T,
portability became a byproduct of the language and compiler design.  A
central goal of T has been to provide a clean, elegant, and efficient
LISP.  The T implementers strove to achieve compatibility not only
among different machines, but also between the interpreted and
compiled code -- often a source of problems in other Lisps.  So far, T
has been implemented for the M68000 (Apollo/Domain), VAX/UNIX, and
VAX/VMS.  There are plans for other machine implementations, as well
as enhancements of the elegance and efficiency of the language itself.

People at Yale have been using T for the past several years now.  
Applications have included an extensible text editor with inductive 
inference capability (editing by example), a hierarchical digital
circuit graphics editor and simulator, and numerous large AI programs.
T is also being used in a great many undergraduate courses both at
Yale and elsewhere.

I believe that PSL and Standard LISP have been very worthwhile
endeavors and have bestowed the salutary light of LISP on many
machines that had theretofore languished in the lispless darkness of
algebraic languages.  T, though virtuous in design and virtual in
implementation, does not address the FORTRAN-heathen, but rather seeks
to uplift the converted and provide comfort to those true-believers
who know, in their heart of hearts, that LISP can embrace both
elegance and efficiency.  Should this credo also facilitate
portability, well, praise the Lord.

------------------------------

Date: Mon, 25 Jul 83 11:41:50 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Re: Lisp Portability

    Date: Tue 19 Jul 83 15:24:00-EDT
    From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
    Subject: Lisp Portability

    [...]

    So lets hear more about the good ideas in T and fewer nebulous
    comments like:  "more efficient and portable".

I can give my experience working on a display text editor, U, written
in T. (U's original author is Bob Nix.)  U is 10000+ lines of T code.
Notable U features are a "do what I did" editing by example system, an
"infinite" undo facility, and a Laurel (or Babyl)-like mail system.
U runs well on the Apollo and almost well on VAX/VMS. U runs on
VAX/Unix as well as can be expected for a week's worth of work.
Porting U went well:  the bulk of U did not have to be changed.

- - - - -

Notable features of T:

    - T, like Scheme (from which T is derived) supports closures (procedures
      are first-class data objects).  Closures are implemented efficiently
      enough so that they are used pervasively in the implementation of the
      T system itself.

    - Variables are lexically-scoped; variables from enclosing scopes can
      be accessed from closed procedures.

    - T supports an object-oriented programming style that does not conflict
      with the functional nature of Lisp. Operations (like Smalltalk messages)
      can be treated as functions; e.g. they can be used with the MAP
      functions.

    - Compiled and interpreted T behave identically.

    - T has fully-integrated support for multiple namespaces so software
      written by different people can be combined without worrying about
      name conflicts.

    - The T implementors (Jonathan Rees and Norman Adams) have not felt
      constrained to hold on to some of the less modern aspects of older
      Lisps (e.g. hunks and irrational function names).

    - T is less of a bag of bits than other Lisps. T has a language definition
      and a philosophy.  One feels that one understands all of T after reading
      the manual.  The T implementors have resisted adding arbitrary features
      that do not fit with the philosophy.

    - Other features:  inline procedure expansion, procedures accept arbitrary
      numbers of parameters ("lexpr's" or "&rest-args"), interrupt processing.

All these aspects of T have proved to be very useful.
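For readers without access to T, the closure behavior the list describes can be illustrated in modern Python (the counter example is the editor's, not from the T manual):

```python
# Illustration of lexically scoped closures as first-class objects,
# in Python rather than T.  The counter example is hypothetical.
def make_counter():
    count = 0                 # lives in the enclosing lexical scope
    def increment():          # a closure over 'count'
        nonlocal count
        count += 1
        return count
    return increment          # the procedure is returned as data

c1, c2 = make_counter(), make_counter()
results = [c1(), c1(), c2()]  # each closure keeps its own environment
print(results)  # [1, 2, 1]
```

Each call to make_counter builds a fresh environment, so the two counters do not interfere; this is the sense in which closures are "first-class data objects."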

- - - - -

    The predecessor system "Standard LISP" along with the REDUCE
    symbolic algebra system ran on the following machines (as October
    1979):  Amdahl:  470V/6; CDC: 640, 6600, 7600, Cyber 76; Burroughs:
    B6700, B7700; DEC: PDP-10, DECsystem-10, DECsystem-20; CEMA: ES
1040; Fujitsu:  FACOM M-190; Hitachi:  HITAC M-160, M-180;
Honeywell:  66/60; Honeywell-Bull:  1642; IBM: 360/44, 360/67,
    360/75, 360/91, 370/155, 370/158, 370/165, 370/168, 3033, 370/195;
    ITEL: AS-6; Siemens:  4004; Telefunken:  TR 440; and UNIVAC: 1108,
    1110.

Hmm. Was the 370/168 implementation significantly different from the
370/158 implementation?  Also, aren't some of those Japanese machines
"360s"?  When listing implementations, let's do it in terms of
architectures and operating systems.

While it may be the case that PSL is more portable than T, T does
presently run on the Apollo, VAX/VMS and VAX/Unix. Implementations for
other architectures are being considered.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:05
Date: Wednesday, July 27, 1983 4:21PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #27
To: AIList@SRI-AI


AIList Digest           Thursday, 28 Jul 1983      Volume 1 : Issue 27

Today's Topics:
  Multiple producers in a production system
  PROLONG
  HFELISP
  Getting Started in AI
  Lisp Translation
  Re: Expectations of Expert System Technology
  The Fifth Generation Computer Project
  The Military and AI
  AI Koans
  HP Computer Colloquium 7/28
----------------------------------------------------------------------

Date: 26 Jul 1983 0937-PDT
From: Jay <JAY@USC-ECLC>
Subject: Multiple producers in a production system

(speculation/question)

Has anyone heard of multiple "producers" in production systems?  What
I mean is: should the STM contain (a b c), and there be a rule (a b)
-> (d) and another (b c) -> (e), would it be useful to somehow do BOTH
productions?  The PS could become two PS's, one with (d c) and another
with (e a) in STM.  This sort of PS could be useful in fuzzy areas
of knowledge where the same implicants could (due to lack of other 
implicants, or due to lack of understanding) imply more than one 
result.
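The forking step Jay describes can be sketched as follows, assuming (as his example implies) that the matched implicants are consumed when a rule fires; the Python rendering is the editor's:

```python
# Hypothetical sketch of "multiple producers": instead of choosing one
# applicable rule, fork the production system once per applicable rule.
def fork(stm, rules):
    successors = []
    for lhs, rhs in rules:
        if lhs <= stm:                           # rule is applicable
            successors.append((stm - lhs) | {rhs})  # consume LHS, add RHS
    return successors

rules = [({"a", "b"}, "d"), ({"b", "c"}, "e")]
print(fork({"a", "b", "c"}, rules))  # two successor STMs: {c, d} and {a, e}
```

With the STM (a b c) from the message, this yields exactly the two successor systems described, one with (d c) and one with (e a).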

j'

------------------------------

Date: Tue 26 Jul 83 23:14:06-PDT
From: WALLACE <N.WALLACE@SU-SCORE.ARPA>
Subject: PROLONG

        PROLONG:  A VERY SLOW LOGIC PROGRAMMING LANGUAGE

                          ABSTRACT

PROLONG was developed at the University of Heiroglyphia over a 22-year
period.  PROLONG is an implementation of a very well-known technique
for deciding whether a given well-formed formula F of first-order
logic is a theorem.  We first type in the axioms A of our system.
Then PROLONG applies the rules of inference successively to the axioms
A and the subsequent theorems we derive from A.  A matching routine
determines whether F is identical to one of these theorems.  If the
algorithm stops, we know that F is a theorem.  If it never stops, we
know that F is not.

------------------------------

Date: 27 Jul 1983 0942-PDT
From: Jay <JAY@USC-ECLC>
Subject: HFELISP


        HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                ABSTRACT

  HFE suggests that the more complicated features of (common) Lisp are
dangerous, and hard to understand.  As a result a number of Fortran,
Cobol, and 370 assembler programmers got together with a housewife.
They pared Lisp down to, what we believe to be, a much simpler and more
understandable system.  The system includes only the primitives CONS,
READ, and PRINT.  However, CONS was restricted to only take an atom for
the first argument, and a one-level list for the second.  Since all
lists are one-level, they also did away with parentheses.  All the
primitives were coded in ADA and this new lisp is being considered as 
the DOD's AI language.

j'

------------------------------

Date: 22 Jul 83 15:39:24-PDT (Fri)
From: harpo!floyd!cmcl2!rocky2!flipkin @ Ucb-Vax
Subject: Getting Started in AI
Article-I.D.: rocky2.103

Can someone point me to a good place to begin with AI? I find the
subject fascinating (as does my EECS girlfriend), and I would
appreciate some help getting started. Thanks in advance,
                Dennis Moore

(reply via mail please, unless you think it is of great interest
to the net)

[I think it is of great interest!  I recommend the AI Handbook for a
general overview.  I am still looking for a good intro to Lisp and the
programming conventions needed to produce interesting Lisp programs.
(Winston and Horn is a reasonable introduction, and Charniak,
Riesbeck, and McDermott has a lot of good material.  The Little Lisper
is a good introduction to recursive programming if you can stand the
"programmed text" question-and-answer presentation.)
-- KIL]

------------------------------

Date: 26 Jul 1983 0833-PDT
From: FC01@USC-ECL
Subject: Lisp Translation

        This lisp debate seems to be turning into a free-for-all.
Slanderous remarks are unnecessary. The fact is that once you get used
to something, the momentum of keeping with it is often more powerful
than any advantages attainable by changing from it. Perhaps functions
like transor from Interlisp could be extended by some of the AI
researchers to provide real translations from lisp to lisp. This way,
you could develop your programs in the lisp of your choice and run
them in the most efficient lisp available on any given machine. With
all the work that has been done on human translations and the extreme
complexity thereof, it would seem a practical, if extremely
ambitious (as opposed to downright unrealistic), project to develop a
translator between lisps. Think of it like translating between a New
Yorker and a Bostonian and a Texan, all talking breeds of English. If
the energy spent on developing new lisps and arguing about their
superiorities were spent in the lisp translation area, we might have
it done by now.
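A toy sketch of the translator Fred proposes, operating on s-expressions with a dialect-to-dialect renaming table; the table entries below are purely illustrative, not a real Interlisp/Franz mapping:

```python
# Toy lisp-to-lisp translator: walk an s-expression (nested lists of
# symbols) and rewrite the symbols one dialect spells differently.
# The renaming table is hypothetical.
RENAMES = {"add1": "1+", "greaterp": ">"}

def translate(form):
    if isinstance(form, list):
        return [translate(sub) for sub in form]
    return RENAMES.get(form, form)

print(translate(["greaterp", ["add1", "n"], "limit"]))
# ['>', ['1+', 'n'], 'limit']
```

A real translator would of course also have to handle mismatched semantics (scoping, macros, argument conventions), which is where the "extremely ambitious" part comes in.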
                        Fred

------------------------------

Date: 25 Jul 83 18:11:37-PDT (Mon)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Expectations of expert system technology
Article-I.D.: ssc-vax.345

Expert systems technology is an experimental field whose basic
concepts have been fairly well established in the past few years.
Since it is really an engineering field (knowledge engineering) much
of the important research is carried on by attempting to develop a
specific application and seeing what sorts of problems and solutions
crop up.  This is true for MYCIN, R1, PROSPECTOR, and many other
expert systems.  Our Expert Systems Technology group at Boeing has
been developing a prototype flight route planner.  It has provided a
good test bed for more theoretical work on the kinds of tools and
capabilities needed for knowledge engineering (although as a planner,
it may never be fully functional).  Our application is sufficiently
difficult that it is quite experimental; however, a simple expert
system is not particularly difficult to put together, if some of the
existing and available tools are used.  Needless to say, many sweeping
generalizations and unjustified assumptions (read: gross hacks) must 
be made, in order to simplify the problem to a point where an expert 
system can be built.  The resulting expert system, although perhaps
not much more capable than a good C program, will be much smaller and 
more transparent in structure than any ordinary program.

The ad in question may or may not be reasonable.  I don't know enough 
about finance to say whether the knowledge in that domain can be 
easily encoded.  However, if the company's expectations are not too
high, they may end up with a reasonable tool, one that will be just as
good as if some C wizard had spent a year of sleepless nights 
reinventing the AI wheels.

Stan ("the Leprechaun Hacker") Shebs
Boeing Aerospace Co.
ssc-vax!sts (soon utah-cs)

------------------------------

Date: 26 Jul 83 10:50:26-PDT (Tue)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: The Fifth Generation Computer Project
Article-I.D.: hcr.455

Has anyone out there had any contact with the Japanese Institute for
New Generation Computer Technology (which is running the Fifth
Generation Computer Project)?  Since the first rush of publicity
when the project was initiated, things have been fairly quiet (except
for the somewhat superficial book by Feigenbaum and a few papers in
symposia), and it's a bit hard to find out just how the project is
progressing.  I am especially interested in talking to people who have
visited INGCT recently and have met with the people directly involved
in the project.  Thanks!
        --ravi

        {linus, floyd, allegra, ihnp4} ! utzoo ! hcr ! hcrvax ! ravi 
OR
        decvax ! hcr ! hcrvax ! ravi

------------------------------

Date: Wed, 27 Jul 83 08:42 EDT
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: The military and AI

Food for thought:


  Date: 26 Jul 83 12:05:02 PDT (Tuesday)
  From: McCullough.PA
  Subject: The military and AI
  To: antiwar^ 

  From "The Race to Build a Supercomputer" in Newsweek, July 4, 1983...

  [Robert Kahn, mentioned below, is DARPA's computer director]


'Once they are in place, these technologies will make possible an 
astonishing new breed of weapons and military hardware.  Smart robot 
weapons--drone aircraft, unmanned submarines and land vehicles--that 
combine artificial intelligence and high-powered computing can be
sent off to do jobs that now involve human risk.  "This is a sexy area
to the military, because you can imagine all kinds of neat,
interesting things you could send off on their own little missions
around the world or even in local combat," says Kahn.  The Pentagon
will also use the technologies to create artificial-intelligence
machines that can be used as battlefield advisers and superintelligent
computers to coordinate complex weapons systems.  An intelligent
missile-guidance system would have to bring together different
technologies--real-time signal processing, numerical calculations and
symbolic processing, all at unimaginably high speeds--in order to make
decisions and give advice to human commanders.'

------------------------------

Date: 24 Jul 1983 16:21-PDT
From: greiner@Diablo
Subject: AI Koans

[This has appeared on several BBoards thanks to Gabriel Robins, Rich
Welty, Drew McDermott, Margot Flowers, and no doubt others.  I have
no idea what it is about, but pass it on for your doubtful
enlightenment.  -- KIL]


AI Koans: (by Danny)

  A novice was trying to fix a broken lisp machine by turning the
power off and on.  Knight, seeing what the student was doing, spoke
sternly: "You cannot fix a machine by just power-cycling it with no
understanding of what is going wrong."
  Knight turned the machine off and on.
  The machine worked.

-       -       -       -       -

One day a student came to Moon and said, "I understand how to make a
better garbage collector.  We must keep a reference count of the
pointers to each cons." Moon patiently told the student the following
story-

  "One day a student came to Moon and said, "I understand how to
  make a better garbage collector...


-       -       -       -       -

  In the days when Sussman was a novice Minsky once came to him as he
sat hacking at the PDP-6.  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-Tac-Toe."
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play."
  Minsky shut his eyes.
  "Why do you close your eyes?", Sussman asked his teacher.
  "So the room will be empty."
  At that moment, Sussman was enlightened.


-       -       -       -       -

A student, in hopes of understanding the Lambda-nature, came to
Greenblatt.  As they spoke a Multics system hacker walked by.  "Is it
true", asked the student, "that PL-1 has many of the same data types
as Lisp".  Almost before the student had finished his question,
Greenblatt shouted, "FOO!", and hit the student with a stick.


-       -       -       -       -

A disciple of another sect once came to Drescher as he was eating his
morning meal.  "I would like to give you this personality test", said
the outsider, "because I want you to be happy."  Drescher took the
paper that was offered him and put it into the toaster: "I wish the
toaster to be happy too."


-       -       -       -       -
(by who?)

A man from AI walked across the mountains to SAIL to see the Master,
Knuth.  When he arrived, the Master was nowhere to be found.

        "Where is the wise one named Knuth?" he asked a passing
student.

        "Ah," said the student, "you have not heard. He has gone on a
pilgrimage across the mountains to the temple of AI to seek out new
disciples."

Hearing this, the man was Enlightened.

-       -       -       -       -


And, of course, my own contribution:


A famous Lisp Hacker noticed an Undergraduate sitting in front of a
Xerox 1108, trying to edit a complex Klone network via a browser.
Wanting to help, the Hacker clicked one of the nodes in the network
with the mouse, and asked "what do you see?"
Very earnestly, the Undergraduate replied "I see a cursor."
The Hacker then quickly pressed the boot toggle at the back of the
keyboard, while simultaneously hitting the Undergraduate over the
head with a thick Interlisp Manual.  The Undergraduate was then
Enlightened.


         - Gabriel [Robins@ISIF]

------------------------------

Date: 26 Jul 83 14:10:41 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/28


                Professor Gio Wiederhold
                Department of Computer Science
                Stanford University

                Knowledge in Databases


We define knowledge-based approaches to database problems.

Using a clarification of application levels from the enterprise to the
system levels, we give examples of the varieties of knowledge which
can be used.  Most of the examples are drawn from work at the KBMS
project at Stanford.

The object of the presentation is to illustrate the power, and also
the high payoff of quite straightforward artificial intelligence 
applications in databases.  Implementation choices will also be 
evaluated.


        Thursday, July 28, 1983 4:00 pm

        5M Conference room
        HP Stanford Park Labs
        1501 Page Mill Rd
        Palo Alto

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:06
Date: Friday, July 29, 1983 9:12AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #28
To: AIList@SRI-AI


AIList Digest            Friday, 29 Jul 1983       Volume 1 : Issue 28

Today's Topics:
  USENET and AI
  AI and the Military
  The Fifth Generation Computer Project
  Lisp Books, Nondeterminism, Japanese Effort
  Automated LISP Dialect Translation
  Data Flow Computers and PS's
  Repeated Substring Detection
  A.I. in Sci Fi (2)
----------------------------------------------------------------------

Date: 26 Jul 83 11:52:01-PDT (Tue)
From: teklabs!jima @ Ucb-Vax
Subject: USENET and AI
Article-I.D.: teklabs.2247

In response to [a Usenet] query about AI research going on at USENET
sites:

The Tektronix Computer Research Lab now has a Knowledge-Based Systems 
group. We are a <very> new group and are still staffing up.  We're 
looking into circuit trouble shooting as well as numerous other topics
of interest.

Jim Alexander
Usenet: {ucbvax,decvax,pur-ee,ihnss,chico}!teklabs!jima
CSnet:  jima@tek
ARPA:   jima.tek@rand-relay

------------------------------

Date: Wed 27 Jul 83 21:29:44-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: AI and the military

The possibilities of AI in unmanned weapons systems are wonderful!  
Now we could send all the weapons, and their delivery vehicles to the 
moon (or beyond) where they can fight our war for us without anyone 
getting hurt and no property damage.  That would be progress!  If only
the decision makers valued us humans more than their toys..........

------------------------------

Date: 27 Jul 83 18:38:58 PDT (Wednesday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: Re: The Fifth Generation Computer Project

In case some of you are not on every junk mailing list known to man
the way I am, there is a new international English-language journal
with an all-Japanese editorial board called "New Generation
Computing", published by Springer-Verlag, Journal Fulfillment Dept.,
44 Hartz Way, Secaucus, NJ 07094.  The price is even more outrageous
than the stuff published by North Holland:  vol.1 (2 issues) 1983,
$52; vol.2 (4 issues) 1984, $104.

Can anybody explain why so much AI literature (even by US authors) is 
published by foreign publishers at outrageous prices?  I should have 
thought some US university press would get smart and get into the act
in a bigger way.  Lawrence Erlbaum seems to be doing a creditable job
in Cognitive Science, but that's just one corner of AI.

--Bruce

------------------------------

Date: 29 Jul 1983 0838-PDT
From: FC01@USC-ECL
Subject: Re: Lisp Books, Nondeterminism, Japanese Effort

Lots of things to talk about today.  A good lisp book for the beginner:
The LISP 1.6 Primer. It really explains what's going down, and even
has exercises with answers. It is not specific to any particular lisp
of today (since it is quite old) and therefore gives the general
knowledge necessary to use any lisp (with a little help from the
manual).

Nondeterministic production systems: Lots of work has been done. The 
fact is that a production system is built under the assumption that 
there is a single global database. The tree version of a production 
system doesn't meet this requirement. On the other hand, there are 
many models of what you speak of.  The Petri-net model treats such 
things nondeterministically by selecting one or the other (assuming 
their results prevent each other from occurring) seemingly at random.  
Of course, unless you have a real parallel processor the results you 
get will be deterministic. I refer you to any good book on Petri-nets 
(Peterson is pretty good). Tree structured algorithms in general have 
this property, therefore any breadth-first search will try to do both 
forks of the tree at once. Other examples of theorem provers doing 
this are relatively common (not to mention most multiprocess operating
systems based on forks).

5th generation computers: There is a lot of work on the same basic
idea as 5th generation computers (a 5th generation computer by any
other name sounds better). From what I have been able to gather from
reading all the info from ICOT (the Japanese project directorate) they
are trying to do the project by getting foreign experts to come and
tell them how. They announce their project, say they're going to lead
the world, and wait for the egos of other scientists to bring them
there to show them how to really do it. The papers I've read show a
few good researchers with real good ideas but little in the way of
knowing how to get them working. On the other hand, data flow, speech
understanding, systolic arrays, microcomputer interfaces to
'supercomputers' and high BW communications are all operational to
some degree in the US, and are being improved on a daily basis. I
would therefore say that unless we show them how, we will be the
leaders in this field, not they.

***The last article was strictly my opinion-- no reflection on anyone
else***

                        Fred

------------------------------

Date: Thu, 28 Jul 83 11:34:17 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Automated LISP Dialect Translation

When Rice University got its first VAX, a friend of mine and I set 
about porting a production-system-based game-playing program to Franz 
Lisp from Cambridge Lisp running on an IBM 370.  We used, as I recall,
a combination of Emacs macros (to change lexical constructs) and a
LISP program (to translate program constructs).  The technique was not
an elegant one, nor was it particularly general, but it gives me good 
reason to think that the LISP translator Fred proposes is far from 
impossible.  It also points out that implementation superiority is not
the only reason for choosing one LISP over another.

                                Paul Milazzo <milazzo.rice@Rand-Relay>
                                Dept. of Mathematical Sciences
                                Rice University, Houston, TX

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)
P.S.  Fred:  After living in Texas for eight years, I'm still not
      sure I could interpret a Texan's remarks for a New Yorker.
      The dialect is easy to understand, but the concepts are all
      different...
:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

------------------------------

Date: 28 Jul 1983 1352-PDT
From: Jay <JAY@USC-ECLC>
Subject: data flow computers and PS's

(more speculation)

 There has been some development of computers suited to certain high
level languages, including the LISP machines.  There has also been
some research into non-von Neumann machines.  One such machine is
the Data Flow Machine.

  The data flow machine differs from the conventional computer in that
ALL instructions are initiated when the program starts.  Each
instruction waits for the calculations yielding its arguments to
finish before it executes.

  This machine seems,  to  me,  to be  ideally  suited  to  Production
Systems/Expert Systems.   Each  rule would  be  represented as  a  few
instructions (the IF part of the  production) and the THEN part  would
be represented by the completion of  the rule.  For example, the  rule
(Month-is-june AND Sun-is-up) ->  (Temperature-is-high) would be coded
as:

Temperature-is-high:    AND
                       /   \
                     /       \
                   /           \
          (Month-is-june)   (Sun-is-up)

  Where (Month-is-june) and (Sun-is-up) are represented as either
other rules, or as data (which I assume completes instantly).
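The tree above can be sketched as a recursive evaluation in Python; this is a sequential stand-in for the parallel firing a real data flow machine would do, and the names simply follow Jay's example:

```python
# Sketch of the rule above as a data-flow graph.  On a real data flow
# machine every node fires as soon as its inputs arrive; here we
# evaluate the tree recursively as a sequential stand-in.
def fire(node, data):
    if isinstance(node, str):
        return data[node]          # a datum "completes instantly"
    op, *inputs = node
    values = [fire(arg, data) for arg in inputs]
    return all(values) if op == "AND" else any(values)

rule = ("AND", "Month-is-june", "Sun-is-up")
facts = {"Month-is-june": True, "Sun-is-up": True}
print(fire(rule, facts))  # True: Temperature-is-high is concluded
```

The recursion mirrors the picture: the AND node waits on its two inputs, each of which is either a datum or the output of another rule's subtree.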

j'

------------------------------

Date: Thu 28 Jul 83 16:06:46-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Repeated Substring Detection

Would anyone in AI have use for the following type of program?  Given
a k-dimensional (the lower k the better) input string of characters 
from a finite alphabet, the program finds all substrings of dimension
k (or less if necessary) that occur more than once in the input
string.  I don't have a program that does this, but would like to know
of any interest.
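
For the one-dimensional (k = 1) case, a brute-force version of what is
being asked for is easy to sketch.  This quadratic-space toy is my own
illustration, not Foulser's program; a serious implementation would use
a suffix-tree style index instead:

```python
from collections import defaultdict

def repeated_substrings(s):
    """Return every substring of s (length >= 2) that occurs more
    than once, mapped to its occurrence count.  O(n^2) substrings are
    enumerated, so this is for illustration on short inputs only."""
    counts = defaultdict(int)
    n = len(s)
    for i in range(n):
        for j in range(i + 2, n + 1):
            counts[s[i:j]] += 1
    return {sub: c for sub, c in counts.items() if c > 1}

print(repeated_substrings("abcabc"))   # {'ab': 2, 'abc': 2, 'bc': 2}
```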

                                        Sincerely,
                                        Dave Foulser

------------------------------

Date: 27 Jul 1983 1617-PDT
From: Park
Subject: A.I. in Sci Fi

                  [Reprinted from the SRI BBoard.]

Do you have a favorite gripe about the way scientists, computers, 
robots, or artificial intelligence are portrayed on tv shows?  Send 
them to me and I will forward them on Monday August 1 to an 
honest-to-God tv-show writer who is going to write that kind of show 
soon and would like to do it right.

Bill Park, EJ239 SRI International 333 Ravenswood Avenue Menlo Park,
CA 94025

------------------------------

Date: Thu 28 Jul 83 12:24:12-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: A.I. in Sci Fi

Gripes?  You mean things like:

  Hawaii 5-0 always using the card sorter as the epitome
  of computer readout?

  Stepford Wives portraying androids so realistic that no one
  notices, and executives/scientists who prefer them to true
  companions?

  Demon Seed showing impregnation of a woman by a computer?

  Telefon slowing down CRT typeout to 150 baud and adding
  Teletype sound effects?

  War Games similarly slowing the CRT typeout; using
  natural language communication; using voice synthesis
  on a home terminal connected by modem to a military computer;
  postulating that our national defense is in the hands of
  unsecured computers with dial-up ports, faulty password
  systems, games directories, and big panels of flashing lights;
  and portraying scientists and generals as nerds?

  Star Wars suggesting that computerized targeting mechanisms
  will always be inferior to human reflexes?

  Tron's premise that a computer can suck you into its internal
  conceptual world?

  Star Trek and War Games preaching that any computer can be
  disabled, even melted, by a logical contradiction or an
  unsatisfiable task?

Nah, I don't mind.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  1-Aug-83 17:12:06
Date: Friday, July 29, 1983 4:27PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #29
To: AIList@SRI-AI


AIList Digest           Saturday, 30 Jul 1983      Volume 1 : Issue 29

Today's Topics:
  Robustness stories, program logs wanted; reprise
  Job Ad: Research Fellowships at Edinburgh AI
  Job Ad: Research Associate/Programmer at Edinburgh AI
  Job Ad: Computing Officer at Edinburgh AI
----------------------------------------------------------------------

Date: 28 Jul 83 1631 EDT (Thursday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted; reprise

Response to the blinded Robustness mailbox has been good, but not
quite good enough to do the trick.  If you have a robustness-related
story or a change log for a program, wouldn't you consider sending it
to my collection?  Thanks very much!

What I need is descriptions of robustness features--designs or fixes
that have made programs meet their users' expectations better, beyond
bug fixing.  E.g.:
        - An automatic error recovery routine is a robustness feature,
          since the user (or client) doesn't then have to recover by
          hand.
        - A command language that requires typing more for a dangerous
          command, or supports undoing, is more robust than one that
          has neither feature, since each makes it harder for the user
          to get in trouble.
There are many more possibilities.  Anything where a system
doesn't meet user expectations because of incomplete or ill-advised
design is fair game.

Your stories will be used to validate my PhD thesis at CMU, which is
an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.
All stories will be properly credited in the thesis.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Wednesday, 27-Jul-83  17:34:36-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Fellowships at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                              2 RESEARCH FELLOWS

                               (readvertisement)

Applications are invited for two Research Fellow posts to join a
project, funded by the Science and Engineering Research Council, which
is concerned with developing methods of modelling the user of
knowledge-based training and aid systems.  Candidates, who should have
a higher degree in Computer Science, Mathematics, Experimental
Psychology or related discipline, should be experienced programmers
and familiar with UNIX.  Experience of PROLOG or LISP and some
knowledge of IKBS (Intelligent Knowledge Based Systems) techniques 
would be an advantage.

The posts are tenable for three years, starting 1 October 1983, on the
salary scale 7190 - 11160 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5106.

------------------------------

Date: Wednesday, 27-Jul-83  17:38:47-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Associate/Programmer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                         RESEARCH ASSOCIATE/PROGRAMMER

Applications are invited for a post of Research Associate/Programmer,
to join a project, led by Dr Jim Howe and funded by the Science and
Engineering Research Council, which is concerned with the
interpretation of sonar data in a 3-D marine environment.  Candidates,
who should have a degree in Computer Science, Mathematics or related
discipline, should be conversant with the UNIX programming environment
and fluent in the C language.  The work involves programming
applications of statistical estimation, 3-D motion representation, and
rule-based inference; experience in one or more of these areas would
be an advantage.

The post is tenable for three years, starting 1 October 1983, on the
salary scale 6310 - 7190 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5107.

------------------------------

Date: Wednesday, 27-Jul-83  17:32:58-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Computing Officer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                        DEPARTMENTAL COMPUTING OFFICER

Applications are invited for a post of Departmental Computing Officer.
The successful applicant will lead a small group which is responsible
for creating, maintaining and documenting systems and application
software as needed for research and teaching in Artificial
Intelligence, and for managing the department's computing systems
which run under Berkeley UNIX.  Candidates, who should have a degree
in Computer Science or related discipline, should be conversant with
UNIX and fluent in the C language.  A background in compiler design
or an interest in A.I. would be advantageous.

The post is salaried on the scale 7190 - 11615 pounds sterling, with
placement according to age and experience.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 7033.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  2-Aug-83 13:19:17
Date: Tuesday, August 2, 1983 12:54PM
From: AIList (Moderator: Kenneth Laws) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #30
To: AIList@SRI-AI


AIList Digest            Tuesday, 2 Aug 1983       Volume 1 : Issue 30

Today's Topics:
  Automatic Translation - Lisp to Lisp,
  Language Understanding - EPISTLE System,
  Programming Aids - High-Level Debuggers,
  Databases - Request for Geographic Descriptors,
  Seminars - Chess & Evidential Reasoning
----------------------------------------------------------------------

Date: Fri 29 Jul 83 15:53:59-PDT
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: Lisp Translators

[...]

        There has been some discussion about Lisp translation programs
the last couple of days. Another one to add to the list is that
developed by Gord Novak at Sumex for translating Interlisp into Franz,
Maclisp, UCILisp, and Portable Standard Lisp. I suspect Gord would 
have a pretty good idea about what else is available, as this seems to
be an area of interest of his.

                                Mike Walker

[Another resource might be the set of macros that Rodney Brooks 
developed to run his Maclisp ACRONYM system under Franz Lisp.
The Image Understanding Testbed at SRI uses this package.
-- KIL]

------------------------------

Date: 30 Jul 1983 07:10-PDT
From: the tty of Geoffrey S. Goodfellow
Reply-to: Geoff@SRI-CSL
Subject: IBM Epistle.


    TECHNOLOGY MEMO
    By Dan Rosenheim
    (c) 1983 Chicago Sun-Times (Independent Press Service)
    IBM is experimenting with an artificial intelligence program that 
may lead to machine recognition of social class, according to a 
research report from International Resource Development.
    According to the market research firm, the IBM program can
evaluate the style of a letter, document or memo and can criticize the
writing style, syntax and construction.
    The program is called EPISTLE (Evaluation, Preparation and 
Interpretation System for Text and Language Entities).
    Although IBM's immediate application for this technology is to 
highlight ''inappropriate style'' in documents being prepared by 
managers, IRD researchers see the program being applied to determine 
social origins, politeness and even general character.
     Like Bernard Shaw's Professor Higgins, the system will detect
small nuances of expression and relate them to the social background
of the originator, ultimately determining sex, age, level of
intelligence, assertiveness and refinement.
    Particularly intriguing is the possibility that the IBM EPISTLE 
program will permit a response in the mode appropriate to the user and
the occasion. For example, says IRD, having ascertained that a letter
had been sent by a 55-year-old woman of Armenian background, the
program could help a manager couch a response in terms to which the
woman would relate.

------------------------------

Date: 01 Aug 83  1203 PDT
From: Jim Davidson <JED@SU-AI>
Subject: EPISTLE


There's a lot of exaggeration here, presumably by the author of the
Sun-Times article.  EPISTLE is a legitimate project being worked on
at Yorktown, by George Heidorn, Karen Jensen, and others.  [See,
e.g., "The EPISTLE text-critiquing system". Heidorn et al, IBM
Systems Journal, 1982] Its general domain, as indicated, is business
correspondence.  Its stated (long-term) goals are

    (a) to provide support for the authors of business letters--
        critiquing grammar and style, etc.;

    (b) to deal with incoming texts: "synopsizing letter contents,
        highlighting portions known to be of interest, and
        automatically generating index terms based on conceptual
        or thematic characteristics rather than key words".

Note that part (b) is stated considerably less ambitiously than in
the Sun-Times article.

The current (as of 1982) version of the system doesn't approach even
these more modest goals.  It works only on problems in class (a)--
critiquing drafts of business letters.  The *only* things it checks
for are grammar (number agreement, pronoun agreement, etc.), and
style (overly complex sentences, inappropriate vocabulary, etc.).
Even within these areas, it's still very much an experimental system,
and has a long way to go.

Note in particular that the concept of "style" is far short of the
sort of thing presented in the Sun-Times article.  The kind of style
checking they're dealing with is the sort of thing you find in a
style manual: passive vs. active voice, too many dependent clauses,
etc.
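
As a rough illustration of what a number-agreement check involves,
consider the toy below.  It is invented here and bears no relation to
EPISTLE's actual parser or rules; real systems parse the sentence
rather than pattern-match it:

```python
import re

def agreement_errors(sentence):
    """Flag the common '<plural noun> was' / '<singular noun> were'
    slips with crude regexes; a stand-in for real grammar checking."""
    errors = []
    # a word ending in 's' (likely plural) followed by 'was'
    if re.search(r"\b(\w+s)\s+was\b", sentence):
        errors.append("plural subject with 'was'?")
    # 'the <word not ending in s> were'
    if re.search(r"\bthe\s+(\w+[^s])\s+were\b", sentence):
        errors.append("singular subject with 'were'?")
    return errors

print(agreement_errors("The results was surprising."))
```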

------------------------------

Date: 28 Jul 1983 05:25:43-PST
From: whm.arizona@Rand-Relay
Subject: Debugger Query--Summary of Replies

                    [Reprinted from Human-Nets.]

Several weeks ago I posted a query for information on debuggers.  The 
information I received fell into two categories: information about 
papers, and information about actual programs.  The information about 
papers was basically subsumed by two documents: an annotated 
bibliography, and soon-to-be-published conference proceedings.  The 
information about programs was quite diverse and somewhat lengthy.  In
order to avoid clogging the digest, only the information about the 
papers is included here.  A longer version of this message will be 
posted to net.lang on USENET.

The basic gold mine of current ideas on debugging is the Proceedings 
of the ACM SIGSOFT/SIGPLAN Symposium on High-Level Debugging which was
held in March, 1983.  Informed sources say that it is scheduled to 
appear as vol. 8, no. 4 (1983 August) of SIGSOFT's Software 
Engineering Notes and as vol. 18, no. 8 (1983 August) of SIGPLAN 
Notices.  All members of SIGSOFT and SIGPLAN should receive copies 
sometime in August.

Mark Johnson at HP has put together a pair of documents on debugging.
They are:

        "An Annotated Software Debugging Bibliography"
        "A Software Debugging Glossary"

I believe that a non-annotated version of this bibliography appeared 
in SIGPLAN in February 1982.  The annotated bibliography is the basic 
gold mine of "pointers" about debugging.

Mark can be contacted at:

        Mark Scott Johnson
        Hewlett-Packard Laboratories
        1501 Page Mill Road, 3U24
        Palo Alto, CA 94304
        415/857-8719

        Arpa:  Johnson.HP-Labs@RAND-RELAY
        USENET: ...!ucbvax!hplabs!johnson


Two books were mentioned that are not currently included in Mark's 
bibliography:

        "Algorithmic Debugging" by Ehud Shapiro.  It has information
          on source-level debugging, debuggers in the language being
          debugged, debuggers for unconventional languages, etc.  It
          is supposedly available from MIT Press.  (From
          dixon.pa@parc-maxc)

        "Smalltalk-80: The Interactive Programming Environment"
           A section of the book describes the system's interactive
           debugger.  (This book is supposedly due in bookstores
           on or around the middle of October.  A much earlier
           version of the debugger was briefly described in the
           August 1981 BYTE.)  (From Pavel@Cornel.)

Ken Laws (Laws@sri-iu) sent me an extract from "A Bibliography of 
Automatic Programming" which contained a number of references on 
topics such as programmer's apprentices, program understanding, 
programming by example, etc.


Many thanks to those who took the time to reply.

                                Bill Mitchell
                                The University of Arizona
                                whm.arizona@rand-relay
                                arizona!whm

------------------------------

Date: Fri 29 Jul 83 19:32:39-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: WANTED: Geographic Information Data Bases

I want to build a geographic knowledge base and wonder if
someone out there has small or large sets of foreign
geographic data. Something containing elements such as
(PARIS CITY FRANCE) composed of three items,
Geographic-Name, Superclass, and Containing-Geographic item.

I have already acquired a list of all U.S. cities and
their state memberships; but apart from that need other
geographic information for other U.S. features (e.g. counties,
rivers, mountains, etc.) as well as world-wide data.

I am not especially looking for numeric data (e.g. Longitude
and Latitude; elevations, etc.) nor numeric attributes such
as population, area, etc. -- I want symbolic data, names of
geographic entities.

Note::: I do mean already machine-readable.
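
A minimal sketch of how such triples support symbolic queries, with
sample entries (other than the PARIS example) invented here purely
for illustration:

```python
# (Geographic-Name, Superclass, Containing-Geographic-Item) triples
facts = [
    ("PARIS", "CITY", "FRANCE"),        # the example from the message
    ("FRANCE", "COUNTRY", "EUROPE"),    # invented sample entries below
    ("AUSTIN", "CITY", "TEXAS"),
    ("TEXAS", "STATE", "USA"),
]

def contained_in(name, region, facts):
    """True if `name` lies inside `region`, following the
    Containing-Geographic-Item links transitively."""
    containers = {n: c for n, _, c in facts}
    while name in containers:
        name = containers[name]
        if name == region:
            return True
    return False

print(contained_in("PARIS", "EUROPE", facts))   # True: PARIS -> FRANCE -> EUROPE
```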

Bob Amsler
Natural-Language and Knowledge-Resource Systems Group
Advanced Computer Systems Department
SRI International
333 Ravenswood Ave
Menlo Park, CA 94025

------------------------------

Date: 1 August 1983 1507-EDT
From: Dorothy Josephson at CMU-CS-A
Subject: CMU Seminar, 8/9

                  [Reprinted from the CMU BBoard.]

DATE:           Tuesday, August 9, 1983
TIME:           3:30 P.M.
PLACE:          Wean Hall 5409
SPEAKER:        Hans Berliner
TOPIC:          "Ken Thompson's New Chess Theorem"

                        ABSTRACT

Among the not-quite-so-basic endgames in chess is the one of 2
Bishops versus Knight (no pawns).  What the value of a general
position in this domain is, has always been an open question.  The
Bishops have a large advantage, but it was thought that a basic and
usually achievable position could be drawn.  Thompson has just shown
that this endgame is won in the general case using a technique called
retrograde enumeration.  We will explain what he did, how he did it,
and the significance of this result.  We hope some people from Formal
Foundations will attend as there are interesting questions relating
to whether a construction such as this should be considered a
"proof."

------------------------------

Date: 1 Aug 83 17:40:48 PDT (Monday)
From: murage.pa@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 8/4

                  [Reprinted from the SRI BBoard.]


                       JOHN D. LAWRENCE

                   Artificial Intelligence Center
                       SRI International


                       EVIDENTIAL REASONING:
           AN IMPLEMENTATION FOR MULTI-SENSOR INTEGRATION


One common feature of most knowledge-based expert systems is that
they must reason based upon evidential information. Yet there is very
little agreement on how this should be done. Here we present our
current understanding of this problem and its solution as it applies
to multi-sensor integration. We begin by characterizing evidence as a
body of information that is uncertain, incomplete, and sometimes
inaccurate. Based on this characterization, we conclude that
evidential reasoning requires both a method for pooling multiple
bodies of evidence to arrive at a consensus opinion and some means of
drawing the appropriate conclusions from that opinion. We contrast
our approach, based on a relatively new mathematical theory of
evidence, with those approaches based on Bayesian probability models.
We believe that our approach has some significant advantages,
particularly its ability to represent and reason from bounded
ignorance. Further, we describe how these techniques are implemented
by way of a long term memory and a short term memory.  This provides
for automated reasoning from evidential information at multiple
levels of abstraction over time and space.
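
The "relatively new mathematical theory of evidence" is presumably the
Dempster-Shafer theory.  Below is a minimal sketch of Dempster's rule
of combination, which pools two bodies of evidence into a consensus as
the abstract describes; this is the textbook rule, not necessarily the
machinery of the SRI system:

```python
def combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments, each a
    dict mapping frozenset hypotheses to mass, renormalizing away
    the mass assigned to conflicting (empty) intersections."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}

# Two sensors weigh in on the hypotheses {ship, whale}:
m1 = {frozenset({"ship"}): 0.6, frozenset({"ship", "whale"}): 0.4}
m2 = {frozenset({"ship"}): 0.5, frozenset({"ship", "whale"}): 0.5}
pooled = combine(m1, m2)
print(round(pooled[frozenset({"ship"})], 3))   # 0.8
```

Note how mass left on the full set {ship, whale} represents the
"bounded ignorance" the abstract mentions, something a single
point-probability model cannot express directly.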


   Thursday, August 4, 1983 4:00 p.m.

   5M Conference Room
   1501 Page Mill Road
   Palo Alto, CA 94304

   NON-HP EMPLOYEES:  Welcome! Please come to the lobby on time, so
that you may be escorted to the conference room.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  2-Aug-83 22:56:21
Date: Tuesday, August 2, 1983 10:49PM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #31
To: AIList@SRI-AI


AIList Digest           Wednesday, 3 Aug 1983      Volume 1 : Issue 31

Today's Topics:
  Fifth Generation - Opinion & Book Review
----------------------------------------------------------------------

Date: Sat 30 Jul 83 21:39:16-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: 5th generation

I think there is a widespread misconception about ICOT and the 5th
generation project.  Here are my comments on a recent message to this
bulletin board:

    From what I have been able to gather from reading all the
    info from ICOT (the Japanese project directorate) they are
    trying to do the project by getting foreign experts to come
    and tell them how.  They announce their project, say they're
    going to lead the world, and wait for the egos of other
    scientists to bring them there to show them how to really do
    it.

I know personally several people who have visited ICOT, have talked
at length with two of them, and have read trip reports by others.  In
their visits there was very little if any suggestion that they should
participate in the day-to-day effort at ICOT or give detailed reports
on their work.  The character of the visits was very much that of an
academic visit, where the visitor goes on doing his current work and
sees what the hosts are up to.  They were also very open with their
(very concrete and under way) plans.  The image of the ICOT worker
waiting anxiously to be told what to do seems the opposite of reality;
in fact they sometimes seem too busy with their own work to give
their visitors any more than the minimum courteous attention.  As far
as I can tell, the goal of the invitations is to foster goodwill and
understanding of ICOT's goals.

    The papers I've read show a few good researchers with real
    good ideas but little in the way of knowing how to get them
    working.

ICOT has a very clear plan of creating a line of successively faster 
and more sophisticated "inference machines".  The first, the personal 
sequential inference machine (PSI), a specialized Prolog machine, is
being built now, and there is no reason to believe that it will not be
completed in time. They are also doing research in parallel 
architectures and database machines.

    On the other hand, data flow, speech understanding, systolic
    arrays, microcomputer interfaces to 'supercomputers' and
    high BW communications are all operational to some degree in
    the US, and are being improved on a daily basis. I would
    therefore say that unless we show them how, we will be the
    leaders in this field, not they.

I have looked, and I know people who have looked much more carefully, 
at the usefulness of current fashions in parallel architectures for 
general deductive inference engines. The picture, unfortunately, is 
not brilliant.  Given that ICOT are committed to logic programming and
deductive mechanisms in general, there isn't that much that they could
borrow from that work. That is, they are taking genuine research 
risks. To explain fully why I think most current architectures are not
appropriate for logic programming/deduction would take me too far 
afield. I will just point out that logic programming/deduction involve
dealing with incompletely specified objects (terms with uninstantiated
variables) that can be specified further in many alternative ways (OR 
parallelism).  Implementation of this kind of parallelism in currently
BUILT architectures would involve either wholesale copying or a high 
cost in accessing variable bindings.
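
A toy illustration of the copying cost in question (this is my own
sketch, not ICOT's design): each alternative way of further
instantiating a term needs its own view of the variable bindings, so
a naive OR-parallel branch must copy the whole environment.

```python
def solve(clause_alternatives, bindings):
    """Explore each alternative clause with an independent copy of
    the bindings, as wholesale-copying OR-parallelism would."""
    results = []
    for extra in clause_alternatives:
        env = dict(bindings)   # per-branch copy: the cost in question
        env.update(extra)
        results.append(env)
    return results

# X is uninstantiated; two clauses instantiate it differently,
# and neither branch may see the other's binding.
branches = solve([{"X": "june"}, {"X": "july"}], {"Month": "X"})
print([b["X"] for b in branches])   # ['june', 'july']
```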

Fernando Pereira

------------------------------

Date: 01 Aug 83  1422 PDT
From: Jim Davidson <JED@SU-AI>
Subject: The Fifth Generation (book review)

BC-BOOK-REVIEW Undated By CHRISTOPHER LEHMANN-HAUPT c. 1983 N.Y. Times
News Service
    THE FIFTH GENERATION. Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward A. Feigenbaum and Pamela McCorduck.
275 pages. Illustrated with diagrams. Addison-Wesley. $15.75.

    This isn't just another of those books that says Japan is better 
than we are and therefore is going to keep on whipping us in 
productivity. ''The Fifth Generation'' goes considerably further than 
that. It points with a trembling finger at Japan's commitment to 
produce within a decade a new generation of computers so immensely 
powerful that they will in effect constitute a new and revolutionary 
form of wealth.
    KIPS, these computers will be called, an acronym of knowledge 
information processing systems. They will exploit the recent 
speculation that intelligence, be it real or artificial, doesn't 
depend so much on the power to reason as it does on a ''messy bunch of
details, facts, rules of good guessing, rules of good judgment, and
experiential knowledge,'' as the authors put it. They will be so much
more powerful that where today's machines can handle 10,000 to 100,000
logical inferences per second, or LIPS, the next-generation computer
will be capable of 100 million to 1,000 million LIPS.
    These computers, if the Japanese succeed, will be able to interact
with people using natural language, speech and pictures. They'll 
transform talk into print and translate one language into another.  
Compared to today's machines, they'll be what automobiles are to 
bicycles. And because they'll raise knowledge to the status of what 
land, labor and capital once were, these machines will become ''an 
engine for the new wealth of nations.''
    Will the Japanese really pull this off, despite their supposed 
tendency to be ''copycats'' instead of innovators? The authors insist 
that this and other stereotypes are largely mythical; that every great
industrial nation must go through a phase of imitation. Sure, the
Japanese can do it. And even if they fail to fulfill their grand 
design, they'll likely achieve enough to make it pointless for any 
other nation to compete with them. Meanwhile, the United States will 
assume the role of ''the first great post-industrial agrarian 
society.''
    It's quite an awesome picture that Edward A. Feigenbaum and Pamela
McCorduck have painted. What's more, they have impressive credentials
- Feigenbaum as professor of computer science at Stanford University 
and a founder of TeKnowledge Inc., a pioneer knowledge engineering 
company; Mrs. McCorduck as a science-writer who teaches at Columbia 
and whose last book was a history of artificial intelligence called 
''Machines Who Think.'' And their Jeremiad is extremely well written, 
even quite witty in places. It's certainly more articulate by an order
of magnitude than ''In Search of Excellence,'' the book that defends
America's managerial potential and now sits atop the nonfiction
best-seller list.
    So what are we supposed to do in the face of this awesome
challenge?  The authors list various possibilities, such as joining up
with Japan or preparing for our future as the world's truck garden.
But what they'd really like to see is ''a national center for
knowledge technology'' - that is, ''a gathering up of all knowledge,''
''to be fused, amplified, and distributed, all at orders of magnitude
difference in cost, speed, volume, and >>usefulness<< over what we
have now.''
    Let that be as it may. While ''The Fifth Generation'' makes a 
powerful case, there are those who believe that, between the 
Pentagon's Defense Advanced Research Projects Agency (DARPA) and 
several interindustry groups that have been formed, we have already 
been sufficiently aroused to compete in this new race for world 
leadership. (The Soviet Union, by the way, is out in left field, 
according to the authors.)
    Whether the apocalypse it foresees is real or not, ''The Fifth 
Generation'' is worthwhile reading. Pamela McCorduck is very good on 
the debate over the ability of the machines to think, concluding that 
the condemnation they have met has been largely political - amusingly 
similar to ''the reasons given in the nineteenth century to explain 
why women could never be the intellectual equals of men.'' Feigenbaum 
is fascinating on his firsthand impressions of the Japanese computer 
establishment. (Each of the co-authors becomes a character in the 
narrative when his or her specialty happens to come up.)
    Together they are lucid on what the fifth-generation machines will
be like. And there is the standard mind-bending section on future 
computer applications. I particularly like Mrs. McCorduck's vision of 
the geriatric robot. ''It isn't hanging about in the hopes of 
inheriting your money - nor of course will it slip you a little 
something to speed the inevitable. It isn't hanging about because it 
can't find work elsewhere. It's there because it's yours. It doesn't 
just bathe you and feed you and wheel you out into the sun when you 
crave fresh air and a change of scene, though of course it does all 
those things. The very best thing about the geriatric robot is that it
>>listens<<. 'Tell me again,' it says, 'about how wonderful-dreadful
your children are to you. Tell me again that fascinating tale of the
coup of '63. Tell me again ... ' And it means it.''

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  4-Aug-83 09:38:08
Date: Thursday, August 4, 1983 9:26AM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #32
To: AIList@SRI-AI


AIList Digest            Thursday, 4 Aug 1983      Volume 1 : Issue 32

Today's Topics:
  Graph Theory - Finding Clique Covers,
  Knowledge Representation - Textnet,
  Fifth Generation & Msc. - Opinion,
  Lisp - Revised Maclisp Manual & Review of IQLisp
----------------------------------------------------------------------

Date: 2 Aug 83 11:14:51 EDT  (Tue)
From: Dana S. Nau <dsn%umcp-cs@UDel-Relay>
Subject: A graph theory problem

The following graph theory problem has arisen in connection with some
AI research on computer-aided design and manufacturing:

    Let H be a graph containing at least 3 vertices and having no
    cycles of length 4.  Find a smallest clique cover for H.

If there were no restrictions on the nature of H, the problem would be
NP-hard, but given the restrictions, it's unclear what its complexity
is.  A couple of us here at Maryland have been puzzling over the
problem for a week or so, and haven't been able to reduce any known
NP-hard problem to it.  However, the fastest procedure we have found
to solve the problem takes exponential time in the worst case.

Does anyone know anything about the computational complexity of this
problem, or about possible procedures for solving it?
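The problem statement can be made concrete with a brute-force sketch (my own illustration, not the Maryland group's procedure): partition the vertices into as few cliques as possible, pruning branches that cannot beat the best cover found so far. It is exponential in the worst case, as the posting observes; the open question is whether the no-4-cycle restriction permits anything better.

```python
from itertools import combinations

def is_clique(adj, verts):
    """True if every pair of vertices in verts is adjacent in adj."""
    return all(v in adj[u] for u, v in combinations(verts, 2))

def min_clique_cover(adj):
    """Smallest partition of the vertices of adj into cliques.
    adj maps each vertex to its set of neighbors."""
    verts = sorted(adj)
    best = [[v] for v in verts]            # trivial cover: all singletons

    def search(uncovered, cover):
        nonlocal best
        if len(cover) >= len(best):
            return                         # cannot beat the best found
        if not uncovered:
            best = [list(c) for c in cover]
            return
        v = uncovered[0]
        nbrs = [u for u in uncovered[1:] if u in adj[v]]
        # try every clique through v, largest candidates first
        for r in range(len(nbrs), -1, -1):
            for extra in combinations(nbrs, r):
                cand = [v, *extra]
                if is_clique(adj, cand):
                    rest = [u for u in uncovered if u not in cand]
                    search(rest, cover + [cand])

    search(verts, [])
    return best
```

A triangle needs one clique; a path on three vertices needs two.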

------------------------------

Date: 3 Aug 83 20:50:46 EDT  (Wed)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: Textnet

[Adapted from Human-Nets.  The organization and indexing
of knowledge are topics that should be of interest to the AI
community.  -- KIL]

Regarding the recent worldnet discussion, I thought I'd briefly 
describe my research and suggest how it might apply: My thesis work 
has been in the area of advanced text handlers for the online 
scientific community.  My system is called "Textnet" and shares much 
with both NLS/Augment and Hypertext.  It combines a hierarchical 
component (like NLS, though we allow and encourage multiple 
hierarchies for the same text) with the arbitrary linked network 
strategy of Hypertext.  The Textnet data structure resembles a 
semantic network in that links are typed and are valid manipulable 
objects themselves, as are "chunks" (nodes with associated text) and 
"tocs" (nodes capturing hierarchical info).

I believe that a Textnet approach is the most flexible for a national 
network.  In a distributed version of Textnet (distributing 
Hypertext/Xanadu has also been proposed), users create not only new 
papers and critiques of existing ones, but also link together existing
text (i.e., reindexing information), and build alternate
organizations.

There can be no mad dictator in such an information network.  Each 
user organizes the world of scientific knowledge as he/she desires.  
Of course, the system can offer helpful suggestions, notifying a user 
about new information needing to be integrated, etc.  But in this 
approach, the user plays the active role.  Rather than passively 
accepting information in whatever guise worldnet decides to promote, 
each must take an active hand in monitoring that part of the network 
of interest, and designing personalized search strategies for the 
rest.  (For example, I might decree that any information stemming from
a set of journals I deem absurd, shall be ignored.)  After all, any 
truly democratic system should and does require a little work from 
each member.

------------------------------

Date: 3 Aug 1983 0727-PDT
From: FC01@USC-ECL
Subject: Re: Fifth Generation

Several good points were made about the Japanese capabilities and
plans for 5th generation computers. I certainly didn't intend to say
that they weren't capable of building such machines, only that the
U.S. could easily beat them to it if the effort were deemed
worthwhile. I have to agree that the nature of systolic arrays is
quite different from the necessary architecture for inference engines,
but nevertheless for vision and speech applications, these arrays are 
quite clearly superior. I know of no other nation with a data flow
machine in operation (although the Japanese are most certainly working
on it). Virtually every theorem proving system in existence was
written in the U.S. All of this information was freely (and rightly in
my opinion) disseminated to the rest of the world. If we continue to
do the research and seek immediate profits at the expense of long term
development, there is no doubt in my mind that the Japanese will beat
us there. If on the other hand, we use our extreme expertise to make 
our development programs the best they can be, and don't make the same
mistake we made with robotics in the 70s, I feel we can build better
machines sooner.

        Lisp translators from interlisp to otherlisps seem very 
interesting to me. Perhaps someone could send me a pointer to an 
ARPA-net mailing address of the creator/maintainer of these programs.
To my knowledge, none operates w/out human assistance, but I could be 
wrong.  [Check with Hanson@SRI-IU for Rodney Brooks' Maclisp-to-Franz 
macro package.  It does not cover all features in Maclisp.  -- KIL]

        As to natural language translation using computers, it has
been tried for technical translation and has been quite successful as a
dictionary. As of 5 years ago, there were no real translators beyond
this for natural language.  Perhaps this has changed drastically. It
is my guess that without a system capable of learning, true
translation will never be done. It is simply too much to expect that a
human expert would be able to embody all of the knowledge of a
language into a program. Perhaps 90% translation could be achieved in
a few years, and 99% could probably be here w/in 10 years (between
similar languages).

        Speech recognition can be quite effective for relatively small
vocabularies by a given speaker in a particular language.
Understanding speech is a considerably slower process, but has the
advantage of trying to make sense of the sounds. It is probably not
realistic to say that general purpose speech understanding systems in
multiple languages with multiple speakers using large vocabularies
will be operational at real time performance in the next 10 years.

        Vision systems have been researched considerably for limited
robotics applications. Context boundedness seems to have a great
effect on the sort of IO that humans do. It is certainly not clear
that real time vision systems capable of understanding large varieties
of environments will be operational w/in the next 10 years.

        These problems are not simply solved by having very large
quantities of processing power! If they were, 5th generation computers
would not be such a risk. Even if the goals are not met, the advances
due to a large R+D program such as ICOT's will certainly have many
technological spinoffs with a widespread effect on the world
marketplace. It has been a longstanding problem with AI research that
people who demonstrate its results and people who report on these 
demonstrations both stress the possibilities for the future rather
than the realities of today. In many cases, the misconceptions spread
through the scientific community as well as the general public. Even
many computer science 'experts' that I've met have vast misconceptions
about what the current systems can in fact do, have in fact done, and
can be easily expanded to do. In many cases, NP complete problems have
been approached through heuristic means. This certainly works in many
cases, but as the sizes of problems increase, it is not clear that
these heuristics will apply as handily. NP completeness cannot be 
gotten around in general by building bigger or faster computers.
Computer learning has only been approached by a few researchers, and
few people would be considered 'intelligent' if they couldn't learn
from their mistakes.
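The point about NP-completeness and faster hardware can be put in numbers (a back-of-the-envelope illustration of my own, not Fred's): if running time grows as 2**n, a machine K times faster only extends the largest instance solvable in a fixed time budget by log2(K).

```python
import math

def extra_instance_size(speedup):
    """Additional problem size n gained from a K-fold speedup,
    when the cost of an instance of size n is 2**n."""
    return math.log2(speedup)

# a thousand-fold faster machine buys only about ten more "items"
print(round(extra_instance_size(1000), 1))
```

This is why bigger machines alone do not dissolve exponential search, and why the heuristic approaches mentioned above matter.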

        It doesn't bother me to see Kirk destroy computers with his
illogical ways. I've personally blown away many operating systems
accidentally with my illogical ways, and don't expect that anyone will
ever be able to build a 'perfect' machine. It does bother me when
people look at that as more than fantasy and claim it as scientific
evidence. Similarly, the 'robots' that are run by remote control
(rather like a radio-controlled airplane) sometimes upset me when they
fool people into thinking they are autonomous intellects.

                                Yet another flaming controversy
				starter by
                                        Fred

------------------------------

Date: 3 August 1983 15:04 EDT
From: Kent M. Pitman <KMP @ MIT-MC>
Subject: MIT-LCS TR-295: The Revised Maclisp Manual

They said it would never happen, but look for yourself...

                        The Revised Maclisp Manual
                             by Kent Pitman

                                Abstract

Maclisp is a dialect of Lisp developed at M.I.T.'s Project MAC (now
the MIT Laboratory for Computer Science) and the MIT Artificial
Intelligence Laboratory for use in artificial intelligence research
and related fields.  Maclisp is descended from Lisp 1.5, and many
recent important dialects (for example Lisp Machine Lisp and NIL) have
evolved from Maclisp.

David Moon's original document on Maclisp, The Maclisp Reference
Manual (alias the Moonual) provided in-depth coverage of a number of
areas of the Maclisp world. Some parts of that document, however, were
never completed (most notably a description of Maclisp's I/O system);
other parts are no longer accurate due to changes that have occurred
in the language over time.

This manual includes some introductory information about Lisp, but is 
not intended as a tutorial. It is intended primarily as a reference 
manual; particularly, it comes in response to users' pleas for more 
up-to-date documentation. Much text has been borrowed directly from
the Moonual, but there has been a shift in emphasis. While the Moonual
went into greater depth on some issues, this manual attempts to offer
more in the way of examples and style notes.  Also, since Moon had
worked on the Multics implementation, the Moonual offered more detail
about compatibility between ITS and Multics Maclisp. While it is hoped
that Multics users will still find the information contained herein to
be useful, this manual focuses more on the ITS and TOPS-20
implementations since those were the implementations most familiar to
the author.

The PitMANUAL, draft #14 May 21, 1983
                                   Saturday Evening Edition

Keywords: Artificial Intelligence, Lisp, List Structure, Maclisp,
          Programming Language, Symbol Manipulation

Ordering Information:

        The Revised Maclisp Manual
        MIT-LCS TR-295, $13.10

        Publications
        MIT Laboratory for Computer Science
        545 Technology Square
        Cambridge, MA 02139

About 300 copies were made. I don't know how long they'll last.
--kmp

------------------------------

Date: 1 August 1983 1747-EDT
From: Jeff Shrager at CMU-CS-A
Subject: IQLisp for the IBM-PC


        A review of IQLisp (by Integral Quality, 1983).

                Compiled by Jeff Shrager
                    CMU Psychology
                      7/27/83

The following comments refer to IQLisp running on an IBM-PC XT/256K
(you tell IQLisp the host machine's memory size at startup).  I spent
two two-hour (approximately) sessions with IQLisp just going through
the manual and hacking various features.  Then I tried to implement a
small production system interpreter (which took another three hours).

I. Things that make IQLisp more attractive than other micro Lisp
   systems that I have seen.

  A. The general workspace size is much larger than most due to the
     IBM-PC XT's expanded capacity.  IQLisp can take advantage of the
     increased space and the manual explains in detail how memory
     can be rearranged to take advantage of different programming
     requirements.  (But, see II.G.) (See also, summary.)
  B. The Manual is complete and locally legible. (But see II.D.)
     The internal specifications manual is surprisingly clear and
     complete.
  C. There is a window package. (But the windows aren't implemented
     to scroll under one another so the feature is more-or-less
     useless.)
  D. There is a macro facility.  This feature is important to both
     speed and eventual implementation of a compiler. (But see II.B.)
     Note that the manual teaches the "correct" way to write
     fexprs -- i.e., with macros.
  E. It uses the 8087 FP coprocessor if one exists. (But see II.A.)
  F. Integer bignums are supported.
  G. Arrays are supported for various data types.
  H. It has good "simple" I/O facilities.
     1. Function key support.
     2. Single keystroke input.
     3. Read macros. (No print macros?)
     4. A (marginal) window facility.
     5. Multiple streams.
  I. The development package is a useful programming tool.
     1. Error recovery tools are well designed.
     2. A complete structure editor is provided. (But, see II.I.)
     3. Many useful macros are included (e.g., backquote).
  J. It seems to be reasonably fast.  (See summary.)
  K. Stack frame hacking functions are provided which permit error
     control and evaluations in different contexts. (But, see II.H.)
  L. There is a clean interface to DOS.  (The "DIR" function is
     especially useful and cleverly implemented.)


II. Negative aspects of IQLisp.  (* Things marked with a "*" indicate
    important deficiencies.)

**A. There is no compiler!
 *B. Floating point is not supported without the 8087.  One would
     think that some sort of even very slow FP would be provided.
 *C. Casing is completely backwards.  Uppercase is demanded by IQLisp
     which forces one to put on shift lock (in a bad place on the IBM
     PC).  If any case dependency is implemented it should be the
     opposite (i.e., demand lower case) but case sensitivity should
     be switch controllable -- and default OFF!
 *D. The manual is poorly organized.  It is very difficult to find
     a particular topic since there are no complete indexes and the
     topics are split over several different sections.
  E. Error recovery is sometimes poor.  I have had three or four
     occasions to reboot the PC because IQLisp had gone to lunch.
     Once this was because the 8087 was not present and I had told
     the system that it was.  I don't know what caused the other
     problems.
  F. The file system supports only sequential files.
  G. The stack is fixed at 64K maximum which isn't very much and
     permits only about 700 levels of binding-free recursion.
  H. No new features of larger Lisp systems are provided.  For
     example: closures, flavors, etc.  This is really not a
     reasonable complaint since we're talking 256K here.
  I. There is no screen editor for functions.


III. Summary.

I was disappointed by IQLisp but perhaps this is because I am still
dreaming of having a Lisp machine for under $5,000.  IQ has obviously
put a very large amount of effort into the system and its
documentation (the latter being at least as important as the former).

Although one does not have all the functionality of a Lisp machine in
IQLisp (or even nearly so) I think that they have done an admirable
job within the constraints of the IBM-PC.  Some of the features are
overkill (e.g., the window system, which is pretty worthless in the
way provided and in a non-graphics environment).

My production system was not the model of efficient PS hacking.  It
was not meant to be.  I wanted to see how IQLisp compared with our
Vax VMS Franz system.  I didn't use a RETE net or efficient memory
organization.  IQ didn't do very well against even a heavily loaded
Vax (also interpreted lisp code). The main problem was space, not
speed.  This is to be expected on a machine without virtual memory.
Since there are no indexed file capabilities in IQLisp, the user is
strictly limited by the available core memory. I think that it's
going to be some time before we can do interesting AI with a micro.
However, (1) I think that I could have rewritten my production system
to be much more efficient in both space and time.  It may have run
acceptably with some careful tuning (what do you want for three
hours!?). And (2) we are going to try to use the system in the near
future for some human-computer interaction experiments -- as a
single-subject workstation for learning Lisp.  I see no reason that
it should not perform acceptably in domains which are less
information intensive than AI.

The starred (*) items in section II above are major stumbling blocks
to using IQLisp in general.  Of these, it is the lack of a Lisp
compiler which stops me from recommending it to everyone.  I expect
that this will be corrected in the near future because they have all
the required underpinnings (macros, assembly interface, etc).  Why
don't people just write a simple little lisp system and a whizzy
compiler?

------------------------------

End of AIList Digest
********************
 5-Aug-83 17:29:29-PDT,13173;000000000001
Mail-From: LAWS created at  5-Aug-83 17:26:44
Date: Friday, August 5, 1983 5:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #33
To: AIList@SRI-AI


AIList Digest            Saturday, 6 Aug 1983      Volume 1 : Issue 33

Today's Topics:
  Automatic Translation - FRANZLATOR & Natural Language,
  Expert Systems - Survey Alert,
  Fifth Generation - Opinions,
  Computational Complexity - Parallelism,
  Distributed AI - Problem Solving Bibliography,
  Literature Sources - Requests,
  Workstations - Request,
  Job - Stanford Heuristic Programming Project
----------------------------------------------------------------------

Date: Thu, 4 Aug 83 12:06 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: FRANZLATOR inter-dialect translation system


We have built a rule-driven lisp-to-lisp translation system
(FRANZLATOR) in Franz lisp and have used it to translate KL-ONE from
Interlisp to Franz. ("We" includes people here at Penn and at BBN and
CCA.)  The system is modular so that modifying it to work with a
different source and target dialect should involve only changing
several data bases.

The translator is organized as a two-pass system which is applied to a
set of source-dialect files and produces a corresponding set of
target-dialect files and a set of files containing notes about the
translation (e.g.  possible errors).

During the first pass all of the source files are scanned to build up
a database of information about the functions defined in the file
(e.g. type of function, arity, how it evals its args).  In the second
pass the expressions in the source files are translated and the
results written to the target files. The translation of an
s-expression is driven by transformation rules applied according to an
"eval-order" schedule (i.e. the arguments to a function call are
translated before the call to the function itself). An additional
initial pass may be required to perform certain character-level
transformations, although this can often be done through the use of
multiple readtables.

The actual translation is done by a set of rewrite rules, each rule
taking an s-expression into one or more resultant s-expressions.  In
addition to the usual "pattern" and "result" parts, rules can be
easily augmented with arbitrary conditions and actions and can have
several other attributes which control their application (e.g. a
priority). Variables are represented using the "backquote" convention.
Examples of rules for Interlisp->Franz are:
   (NIL nil)
   ((NLISTP ,x) (not (dtpr ,x)))
   ((PROG1 ,@args) (prog2 nil ,@args))
   ((DECLARE: ,@args) ,(translateDeclare: ,args))
   ((and ,@x (and ,@y) ,@z) (and ,@x ,@y ,@z) -cyclic)

The translation rules are presented to the system in the form
described above and are immediately "compiled" (by macro-expansion)
into Lisp code which is quite efficient and can be, of course, further
compiled by LISZT.  The pattern matching operation, for example, is
"open coded" into a conjuction of primitive tests and action (e.g. EQ,
EQUAL, LENGTH, SETQ).
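The rule machinery described above can be sketched in miniature (this is my own toy Python, not the actual FRANZLATOR code): s-expressions as nested lists, ",x" pattern variables binding a single form, ",@x" binding a tail segment, and eval-order application in which arguments are translated before the enclosing call.

```python
def match(pat, expr, binds):
    """Try to match a rule pattern against expr, filling binds."""
    if isinstance(pat, str) and pat.startswith(',@'):
        return False                       # segment vars only inside a list
    if isinstance(pat, str) and pat.startswith(','):
        binds[pat[1:]] = expr              # ",x" matches any single form
        return True
    if isinstance(pat, list):
        if not isinstance(expr, list):
            return False
        for i, p in enumerate(pat):
            if isinstance(p, str) and p.startswith(',@'):
                binds[p[2:]] = expr[i:]    # ",@x" grabs the remaining tail
                return True
            if i >= len(expr) or not match(p, expr[i], binds):
                return False
        return len(pat) == len(expr)
    return pat == expr

def instantiate(res, binds):
    """Build the result form, splicing ",@x" segments in place."""
    if isinstance(res, str) and res.startswith(',') and not res.startswith(',@'):
        return binds[res[1:]]
    if isinstance(res, list):
        out = []
        for r in res:
            if isinstance(r, str) and r.startswith(',@'):
                out.extend(binds[r[2:]])
            else:
                out.append(instantiate(r, binds))
        return out
    return res

RULES = [                                  # two rules from the posting
    (['NLISTP', ',x'], ['not', ['dtpr', ',x']]),
    (['PROG1', ',@args'], ['prog2', 'nil', ',@args']),
]

def translate(expr):
    if isinstance(expr, list) and expr:    # eval-order: arguments first
        expr = [expr[0]] + [translate(e) for e in expr[1:]]
    for pat, res in RULES:
        binds = {}
        if match(pat, expr, binds):
            return instantiate(res, binds)
    return expr
```

The real system compiles each rule into open-coded Lisp rather than interpreting patterns as this sketch does, but the matching and splicing behavior is the same idea.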

If you are interested in more information, contact me.

- Tim at UPENN (csnet)

------------------------------

Date: Friday, 5 August 1983 12:43:04 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Machine translation

        The thing that makes any kind of general purpose machine
translation extremely hard is that there generally aren't one-to-one
correspondences between words, phrases, or sometimes concepts in two
different human languages.  A real translator essentially reads and
understands the text in one language, and then generates the
appropriate text in the other language.  Since understanding general
texts requires huge amounts of real-world knowledge, unrestricted
machine translation will arrive about the time AI programs can pass
the Turing test.  In my opinion, this will be substantially longer
than ten years.

------------------------------

Date: Thu 4 Aug 83 09:25:16-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Summary

The August issue of IEEE Spectrum contains an article by William B.  
Gevarter (of NASA) titled "Expert Systems: Limited but Powerful".  The
table of existing expert systems shows 79 systems in 16 categories.  
The text includes brief descriptions of Dendral, Mycin, R1, and 
Internist.

                                        -- Ken Laws

------------------------------

Date: 4 Aug 83 8:56:21-PDT (Thu)
From: decvax!linus!philabs!ras @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: philabs.27320

        Bully for you Fred! I also believe the Japanese do not have
        the know-how nor the manpower to create such a machine.
        They make great memory devices, but that's where it ends.

                                        Rafael Aquino !plabs

------------------------------

Date: Thu 4 Aug 83 13:41:13-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Re: Fifth Generation Book Review


As a frequent visitor to the Soviet Union, and regular reader of
Kibernetica, I don't get the feeling that the "Russians are out in
left field" - nor do I feel that the book is particularly
illuminating.  It is readable and provides some excellent insight for
the non-professional.  However, the hype and the reality are carefully
interwoven.  After all, how professional is the "pointing of a
trembling finger at the Japanese"?  Take your pick.

                                                Al Davis

                                                AI Architecture
                                                Fairchild AI Labs

------------------------------

Date: 4 Aug 1983 23:05:15-PDT
From: borgward.umn-cs@Rand-Relay
Subject: Re: Fifth Generation Computing

I do know of other nations with a data flow machine in operation.  
Gurd and Watson have one that works at Manchester in England.  I think
that the French LAU system also works.  Such lapses in attention are
what make Americans unpopular in Europe.  We also import a lot of AI
research from Europe, including Prolog.

--Peter Borgwardt, University of Minnesota

------------------------------

Date: Fri 5 Aug 83 14:06:06-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness and parallelism

        In AIList V#32 Fred comments that "NP-completeness cannot be 
gotten around in general by building bigger or faster computers".  My
guess would be that parallelism may offer a way to reduce the order of
an algorithm, perhaps even to a polynomial order (using a machine with
"infinite parallel capacity", closely related to Turing's machine with
"infinite memory"). For example, I have heard of work developing 
sorting algorithms for parallel machines which have a lower order than
any known sequential algorithm.

        Perhaps more powerful machines are truly the answer to some of
our problems, especially in vision analysis and data base searching.  
Has anyone heard of a good book discussing parallel algorithms and 
reduction in problem order?
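A standard example of the parallel-order point is odd-even transposition sort (a sketch of my own, not from the work David mentions): n rounds suffice, and every compare-exchange within a round touches a disjoint pair, so with n processors each round takes constant time, giving O(n) parallel time versus O(n log n) for the best sequential comparison sorts.

```python
def odd_even_transposition_sort(a):
    """Sort in n rounds; within each round every compare-exchange
    acts on a disjoint pair and could run simultaneously, so the
    parallel time is O(n) with n processors."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2                    # alternate even/odd pairs
        for i in range(start, n - 1, 2):   # independent compare-exchanges
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

The inner loop is written sequentially here, but nothing in one iteration depends on another, which is exactly what a parallel machine exploits.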

David Rogers

DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Thu 4 Aug 83 17:41:01-PDT
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Distributed Problem Solving: An Annotated Bibliography

 For all of you who expressed interest in the annotated bibliography
on Distributed Problem Solving, here is some important information on
how to ftp a copy if you don't know this already.

The bibliography manuscript file "<vsingh.dps>dpsdis.bib" will be kept
on sumex-aim.arpa.  Please login as "anonymous" with password 
"sumexguest" (one word).

The file is by no means complete as you can see.  It will be 
continually updated.  You may notice that the file is prepared for 
Scribe formatting.

Please mail additional entries/annotations/corrections/suggestions to 
me and I will incorporate them in the file as soon as possible.  The 
turnaround time will be a lot shorter if the new entries are also in 
Scribe format.  If you know anything about Scribe, please save me a 
lot of effort and put your entries in Scribe format.

For those of you that did not see the original message, I have 
reproduced it below.

-------------------------------------------------------------------------------


This is to request contributions to an annotated bibliography of 
papers in *Distributed Problem-Solving* that I am currently compiling.
My plan is to make the bibliography available to anybody that is 
interested in it at any stage in its compilation.  Papers will be from
many diverse areas: Artificial Intelligence, Computer Systems 
(especially Distributed Systems and Multiprocessors), Analysis of 
Algorithms, Economics, Organizational Theory, etc.

Some miscellaneous comments.  My definition of distributed 
problem-solving is a very general one, namely "the process of many 
entities engaged in solving a problem", so feel free to send a 
contribution if you are not sure that a paper is suitable for this 
bibliography.  I also encourage you to make short annotations; more 
than 5 sentences is long.  All annotations in the bibliography will 
carry a reference to the author.  If your bibliography entries are in 
Scribe format that's great because the entire bibliography will be in 
Scribe.

Vineet Singh (VSINGH@SUMEX-AIM.ARPA)

------------------------------

Date: 1 Aug 83 4:22:03-PDT (Mon)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: AI Journals
Article-I.D.: cbscd5.365

I am interested in subscribing to a computer science journal(s) that
deals primarily with artificial intelligence.  Could anyone who knows
of such journals mail me their names?  I will
post a list of all those sent my way.  Thanks in advance,

Larry Cipriani cbosgd!cbscd5!lvc

------------------------------

Date: 4 Aug 83 0:26:53-PDT (Thu)
From: hplabs!hp-pcd!jrf @ Ucb-Vax
Subject: AI~Geography
Article-I.D.: hp-pcd.1455



Please send info on what's available in Geography (PROSPECTOR,
cartography, etc.).  Thanks.

jrf

------------------------------

Date: 05 Aug 83  1417 PDT
From: Fred Lakin <FRD@SU-AI>
Subject: LISP & SUNs ...

I am interested in connections between Franz LISP and SUN workstations.
Like how far along is Franz on the SUN?  Is there some package which
allows Franz on a VAX to use a SUN as a display device?  Also, now
that i think of it, any other LISP's which might run on both SUNs and
VAXes ...

Any info on this matter would be appreciated.  Thanks, Fred Lakin

------------------------------

Date: Thu 4 Aug 83 09:57:01-PDT
From: Larry Fagan  <FAGAN@SUMEX-AIM.ARPA>
Subject: Programmer - ONCOCIN Project: Stanford Heuristic Programming
         Project

Programmer - ONCOCIN Project:  Stanford Heuristic Programming Project

        This position will involve applications programming for an 
oncology protocol management system known as ONCOCIN.  This project 
with Ted Shortliffe as principal investigator, represents an 
application of expert systems to the treatment of cancer patients, and
is currently in daily use by physicians.  The job requires significant
experience with artificial intelligence techniques and the LISP or
Interlisp languages.  The applicant must be willing to learn an
already existing, large expert system.  Masters level training in
computer science and previous experience with personal workstations
are highly desirable.  Although the tasks required will be varied, the
emphasis will be on artificial intelligence aspects of the oncology
research work:

*day-to-day management of the Interlisp programming efforts;
*participation in the design as well as the implementation of system
 capabilities;
*documentation of the system on an ongoing basis (system
 overview/description as well as software documentation);
*supervisory coordination of students and part-time programmers who
 may also be working on related projects;
*assistance with occasional non-programming matters important to the
 smooth running of the project and to the efficient and effective
 performance of the system in the clinical environment;
*assistance with system demonstrations for visitors and at meetings;
*assistance with preparation of portions of annual reports and
 funding proposals;
*an ability to work closely with the Chief Programmer, who will
 coordinate the Interlisp efforts with other developing aspects of
 the total project.

Salary:  will follow Stanford University guidelines for Scientific 
Programmer III in accordance with the level of training and prior 
experience.

Contact: Larry Fagan, M.D., Ph.D.  (FAGAN@SUMEX)
         Project Director, ONCOCIN
         Stanford University Medical Center
         TC-117, Dept. of General Internal Medicine
         Stanford, Calif. 94305 (415)497-6979

------------------------------

End of AIList Digest
********************
 8-Aug-83 13:28:08-PDT,12471;000000000001
Mail-From: LAWS created at  8-Aug-83 13:25:03
Date: Monday, August 8, 1983 1:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #34
To: AIList@SRI-AI


AIList Digest             Monday, 8 Aug 1983       Volume 1 : Issue 34

Today's Topics:
  Fifth Generation - Opinion,
  Translation - Natural Language,
  Computational Complexity - Parallelism,
  LOGO - Request,
  Lab Descriptions - USENET Sites,
  Conferences - AAAI Panel to Honor Alexander Lerner
----------------------------------------------------------------------

Date: 5 Aug 83 20:14:19-PDT (Fri)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!ditzel @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.377

Whereas it is true the United States holds a substantial lead in AI
over the Japanese, it really is beyond me how a person can believe
that they do not have the resources to overcome such a lead.  In my
*opinion*, several things make a Japanese lead in AI machines
possible.  Like:

*It is a national effort with an attempt to coordinate goals. The fact
that the project will be a coordinated effort rather than various 
incongruously related developments should facilitate compatibility
among the different topics.

*It may well be that Japan will have to go to the outside world to
make their project a success. What of it...a success is still a
success.

*In addition, to believe that a priority project supported by both
government and industry will not try to encourage, educate, and
nurture talented individuals toward the topics covered by the 5th
generation is not realistic.

*Worse yet, to believe such a project will not have an intense
political and social effect on Japan is also ignoring reality. If and
when successes in project goals do come, various segments of the
society and industrial sectors may begin to participate.

*The 5th generation project at least is visionary, a bit idealistic
and very ambitious. The outside 'egos' don't have an equivalent
project in the United States. (i.e.-one that has substantial backing
from industry and government *and* has fairly substantial financing
for the next five to ten years).

The point is that we are very early in the project.  Wait a bit; we
may learn a thing or two if we are not energetic enough.



                                            cld

------------------------------

Date: 5 Aug 83 14:50:43-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.376

Concerning your lack of concern about the Japanese:

They may not have the manpower now, but they have been hiring outside 
Japan and giving some pretty strong support to their researchers.  I'd
go in a minute if they made me an offer...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 5 Aug 83 12:51:22-PDT (Fri)
From: decvax!linus!utzoo!watmath!echrzanowski @ Ucb-Vax
Subject: 5th generation computers
Article-I.D.: watmath.5613

I recently had an opportunity to show a visiting prof from the
University of Kyoto around our facilities. During one of our
conversations I asked him about the 5th generation computers in Japan.
His response was that it was only a large government promotional
campaign and nothing more.  Sure they are building some new computers
but not to the degree that we are expected to believe.


If anyone else has any ideas or comments on 5th generation computers I
would like to see them.


                                   (watmath)!echrzanowski

------------------------------

Date: 6 Aug 83 13:01:14-PDT (Sat)
From: decvax!genrad!mit-eddie!smh @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: mit-eddie.551

About the professor from Kyoto who claimed that the 5th generation 
project was only a big government promotional effort:

Maybe so, maybe not.  Weren't there some similar gentlemen in
Washington making similar assurances about a different matter around 7
Dec 1941?

------------------------------

Date: Sat, 6 Aug 83 19:42 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: natural language translation


    ... A real translator essentially reads and understands the
    text in one language, and then generates the appropriate
    text in the other language.  Since understanding general
    texts requires huge amounts of real-world knowledge,
    unrestricted machine translation will arrive about the time
    AI programs can pass the Turing test.  In my opinion, this
    will be substantially longer than ten years....

The long-standing machine translation project at the University of
Texas at Austin is not a system based on a deep understanding of the
text being translated yet has been giving good results in translating
technical manuals from German to English. Slocum reported on its
status in the ACL Conference on Applied Natural Language Processing
held in Santa Monica in February 1983.  In this case, "good" meant
requiring less post-translation editing than the output of human
translators.

------------------------------

Date: 6 Aug 83 11:09:57 EDT  (Sat)
From: Craig Stanfill <craig%umcp-cs@UDel-Relay>
Subject: NP-completeness and parallelism

David Rogers commented that in parallel computing it makes sense to
assume a processor with an infinite number of processing elements,
much as a Turing machine has an infinite amount of memory.  He then
goes on to suggest that this might allow the effective solution of
NP-hard problems.

If we do this, we need to consider the processor-complexity of our
algorithms, not just the time-complexity.  For example, are there
algorithms for NP-hard problems which are linear in time but NP-hard
in the number of processors?  I suspect this is the case.

Parallelism is not the solution to combinatorial explosions; it is
just as limiting to use 2**n processors as it is to use 2**n time.
However, the speedup is probably worth the effort; I would rather work
with a computer that uses 64,000 processors for one second than one
which uses 1 processor for 64,000 seconds.  Now, if we can just figure
out how to do this ...
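[One concrete instance of the "linear time, exponential processors"
trade-off Stanfill asks about is brute-force satisfiability testing:
assign one processor to each of the 2**n truth assignments and let
each check its own assignment in time linear in the formula size.  A
sketch, in modern Python notation rather than anything from 1983; the
clause encoding and function name are mine, and the 2**n "processors"
are simulated serially. -- Ed.]

```python
from itertools import product

def sat_parallel_sketch(clauses, n_vars):
    """Brute-force SAT: conceptually one processor per truth assignment.

    Each clause is a list of nonzero ints; literal k means variable
    abs(k)-1, negated if k < 0 (DIMACS-like encoding).  Each of the
    2**n_vars conceptual processors checks its assignment in time
    linear in the formula size; here they run one after another.
    """
    for bits in product([False, True], repeat=n_vars):
        # One "processor": does this assignment satisfy every clause?
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return list(bits)
    return None  # unsatisfiable
```

Each conceptual processor does only linear work, but the processor
count is exponential in n -- exactly the processor-complexity question
raised above.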

------------------------------

Date: 7 Aug 83 16:57:17-PDT (Sun)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: NP-completeness and parallelism
Article-I.D.: pyuxll.388

A couple of clarifications are in order here:

1. NP-completeness of a problem means, among other things, that the
   best known algorithm for that problem has exponential
   worst-case running time on a serial processor.  That is not
   intended as a technical definition, just an operational one.
   Moreover, all NP-complete problems are related by the fact
   that if a polynomial-time algorithm is ever discovered for
   any of them, then there is a polynomial-time algorithm for
   all, so the (highly oversimplified!) definition of
   NP-complete, as of this date, is "intrinsically exponential."

2. Perhaps obvious, but I will say so anyway: n processors yoked in
   parallel can't do better than to be n times faster than a
   single serial processor. For some problems (e.g. sorting),
   the speedup is less.

The bottom line is that the "biggest tractable problem" is
proportional to the log of the computing power at your disposal;
whether you increase the power by speeding up a serial processor or by
multiplying the number of processors is of small consequence.
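[The "log of the computing power" claim can be made concrete: for a
brute-force algorithm taking 2**n operations, the largest solvable
instance is n = log2(processors * operations-per-processor), so
64,000 processors for one second and one processor for 64,000 seconds
buy exactly the same n.  A small sketch in modern Python; the function
name is mine. -- Ed.]

```python
import math

def biggest_tractable_n(processors, ops_per_processor=1.0):
    # For a 2**n-operation brute-force algorithm, the largest solvable
    # instance size grows only logarithmically with total computing
    # power, regardless of how that power is split between parallelism
    # and serial speed.
    return int(math.log2(processors * ops_per_processor))
```

Doubling the computing power, by either route, buys only about one
more unit of problem size.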

Now for the silver lining.  NP-complete problems often can be tweaked 
slightly to yield "easy" problems; if you have an NP-complete problem 
on your hands, go back and see if you can restrict it to a more
readily soluble problem.

Also, one can often restrict to a subproblem which, while it is still 
NP-complete, has a heuristic which generates solutions which aren't 
too far from optimal.  An example of this is the Travelling Salesman 
Problem.  Several years ago Lewis, Rosenkrantz, and Stearns at GE
Research described a heuristic that yielded solutions that were no
worse than twice the optimum if the graph obeyed the triangle 
inequality (i.e. getting from A to C costs no more than going from A
to B, then B to C), a perfectly reasonable constraint.  It seems to me
that the heuristic ran in O(n-squared) or O(n log n), but my memory
may be faulty; low-order polynomial in any case.
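[For readers who want to experiment: one well-known heuristic
achieving a factor-of-two bound under the triangle inequality is the
minimum-spanning-tree "double tree" construction: build an MST, walk
it in preorder, and shortcut repeated vertices.  This is a sketch in
modern Python, not necessarily the GE Research algorithm, and all the
names are mine. -- Ed.]

```python
import math

def mst_prim(points):
    # Prim's algorithm on the complete Euclidean graph; O(n**2).
    # Returns the tree as adjacency lists.
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    adj = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            d = math.dist(points[u], points[v])
            if not in_tree[v] and d < best[v]:
                best[v], parent[v] = d, u
    return adj

def double_tree_tour(points):
    # Preorder walk of the MST with shortcuts past repeated vertices.
    adj = mst_prim(points)
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(reversed(adj[u]))
    return tour

def tour_length(points, tour):
    # Length of the closed tour visiting the cities in the given order.
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

By the triangle inequality the shortcut preorder walk costs no more
than twice the MST weight, and the MST weighs no more than an optimal
tour (delete one tour edge to get a spanning tree), which gives the
factor of two.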

So: "think parallel" may or may not help.  "Think heuristic" may help
a lot!

=Ned=

------------------------------

Date: 5 Aug 83 17:56:34-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: LOGO wanted
Article-I.D.: allegra.1721

A colleague of mine is looking for an implementation of LOGO, or any
similar language, under UNIX (one that already ran on both Suns and
PDP-11/23's would be ideal, but fat chance of that, eh?).  Failing
that, she would like to find a reasonably portable version (e.g., in
MacLisp).  In any case, if you have suggestions, please send them to
me and I shall forward.

Cheers, John ("This Has Been A Public Service Announcement")
DeTreville Bell Labs

------------------------------

Date: 5 Aug 83 13:15:42-PDT (Fri)
From: decvax!linus!utzoo!utcsrgv!kramer @ Ucb-Vax
Subject: Re: USENET and AI
Article-I.D.: utcsrgv.1898

We at the University of Toronto have a strong AI group that has been
in action for years:

        Area                       Major Project

  Knowledge Representation     PSN (procedural semantic network)

  Databases and Knowledge      TAXIS
  Representation

  Vision                       ALVEN (left ventricular motion understanding)

  Linguistics                  Speech acts


A major summary of our activities is being prepared for publication in
the AAAI magazine.

Our research is being done on VAXen under UNIX.  Presently we are at
utcsrgv; in September we will be moving to a VAX dedicated to AI
work.

------------------------------

Date: 6 Aug 83 13:40:14-PDT (Sat)
From: ihnp4!houxm!hocda!spanky!burl!duke!unc!bts @ Ucb-Vax
Subject: More AI on USENET only
Article-I.D.: unc.5673

     The Computer Science Department at UNC-Chapel Hill is another
site with (some) AI interests that is on USENET but not ARPANET.  We
are one of CSNET's phone sites, but this still doesn't allow us to FTP
files. (Yes, in part, this is a plea for those folks who can FTP to
share with the rest of us on USENET!)

     Our functional programming group has a couple of projects with
some AI overtones.  We have begun to look at AI style programming
languages for Gyula Mago's string reduction tree-machine.  This is a
small-grain parallel computer which executes Backus' FFP language.
We're also looking at automatic FP program transformations.

     Along with our neighbors at Duke University, we have some Prolog
programmers.  Right now, that's C-Prolog at UNC and NU7 UNIX Prolog at
Duke.

        Bruce Smith, UNC-Chapel Hill
        duke!unc!bts (USENET)
        bts.unc@udel-relay (other NETworks)

------------------------------

Date: 5 Aug 83 15:11:37 EDT  (Fri)
From: JACK MINKER <minker%umcp-cs@UDel-Relay>
Subject: AAAI Panel to Honor Alexander Lerner

        In conjunction with the AAAI meeting in Washington, D.C. a
session is being held to honor the 70th birthday of the Soviet
cyberneticist, Professor Alexander Lerner. The session will be held
on:

                Date: Tuesday, August 23, 1983
                Time: 7:00 PM
                Location: Georgetown Room, Concourse Level

        The session will consist of a brief description of Dr.
Lerner's career, followed by a panel discussion on:

                Future Directions in Artificial Intelligence

The following have agreed to be on the panel with me:

                Nils Nilsson
                John McCarthy
                Patrick Winston

Others will be invited to participate in the panel session.

        We hope that you will be able to join us to honor this
distinguished scientist.


                Jack Minker
                University of Maryland

------------------------------

End of AIList Digest
********************
 9-Aug-83 17:26:48-PDT,14290;000000000001
Mail-From: LAWS created at  9-Aug-83 10:18:41
Date: Tuesday, August 9, 1983 10:00AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #35
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 35

Today's Topics:
  Expert Systems - Bibliography,
  Learning - Bibliography,
  Logic - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:04:09-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Bibliographies

The bibliographies in this and the following three issues were
extracted from the new-reports list put out by the Stanford Math/CS
Library.  I have sorted the citations as best I could from just the
titles.  Reports on planning and problem solving have not been pulled
out separately--they are listed here either by application domain
or by technique.

                                        -- Ken Laws

------------------------------

Date: Tue 9 Aug 83 08:44:04-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Bibliography

This is an update to the titles previously reported in AIList.

J.S. Aikins, J.C. Kunz, E.H. Shortliffe, and R.J. Fallat, PUFF: An
Expert System for Interpretation of Pulmonary Function Data.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-931; Stanford U. Comp. Sci. Dept.
Heuristic Programming Project, HPP-82-013, 1982.  21p.

C. Apte, Expert Knowledge Management for Multi-Level Modelling.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-41, 1982.

B.G. Buchanan and R.O. Duda, Principles of Rule Based Expert Systems.
Stanford U. Comp. Sci. Dept., STAN-CS-82-926; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-014, 1982.  55p.

B.G. Buchanan, Partial Bibliography of Work on Expert Systems.  
Stanford U. Comp. Sci. Dept., STAN-CS-82-953; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-30, 1982.  13p.

A. Bundy and B. Silver, A Critical Survey of Rule Learning Programs.  
Edinburgh U. A.I. Dept., Res. Paper 169, 1982.

R. Davis, Expert Systems: Where are We? And Where Do We Go from Here?
M.I.T. A.I. Lab., Memo 665, 1982.

D. Dellarosa and L.E. Bourne, Jr., Text-Based Decisions: Changes in
the Availability of Facts due to Instructions and the Passage of
Time.  Colorado U. Cognitive Sci. Inst., Tech.  rpt. 115-ONR, 1982.

T.G. Dietterich, B. London, K. Clarkson, and G. Dromey, Learning and
Inductive Inference (a section of the Handbook of Artificial
Intelligence, edited by Paul R.  Cohen and Edward A. Feigenbaum).  
Stanford U. Comp. Sci. Dept., STAN-CS-82-913; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-010, 1982.  215p.

G.A. Drastal and C.A. Kulikowski, Knowledge Based Acquisition of Rules
for Medical Diagnosis.  Rutgers U. Comp. Sci. Res. Lab., CBM-TM-97, 
1982.

N.V. Findler, An Expert Subsystem Based on Generalized Production
Rules.  Arizona State U. Comp. Sci. Dept., TR-82-003, 1982.

N.V. Findler and R. Lo, A Note on the Functional Estimation of Values
of Hidden Variables--An Extended Module for Expert Systems.  Arizona
State U. Comp. Sci.  Dept., TR-82-004, 1982.

K.E. Huff and V.R. Lesser, Knowledge Based Command Understanding: An
Example for the Software Development Environment. Massachusetts U.
Comp. & Info. Sci. Dept., COINS Tech.Rpt. 82-06, 1982.

J.K. Kastner, S.M. Weiss, and C.A. Kulikowski, Treatment Selection and
Explanation in Expert Medical Consultation: Application to a Model of
Ocular Herpes Simplex.  Rutgers U. Comp.  Sci. Res. Lab., CBM-TR-132,
1982.

R.M. Keller, A Survey of Research in Strategy Acquisition.  Rutgers U.
Comp. Sci. Dept., DCS-TR-115, 1982.

V.E. Kelly and L.I. Steinberg, The Critter System: Analyzing Digital
Circuits by Propagating Behaviors and Specifications. Rutgers U.
Comp. Sci. Res. Lab., LCSR-TR-030, 1982.

J.J. King, An Investigation of Expert Systems Technology for
Automated Troubleshooting of Scientific Instrumentation.  Hewlett
Packard Co. Comp. Sci. Lab., CSL-82-012; Hewlett Packard Co. Comp.
Res.  Center, CRC-TR-82-002, 1982.

J.J. King, Artificial Intelligence Techniques for Device
Troubleshooting.  Hewlett Packard Co. Comp. Sci. Lab., CSL-82-009; 
Hewlett Packard Co. Comp. Res. Center, CRC-TR-82-004, 1982.

G.M.E. Lafue and T.M. Mitchell, Data Base Management Systems and
Expert Systems for CAD.  Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-028,
1982.

R.J. Lytle, Site Characterization using Knowledge Engineering -- An
Approach for Improving Future Performance.  Cal U. Lawrence Livermore
Lab., UCID-19560, 1982.

T.M. Mitchell, P.E. Utgoff, and R. Banerji, Learning by
Experimentation: Acquiring and Modifying Problem Solving Heuristics.
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-31, 1982.

D.S. Nau, Expert Computer Systems, Computer, Vol. 16, No. 2, pp.
63-85, Feb. 1983.

D.S. Nau, J.A. Reggia, and P. Wang, Knowledge-Based Problem Solving
Without Production Rules, IEEE 1983 Trends and Applications Conf., pp.
105-108, May 1983.

P.G. Politakis, Using Empirical Analysis to Refine Expert System
Knowledge Bases.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-130, Ph.D.
Thesis, 1982.

J.A. Reggia, P. Wang, and D.S. Nau, Minimal Set Covers as a Model for
Diagnostic Problem Solving, Proc. First IEEE Comp. Soc. Int. Conf. on
Medical Computer Sci./Computational Medicine, Sept. 1982.

J.A. Reggia, D.S. Nau, and P. Wang, Diagnostic Expert Systems Based on
a Set Covering Model, Int. J. Man-Machine Studies, 1983.  To appear.

M.D. Rychener, Approaches to Knowledge Acquisition: The Instructable
Production System Project.  Carnegie Mellon U. Comp. Sci. Dept.,
1981.

R.D. Schachter, An Incentive Approach to Eliciting Probabilities.  
Cal. U., Berkeley. O.R. Center, ORC 82-09, 1982.

E.H. Shortliffe and L.M. Fagan, Expert Systems Research: Modeling the
Medical Decision Making Process.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-932; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-003, 1982.  23p.

M. Suwa, A.C. Scott, and E.H. Shortliffe, An Approach to Verifying
Completeness and Consistency in a Rule Based Expert System.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-922, 1982.  19p.

J.A. Wald and C.J. Colbourn, Steiner Trees, Partial 2-Trees, and
Minimum IFI Networks.  Saskatchewan U. Computational Sci. Dept., Rpt.
82-06, 1982.

J.A. Wald and C.J. Colbourn, Steiner Trees in Probabilistic Networks.
Saskatchewan U. Computational Sci. Dept., Rpt. 82-07, 1982.

A. Walker, Automatic Generation of Explanations of Results from
Knowledge Bases.  IBM Watson Res. Center, RJ 3481, 1982.

J.W. Wallis and E.H. Shortliffe, Explanatory Power for Medical Expert
Systems: Studies in the Representation of Causal Relationships for
Clinical Consultation.  Stanford U. Comp.  Sci. Dept.,
STAN-CS-82-923, 1982.  37p.

S. Weiss, C. Kulikowski, C. Apte, and M. Uschold, Building Expert
Systems for Controlling Complex Programs.  Rutgers U. Comp. Sci. Res.
Lab., LCSR-TR-40, 1982.

Y. Yuchuan and C.A. Kulikowski, Multiple Strategies of Reasoning for
Expert Systems.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-131, 1982.

------------------------------

Date: Tue 9 Aug 83 08:47:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Learning Bibliography

Anderson, J.R. Farrell, R. Sauers, R.* Learning to plan in LISP.* 
Carnegie Mellon U. Psych.Dept.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bundy, A. Silver, B.*A critical survey of rule learning programs.* 
Edinburgh U. A.I. Dept.*Res. Paper 169.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.* Planning and problem solving.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-939; Stanford U. Comp.Sci.Dept.  Heuristic 
Programming Project.*HPP-82-021.*1982.  61p.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Moura, C.M.O. Casanova, M.A.* Design by example (preliminary report).*
Pontificia U., Rio de Janeiro.  Info.Dept.*No. 05/82.*1982.

Nadas, A.*A decision theoretic formulation of a training problem in 
speech recognition and a comparison of training by unconditional 
versus conditional maximum likelihood.* IBM Watson Res. Center.*RC 
9617.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Tomita, M.* Learning of construction of finite automata from examples 
using hill climbing.  RR: regular set recognizer.* Carnegie Mellon U.
Comp.Sci.Dept.* CMU-CS-82-127.*1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Winston, P.H.* Learning by augmenting rules and accumulating censors.*
M.I.T. A.I. Lab.*Memo 678.*1982.

------------------------------

Date: Tue 9 Aug 83 08:48:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Logic Bibliography

Ballantyne, M. Bledsoe, W.W. Doyle, J. Moore, R.C. Pattis, R.  
Rosenschein, S.J.* Automatic deduction (Chapter XII of Volume III of 
the Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.*STAN-CS-82-937; 
Stanford U.  Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-019.* 1982.  64p.

Bergstra, J. Chmielinska, A. Tiuryn, J.*" Hoare's logic is not 
complete when it could be".* M.I.T. Lab. for Comp.Sci.*TM-226.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Boehm, H.-J.*A logic for expressions with side-effects.* Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 81-478.*1981.

Bowen, D.L. (ed.)* DECsystem-10 Prolog user's manual.* Edinburgh U.  
A.I. Dept.*Occasional Paper 027.*1982.

Boyer, R.S. Moore, J.S.*A mechanical proof of the unsolvability of the
halting problem.* Texas U. Computing Sci. and Comp.Appl.Inst.  
Certifiable Minicomputer Project.*ICSCA-CMP-28.*1982.

Bundy, A. Welham, B.*Utility procedures in Prolog.* Edinburgh U. A.I.
Dept.*Occasional Paper 009.*1977.

Byrd, L. (ed.)*User's guide to EMAS Prolog.* Edinburgh U.  A.I.  
Dept.*Occasional Paper 026.*1981.

Demopoulos, W.*The rejection of truth conditional semantics by Putnam 
and Dummett.* Western Ontario U. Cognitive Science Centre.*COGMEM 
06.*1982.

Goto, E. Soma, T. Inada, N. Ida, T. Idesawa, M. Hiraki, K.  Suzuki, M.
Shimizu, K. Philipov, B.*Design of a Lisp machine - FLATS.* Tokyo U.
Info.Sci.Dept.*Tech.Rpt.  82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Hagiya, M.*A proof description language and its reduction system.* 
Tokyo U. Info.Sci.Dept.*Tech.Rpt. 82-03.*1982.

Itai, A. Makowsky, J.*On the complexity of Herbrand's theorem.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 243.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Kozen, D.C.*Results on the propositional mu-calculus.* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-146.*1982.

Makowsky, J.A. Tiomkin, M.L.*An array assignment for propositional 
dynamic logic.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 234.*1982.

Manna, Z. Pnueli, A.*How to cook a temporal proof system for your pet 
language.* Stanford U. Comp.Sci.Dept.* STAN-CS-82-954.*1982.  14p.

Mosses, P.* Abstract semantic algebras!* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-145.*1982.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
474.*1982.

Sakamura, K. Ishikawa, C.* High level machine design by dynamic 
tuning.* Tokyo U. Info.Sci.Dept.*Tech.Rpt.  82-07.*1982.

Sato, M.*Algebraic structure of symbolic expressions.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt. 82-05.*1982.

Shapiro, E.Y.* Alternation and the computational complexity of logic 
programs.* Yale U. Comp.Sci.Dept.*Res.Rpt. 239.* 1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Treleaven, P.C. Gouveia Lima, I.*Japan's fifth generation computer 
systems.* Newcastle upon Tyne U. Computing Lab.* No. 176.*1982.

------------------------------

End of AIList Digest
********************
 9-Aug-83 17:27:07-PDT,12574;000000000001
Mail-From: LAWS created at  9-Aug-83 10:30:08
Date: Tuesday, August 9, 1983 10:26AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #36
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 36

Today's Topics:
  Robotics - Bibliography,
  Vision - Bibliography,
  Speech Understanding - Bibliography,
  Pattern Recognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:22:41-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Robotics Bibliography

Ambler, A.P. Popplestone, R.J. Kempf, K.G.*An experiment in the 
offline programming of robots.* Edinburgh U. A.I.  Dept.*Res. Paper 
170.*1982.

Ambler, A.P.* RAPT: an object level robot programming language.* 
Edinburgh U. A.I. Dept.*Res. Paper 172.*1982.

Brooks, R.A. Lozano-Perez, T.*A subdivision algorithm in configuration
space for findpath with rotation.* M.I.T.  A.I.  Lab.*Memo 684.*1982.

Brooks, R.A.*Solving the find path problem by representing free space 
as generalized cones.* M.I.T. A.I. Lab.*Memo 674.*1982.

Brooks, R.A.*Symbolic error analysis and robot planning.* M.I.T. A.I.
Lab.*Memo 685.*1982.

Cameron, S.* Body models for every body.* Edinburgh U.  A.I.  
Dept.*Working Paper 107.*1982.

Gueting, R.H. Wood, D.*Finding rectangle intersections by 
divide-and-conquer.* McMaster U. Comp.Sci. Unit.* Comp.Sci. Tech.Rpt.
No. 82-CS-04.*1982.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Hollerbach, J.M.*Dynamic scaling of manipulator trajectories.* M.I.T.
A.I. Lab.*Memo 700.*1982.

Hollerbach, J.M.*Workshop on the design and control of dexterous hands
(held at the MIT Artificial Intelligence Laboratory on November 5-6,
1981).* M.I.T. A.I. Lab.*Memo 661.*1982.

Hopcroft, J.E. Joseph, D.A. Whitesides, S.H.*On the movement of robot 
arms in 2-dimensional bounded regions.*Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 82-486.*1982.

Kirkpatrick, D.* Optimal search in planar subdivisions.* British 
Columbia U. Comp.Sci.Dept.*Tech.Rpt. 81-13.*1981.

Kouta, M.M. O'Rourke, J.*Fast algorithms for polygon decomposition.* 
Johns Hopkins U. E.E. & Comp.Sci.Dept.* Tech.Rpt. 82/10.*1982.

Koutsou, A.*A survey of model based robot programming languages.* 
Edinburgh U. A.I. Dept.*Working Paper 108.* 1981.

Lozano-Perez, T.* Robot programming.* M.I.T. A.I. Lab.*Memo 698.*1982.

Mason, M.T.* Manipulator grasping and pushing operations.* M.I.T.  
A.I. Lab.*TR-690, Ph.D. Thesis. Mason, M.T.*1982.

Mavaddat, F.* WATSON/I: WATerloo's SONically guided robot.*Waterloo U.
Comp.Sci.Dept.*Res.Rpt. CS-82-16.*1982.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Mujtaba, M.S.* Motion sequencing of manipulators.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-917, Ph.D. Thesis.  Mujtaba, M.S.  
(Department of Industrial Engineering and Engineering 
Management).*1982.  291p.

Myers, E.W.*An O(ElogE+I) expected time algorithm for the planar 
segment intersection problem.* Arizona U.  Comp.Sci.Dept.*TR 
82-03.*1982.

Popplestone, R.J.*Discussion document on body modelling for robot 
languages.* Edinburgh U. A.I. Dept.*Working Paper 110.*1982.

Shneier, M.* Hierarchical sensory processes for 3-D robot vision.* 
Maryland U. Comp.Sci. Center.*TR-1165.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Taylor, R.H.*An integrated robot system architecture.* IBM Watson Res.
Center.*RC 9824.*1983.

Yin, B.*A proposal for studying how to use vision within a robot 
language which reasons about spatial relationships.*Edinburgh U. A.I.
Dept.*Working Paper 109.*1982.

------------------------------

Date: Tue 9 Aug 83 09:54:44-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Vision Bibliography


A. Athukorala, Some Hardware for Computer Vision.  Edinburgh U. A.I.
Dept., Working Paper 102, 1981.

H.H. Baker, Depth from Edge and Intensity Based Stereo.  Ph.D. Thesis,
Stanford U. Comp. Sci. Dept., STAN-CS-82-930; Stanford U. Comp. Sci.
Dept. A.I. Lab., AIM-347, 1982, 90p.  Based on a Ph.D. thesis
submitted to the University of Illinois at Urbana-Champaign in
September of 1981.

R.J. Beattie, Edge Detection for Semantically Based Early Visual
Processing.  Edinburgh U. A.I. Dept., Res. Paper 174, 1982.

M. Brady and W.E.L. Grimson, The Perception of Subjective Surfaces.  
M.I.T. A.I. Lab., Memo 666, 1981.

I. Chakravarty, The Use of Characteristic Views as a Basis for
Recognition of Three-Dimensional Objects.  Rensselaer Polytechnic
Inst. Image Processing Lab., IPL-TR-034, 1982.

L. Dreschler, Ermittlung markanter Punkte auf den Bildern bewegter
Objekte und Berechnung einer 3D-Beschreibung auf dieser Grundlage
[Determination of Salient Points in Images of Moving Objects and
Computation of a 3-D Description from Them].  Hamburg U. Fachbereich
Informatik, Bericht Nr. 83, 1981.

J.-O. Eklundh, Knowledge Based Image Analysis: Some Aspects of Images
using Other Types of Information.  Royal Inst. of Tech., Stockholm,
Num.Anal. & Computing Sci. Dept., TRITA-NA-8206, 1982.

R.B. Fisher, A Structured Pattern Matching Approach to Intermediate
Level Vision.  Edinburgh U. A.I. Dept., Res. Paper 177, 1982.

W.B. Gevarter, An Overview of Computer Vision.  U.S. National Bureau
of Standards, NBSIR 82-2582, 1982.

W.E.L. Grimson, The Implicit Constraints of the Primal Sketch.  M.I.T.
A.I. Lab., Memo 663, 1981.

W.I. Grosky, Towards a Data Model for Integrated Pictorial Databases.
Wayne State U. Comp. Sci. Dept., CSC-82-012, 1982.

R.F. Hauser, Some experiments with stochastic edge detection, IBM
Watson Res. Center, RZ 1210, 1983.

E.C. Hildreth and S. Ullman, The Measurement of Visual Motion.  M.I.T.
A.I. Lab., Memo 699, 1982.

T. Kanade (ed.), Vision.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-938; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-020, 1982, 220p.  Assistant Editor: Steven A. Shafer.
Contributors:  David A. Bourne, Rodney Brooks, Nancy H. Cornelius,
James L. Crowley, Hiromichi Fujisawa, Martin Herman, Fuminobu Komura,
Bruce D. Lucas, Steven A. Shafer, David R. Smith, Steven L. Tanimoto, 
Charles E. Thorpe.

A. Krzesinski, The normalised convolution algorithm, IBM Watson Res.
Center, RC 9834, 1983.

M.A. Lavin and L.I. Lieberman, AML/V: An Industrial Machine Vision
Programming System.  IBM Watson Res. Center, RC 9390, 1982.

C.N. Liu, M. Fatemi, and R.C. Waag, Digital Processing for Improvement
of Ultrasonic Abdominal Images.  IBM Watson Res. Center, RC 9499, 
1982.

D. Montuno and A. Fournier, Detecting intersection among star
polygons, Toronto U. Comp. Systems Res. Group, CSRG-146, 1982.

T.N. Mudge and T.A. Rahman, Efficiency of feature dependent
algorithms for the parallel processing of images, Michigan U.
Computing Res.  Lab., CRL-TR-11-83, 1983.

T.M. Nicholl, D.T. Lee, Y.Z. Liao, and C.K. Wong, Constructing the X-Y
convex hull of a set of X-Y polygons, IBM Watson Res. Center, RC 9737,
1982.

E. Pervin and J.A. Webb, Quaternions in computer vision and robotics, 
Carnegie Mellon U. Comp. Sci. Dept., CMU-CS-82-150, 1982.

T. Poggio, H.K. Nishihara, and K.R.K. Nielsen, Zero Crossings and
Spatiotemporal Interpolation in Vision: Aliasing and Electrical
Coupling Between Sensors.  M.I.T. A.I. Lab., Memo 675, 1982.

T. Poggio, Visual Algorithms.  M.I.T. A.I. Lab., Memo 683, 1982.

W. Richards, H.K. Nishihara, and B. Dawson, CARTOON: A Biologically
Motivated Edge Detection Algorithm.  M.I.T. A.I. Lab., Memo 668, 1982.

A. Rosenfeld, Computer vision, Maryland U. Comp. Sci. Center, TR-1157,
1982.

A. Rosenfeld, Trends and perspectives in computer vision, Maryland U.
Comp. Sci. Center, TR-1194, 1982.

I.K. Sethi and R. Jain, Determining Three Dimensional Structure of
Rotating Objects.  Wayne State U. Comp. Sci. Dept., CSC-83-001, 1983.

M. Shneier, Hierarchical sensory processes for 3-D robot vision, 
Maryland U. Comp. Sci. Center, TR-1165, 1982.

C.L. Sidner, Protocols of Users Manipulating Visually Presented
Information with Natural Language.  Bolt, Beranek and Newman, Inc.,
BBN 5128, 1982.

R.W. Sjoberg, Atmospheric Effects in Satellite Imaging of Mountainous
Terrain.  M.I.T. A.I. Lab., TR-688.

S.N. Srihari, Pyramid representations for solids, SUNY, Buffalo, Comp.
Sci. Dept., Tech.Rpt. 200, 1983.

K.A. Stevens, Implementation of a Theory for Inferring Surface Shape
from Contours.  M.I.T. A.I. Lab., Memo 676, 1982.

D. Terzopoulos, Multi-Level Reconstruction of Visual Surfaces:
Variational Principles and Finite Element Representations.  M.I.T.
A.I. Lab., Memo 671, 1982.

R.Y. Tsai, Multiframe Image Point Matching and 3-D Surface
Reconstruction.  IBM Watson Res. Center, RC 9398, 1982.

R.Y. Tsai and T.S. Huang, Analysis of 3-D Time Varying Scene.  IBM
Watson Res. Center, RC 9479, 1982.

R.Y. Tsai, 3-D inference from the motion parallax of a conic arc and
a point in two perspective views, IBM Watson Res. Center, RC 9818,
1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, I: theoretical basis,
IBM Watson Res. Center, RC 9787, 1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, IBM Watson Res.
Center, RC 9819, 1983.

L. Uhr and L. Schmitt, The Several Steps from ICON to SYMBOL, using
Structured Cone/Pyramids.  Wisconsin U. Comp. Sci. Dept., Tech.Rpt.
481, 1982.

P.H. Winston, T.O. Binford, B. Katz, and M. Lowry, Learning Physical
Descriptions from Functional Definitions, Examples, and Precedents.
M.I.T. A.I. Lab., Memo 679, 1982.

M.-M. Yau, Generating quadtrees of cross-sections from octrees, SUNY,
Buffalo, Comp. Sci. Dept., Tech.Rpt. 199, 1982.

B. Yin, A Proposal for Studying How to Use Vision Within a Robot
Language which Reasons about Spatial Relationships.  Edinburgh U.
A.I. Dept., Working Paper 109, 1982.

------------------------------

Date: Tue 9 Aug 83 08:54:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Speech Understanding Bibliography

Lucassen, J.M.*Discovering phonemic base forms automatically: an 
information theoretic approach.* IBM Watson Res. Center.*RC 
9833.*1983.

Waibel, A.*Towards very large vocabulary word recognition.* Carnegie 
Mellon U. Comp.Sci.Dept.*CMU-CS-82-144.*1982.

------------------------------

Date: Tue 9 Aug 83 08:49:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition Bibliography

Barnes, E.R.*An algorithm for separating patterns by ellipsoids.* IBM 
Watson Res. Center.*RC 9500.*1982.

Chiang, W.P. Teorey, T.J.*A method for database record clustering.* 
Michigan U. Computing Res.Lab.* CRL-TR-05-82.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Findler, N.V. Lo, R.*A note on the functional estimation of values of 
hidden variables--an extended module for expert systems.* Arizona 
State U. Comp.Sci.Dept.*TR-82-004.* 1982.

Jenkins, J.M.* Symposium on computer applications to cardiology:  
introduction and automated electrocardiography and arrhythmia 
monitoring.* Michigan U. Computing Res.Lab.*CRL-TR-20-83.*1983.

Kumar, V. Kanal, L.N.* Branch and bound formulations for sequential 
and parallel And/Or tree search and their applications to pattern 
analysis and game playing.* Maryland U. Comp.Sci.  
Center.*TR-1144.*1982.

O'Rourke, J.*The signature of a curve and its applications to pattern 
recognition (preliminary version).* Johns Hopkins U. E.E. & 
Comp.Sci.Dept.*Tech.Rpt. 82/09.*1982.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*On fitting exponentials by nonlinear least squares.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  82-02.*1982.

------------------------------

End of AIList Digest
********************
 9-Aug-83 17:27:22-PDT,18312;000000000001
Mail-From: LAWS created at  9-Aug-83 10:35:01
Date: Tuesday, August 9, 1983 10:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #37
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 37

Today's Topics:
  Representation - Bibliography,
  Natural Language Understanding - Bibliography,
  Cognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:51:05-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Representation Bibliography

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Forbus, K.D.* Qualitative process theory.* M.I.T. A.I.  Lab.*Memo 
664.*1982.

Katz, R.H. Lehman, T.J.*Storage structures for versions and 
alternatives.* Wisconsin U. Comp.Sci.Dept.*Tech.Rpt.  479.*1982.

Lucas, P. Risch, T.*Representation of factual information by equations
and their evaluation.* IBM Watson Res.  Center.*RJ 3362.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Lytinen, S.L. Schank, R.C.* Representation and translation.*Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

Mercer, R.E. Reiter, R.*The representation of presuppositions using 
defaults.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 82-01.*1982.

Orlowska, E. Pawlak, Z.*Representation of nondeterministic 
information.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
Rpt. No. 450.*1981.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
474.*1982.

Orlowska, E.*Semantics of vague concepts: application of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
469.*1982.

Pawlak, Z.* Rough functions.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS rpt. no. 467.*1981.

Pawlak, Z.* Rough sets: power set hierarchy.* Polish Academy of 
Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  470.*1982.

Pawlak, Z.*About conflicts.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS Rpt. No. 451.*1981.

Pawlak, Z.*Some remarks about rough sets.* Polish Academy of Sciences.
Inst. of Comp.Sci.*ICS PAS rpt. no. 456.* 1982.

Sridharan, N.S.*A flexible structure for knowledge: examples of legal 
concepts.* Rutgers U. Comp.Sci.Res.Lab.* LRP-TR-014.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Weiser, M. Israel, B. Stanfill, C. Trigg, R. Wood, R.* Working papers 
in knowledge representation and acquisition.* Maryland U. Comp.Sci.  
Center.*TR-1175.* 1982.  Contents: Israel, B. Weiser, M.*Towards a 
perceptual system for monitoring computer behavior; Stanfill, C.* 
Geometry to causality: a hierarchy of subdomains for machine world; 
Trigg, R.*Acquiring knowledge for an electronic textbook; Wood, R.J.*A
model for interactive program synthesis.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:46:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Natural Language Understanding Bibliography

Allen, E.M.*Acquiring linguistic knowledge for word experts.* Maryland
U. Comp.Sci. Center.*TR-1166.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U.  Comp.Sci.Dept.*CS-1982-18.*1982.

Ballard, B.W.*A "domain class" approach to transportable natural 
language processing.* Duke U. Comp.Sci.Dept.* CS-1982-11.*1982.

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Barr, A. Cohen, P.R. Fagan, L.*Understanding spoken language (Chapter 
V of Volume I of the Handbook of Artificial Intelligence, edited by 
Avron Barr and Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.* 
STAN-CS-82-934; Stanford U. Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-016.*1982.  52p.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Carbonell, J.G.* Meta-language utterances in purposive discourse.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-125.*1982.

Clinkenbeard, D.J.*A quite general text analysis method.* Colorado U.
Comp.Sci.Dept.*CU-CS-237-82.*1982.

Culik, K. Natour, I.A.* Ambiguity types of formal grammars.*Wayne 
State U. Comp.Sci.Dept.*CSC-82-014.*1982.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Denny, J.P.* Whorf's Algonquian: old evidence and new ideas concerning
linguistic relativity.* Western Ontario U.  Cognitive Science
Centre.*COGMEM 11.*1982.

Dolev, D. Reischuk, R. Strong, H.R.*'Eventual' is earlier than 
'immediate'.* IBM Watson Res. Center.*RJ 3632.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Gawron, J.M. King, J.J. Lamping, J. Loebner, J.J. Paulson, E.A.  
Pullum, G.K. Sag, I.A. Wasow, T.A.*Processing English with a 
generalized phrase structure grammar.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-005.*1982.

Greene, B.R. Fujisaki, T.*A probabilistic approach for dealing with 
ambiguous syntactic structures.* IBM Watson Res. Center.*RC 
9764.*1982.

Hartmanis, J.*On Goedel speed-up and succinctness of language 
representation.* Cornell U. Comp.Sci.Dept.* Tech.Rpt. 82-485.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Jensen, K. Heidorn, G.E.*The fitted parse: 100% parsing capability in 
a syntactic grammar of English.* IBM Watson Res. Center.*RC 
9729.*1982.

Johnson, P.N. Robertson, S.P.* MAGPIE: a goal based model of 
conversation.* Yale U. Comp.Sci.Dept.*Res.Rpt. 206.* 1981.

Katz, B. Winston, P.H.* Parsing and generating English using 
commutative transformations.* M.I.T. A.I. Lab.* Memo 677.*1982.

Lamping, J. King, J.J.* LM/GPSG--a prototype workstation for 
linguists.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-011; Hewlett 
Packard Co. Comp.Res. Center.* CRC-Tr-82-006.*1982.

Lehnert, W. Dyer, M.G. Johnson, P.N. Yang, C.J. Harley, S.* BORIS: an 
experiment in in-depth understanding of narratives.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 188.*1981.

Lehnert, W.G.* Affect units and narrative summarization.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 179.*1980.

Lytinen, S.L. Schank, R.C.* Representation and translation.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mann, W.C. Matthiessen, C.M.I.M.*Two discourse generators, by William 
C. Mann; A grammar and a lexicon for a text production system, by 
Christian M.I.M. Matthiessen.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-102.*1982.

Mann, W.C.*The anatomy of a systemic choice.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-104.*1982.

Martin, P.A.*Integrating local information to understand dialog.* 
Stanford U. Comp.Sci.Dept.*STAN-CS-82-941; Stanford U. Comp.Sci.Dept.
A.I. Lab.*AIM-348, Ph.D.  Thesis. Martin, P.A.*1982.  125p.

Miller, L.A.*" Natural language texts are not necessarily grammatical 
and unambiguous. Or even complete".* IBM Watson Res. Center.*RC 
9441.*1982.

Misek-Falkoff, L.D.*The new field of software linguistics: an 
early-bird view.* IBM Watson Res. Center.*RC 9421.* 1982.

Misek-Falkoff, L.D.* Software science and natural language: a 
unification of Halstead's counting rules for programs and English 
text, and a claim space approach to extensions.* IBM Watson Res.  
Center.*RC 9420.*1982.

Mueckstein, E.-M.M.* Parsing for collecting syntactic statistics.* IBM
Watson Res. Center.*RC 9836.*1983.

Mueckstein, E.M.M.* Q-Trans: query translation into English.* IBM 
Watson Res. Center.*RC 9841.*1983.

Perlman, G.* Natural artificial languages: low-level processes.* Cal.
U., San Diego. Human Info. Proces.  Center.*Rpt. 8208.*1982.

Peterson, J.L.* Webster's seventh new collegiate dictionary: a 
computer-readable file format.* Texas U.  Comp.Sci.Dept.*TR-196.*1982.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Reiser, B.J. Black, J.B.*Processing and structural models of 
comprehension.* Yale U. Cognitive Science Program.* Tech.Rpt.  
012.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C. Collins, G.C. Davis, E. Johnson, P.N. Lytinen, S.  
Reiser, B.J.*What's the point?* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
205.*1981.

Shwartz, S.P.*The search for pronominal referents.* Yale U. Cognitive 
Science Program.*Tech.Rpt. 10.*1981.

Sidner, C.L. Bates, M.*Requirements for natural language understanding
in a system with graphic displays.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5242.*1983.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Usui, T.*An experimental grammar for translating English to Japanese.*
Texas U. Comp.Sci.Dept.*TR-201.*1982.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:45:21-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Cognition Bibliography

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U. Comp.Sci.Dept.*CS-1982-18.*1982.

Barr, A.* Artificial intelligence: cognition as computation.* Stanford
U. Comp.Sci.Dept.*STAN-CS-82-956; Stanford U. Comp.Sci.Dept.  
Heuristic Programming Project.* HPP-82-29.*1982.  28p.

Black, J.B. Galambos, J.A. Reiser, B.J.*Coordinating discovery and 
verification research.* Yale U. Cognitive Science Program.*Tech.Rpt.  
013.*1982.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.*Models of cognition (Chapter XI of Volume III of the 
Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-936; 
Stanford U. Comp.Sci.Dept.  Heuristic Programming 
Project.*HPP-82-018.*1982.  87p.

Conrad, M.* Microscopic macroscopic interface in biological 
information processing.* Wayne State U. Comp.Sci.Dept.* 
CSC-83-003.*1983.

Doyle, J.*The foundations of psychology: a logico-computational 
inquiry into the concept of mind.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-149.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Ericsson, K.A. Chase, W.G.* Exceptional memory.* Carnegie Mellon U.  
Psych.Dept.*Tech.Rpt. 08.*1982.

Firdman, H.E.*Toward a theory of cognizing systems: the search for an 
integrated theory of AI.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-007; Hewlett Packard Co.  Comp.Res.  
Center.*CRC-TR-82-002.*1982.

Galambos, J.A.*Normative studies of six characteristics of our 
knowledge of common activities.* Yale U. Cognitive Science 
Program.*Tech.Rpt. 014.*1982.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Kampfner, R.R. Conrad, M.*Sequential behavior and stability properties
of enzymatic neuron networks.* Wayne State U.  
Comp.Sci.Dept.*CSC-82-011.*1982.

Lansner, A.* Information processing in a network of model neurons: a 
computer simulation study.* Royal Inst. of Tech., Stockholm.  
Num.Anal. & Computing Sci.Dept.* TRITA-NA-8211.*1982.

Mather, J.A.* Saccadic eye movements to seen and unseen targets:  
preprogramming and sensory input in motor control.* Western Ontario U.
Cognitive Science Centre.* COGMEM 10.*1982.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Poggio, T. Koch, C.*Nonlinear interactions in a dendritic tree:  
localization, timing, and role in information processing.* M.I.T.  
A.I. Lab.*Memo 657.*1981.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Richards, W. Nishihara, H.K. Dawson, B.* CARTOON: a biologically 
motivated edge detection algorithm.* M.I.T.  A.I. Lab.*Memo 668.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C.*Representing meaning: an artificial intelligence 
perspective.* Yale U. Cognitive Science Program.*Tech.Rpt. 11.*1981.

Seifert, C.M. Robertson, S.P.*On-line processing of pragmatic 
inferences.* Yale U. Cognitive Science Program.*Tech.Rpt. 015.*1982.

Shwartz, S.P.*Three-dimensional mental rotation revisited: picture 
plane rotation is really faster than depth rotation.* Yale U.  
Cognitive Science Program.*Tech.Rpt.  09.*1981.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

------------------------------

End of AIList Digest
********************
 9-Aug-83 17:27:41-PDT,17238;000000000001
Mail-From: LAWS created at  9-Aug-83 10:43:51
Date: Tuesday, August 9, 1983 10:38AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #38
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 38

Today's Topics:
  Programming - Bibliography,
  Databases - Bibliography,
  Computer Science - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:50:19-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Programming Bibliography

[Includes programming environments and techniques
as well as automatic programming.]

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Andrews, G.R.* Distributed programming languages.* Arizona U.  
Comp.Sci.Dept.*TR 82-13.*1982.

Archer, J.E. Jr.*The design and implementation of a cooperative 
program development environment.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.
81-468, Ph.D. Thesis. Archer, J.E. Jr.*1982.

Bakker, J.W. de Zucker, J.I.* Processes and the denotational semantics
of concurrency.* Mathematisch Centrum.*IW 209/82.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bergstra, J.A. Klop, J.W.* Fixed point semantics in process algebras.*
Mathematisch Centrum.*IW 206/82.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Best, E.* Relational semantics of concurrent programs (with some 
applications).* Newcastle Upon Tyne U. Computing Lab.*No. 180.*1982.

Bobrow, D.G. Stefik, M.*The LOOPS manual (preliminary version).* 
Xerox. Palo Alto Res. Center.*Memo KB-VLSI-81-13.*1981, (working 
paper).

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Burger, W.F. Halim, N. Pershing, J.A. Parr, F.N. Strom, R.E. Yemini, 
S.*Draft NIL reference manual.* IBM Watson Res. Center.*RC 9732.*1982.

Culik, K. Rizki, M.M.* Mathematical constructive proofs as computer 
programs.* Wayne State U. Comp.Sci.Dept.* CSC-83-004.*1983.

diSessa, A.A.*A principled design for an integrated computational 
environment.* M.I.T. Lab. for Comp.Sci.* TM-223.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Elrad, T. Francez, N.*A weakest precondition semantics for 
communicating processes.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 244.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Eyries, F.*Synthese d'images de scenes composees de spheres.* 
INRIA.*Rapport de Recherche 163.*1982.

Good, D.I.*The proof of a distributed system in GYPSY.* Texas U.  
Computing Sci.Inst.*TR-030.*1982.

Israel, B.*Customizing a personal computing environment through object
oriented programming.* Maryland U.  Comp.Sci.  Center.*TR-1158.*1982.

Jobmann, M.*ILMAOS - Eine Sprache zur Formulierung von 
Rechensystemmodellen.* Hamburg U. Fachbereich Informatik.* Bericht Nr.
91.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Kant, E. Newell, A.* Problem solving techniques for the design of 
algorithms.* Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-145.*1982.

Krafft, D.B.* AVID: a system for the interactive development of 
verifiably correct programs.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.  
81-467.*1981.

Lacos, C.A. McDermott, T.S.*Interfacing with the user of a syntax 
directed editor.* Tasmania U. Info.Sci.Dept.*No.  R82-03.*1982.

Lamping, J. King, J.J.* IZZI--a translator from Interlisp to 
Zetalisp.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-010; Hewlett 
Packard Co. Comp.Res. Center.* CRC-TR-82-005.*1982.

LeBlanc, T.J.*The design and performance of high level language 
primitives for distributed programming.* Wisconsin U.  
Comp.Sci.Dept.*Tech.Rpt. 492, Ph.D. Thesis.  LeBlanc, T.J.*1982.

Lengauer, C.*A methodology for programming with concurrency.* Toronto 
U. Comp. Systems Res. Group.* CSRG-142, Ph.D. Thesis. Lengauer, 
C.*1982.

Lesser, V. Corkill, D. Pavlin, J. Lefkowitz, L. Hudlicka, E. Brooks, 
R. Reed, S.*A high-level simulation testbed for cooperative 
distributed problem solving.* Massachusetts U. Comp. & 
Info.Sci.Dept.*COINS Tech.Rpt.  81-16.*1981.

Lieberman, H.*Seeing what your programs are doing.* M.I.T.  A.I.  
Lab.*Memo 656.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

McAllester, D.A.* Reasoning utility package. User's manual.  Version 
one.* M.I.T. A.I. Lab.*Memo 667.*1982.

Medina-Mora, R.* Syntax directed editing: towards integrated 
programming environments.* Carnegie Mellon U.  Comp.Sci.Dept.* Ph.D.  
Thesis. Medina-Mora, R.*1982.

Melese, B.* Metal, un langage de specification pour le systeme 
mentor.* INRIA.*Rapport de Recherche 142.*1982.

Olsen, D.R. Jr. Badler, N.*An expression model for graphical command 
languages.* Arizona State U.  Comp.Sci.Dept.*TR-82-001.*1982.

Paige, R.* Transformational programming--applications to algorithms 
and systems: summary paper.* Rutgers U.  
Comp.Sci.Dept.*DCS-TR-118.*1982.

Parr, F.N. Strom, R.E.* NIL: a high level language for distributed 
systems programming.* IBM Watson Res.  Center.*RC 9750.*1982.

Pratt, V.*Five paradigm shifts in programming language design and 
their realization in Viron, a dataflow programming environment.* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-951.*1982.  9p.

Rosenstein, L.S.* Display management in an integrated office 
workstation.* M.I.T. Lab for Comp.Sci.*TR-278.* 1982.

Ross, P.M.* TERAK LOGO user's manual (for version 1.0).* Edinburgh 
U. A.I. Dept.*Occasional Paper 021.*1980.

Schlichting, R.D. Schneider, F.B.*Using message passing for 
distributed programming: proof rules and disciplines.* Arizona U.  
Comp.Sci.Dept.*TR 82-05.*1982.

Schmidt, E.E.*Controlling large software development in a distributed 
environment.* Xerox. Palo Alto Res.  Center.*CSL-82-07, Ph.D. Thesis.
Schmidt, E.E. (University of California at Berkeley).*1982.

Senach, B.*Aide a la resolution de probleme par presentation graphique
des informations.* INRIA.*Rapport de Recherche 013.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Stefik, M. Bell, A.G. Bobrow, D.G.* Rule oriented programming in 
LOOPS.* Xerox. Palo Alto Res. Center.*Memo KB-VLSI-82-22.*1982.  
(working paper).

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Sterling, L. Bundy, A. Byrd, L. O'Keefe, R. Silver, B.* Solving 
symbolic equations with PRESS.* Edinburgh U.  A.I. Dept.*Res. Paper 
171.*1982.

Tappel, S. Westfold, S. Barr, A.* Programming languages for AI 
research (Chapter VI of Volume II of the Handbook of Artificial 
Intelligence, edited by Avron Barr and Edward A. Feigenbaum).* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-935; Stanford U.  
Comp.Sci.Dept. Heuristic Programming Project.*HPP-82-017.*1982.  90p.

Theriault, D.*A primer for the Act-1 language.* M.I.T.  A.I.  
Lab.*Memo 672.*1982.

Thompson, H.*Handling metarules in a parser for GPSG.  Edinburgh U.  
A.I. Dept.*Res. Paper 175.*1982.

Walker, A.* PROLOG/EX1: an inference engine which explains both yes 
and no answers.* IBM Watson Res. Center.*RJ 3771.*1983.

Waters, R.C.* LetS: an expressional loop notation.* M.I.T.  A.I.  
Lab.*Memo 680a.*1983.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Wolper, P.L.*Synthesis of communicating processes from temporal logic 
specifications.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-925, Ph.D.  
Thesis. Wolper, P.L.* 1982.  111p.

Wood, R.J.* Franz flavors: an implementation of abstract data types in
an applicative language.* Maryland U.  Comp.Sci.  
Center.*TR-1174.*1982.

Woods, D.R.*Drawing planar graphs.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-943, Ph.D. Thesis. Woods, D.R.* 1981.

------------------------------

Date: Tue 9 Aug 83 08:55:06-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Database Bibliography

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Bossi, A. Ghezzi, C.*Using FP as a query language for relational 
data-bases.* Milan. Politecnico. Dipartimento di Elettronica. Lab. di 
Calcolatori.*Rapporto Interno N.  82-11.*1982.

Cooke, M.P.*A speech controlled information retrieval system.* U.K.  
National Physical Lab. Info. Technology and Computing Div.*DITC 
15/83.*1983.

Corson, Y.*Aspects psychologiques lies a l'interrogation d'une base de
donnees.* INRIA.*Rapport de Recherche 126.* 1982.

Cosmadakis, S.S.*The complexity of evaluating relational queries.* 
M.I.T. Lab. for Comp.Sci.*TM-229.*1982.

Daniels, D. Selinger, P. Haas, L. Lindsay, B. Mohan, C.  Walker, A.  
Wilms, P.*An introduction to distributed query compilation in R.* IBM 
Watson Res. Center.*RJ 3497.*1982.

Gonnet, G.H.* Unstructured data bases.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Grosky, W.I.*Towards a data model for integrated pictorial databases.*
Wayne State U. Comp.Sci.Dept.*CSC-82-012.* 1982.

Haas, L.M. Selinger, P.G. Bertino, E. Daniels, D. Lindsay, B. Lohman, 
G. Masunaga, Y. Mohan, C. Ng, P. Wilms, P.  Yost, R.* R*: a research 
project on distributed relational DBMS.* IBM Watson Res. Center.*RJ 
3653.*1982.

Hailpern, B.T. Korth, H.F.*An experimental distributed database 
system.* IBM Watson Res. Center.*RC 9678.*1982.

Jenny, C.*Methodologies for placing files and processes in systems 
with decentralized intelligence.* IBM Watson Res. Center.*RZ 
1139.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Klug, A.*On conjunctive queries containing inequalities.* Wisconsin U.
Comp.Sci.Dept.*Tech.Rpt. 477.*1982.

Konikowska, B.* Information systems: on queries containing k-ary 
descriptors.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
rpt. no. 466.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Lohman, G.M. Stoltzfus, J.C. Benson, A.N. Martin, M.D.  Cardenas, 
A.F.* Remotely sensed geophysical databases: experience and 
implications for generalized DBMS.* IBM Watson Res. Center.*RJ 
3794.*1983.

Madelaine, E.*Le systeme perluette et les preuves de representation de
types abstraits.* INRIA.*Rapport de Recherche 133.*1982.

Maier, D. Ullman, J.D.* Fragments of relations.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-929.*1982.  11p.

Michard, A.*A new database query language for non-professional users:
design principles and ergonomic evaluation.* INRIA.*Rapport de 
Recherche 127.*1982.

Ng, P.* Distributed compilation and recompilation of database 
queries.* IBM Watson Res. Center.*RJ 3375.*1982.

Srivas, M.K.*Automatic synthesis of implementations for abstract data 
types from algebraic specifications.* M.I.T. Lab for Comp.Sci.*TR-276,
Ph.D. Thesis. Srivas, M.K. (This report is a minor revision of a
thesis of the same title submitted to the Department of Electrical
Engineering and Computer Science in December 1981).*1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Stamos, J.W.*A large object oriented virtual memory: grouping 
strategies, measurements, and performance.* Xerox. Palo Alto Res.  
Center.*SCG-82-02.*1982.

Wald, J.A. Sorenson, P.G.*Resolving the query inference problem using 
Steiner trees.* Saskatchewan U.  Computational 
Sci.Dept.*Rpt.83-04.*1983.

Weyer, S.A.* Searching for information in a dynamic book.* Xerox.  
Palo Alto Res. Center.*SCG-82-01, Ph.D. Thesis.  Weyer, S.A.  
(Stanford University).*1982.

------------------------------

Date: Tue 9 Aug 83 08:56:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Computer Science Bibliography

[Includes selected topics in CS that seem relevant to AIList
and are not covered in the preceding bibliographies.]

Eppinger, J.L.*An empirical study of insertion and deletion in binary 
search trees.* Carnegie Mellon U.  Comp.Sci.Dept.*CMU-CS-82-146.*1982.

Gilmore, P.C.*Solvable cases of the travelling salesman problem.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-08.*1981.

Graham, R.L. Hell, P.*On the history of the minimum spanning tree 
problem.* Simon Fraser U. Computing Sci.Dept.*TR 82-05.*1982.

Gupta, A. Hon, R.W.*Two papers on circuit extraction.* Carnegie Mellon
U. Comp.Sci.Dept.*CMU-CS-82-147.*1982.  Contents: Gupta, A.* ACE: a
circuit extractor; Gupta, A.  Hon, R.W.* HEXT: a hierarchical circuit
extractor.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Jomier, G.*An overview of systems modelling and evaluation 
tendencies.* INRIA.*Rapport de Recherche 134.*1982.

Jurkiewicz, E.* Stability of compromise solution in multicriteria 
decision making problem.* Polish Academy of Sciences. Inst. of 
Comp.Sci.*ICS PAS rpt. no. 455.*1981.

Kirkpatrick, D.G. Hell, P.*On the complexity of general graph factor 
problems.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 81-07.*1981.

Kjelldahl, L. Romberger, S.*Requirements for interactive editing of 
diagrams.* Royal Inst. of Tech., Stockholm.  Num.Anal. & Computing 
Sci.Dept.*TRITA-NA-8303.*1983.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Nau, D. Kumar, V. Kanal, L.*General branch and bound and its relation 
to A* and AO*.* Maryland U. Comp.Sci.  Center.*TR-1170.*1982.

Nau, D.S.* Pathology on game trees revisited, and an alternative to 
minimaxing.* Maryland U. Comp.Sci.  Center.*TR-1187.*1982.

Roberts, B.J. Marashian, I.* Bibliography of Stanford computer science
reports, 1963-1982.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-911.*1982.
59p.

Scowen, R.S.*An introduction and handbook for the standard syntactic 
metalanguage.* U.K. National Physical Lab.  Info. Technology and 
Computing Div.*DITC 19/83.*1983.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*Pitfalls in the numerical solution of linear ill posed 
problems.* British Columbia U. Comp.Sci.Dept.* Tech.Rpt. 81-10.*1981.

Wegman, M.*Summarizing graphs by regular expressions.* IBM Watson Res.
Center.*RC 9364.*1982.

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 12-Aug-83 09:29:25
Date: Friday, August 12, 1983 9:06AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #39
To: AIList@SRI-AI


AIList Digest            Friday, 12 Aug 1983       Volume 1 : Issue 39

Today's Topics:
  Textnet - Publish Adventure,
  Representation - Current Adequacy,
  Computational Complexity - NP-Completeness & FFP Machine,
  Programming Languages - Functional Programming,
  Fifth Generation - Opinion & Pearl Harbor Correction,
  Programming Languages & Humor - Comment
----------------------------------------------------------------------

Date: 11-Aug-83 13:52 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

I have spent most spare minutes for the last ten years designing a
distributed hyper-service using NLS and Augment as a development tool.
We can simulate, via electronic mail, the beginnings of a
self-descriptive service-service called the "Publish adventure".  The
Xanadu project's Hypertext, because of its devotion to static text, is
a degenerate case of the Publish adventure.  If you are interested in
collaborating on the design of the protocol, let me know.

 -- Kirk Kelley

------------------------------

Date: 10 Aug 83 16:36:29-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: A Real AI Topic
Article-I.D.: ssc-vax.398

First let me get in one last (?) remark about where the Japanese are
in AI - pattern recognition and robotics are useful but marginal in
the AI world.  Some of the pattern recognition work seems to be
reaching the same conclusions now that real AI workers reached ten
years ago (those who don't know history are doomed to repeat it!).

Now on to the good stuff.  I have been thinking about knowledge 
representation (KR) recently and made some interesting (to me, anyway)
observations.

1.  Certain KRs tend to show up again and again, though perhaps in
    well-disguised forms.

2.  All the existing KRs can be cast into something like an
    attribute-value representation.

Space does not permit going into all the details, but as an example,
the PHRAN language analyzer from Berkeley is actually a specialized
production rule system, although its origins were elsewhere (in
parsers using demons).  Semantic nets are considered obsolete and ad
hoc, but predicate logic reps end up looking an awful lot like a net
(so does a sizeable frame system).  A production rule has two
attributes: the condition and the action.  Object-oriented programming
(smalltalk and flavors) uses the concept of attributes (instance
variables) attached to objects.  There are other examples.

Question: is there something fundamentally important and inescapable 
about attribute-value pairs attached to symbols?  (ordinary program 
code is a representation of knowledge, but doesn't look like av-pairs
- is it a valid counterexample?)

What other possible KRs are there?

Certain KRs (such as RLL (which is really a very interesting system)) 
claim to be universal and capable of representing anything.  Are there
any particularly difficult concepts that *no* KR has been able to
represent (even in a crude way)?  What is so difficult about those
concepts, if any such exist?
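[A small illustration in modern Python, not part of Stan's message:
observation 2 rendered literally, with a production rule, a
semantic-net node, and a frame all reduced to symbols carrying
attribute-value pairs.  The particular names and values are invented
for the example.]

```python
# Three classic KRs, each collapsed into a symbol with av-pairs.

# A production rule: two attributes, the condition and the action.
rule = {"condition": ("temperature", ">", 100), "action": "open-valve"}

# A semantic-net node: the attributes are the labeled arcs.
canary = {"isa": "bird", "color": "yellow", "can": "sing"}

# A frame: the slots are again just attributes.
room_frame = {"isa": "room", "walls": 4, "contains": ["door", "window"]}

def get(symbol, attribute, default=None):
    # Uniform access: every representation above answers the same query.
    return symbol.get(attribute, default)

print(get(canary, "isa"))      # bird
print(get(rule, "action"))     # open-valve
```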

                                Just stirring up the mud,
                                stan the leprechaun hacker
                                ssc-vax!sts (soon utah-cs)


[I believe that planning systems still have difficulties in
representing continuous time, hypothetical worlds, beliefs, and
intentions, among other things.  In vision, robotics, geology, and
medicine, there are difficulties in representing shape, texture, and
spatial relationships.  Attribute-value pairs are just not very
useful for representing continuous quantities.  -- KIL]

------------------------------

Date: Mon 8 Aug 83 17:19:42-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness

    I forward this message because it raises an interesting point, and
I thought readers may care to see it. I had a reply to this, but
perhaps someone else may care to comment.

  Date:     Sun,  7 Aug 83 18:28:09 CDT
  From: Mike.Caplinger <mike.rice@Rand-Relay>

  Claiming that a parallel machine makes NP-complete problems
  polynomial (given that the machine has an infinite number of
  processing elements) is certainly true (by the definition of
  NP-completeness), but meaningless.  Admittedly, a large number of
  processing elements might make a finitely-bounded algorithm faster,
  but any finitely-bounded algorithm is a constant time algorithm.
  (If I say N is never greater than the number of processors, then N
  might as well be a constant.)

------------------------------

Date: 10 Aug 83 13:19:32-PDT (Wed)
From: ihnp4!we13!burl!duke!unc!koala @ Ucb-Vax
Subject: Matrix Multiplication on the FFP Machine
Article-I.D.: unc.5687

        Since the subject has been brought up, I felt I should clear
up some of the statements about the FFP machine.  The machine consists
of a linear vector of small processors which communicate by being
connected as the leaves of a binary tree.

        Roughly speaking, the FFP machine performs general matrix
multiplication in O(nxn) space and time.  Systolic arrays can multiply
matrices in O(n) time, but do not provide flexibility in the sizes of 
matrices that can be handled.

        Order notation only presents half the picture - in real life,
constant factors and other terms are also important.  The machine's
matrix multiply operation examines each element of the two matrices
once.  Multiplying two matrices, mxn and nxp, requires accessing (mxn
+ nxp) values, and this is the measure of the time for the
computation.  Each cell performs n multiplications, dominated by the
access.  Further, when you multiply two matrices, mxn and nxp, the
result is of size mxp.  (Consider multiplying a column by a row).
Thus, when n < (mxp)/(m+p), extra space must be allocated for the
result.  This is also a quadratic time operation.
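[As a quick check of the counting argument, in modern Python -- an
editorial sketch, not David's code: input cells accessed versus output
cells produced for an (m x n) by (n x p) multiply.]

```python
def cell_counts(m, n, p):
    # Every input element is examined once; the result has m*p cells.
    accessed = m * n + n * p
    produced = m * p
    return accessed, produced

# A column (3x1) times a row (1x3): 6 cells read but 9 written, so
# extra space is needed -- exactly the case n < (m*p)/(m+p).
acc, out = cell_counts(3, 1, 3)
print(acc, out)   # 6 9
```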

                                David Middleton
                                UNC Chapel Hill
                                decvax!duke!unc!koala

------------------------------

Date: 11 Aug 83 16:23:19-PDT (Thu)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Matrix Multiplication on the FFP Machine
Article-I.D.: ssc-vax.406

I must admit to being a little sloppy when giving the maximum speed of
a matrix multiplication on an FFP machine (haven't worked on this 
stuff for a year, and my memory is slipping).  I still stand by the 
original statement, however.  The *maximum* possible speed for the 
multiplication of two nxn matrices is O(log n).  What I should have 
done is state that the machine architecture is completely unspecified.
I am not convinced that the Mago tree machine is the ultimate in FFP
designs, although it is very interesting.  The achievement of O(log n)
requires several things.  Let me enumerate.  First, assume that the
matrix elements are already distributed to their processors.  Second,
assume that a single processor can quickly distribute a value to 
arbitrarily many processors (easy: put it on the bus (buss? :-} ) and
let the processors all go through a read cycle simultaneously).  
Third, assume that the processors can communicate in such a way that
addition of n numbers can be performed in log n time (by adding pairs,
then pairs of pairs, etc).  Then the distribution of values takes
constant time, the multiplications are all done simultaneously and so
take constant time, leaving only the summation to slow things down.  I
know this is fast and loose; its main failing is that it assumes the
availability of an extraordinarily high number of communication paths
(the exact number is left as an exercise for the reader).
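[The pairwise-addition assumption can be simulated in a few lines of
modern Python -- a sketch, not Stan's code -- where each loop
iteration stands in for one parallel step:]

```python
def tree_sum(values):
    # Each while-iteration models one parallel step in which every
    # adjacent pair of values is added simultaneously.
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # an odd element rides along unchanged
            pairs.append(vals[-1])
        vals = pairs
        steps += 1
    return vals[0], steps

print(tree_sum(range(16)))   # (120, 4): 16 numbers in log2(16) = 4 steps
```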

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps For those not familiar with FP, read J. Backus' Turing Lecture in
CACM (Aug 78, I believe) - it is very readable; also he gives a
one-liner for matrix multiplication in FP, which I used as a basis for
the timing hackery above.

------------------------------

Date: 11 Aug 83 19:32:18-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Functional Programming and AI
Article-I.D.: ssc-vax.408

It is interesting that the subject of FP (an old interest of mine) has
arisen in the AI newsgroup (no this is not an "appropriate newsgroup"
flame).  Having worked with both AI and FP languages, it seems to me
that the two are diametrically opposed to one another.  The ultimate
goal of functional programming language research is to produce a
language that is as clean and free of side effects as possible; one
whose semantic definition fits on a single side of an 8 1/2 x 11 sheet
of paper (and not in microform, smart-aleck!).  On the other hand, the
goal of AI research (at least in the AI language area) is to produce
languages that can effectively work with as tangled and complicated 
representations of knowledge as possible.  Languages for semantic 
nets, frames, production systems, etc, all have this character.  
Formal definitions are at best difficult, and sometimes impossible 
(aside: could this be proved for any specific knowledge rep?).  Now
between the Japanese 5th generation project (and the US response) and
the various projects to build non-vonNeumann machines using FP, it
looks to me like the seeds of a controversy over the best way to do
programming.  Should we be using FP languages or AI languages?  We
can't have it both ways, right?  Or can we?

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Mon 8 Aug 83 13:58:36-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Japanese 5th Generation Effort

It seems to me that the 5th generation effort differs from most
efforts we are familiar with in being strictly top-down. That is to
say, the Japanese are willing to start work not only without knowing
how to solve the nitty-gritty problems at the bottom--but without
knowing what those nitty-gritty problems actually are. Although
dangerous, this is a very powerful research strategy. Until it gets
bogged down due to an almost insurmountable number of unsolvable 
technical problems one can expect very rapid progress indeed. When it
does get bogged down, their understanding of the problems will be as
great as that of anyone else in the world. The best way to learn is by
doing.

------------------------------

Date: 9-AUG-1983 15:24
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: On Science and the Fifth Generation


I'm a little confused about why this Japanese business seems to be
scaring the pants off of the US research community; why scientists are
quoted in national news magazines as being "panic stricken", and why
terms like "race" and "ahead" are being thrown around in a community
of "scientists"; why anyone cares if the fifth generation thing is
propaganda or not.  You'll find out when they make it work or they
don't!

Science is a cooperative effort.  If Japan wants to jump forward
(note, not "ahead" in any sense) in technology and understanding it is
the position of every other scientist to applaud their boldness and
provide every ounce of critical advice we can give them.  So what if
Symbolics goes bankrupt because Japan makes a machine that makes the
3600 look like an Apple!? It will probably cost one third as much and
I'll be able to have one on my desk to further my research efforts.
Likewise, whatever the Japanese research community learns will
certainly benefit my research, even if just by learning what roads are
not fruitful.

Worry about the arms race, not the computer race!  Work as hard as you
can to further science and technology, not to beat the Japanese!  Work
toward the Nth generation, not the fifth or the sixth or the
seventh....  A little competition is probably useful sometimes, but
not to the detriment of the community spirit of science.  If we start
hiding things from one another, do we have the right to call ourselves
scientists?

When I begin to worry is when Japan decides to build a better MX
missile, not a better computer system.  Then issues of scientific
morals are involved and it's a whole 'nother ballgame.

------------------------------

Date: 9 Aug 83 21:04:30-PDT (Tue)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Pearl Harbor Day
Article-I.D.: ssc-vax.393

OK folks, especially those of you from various parts of tektronix-land
who don't seem to have access to or have interest in reading a history
book, let's review the bidding for your edification at least.  A very
unsavory reference was made in the context of a remark from a
present-day visiting professor from Japan regarding the Japanese Fifth
Generation Project.  The first bid for a date was 5 Dec 1948.  This
was changed by the same author after he received at least one
electronic mail reply to 5 Dec 1945!  This may have been with
tongue-in-cheek, as I know that he was given the correct date at least
once prior to his second message.  It's a matter of record that the
Japanese Ambassador was instructed to visit the Secretary of State on
Friday, December 5, 1941.  Whether he or his representative were again
doing so on Sunday, December 7, 1941 is a moot point, as I am certain
that they were very busy at the old trash incinerator that morning.  
Although we should not forget history, lest we be doomed to repeat it,
I do think that comparison of this episode with the present day 5th
Generation Project, even in the context of the devastation of Detroit,
is stretching things beyond the breaking point.  If you want to flame,
send mail to me, as I already have my asbestos suit on, but let's
graduate net.ai back to something more appropriate and certainly more
interesting.

TJ (with Amazing Grace) The Piper ssc-vax!tjj

------------------------------

Date: 10 Aug 83 12:02:09-PDT (Wed)
From: teklabs!done @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: teklabs.2322

<flame on>

I can't stand this any longer:

   "YESTERDAY, DECEMBER 7, 1941; A DATE WHICH WILL LIVE IN INFAMY!"

Carefully memorize this date and PLEASE DON'T SCREW IT UP AGAIN.  Or
maybe infamy needs to be expressed in binary for you Computer Science 
folks.

<flame off>

Don Ellis   | USENET:  {aat,cbosg,decvax,harpo,ihnss,orstcs,pur-ee,ssc-vax
Tektronix   |          ucbvax,unc,zehntel,ogcvax,reed} !teklabs!done
Oregon, USA | ARPAnet: done.tek@rand-relay    CSNet: done@tek

------------------------------

Date: 10 Aug 1983 1244-EDT
From: MONTALVO%MIT-OZ@MIT-ML
Subject: Re: HFELISP

   Date: 27 Jul 1983 0942-PDT
   From: Jay <JAY@USC-ECLC>
   Subject: HFELISP

           HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                   ABSTRACT

     HFE suggests that the more complicated features of (common) Lisp
   are dangerous, and hard to understand.  As a result a number of
   Fortran, Cobol, and 370 assembler programmers got together with a
   housewife. ...

How dare you malign the good sense of housewives by classing them with
Fortran, Cobol, and 370 assembler programmers!

Fanya Montalvo

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 16-Aug-83 09:17:11
Date: Tuesday, August 16, 1983 9:10AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #40
To: AIList@SRI-AI


AIList Digest            Tuesday, 16 Aug 1983      Volume 1 : Issue 40

Today's Topics:
  Knowledge Representation & Applicative Languages,
  Fifth Generation - Military Potential,
  Artificial Intelligence - Bigotry & Turing Test
----------------------------------------------------------------------

Date: Friday, 12 Aug 1983 15:28-PDT
From: narain@rand-unix
Subject: Reply to stan the leprechaun hacker


I am responding to two of the points you raised.

Attribute-value pairs are hopeless for any area (including AI areas)
where your "cognitive chunks" are complex structures (like trees). An
example is symbolic algebraic manipulation, where it is natural to
think in terms of general forms of algebraic expressions. Try writing
a symbolic differentiation program in terms of attribute-value pairs.
Another example is the "logic grammars" for natural language, whose
implementation in Prolog is extremely clear and efficient.
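[To make the point concrete, here is a toy differentiator over
expression trees in modern Python -- an editorial sketch, not Narain's
code.  Expressions are atoms or tuples (op, left, right), a format
invented for the example:]

```python
def d(expr, var):
    # Differentiate expr with respect to var; expr is an atom or a
    # tuple (op, left, right).
    if expr == var:
        return 1
    if not isinstance(expr, tuple):   # a constant or another variable
        return 0
    op, u, v = expr
    if op == '+':
        return ('+', d(u, var), d(v, var))
    if op == '*':                     # product rule
        return ('+', ('*', d(u, var), v), ('*', u, d(v, var)))
    raise ValueError("unknown operator: " + op)

# d/dx (x*x + 3): the answer is itself a tree, left unsimplified.
print(d(('+', ('*', 'x', 'x'), 3), 'x'))
```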

As to whether FP or more generally applicative languages are useful to
AI depends upon the point of view you take of AI. A useful view is to
consider it as "advanced programming" where you wish to develop 
intelligent computer programs, and so develop powerful computational
methods for them, even if humans do not use those methods. From this
point of view Backus's comments about the "von Neumann bottleneck"
apply to AI programming just as they do to conventional
programming. Hence applicative languages may have ideas that could
solve the "software crisis" in AI as well.

This is not just surmise; the Prolog applications to date and underway
are evidence in favor of the power of applicative languages. You may
debate about the "applicativeness" of practical Prolog programming,
but in my opinion the best (and also the most efficient) Prolog
programs are in essence "applicative".

-- Sanjai Narain

------------------------------

Date: 12 Aug 1983 1208-PDT
From: FC01@USC-ECL
Subject: Knowledge Representation, Fifth Generation

About knowledge representation---

        Although many are new to this ballgame, the fundamentals of
the field are well established. Look in the dictionary of information
science a few years back (5-10?) for an article on the representation
of knowledge by Irwin Marin.  The (M,R) pair mentioned is indeed a
general structure for representation. In fact, you may recall 10 or 20
years ago there was talk that the most efficient programs on computers
would eventually consist of many many pointers (Rs) that pointed
between datums (Ms) in many different ways - kinda like the brain!!! It
has gone well beyond the (M,R) pair stage and Marin has developed a
structure for representation that allows top down knowledge
engineering to proceed in a systematic fashion. I guess many of us
forsake history in many ways, both social and technical.

        As to the 'race' to 5th generation computers, it may indeed be
a means to further the military industrial complex in the area of
computing, but let us also consider the tactical implications of a
highly intelligent (take the term with a grain of salt when speaking
of a computer) tactical computer. Perhaps the complexities of battle
could be simplified for human consumption to the point where a good
general could indeed win an otherwise lost war. Perhaps not. The 
scientific sharing of ideas has always been the boon of science and
the bust of government. The U.S. is in an advantageous vantage point
from the boom point of view because we share so much with each other
and others. We are also tops in the bust category because it is so
easy to get our information to other places.  Somewhere the scientific
need for communication must be traded off with the possible effects of
the research. This is what I call scientific responsibility.  As
scientists we are responsible not only to our research and the
dissemination of our knowledge, but also responsible for the effects
of that knowledge. If we shared the 'secrets' of the atomic bomb with
the world as we developed it, do you think more or fewer people would
have died? I think the Germans (who were also working on the project)
might have been able to complete their version sooner and would have
killed a great number more people. In the case of Japan, we are
talking economic struggle rather than political, but the concept of
war and destruction can be visualized just as well. A small country
using a very rapid economic growth to push ahead of the rest of the
world, now has no place to expand to. Heard it before? What new
technology will be developed using the new generation of computers?
Can we afford to lose our edge in yet another technological area to
the more eager of the world? Is this just another ploy of the M.I.
complex to get money from the people and take food from the hungry?
Tough questions, without the facts hard to answer.

                                        Another controversy ignited or
                                        enflamed by yours truly,
                                                Fred

------------------------------

Date: 12 Aug 1983 15:09-PDT
From: andy at -[VAX]
Subject: Japan's supercomputers as potential defense threat


    I'm a little confused about why this Japanese business seems to be
    scaring the pants off of the US research community... why
    anyone cares if the fifth generation thing is propaganda or not.
    You'll find out when they make it work or they don't!  ...Worry
    about the arms race, not the computer race!
                        -- SHRAGER%CMU-PSY-A@CMU-CS-PT

One serious reason for concern, at least according to political 
conservatives, is that the United States would cease to be in a 
position to control the distribution of the world's most advanced 
computing technology.

Currently, there are specific export restrictions to prohibit transfer
of advanced technology from the U.S. to its putative enemies (e.g. the
Soviet Union).  (For example, I was told not long ago that it is 
illegal to fly over France carrying the schematics for a Cyber in your
briefcase.)

The reason for this becomes quite clear when you consider who the 
principal consumers of supercomputers are in this country: they are 
disproportionally well represented by people pursuing nuclear energy 
and weapons R&D, cryptology, and war gaming.  If the Japanese have the
fastest computers, then they control distribution of the hottest 
computational technology and at least potentially could sell it to 
countries that DoD would prefer to remain well behind us
technologically.  Worse, they might sell it to others but not to the
United States.

While there are lapses in the effectiveness of this sort of export 
control, it seems to work fairly well overall.  For example, I
recently read that the East Germans have just successfully fabricated
a Z-80 chip clone; reportedly, although their chip does seem to work,
it is substantially inferior to the state of the art here.  If the
best that "blacklisted" countries can do is play catch-up via reverse 
engineering, the U.S. Government will have met its practical goal of 
denying them up-to-date technology.  If, on the other hand, other 
countries are able to produce faster and more powerful computers, the 
U.S. could no longer control access to the best tools available for 
defense R&D.


    When I begin to worry is when Japan decides to build a better MX
    missile, not a better computer system.  Then issues of scientific
    morals are involved and it's a whole 'nother ballgame.


Supercomputers play a significant role in intelligence and weapons 
research in the United States.  I would expect those people who
subscribe to the view that the U.S. Government should deny high 
technology to its perceived enemies to argue that they ARE "worry[ing]
about the arms race" when they feel threatened by Japan's big 
technology push, and that the issue IS at least qualitatively 
equivalent to Japan's developing better missiles.

                                                asc

p.s. No flames about science and brotherhood, please.  I didn't claim
     to agree with the conservatives whose views I'm attempting to
     describe.  The argument that "Science is a cooperative effort"
     has, BTW, also been voiced frequently in response to NSA's
     recent attempt to control cryptology research in the U.S.

p.p.s.  Perhaps further discussion of the role of Japan's
     supercomputer project in defense applications should be directed to,
     or at least CC'd to, ARMS-D@MIT-MC.

------------------------------

Date: Fri, 12 Aug 83 12:59:34 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Unprintable

I'm sorry, folks, but all this flaming about 7 December 1941 sounds
too much like old fashioned racism for me.

B. Cooper

------------------------------

Date: 12 Aug 83 16:52:14-PDT (Fri)
From: ihnp4!we13!otuxa!ll1!sb1!sb6!emory!gatech!spaf @ Ucb-Vax
Subject: Sex, religion, words, smoking, farting, and the net
Article-I.D.: gatech.364

It just occurred to me today that most of the discussions going on
about use of genderless pronouns, homosexuals, heterosexuals,
personal habits, religion, and other interesting habits, all have one
point in common when we discuss them -- they're *human*
activities/conditions.

Now stop for a moment and consider the Turing test.  When you read
these messages from other users on the net, how do you know that they
are from people typing at some site rather than some intelligent
program?  I would contend that a good definition of humanity and
intelligence could be formulated by someone looking at the net
traffic.  The rabid flamers and fanatics who condemn and insult would
not meet that definition.

We develop new ideas daily in this field.  A handicapped person is
freed from their limitations if they can communicate with the
rest of us at 300 or 1200 baud.  They can stutter, or be mute, they
can be almost completely paralyzed, but their minds and souls are
still alive and free and can communicate with the rest of us.

It doesn't matter if you are male or female, black, red, white,
green, tall, short, old, young, fat, smoking, farting, going 55 mph,
attracted to members of the same sex, attracted to sheep, or any
possible variation of the human condition -- you are a human
intelligence at the other end of my network connection, and I deal
with you in a human manner.  Once you show your lack of tolerance or
your inability to at least try to understand, you show yourself to be
less than human.

Discrimination really means the ability to differentiate amongst
alternatives.  Prejudice and bigotry mean that you discriminate based
on factors which have no real bearing on the choice at hand.  I
believe that the definition of "human intelligence" is that it
implies the ability to discriminate and the inability to be a bigot.

I hope that some of the contributors to the net are simply AI
projects; I would hate to believe that there are people with so much
hate and intolerance as is sometimes expressed.

Comments?

--
The soapbox of Gene Spafford
CSNet:  Spaf @ GATech
ARPA:   Spaf.GATech @ UDel-Relay
uucp:   ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf


[I disagree strongly with any definition of humanity that excludes
flamers and bigots, but this digest is not the place for such a
discussion.  The question of whether intelligence excludes (or
implies) prejudice is more interesting.  We should also be seeking a
replacement for the Turing test that could identify nonhuman
intelligence. -- KIL]

------------------------------

Date: 14 Aug 83 1:12:15-PDT (Sun)
From: harpo!seismo!rlgvax!oz @ Ucb-Vax
Subject: Re: Sex, religion, words, smoking, farting, and the net
Article-I.D.: rlgvax.994

I agree that it would be a shame if there were AI projects that had
such hate and bigotry.  I argue that it WOULD be possible for an AI
project to exhibit the narrowmindedness and stupidity that we
frequently see on the net.  An interesting discussion, Gene, it is
something to ponder.

                                OZ
                                seismo!rlgvax!oz

------------------------------

End of AIList Digest
********************
17-Aug-83 16:14:02-PDT,11709;000000000001
Mail-From: LAWS created at 17-Aug-83 16:09:15
Date: Wednesday, August 17, 1983 4:04PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #41
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 41

Today's Topics:
  Expert Systems - Rog-O-Matic,
  Programming Languages - LOGLisp & NETL & Functional Programming,
  Computational Complexity - Time/Space Tradeoff & Humor
----------------------------------------------------------------------

Date: Tuesday, 16 August 1983 21:20:38 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Rog-O-Matic paper request


People of NetLand, the long awaited day of liberation has arrived!
Throw off the shackles of ignorance and see what secrets of
technology have been laid bare through the agency of a free
university press, unbridled by the harsh realities of economic
competition!

                The Rog-O-Paper is here!

For a copy of CMU Technical Report CMU-CS-83-144, entitled
"Rog-O-Matic: A Belligerent Expert System", please send your physical
address to

                Mauldin@CMU-CS-A

and include the phrase "paper request" in the subject line.


For those who have a copy of the draft, the final version contains
two more figures, expanded descriptions of some algorithms, and an
updated discussion of Rog-O-Matic's performance, including
improvements made since February.  And even if you don't have a copy
of the draft, the final version still contains two more diagrams,
expanded descriptions of some algorithms, and an updated discussion
of performance.  The history of the program's development is also
chronicled.

The source is still available by either FTP or can be mailed in
several pieces.  It is about a third of a megabyte of characters, and
is mailed in pieces either 70K or 40K characters long.

Michael Mauldin (Fuzzy)
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA  15213


                     CMU-CS-83-144      Abstract

      Rog-O-Matic is an unusual combination  of  algorithmic and
      production  systems programming techniques which cooperate
      to explore a hostile environment.  This environment is the
      computer game  Rogue,  which offers several advantages for
      studying  exploration  tasks.   This  paper  presents  the
      major features of the Rog-O-Matic  system,  the  types  of
      knowledge  sources  and   rules   used   to   control  the
      exploration,  and  compares  the performance of the system
      with human Rogue players.

------------------------------

Date: Tue 16 Aug 83 22:56:27-CDT
From: Donald Blais <CC.BLAIS@UTEXAS-20.ARPA>
Subject: LOGLisp language query

In the July 1983 issue of DATAMATION, Larry R. Harris states that the
logic programming language LOGLisp has recently been developed by
Robinson.  What sources can I go to for additional information on this
language?

-- Donald

------------------------------

Date: Wed, 17 Aug 83 04:25 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Scott Fahlmann's NETL

I've read a book by Scott Fahlmann about a system called NETL for 
representing knowledge in terms of a particular tree-like structure.  
I found it a fascinating idea.  It was published in 1979.  When I last
heard about it, there were plans to develop some hardware to implement
the concept.  Does anyone know what's been happening on this front?
                              Alan Glasser (glasser@lll-mfe)

------------------------------

Date: 15 Aug 83 22:44:27-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: uiucdcs.2574

Having also worked with both FP and AI systems I basically agree with
your perceptions of their respective goals and functions, but I think
that we can have both, since they operate at different levels: Think
of a powerful, functional language that underlies the majority of the
work in AI data and procedural representations, and imagine what the
world would be like if it were pure (but still powerful).

Besides the "garbage collector" running now and then, there could,
given the mathematical foundations of FP systems, also be an
"efficiency expert" hanging around to tighten up your sloppy code.

Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

P.S. There is a recent paper by Lenat from Rand called "Cognitive
Economy" which discusses some possible advances in computing
environment maintenance; I don't recall it being linked to FP
systems, however.

------------------------------

Date: 16 Aug 83 20:33:29 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: maximum speed

This *maximum* time business needs further ground rules if we are to
discuss it here (which we probably shouldn't).  For instance, the
argument that communication and multiplication paths don't matter in an
nxn matrix multiply, but that the limiting step is the summation of n
numbers, seems to allow too much power in specifying components.  I am
allowed unboundedly many processors and communication paths, but only
a tree of adders?  I can build you a circuit that will add n numbers
simultaneously, so that means the *maximum* speed of an nxn matrix
multiply is constant.  But it just ain't so.  As n grows larger and
larger and larger the communication paths and the addition circuitry 
will also either grow and grow and grow, or the algorithm will slow
down.  Good old time-space tradeoff.

        (Another time-space tradeoff for matrix multiply on digital
computers:  just remember all the answers and look them up in ROM.
Result: constant time matrix multiply for bounded n.)
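The ROM trick above can be made concrete.  A minimal sketch (the domain, matrix size, and names are invented for illustration): for matrices over a tiny fixed domain, precompute every possible product and answer multiplies by table lookup, trading exponential space for constant time.

```python
from itertools import product

DOMAIN = (0, 1)   # entries restricted to bits; n fixed at 2
N = 2

def multiply(a, b):
    """Ordinary O(n^3) matrix multiply on tuples of tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(N))
                       for j in range(N)) for i in range(N))

def all_matrices():
    for flat in product(DOMAIN, repeat=N * N):
        yield tuple(tuple(flat[i * N + j] for j in range(N))
                    for i in range(N))

# The "ROM": 16 x 16 = 256 precomputed products.
ROM = {(a, b): multiply(a, b)
       for a in all_matrices() for b in all_matrices()}

def multiply_rom(a, b):
    """Constant-time multiply -- at exponential cost in table space."""
    return ROM[(a, b)]

a = ((1, 0), (1, 1))
b = ((0, 1), (1, 0))
assert multiply_rom(a, b) == multiply(a, b)
```

The table already has 256 entries for 2x2 bit matrices; growing the domain or n blows it up immediately, which is exactly the tradeoff being described.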

------------------------------

Date: 16 Aug 1983 2016-MDT
From: William Galway <Galway@UTAH-20>
Subject: NP-completeness and parallelism, humor

Perhaps AI-digest readers will be amused by the following
article.  I believe it's by Danny Cohen, and appears in the
proceedings of the CMU Conference on VLSI Systems and
Computations, pages 124-125, but this copy was dug out of local
archives.

..................................................................

                      The VLSI Approach to
                    Computational Complexity

                      Professor J. Finnegan
                 University of Oceanview, Kansas
             (Formerly with the DP department of the
               First National Bank of Oceanview)

The rapid advance of  VLSI and the trend  toward the decrease  of
the geometrical  feature  size,  through the  submicron  and  the
subnano to the subpico, and beyond, have dramatically reduced the
cost  of  VLSI  circuitry.   As  a  result,  many   traditionally
unsolvable problems  can now  (or  will in  the near  future)  be
easily implemented using VLSI technology.

For example, consider the  traveling salesman problem, where  the
optimal sequence of N nodes ("cities") has to be found.   Instead
of  applying  sophisticated   mathematical  tools  that   require
investment in human thinking, which because of the rising cost of
labor  is  economically  unattractive,  VLSI  technology  can  be
applied to  construct  a  simple  machine  that  will  solve  the
problem!

The traveling salesman problem is considered difficult because of
the requirement  of finding  the best  route out  of N!  possible
ones.  A conventional single processor would require O(N!)  time,
but with clever use of VLSI technology this problem can easily be
solved in polynomial time!!

The solution is obtained with a simple VLSI array having only  N!
processors.  Each  processor is  dedicated to  a single  possible
route that  corresponds  to  a certain  permutation  of  the  set
[1,2,3,..N].  The time to load the distance matrix and to  select
the shortest  route(s)  is  only  polynomial  in  N.   Since  the
evaluation of  each route  is  linear in  N, the  entire  system
solves the problem in just polynomial time! Q.E.D.
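The joke, of course, hides the O(N!) work inside N! processors.  For a sense of the route space each "processor" covers, here is what a single conventional processor enumerating it looks like (the distance matrix and city count are made up for illustration):

```python
from itertools import permutations

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
N = 4

def route_length(route):
    # Sum the legs, closing the tour back to the starting city.
    return sum(dist[route[i]][route[(i + 1) % N]] for i in range(N))

# Fix city 0 as the start; the remaining (N-1)! orderings are the
# routes the satire assigns one processor each.
best = min(route_length((0,) + p) for p in permutations(range(1, N)))
print(best)
```

Each route is evaluated in linear time, just as the article says; it is the (N-1)! of them that a single machine cannot escape.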

Readers familiar only with conventional computer architecture may
wrongly suspect  that  the  communication between  all  of  these
processors is too expensive (in area).  However, with the use  of
wireless communication this problem is easily solved without  the
traditional, conventional area penalty.   If the system fails  to
obtain  from  the  FCC  the  required  permit  to  operate  in  a
reasonable  domain  of  the  frequency  spectrum,  it  is  always
possible to  use  microlasers and  picolasers  for  communicating
either through a light-conducting  substrate (e.g.  sapphire)  or
through a convex light-reflecting surface mounted parallel to the
device.   The  CSMA/CD  (Carrier  Sense  Multiple  Access,   with
Collision Detection) communication  technology, developed in  the
early seventies,  may  be found  to  be most  helpful  for  these
applications.

If it is necessary to  solve a problem with  a larger N than  the
one for which the system  was initially designed, one can  simply
design another system for that particular  value of N, or even  a
larger  one,  in  anticipation   of  future  requirements.    The
advancement of  VLSI  technology  makes  this  iterative  process
feasible and attractive.

This approach is not new.  In the early eighties many researchers
discovered the possibility of  accelerating the solution of  many
NP-complete problems by a simple  application of systems with  an
exponential number of processors.

Even earlier, in  the late seventies  many scientists  discovered
that problems with polynomial complexity could also be solved  in
lower time (than  the complexity) by  using number of  processors
which  is  also  a  polynomial  function  of  the  problem  size,
typically of  a  lower  degree.   NxN  matrix  multiplication  by
systems with N^2 processors used to  be a very popular topic  for
conversations and  conference papers,  even though  less  popular
among system builders.  The requirement of dealing with the variable
N was (we believe) handled by the simple P/O technique, namely,
buying a new system for any other value of N, whenever needed.

According to the most  popular model of those  days, the cost  of
VLSI processors decreases  exponentially.  Hence the  application
of an exponential number  of processors does  not cause any  cost
increase, and  the application  of only  a polynomial  number  of
processors results in a substantial cost saving!!  The fact  that
the former exponential decrease refers  to calendar time and  the
latter to problem size probably has no bearing on this discussion
and should be ignored.

The famous Moore model of exponential cost decrease was based  on
plotting the time  trend (as has  been observed in  the past)  on
semilogarithmic scale.   For that  reason  this model  failed  to
predict the present  as seen  today.  Had  the same  observations
been plotted on a simple linear  scale, it would be obvious  that
the cost of VLSI processors is already (or about to be) negative.
This must be the case, or else there is no way to explain why  so
many researchers  design systems  with an  exponential number  of
processors and compete  for solving  the same  problem with  more
processors.

CONCLUSIONS

 - With  the  rapid  advances  of  VLSI  technology  anything  is
possible.

 - The more VLSI processors in a system, the better the paper.

------------------------------

End of AIList Digest
********************
18-Aug-83 10:03:22-PDT,6919;000000000001
Mail-From: LAWS created at 18-Aug-83 09:58:21
Date: Thursday, August 18, 1983 9:54AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #42
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 42

Today's Topics:
  Fifth Generation - National Security,
  Artificial Intelligence - Prejudice & Turing Test
----------------------------------------------------------------------

Date: Tue, 16 Aug 83 13:32:17 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: AI & Morality

  The human manner has led to all sorts of abuses.  Indeed your latest
series of messages (e.g. Spaf) has offended me.  Maybe he meant
humane?  In any event there is no need to be vulgar to make a point.
Any point.

  There are some of us who work for the US government who are very
aware of the threats of exporting high technology and deeply concerned
about the free exchange of data and information and the benefits of
such exchange.  It is only in recent years and maybe because of the
Japanese that academia has taken a greater interest in areas which
they were unwilling to look at before (current economics also makes
for strange bedfellows). Industry has always had an interest (if for
nothing more than to show us a better? wheel for bigger!  bucks).  We
are in a good position to maintain the military-industrial-university
complex (not sorry if this offends anyone) and get some good work 
done.  Recent government policy may restrict high technology flow so
that you might not even get on that airplane soon.

[...]

Mort

------------------------------

Date: Tue, 16 Aug 83 17:15:24 EDT
From: Joe Buck <buck@NRL-CSS>
Subject: frame theory of prejudice


We've heard on this list that we should consider flamers and bigots 
less than human. But doesn't Minsky's frame theory suggest that
prejudice is simply a natural by-product of the way our minds work?
When we enter a new situation, we access a "script" containing default
assumptions about the situation. If the default assumptions are
"sticky" (don't change to agree with newly obtained information), the
result is prejudice.

When I say "doctor", a picture appears in your mind, often quite
detailed, containing default assumptions about sex, age, physical
appearance, etc.  In some people, these assumptions are more firmly
held than in others.  Might some AI programs designed along these
lines show phenomena resembling human prejudice?
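The "sticky default" mechanism is easy to caricature in code.  A minimal sketch, with invented names (Frame, observe, belief) standing in for whatever a real frame system would use: a sticky frame keeps its default slot value even when an observation contradicts it, which is the prejudice effect described above.

```python
class Frame:
    """A frame: default slot values plus observed evidence."""
    def __init__(self, defaults, sticky=False):
        self.defaults = dict(defaults)
        self.observed = {}
        self.sticky = sticky

    def observe(self, slot, value):
        self.observed[slot] = value

    def belief(self, slot):
        # A sticky frame ignores evidence against its defaults.
        if self.sticky and slot in self.defaults:
            return self.defaults[slot]
        return self.observed.get(slot, self.defaults.get(slot))

doctor = Frame({"sex": "male", "age": "middle-aged"})
rigid_doctor = Frame({"sex": "male"}, sticky=True)

doctor.observe("sex", "female")
rigid_doctor.observe("sex", "female")

print(doctor.belief("sex"))        # evidence wins: "female"
print(rigid_doctor.belief("sex"))  # default sticks: "male"
```

The only difference between the two frames is one flag on the update rule, which is the point: nothing exotic is needed for a program to exhibit the phenomenon.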

                                                Joe Buck
                                                buck@nrl-css

------------------------------

Date: 16 Aug 1983 1437-PDT
From: Jay <JAY@USC-ECLC>
Subject: Turing Test; Parry, Eliza, and Flamer

Parry and Eliza are fairly famous early AI projects.  One acts
paranoid, another acts like an interested analyst.  How about reviving
the project and challenging the Turing test?  Flamer is born.

Flamer would read messages from the net and then reply to the 
sender/bboard denying all the person said, insulting him, and in 
general making unsupported statements.  I suggest some researchers out
there make such a program and put it on the net.  The goal would be 
for the readers of the net try to detect the Flamer, and for Flamer to
escape detection.  If the Flamer is not discovered, then it could be 
considered to have passed the Turing test.

Flamer has the advantage of being able to take a few days in 
formulating a reply; it could consult many related online sources, it
could request information concerning the subject from experts (human,
or otherwise), it could perform statistical analysis of other flames
to make appropriate word choices, it could make common errors 
(gramical, syntactical, or styleistical), and it could perform other 
complex computations.
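One of the capabilities suggested above -- statistical analysis of other flames to make word choices -- is a bigram (Markov chain) text model in its simplest form.  A hedged sketch, with a made-up training corpus and invented function names:

```python
import random
from collections import defaultdict

# A tiny invented "corpus of flames" for illustration only.
corpus = ("you are wrong and you know you are wrong "
          "your argument is nonsense and your facts are wrong").split()

# Record, for each word, the words observed to follow it.
model = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    model[w1].append(w2)

def flame(start="you", length=8, seed=42):
    """Emit a flame by walking the bigram model from a start word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(flame())
```

With a real corpus of net flames instead of this toy one, the output word choices would at least have the right distribution, which is all the proposal asks for.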

Perhaps Flamer is already out there, and perhaps this message is 
generated by such a program.

j'

------------------------------

Date: 16 Aug 83 20:57:20 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: artificially intelligent bigots.

I agree that bigotry and intelligence exclude each other.  An
Eliza-like bigotry program would be simple in direct proportion to its
bigotry.

------------------------------

Date: 15 Aug 83 20:05:24-PDT (Mon)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: AI Projects on the Net
Article-I.D.: ssc-vax.417


This is a really fun topic.  The problem of the Turing Test is 
enormously difficult and *very* subtle (either that or we're 
overlooking something really obvious).  Now the net provides a
gigantic lab for enterprising researchers to try out their latest
attempts.  So far I have resisted the temptation, since there are more
basic problems to solve first!  The curious thing about an AI project
is that it can be made infinitely complicated (programs are like that;
consider emacs or nroff), certainly enough to simulate any kind of
behavior desired, whether it be bigotry, right-wingism, irascibility,
mysticism, or perhaps even ordinary rational thought.  This has been 
demonstrated by several programs, among them PARRY (simulates 
paranoia), and POLITICS (simulates arguments between ideologues) (mail
me for refs if interested).  So it doesn't appear that there is a way
to detect an AI project, based on any *particular* behavior.

A more productive approach might be to look for the capability to vary
behavior according to circumstances (self-modifiability).  I can note
that all humans appear capable of modifying their behavior, and that
very few AI programs can do so.  However, not all human behavior can
be modified, and much cannot be modified easily.  "Try not to think of
a zebra for the next ten minutes" - humans cannot change their own
thought processes to manage this feat, while an AI program would not
have much problem.  In fact, Lenat's Eurisko system (assuming we can
believe all the claims) has the capability to speed up its own
operation! (it learned that Lisp 'eq' and 'equal' are the same for
atoms, and changed function references in its own code) The ability to
change behavior cannot be a criterion.
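The Eurisko anecdote -- swapping a structural equality test for a cheap identity test once the arguments are known to be atoms -- has a compact analogue outside Lisp.  A sketch in Python rather than Lisp, with all names invented for the illustration: a dispatch slot that rewrites itself after discovering the cheap and expensive tests coincide on the arguments it sees.

```python
import sys

def symbol(name):
    """Interned 'atoms': for these, identity and equality agree."""
    return sys.intern(name)

def equal(a, b):
    """Like Lisp EQUAL: structural comparison, recursing through lists."""
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(equal(x, y) for x, y in zip(a, b))
    return a == b

def eq(a, b):
    """Like Lisp EQ: a single identity test."""
    return a is b

class Comparator:
    """Calls the general test until it sees atom arguments, then
    patches its own dispatch slot to the O(1) test."""
    def __init__(self):
        self.test = self.general

    def general(self, a, b):
        result = equal(a, b)
        if isinstance(a, str) and isinstance(b, str):
            self.test = eq          # the self-modifying step
        return result

c = Comparator()
assert c.test(symbol("foo"), symbol("foo"))   # general path; patches itself
assert c.test is eq                           # dispatch slot was rewritten
assert c.test(symbol("foo"), symbol("bar")) is False
```

This captures only the mechanics, not the learning: Eurisko reportedly discovered the rule itself, whereas here the rule is wired in.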

So how does one decide?  The question is still open....

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I thought about Zeno's Paradox recently - the Greeks (especially 
Archimedes) were about a hair's breadth away from discovering 
calculus, but Zeno had crippled everybody's thinking by making a 
"paradox" where none existed.  Perhaps the Turing Test is like
that....

------------------------------

End of AIList Digest
********************
19-Aug-83 17:52:10-PDT,18065;000000000001
Mail-From: LAWS created at 19-Aug-83 17:50:45
Date: Friday, August 19, 1983 5:26PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #43
To: AIList@SRI-AI


AIList Digest           Saturday, 20 Aug 1983      Volume 1 : Issue 43

Today's Topics:
  Administrivia - Request for Archives,
  Bindings - J. Pearl,
  Programming Languages - Loglisp & LISP CAI Packages,
  Automatic Translation - Lisp to Lisp,
  Knowledge Representation,
  Bibliographies - Sources & AI Journals
----------------------------------------------------------------------

Date: Thu 18 Aug 83 13:19:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Archives

I would like to hear from systems people maintaining AIList archives
at their sites.  Please msg AIList-Request@SRI-AI if you have an
online archive that is publicly available and likely to be available
under the same file name(s) for the foreseeable future.  Send any
special instructions needed (beyond anonymous FTP).  I will then make
the information available to the list.

                                        -- Ken Laws

------------------------------

Date: Thu, 18 Aug 83 13:50:16 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: change of address

Effective September 1, 1983 and until March 1, 1984 Judea Pearl's 
address will be :

        Judea Pearl
        c/o Faculty of Management
        University of Tel Aviv
        Ramat Aviv, ISRAEL

Dr. Pearl will be returning to UCLA at that time.

------------------------------

Date: Wednesday, 17 Aug 1983 17:52-PDT
From: narain@rand-unix
Subject: Information on Loglisp


You can get Loglisp (language or reports) by writing to J.A. Robinson
or E.E. Sibert at:

      C.I.S.
      313 Link Hall
      Syracuse University
      Syracuse, NY 13210


A paper on LOGLISP also appeared in "Logic Programming" eds. Clark and
Tarnlund, Academic Press 1982.

-- Sanjai

------------------------------

Date: 17 Aug 83 15:19:44-PDT (Wed)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: LISP CAI Packages
Article-I.D.: dcdwest.214

Is there a computer-assisted instructional package for LISP that runs
under 4.1 bsd ?  I would appreciate any information available and will
summarize what I learn ( about the package) in net.lang.lisp.

Peter Benson decvax!ittvax!dcdwest!benson

------------------------------

Date: 17-AUG-1983 19:27
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: Lisp to Lisp translation again


I'm glad that I didn't have to start this discussion up this time.
Anyhow, here's a suggestion that I think should be implemented but
which requires a great deal of Lisp community cooperation.  (Oh
dear...perhaps it's dead already!)

Probably the most intracompatible language around (next to TRAC) is
APL.  I've had a great deal of success moving APL workspaces from one 
implementation to another with a minimum of effort.  Now, part of this
has to do with the fact that APL's primitive set can't be extended
easily but if you think about it, the question of exactly how do you
get all the stuff in a workspace from one machine to the other isn't
an easy one to answer.  The special character set makes each machine's
representation a little different and, of course, trying to send the
internal form would be right out!

The APL community solved this rather elegantly: they have a thing
called a "workspace interchange standard" which is in a canonical code
whose first 256 bytes are the atomic vector (character codes) for the
source machine, etc.  The beauty of this canonical representation
isn't just that it exists, but rather that the translation to and from
this code is the RESPONSIBILITY OF THE LOCAL IMPLEMENTOR!  That is,
for example, if I write a program in Franz and someone at Xerox wants
it, I run it through our local workspace outgoing translator which
puts it into the standard form and then I ship them that (presumably
messy) version.  They have a compatible ingoing translator which takes
certain combinations of constructs and translates them to InterLisp.

Now, of course, this isn't all that easy.  First we'd have to agree on
a standard but that's not so bad.  Most of the difficulty in deciding
on a standard Lisp is taste and that has nothing to do with the form
of the standard since no human ever writes in it.  Another difficulty
(here I am indebted to Ken Laws) is that many things have impure
semantics and so cannot be cleanly translated into another form --
take, for example, the spaghetti stack (please!). Anyhow, I never said
it would be easy but I don't think that it's all that difficult either
-- certainly it's easier than the automatic programming problem.

I'll bet this would make a very interesting dissertation for some
bright young Lisp hacker.  But the difficult part isn't any particular
translator.  Each is hand tailored by the implementors/supporters of a
particular lisp system. The difficult part is getting the Lisp world
to follow the example of a computing success, as, I think, the APL
world has shown workspace interchange to be.

------------------------------

Date: 18 Aug 83 15:31:18-PDT (Thu)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Knowledge Representation, Programming Styles
Article-I.D.: ssc-vax.437

Actually trees can be expressed as attribute-value pairs.  Have had to
do that to get around certain %(&^%$* OPS5 limitations, so it's 
possible, but not pretty.  However, many times your algebraic/tree 
expressions/structures have duplicated components, in which case you
would like to join two nodes at lower levels.  You then end up with a
directed structure only.  (This is also a solution for multiple
inheritance problems.)
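The flat encoding being described can be sketched directly.  A hedged illustration (node names and helper functions are invented): a structure expressed as (node, attribute, value) triples, with a shared subterm referenced twice so the result is a DAG rather than a tree.

```python
# The expression (a + b) * (a + b): "plus" is one node referenced
# from both sides of "times" -- the join-at-a-lower-level case.
triples = [
    ("times", "left",  "plus"),
    ("times", "right", "plus"),   # shared node: directed graph, not tree
    ("plus",  "left",  "a"),
    ("plus",  "right", "b"),
]

def children(node):
    """Collect a node's attribute-value pairs from the flat triples."""
    return {attr: val for (n, attr, val) in triples if n == node}

def rebuild(node):
    """Reconstruct a nested form from the flat encoding."""
    kids = children(node)
    if not kids:
        return node
    return (node, rebuild(kids["left"]), rebuild(kids["right"]))

assert rebuild("times") == ("times", ("plus", "a", "b"),
                            ("plus", "a", "b"))
```

Note that rebuilding duplicates the shared node, which is exactly why the flat triple form, not the nested one, is the faithful representation of the DAG.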

I'll refrain from flaming about traditional (including logic)
grammars.  I'm tired of people insisting on a restricted view of
language that claims that grammar rules are the ultimate description
of syntax (semantics being irrelevant) and that idioms are irritating 
special cases.  I might note that we have basically solved the
language analysis problem (using a version of Berkeley's Phrase
Analysis that handles ambiguity) and are now working on building a
language learner to speed up the knowledge acquisition process, as
well as other interesting projects.

I don't recall a von Neumann bottleneck in AI programs, at least not 
of the kind Backus was talking about.  The main bottleneck seems to be
of a conceptual rather than a hardware nature.  After all, production 
systems are not inherently bottlenecked, but nobody really knows how 
to make them run concurrently, or exactly what to do with the results 
(I have some ideas though).

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 16 Aug 83 10:43:54-PDT (Tue)
From: ihnp4!ihuxo!fcy @ Ucb-Vax
Subject: How does one obtain university technical reports?
Article-I.D.: ihuxo.276

I think the bibliographies being posted to the net are great.  I'd 
like to follow up on some of the references, but I don't know where to
obtain copies for many of them.  Is there some standard protocol and
contact point for requesting copies of technical reports from 
universities?  Is there a service company somewhere from which one 
could order such publications with limited distribution?

                        Curiously,

                        Fred Yankowski
                        Bell Labs Rm 6B-216
                        Naperville, IL
                        ihnp4!ihuxo!fcy


[I published all the addresses I know in V1 #8, May 22.  Two that
might be of help are:

    National Technical Information Service
    5285 Port Royal Road
    Springfield, Virginia  22161

    University Microfilms
    300 North Zeeb Road
    Ann Arbor, MI  48106

You might be able to get ordering information for many sources
through your corporate or public library.  You could also contact
LIBRARY@SCORE; I'm sure Richard Manuck  would be willing to help.
If all else fails, put out a call for help through AIList. -- KIL]

------------------------------

Date: 17 Aug 83 1:14:51-PDT (Wed)
From: decvax!genrad!mit-eddie!gumby @ Ucb-Vax
Subject: Re: How does one obtain university technical reports?
Article-I.D.: mit-eddi.616

Bizarrely enough, MIT and Stanford AI memos were recently issued by 
some company on MICROFILM (!) for some exorbitant price.  This price 
supposedly gives you all of them plus an introduction by Marvin
Minsky.  They advertised in Scientific American a few months ago.  I
guess this is a good deal for large institutions like Bell, but
smaller places are unlikely to have a microfilm (or was it fiche)
reader.

MIT AI TR's and memos can be obtained from Publications, MIT AI Lab, 
8th floor, 545 Technology Square, Cambridge, MA 02139.


[See AI Magazine, Vol. 4, No. 1, Winter-Spring 1983, pp. 19-22, for 
Marvin Minsky's "Introduction to the COMTEX Microfiche Edition of the
Early MIT Artificial Intelligence Memos".  An ad on p. 18 offers the
set for $2450.  -- KIL]

------------------------------

Date: 17 Aug 83 10:11:33-PDT (Wed)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!cbosgd!cbscd5!lvc @
      Ucb-Vax
Subject: List of AI Journals
Article-I.D.: cbscd5.419

Here is the list of AI journals that I was able to put together from
the generous contributions of several readers.  Sorry about the delay.
Most of the addresses, summary descriptions, and phone numbers for the
journals were obtained from "The Standard Periodical Directory"
published by Oxbridge Communications Inc.  183 Madison Avenue, Suite
1108 New York, NY 10016 (212) 689-8524.  Other sources you may wish to
try are Ulrich's International Periodicals Directory, and Ayer
Directory of Publications.  These three reference books should be
available in most libraries.

*************************
AI Journals and Magazines 
*************************

------------------------------
AI Magazine
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025
        (415) 328-3123
        AAAI-OFFICE@SUMEX-AIM
        Quarterly, $25/year, $15 Student, $100 Academic/Corporate
------------------------------
Artificial Intelligence
        Elsevier Science Publishers B.V. (North-Holland)
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        About 8 issues/year, 880 Df. (approx. $352)
------------------------------
American Journal of Computational Linguistics
        Donald E. Walker
        SRI International
        333 Ravenswood Avenue
        Menlo Park, CA 94025
        (415) 859-3071
        Quarterly, individual ACL members $15/year, institutions $30.
------------------------------
Robotics Age
        Robotics Publishing Corp.
        174 Concord St., Peterborough NH 03458 (603) 924-7136
        Technical articles related to design and implementation of
        intelligent machine systems
        Bimonthly, No price quoted
------------------------------
SIGART Newsletter
        Association for Computing Machinery
        11 W. 42nd St., 3rd fl.
        New York NY 10036
        (212) 869-7440
        Artificial intelligence news, reports, abstracts, educational
        material, etc.  Book reviews.
        Bimonthly $12/year, $3/copy
------------------------------
Cognitive Science
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Articles devoted to the emerging fields of cognitive
        psychology and artificial intelligence.
        Quarterly $22/year
------------------------------
International Journal of Man-Machine Studies
        Academic Press Inc.
        111 Fifth Avenue
        New York NY 10003
        (212) 741-4000
        No description given.
        Quarterly $26.50/year
------------------------------
IEEE Transactions on Pattern Analysis and Machine Intelligence
        IEEE Computer Society
        10662 Los Vaqueros Circle
        Los Alamitos CA 90720
        (714) 821-8380
        Technical papers dealing with advancements in artificial
        machine intelligence
        Bimonthly $70/year, $12/copy
------------------------------
Behavioral and Brain Sciences
        Cambridge University Press
        32 East 57th St.
	New York NY 10022
	(212) 688-8885
        Research in psychology, neuroscience, behavioral biology,
        and cognitive science; continuing open peer commentary is
        published in each issue.
        Quarterly $95/year, $27/copy
------------------------------
Pattern Recognition
        Pergamon Press Inc.
        Maxwell House, Fairview Park
        Elmsford NY 10523
	(914) 592-7700
        Official journal of the Pattern Recognition Society
        Bimonthly $170/year, $29/copy
------------------------------

************************************
Other journals of possible interest.
************************************

------------------------------
Brain and Cognition
        Academic Press
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        The latest research in the nonlinguistic aspects of
        neuropsychology.
        Quarterly $45/year
------------------------------
Brain and Language
        Academic Press, Journal Subscription
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        No description given.
        Quarterly $30/year
------------------------------
Human Intelligence
        P.O. Box 1163
        Birmingham MI 48012
	(313) 642-3104
        Explores the research and application of ideas on human
	intelligence.
        Bimonthly newsletter - No price quoted.
------------------------------
Intelligence
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Original research, theoretical studies and review papers
        contributing to understanding of intelligence.
        Quarterly $20/year
------------------------------
Journal of the Assn. for the Study of Perception
        P.O. Box 744
	DeKalb IL 60115
        No description given.
        Semiannually $6/year
------------------------------
Computational Linguistics and Computer Languages
        Humanities Press
        Atlantic Highlands NJ 07716
	(201) 872-1441
        Articles deal with syntactic and semantic of [missing word]
        languages relating to math and computer science, primarily
        those which summarize, survey, and evaluate.
        Semimonthly $46.50/year
------------------------------
Annual Review in Automatic Programming
        Pergamon Press Inc.
        Maxwell House, Fairview Park
        Elmsford NY 10523
        (914) 592-7700
        A comprehensive treatment of some major topics selected
        for their current importance.
        Annual $57/year
------------------------------
Computer
        IEEE Computer Society
        10662 Los Vaqueros Circle
        Los Alamitos, CA 90720
        (714) 821-8380
        Monthly, $6/copy, free with Computer Society Membership
------------------------------
Communications of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Monthly, $65/year, free with membership ($50, $15 student)
------------------------------
Journal of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Computer science, including some game theory,
        search, foundations of AI
        Quarterly, $10/year for members, $50 for nonmembers
------------------------------
Cognition
        Associated Scientific Publishers b.v.
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        Theoretical and experimental studies of the mind, book reviews
        Bimonthly, 140 Df./year (~ $56), 240 Df. institutional
------------------------------
Cognitive Psychology
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        Quarterly, $74 U.S., $87 elsewhere
------------------------------
Robotics Today
        Robotics Today
        One SME Drive
        P.O. Box 930
        Dearborn, MI 48121
        Robotics in Manufacturing
        Bimonthly, $36/year unless member of SME or RIA
------------------------------
Computer Vision, Graphics, and Image Processing
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        $260/year U.S. and Canada, $295 elsewhere
------------------------------
Speech Technology
        Media Dimensions, Inc.
        525 East 82nd Street
        New York, NY 10028
        (212) 680-6451
        Man/machine voice communications
        Quarterly, $50/year
------------------------------

*******************************
    Names, but no addresses
*******************************

        Magazines
        --------

AISB Newsletter

        Proceedings
        __________

IJCAI	International Joint Conference on AI
AAAI	American Association for Artificial Intelligence
TINLAP	Theoretical Issues in Natural Language Processing
ACL	Association for Computational Linguistics
AIM	AI in Medicine
MLW	Machine Learning Workshop
CVPR	Computer Vision and Pattern Recognition (formerly PRIP)
PR	Pattern Recognition
IUW	Image Understanding Workshop (DARPA)
T&A	Trends and Applications (IEEE, NBS)
DADCM	Workshop on Data Abstraction, Databases, and Conceptual Modeling
CogSci	Cognitive Science Society
EAIC	European AI Conference


Thanks again to all who contributed.

Larry Cipriani
cbosgd!cbscd5!lvc

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 22-Aug-83 09:53:03
Date: Monday, August 22, 1983 9:39AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #44
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 44

Today's Topics:
  AI Architecture - Parallel Processor Request,
  Computational Complexity - Maximum Speed,
  Functional Programming,
  Concurrency - Production Systems & Hardware,
  Programming Languages - NETL
----------------------------------------------------------------------

Date: 18 Aug 83 17:30:43-PDT (Thu)
From: decvax!linus!philabs!sdcsvax!noscvax!revc @ Ucb-Vax
Subject: Looking for parallel processor systems
Article-I.D.: noscvax.182

We have been looking into systems to replace our current ANALOG
computers.  They are the central component in a real time simulation
system.  To date, the only system we've seen that looks like it might
do the job is the Zmob system being built at the Univ. of Md (Mark
Weiser).

I would appreciate it if you could supply me with pointers to other
systems that might support high speed, high quality, parallel
processing.

Note: most high-speed networks are just too slow, and we can't justify
a Cray-1.

Bob Van Cleef

uucp: {decvax!ucbvax || philabs}!sdcsvax!nosc!revc
arpa: revc@nosc
CompuServe: 71565,533

------------------------------

Date: 19 Aug 83 20:29:13-PDT (Fri)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: maximum speed
Article-I.D.: ssc-vax.445

Hmmm, I didn't know that addition of n numbers could be performed 
simultaneously - ok then, constant time matrix multiplication, given 
enough processors.  I still haven't seen any hard data on limits to
speed because of communications problems.  If it seems like there are
limits but you can't prove it, then maybe you haven't discovered the
cleverest way to do it yet...
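
For what it's worth, the log-time scheme behind "adding n numbers
simultaneously" is easy to simulate.  This Python sketch (sequential,
of course; each round stands for one parallel step on n/2 adders)
combines pairs of partial sums until one value remains:

```python
def parallel_sum(values):
    """Simulate a parallel tree reduction: one round = one parallel step."""
    values = list(values)
    rounds = 0
    while len(values) > 1:
        # On real hardware all pairs would be added at the same instant.
        paired = [values[i] + values[i + 1]
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:           # odd element rides along to next round
            paired.append(values[-1])
        values = paired
        rounds += 1
    return values[0], rounds

print(parallel_sum(range(8)))   # sum of 8 numbers in 3 rounds, not 7
```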

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

ps The space cost of constant or log time matrix mults is of course
   ridiculous

pps Perhaps this should move to net.applic?

------------------------------

Date: Fri, 19 Aug 83 15:08:15 EDT
From: Paul Broome (CTAB) <broome@brl-bmd>
Subject: Re: Functional programming and AI

Stan,

Let me climb into my pulpit and respond to your FP/AI prod.  I don't 
think FP and AI are diametrically opposed.  To refresh everyone's
memory here are some of your comments.


        ...  Having worked with both AI and FP languages,
        it seems to me that the two are diametrically
        opposed to one another.  The ultimate goal of functional
        programming language research is to produce a language that
        is as clean and free of side effects as possible; one whose
        semantic definition fits on a single side of an 8 1/2 x 11
        sheet of paper ...

Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom of side effects are two of Backus' goals but
certainly not succinctness of definition.  In fact Backus says (CACM,
Aug.  78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that 
could then be used to define useful ones."

Although FP has no side effects, Backus also talked about applicative
state transition (AST) systems with one top-level change of state per
computation, i.e. one side effect.  The world of expressions is a
nice, orderly one; the world of statements has all the mush.  He's
trying to move the statement part out of the way.

I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with 
nice mathematical properties.  A good way to incorporate (read 
implement) a mathematical concept in a computer program is without 
side effects.  This side effect freeness is nice because it means that
a program is 'referentially transparent', i.e. it can be used without
concern about collision with internal names or memory locations AND
the program is dependable; it always does the same thing.

A second nice thing about applicative languages is that they are
appropriate for parallel execution.  In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. is a processor overwriting data that another
processor needs.


        On the other hand, the goal of AI research (at least in the
        AI language area) is to produce languages that can effectively
        work with as tangled and complicated representations of
        knowledge as possible.  Languages for semantic nets, frames,
        production systems, etc, all have this character.

I don't think that's the goal of AI research but I can't offer a
better one at the moment.  (Sometimes it looks as if the goal is to
make money.)

Large, tangled structures can be handled in applicative systems but
not efficiently, at least I don't see how.  If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem.  However, operationally there
should just be a minor modification of the original database when
there is no sharing and suspended modification when the database is
being shared.  There are limited transformations that can help but
there is much room for improvement.
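
One way to see how an applicative update can avoid copying the whole
database is structure sharing.  Here is a toy Python sketch (a
persistent binary search tree, my own construction, not any system
mentioned above) in which each "update" returns a new database that
shares all unchanged subtrees with the old one:

```python
def insert(node, key, value):
    """Return a new tree with (key, value) added; the old tree is untouched."""
    if node is None:
        return (key, value, None, None)          # (key, value, left, right)
    k, v, left, right = node
    if key < k:
        return (k, v, insert(left, key, value), right)   # right side shared
    elif key > k:
        return (k, v, left, insert(right, key, value))   # left side shared
    return (key, value, left, right)             # replace this node only

db0 = None
db1 = insert(db0, "b", 1)
db2 = insert(db1, "a", 2)
db3 = insert(db2, "c", 3)
assert db1 == ("b", 1, None, None)     # older versions remain valid
assert db3[2] is db2[2]                # new version shares the old subtree
```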

An important point in all this is program transformation.  As we build
bigger and smarter systems we widen the gap between the way we think 
and the hardware.  We need to write clear, easy to understand, and 
large-chunked programs but transform them (within the same source 
language) into possibly less clear, but more efficient programs.  
Program transformation is much easier when there are no side effects.

        Now between the Japanese 5th generation project (and the US
        response) and the various projects to build non-vonNeumann
        machines using FP, it looks to me like the seeds of a
        controversy over the best way to do programming.  Should we be
        using FP languages or AI languages?  We can't have it both ways,
        right?  Or can we?

A central issue is efficiency.  The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they.  Later there was skepticism about garbage collection
and memory management.  A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency.  We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.

There's no doubt that applicative languages are applicable.  The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age."  And they, unlike us, are willing to
sacrifice some short term goals for long term goals.


- Paul Broome
  (broome@brl)

------------------------------

Date: 17 Aug 83 17:06:13-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: ssc-vax.427

There *is* a powerful functional language underlying most AI programs
- Lisp!  But it's never pure Lisp.  The realization that got me to
thinking about this was the apparent necessity for list surgery,
sooner or later.  rplaca and allied functions show up in the strangest
places, and seem to be crucial to the proper functioning of many AI
systems (consider inheritance in frames or the construction of a
semantic network; perhaps method combination in flavors qualifies).
I'm not arguing that an FP language could *not* be used to build an AI
language on top; I'm thinking more about fundamental philosophical
differences between different schools of research.

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Sat 20 Aug 83 12:28:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: So the language analysis problem has been solved?!?

I will also refrain from flaming, but not from taking to task 
excessive claims.

    I'll refrain from flaming about traditional (including
    logic) grammars.  I'm tired of people insisting on a
    restricted view of language that claims that grammar rules
    are the ultimate description of syntax (semantics being
    irrelevant) and that idioms are irritating special cases.  I
    might note that we have basically solved the language
    analysis problem (using a version of Berkeley's Phrase
    Analysis that handles ambiguity) ...

I would love to test that "solution of the language analysis 
problem"... As for the author being "tired of people insisting on a 
restricted ...", he is just tired of his own straw people, because 
there doesn't seem to be anybody around anymore claiming that 
"semantics is irrelevant".  Formal grammars (logic or otherwise) are 
just a convenient mathematical technique for representing SOME 
regularities in language in a modular and testable form. OF COURSE, a 
formal grammar seen from the PROCEDURAL point of view can be replaced 
by any arbitrary "ball of string" with the same operational semantics.
What this replacement does to modularity, testability and 
reproducibility of results is sadly clear in the large amount of 
published "research" in natural language analysis which is untestable 
and irreproducible. The methodological failure of this approach 
becomes obvious if one considers the analogous proposal of replacing 
the principles and equations of some modern physical theory (general 
relativity, say) by a computer program which computes "solutions" to 
the equations for some unspecified subset of their domain, some of 
these solutions being approximate or plain wrong for some (again 
unspecified) set of cases. Even if such a program were "right" all the
time (in contradiction with all our experience so far), its sheer 
opacity would make it useless as scientific explanation.

Furthermore, when mentioning "semantics", one had better say which KIND of
semantics one means. For example, grammar rules fit very well with 
various kinds of truth-theoretic and model-theoretic semantics, so the
comment above cannot be about that kind of semantics. Again, a theory 
of semantics needs to be testable and reproducible, and, I would 
claim, it only qualifies if it allows the representation of a 
potential infinity of situation patterns in a finite way.

    I don't recall a von Neumann bottleneck in AI programs, at
    least not of the kind Backus was talking about.  The main
    bottleneck seems to be of a conceptual rather than a
    hardware nature.  After all, production systems are not
    inherently bottlenecked, but nobody really knows how to make
    them run concurrently, or exactly what to do with the
    results (I have some ideas though).

The reason why nobody knows how to make production systems run 
concurrently is simply because they use a global state and side 
effects. This IS precisely the von Neumann bottleneck, as made clear 
in Backus' article, and is a conceptual limitation with hardware 
consequences rather than a purely hardware limitation. Otherwise, why 
would Backus address the problem by proposing a new LANGUAGE (fp), 
rather than a new computer architecture?  If your AI program was 
written in a language without side effects (such as PURE Prolog), the 
opportunities for parallelism would be there. This would be 
particularly welcome in natural language analysis with logic (or other
formal) grammars, because dealing with more and more complex subsets 
of language needs an increasing number of grammar rules and rules of 
inference, if the results are to be accurate and predictable.  
Analysis times, even if they are polynomial on the size of the input, 
may grow EXPONENTIALLY with the size of the grammar.
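
The order-dependence that blocks concurrent firing is easy to
exhibit.  In this toy Python sketch (invented rules, not any real
production system), two rules match the same working-memory element,
so the final state depends on which fires first:

```python
# Two invented rules that both match the same working-memory element.
# Working memory is a single global whole: whichever rule fires first
# consumes "hot", and the other then finds nothing to do.
def open_window(wm):
    return (wm - {"hot"}) | {"open-window"} if "hot" in wm else wm

def turn_on_fan(wm):
    return (wm - {"hot"}) | {"turn-on-fan"} if "hot" in wm else wm

start = frozenset({"hot"})
# Firing order changes the outcome, so the two rules cannot safely
# run in parallel without some arbitration over the shared state.
assert open_window(turn_on_fan(start)) != turn_on_fan(open_window(start))
```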

                                Fernando Pereira
                                AI Center
                                SRI International
                                pereira@sri-ai

------------------------------

Date: 15 Aug 83 22:44:05-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: uiucdcs.2573


The nodes in a data-flow machine, in order to compute efficiently,
must be able to do a local computation.  This is why arithmetic or
logical operations are O.K. to distribute.  Your scheme, however,
seems to require that the database of propositions be available to
each node, so that the known facts can be deduced "instantaneously".
This would cause severe problems with the whole idea of concurrency,
because either the database would have to be replicated and passed
through the network, or an elaborate system of memory locks would need
to be established.

The Hearsay system from CMU was one of the early PS's with claims to a
concurrent implementation. There is a paper I remember in IEEE ToC (75
or 76) which discussed the problems of speedup and locks.

Also, I think John Holland (of Michigan?) is currently working on a 
parallel PS machine (but doesn't call it that!).


Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 17 Aug 83 16:56:55-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: ssc-vax.426

A concurrent PS is not too impossible, 'cause I've got one 
(specialized for NL processing and not actually implemented 
concurrently, but certainly capable).  It is true that the working
memory would have to be carefully organized, but that's a matter of
sufficiently clever design; there are no fundamental theoretical
problems.  Traditional approaches won't work, because two concurrently
operating rules may come to contradictory conclusions, both of which
may be valid.  You need a way to store both of these and use them.

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 18 Aug 83 0516 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: NETL

I am a graduate student of Scott Fahlman's, and I've been working on
NETL for the last five years.  There are some interesting lessons to
be learned from the history of the NETL project.  NETL was a
combination of a parallel computer architecture, called a parallel
marker propagation machine, and a representation language that
appeared to fit well on this architecture.  There will probably never
be a hardware implementation of the NETL Machine, although it is
certainly feasible.  Here's why...

The first problem with NETL is its radical semantics:  no one
completely understands their implications.  We (Scott Fahlman, Walter
van Roggen, and I) wrote a paper in IJCAI-81 describing the problems
we had figuring out how exceptions should interact with multiple
inheritance in the IS-A hierarchy and why the original NETL system
handled exceptions incorrectly.  We offered a solution in our paper,
but the solution turned out to be wrong.  When you consider that NETL
contains many features besides exceptions and inheritance, e.g.
contexts, roles, propositional statements, quantifiers, and so on, and
all of these features can interact (!!), so that a role (a "slot" in
frame lingo) may only exist within certain contexts, and have
exceptions to its existence (not its value, which is another matter)
in certain sub-contexts, and may be mapped multiple times because of
the multiple inheritance feature, it becomes clear just how 
complicated the semantics of NETL really is.  KLONE is in a similar 
position, although its semantics are less radical than NETL's.
Fahlman's book contains many simple examples of network notation
coupled with appeals to the reader's intuition; what it doesn't
contain is a precise mathematical definition of the meaning of a NETL
network because no such definition existed at that time.  It wasn't
even clear that a formal definition was necessary, until we began to
appreciate the complexity of the semantic problems.  NETL's operators
are *very* nonstandard; NETL is the best evidence I know of that
semantic networks need not be simply notational variants of logic,
even modal or nonmonotonic logics.

In my thesis (forthcoming) I develop a formal semantics for multiple 
inheritance with exceptions in semantic network languages such as
NETL.  This brings us to the second problem.  If we choose a
reasonable formal semantics for inheritance, then inheritance cannot
be computed on a marker propagation machine, because we need to pass
around more information than is possible on such a limited
architecture.  The algorithms that were supposed to implement NETL on
a marker propagation machine were wrong:  they suffered from race
conditions and other nasty behavior when run on nontrivial networks.
There is a solution called "conditioning" in which the network is
pre-processed on a serial machine by adding enough extra links to
ensure that the marker propagation algorithms always produce correct 
results.  But the need for serial preprocessing removes much of the 
attractiveness of the parallel architecture.
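
For readers unfamiliar with the problem, here is a deliberately tiny
Python sketch (my own toy semantics, emphatically not NETL's) of
inheritance with exceptions.  Even this little resolver settles
conflicts nearest-first along the IS-A chain, an ordering that a
single propagated marker bit cannot convey:

```python
# Toy inheritance with exceptions: properties flow down IS-A links,
# but an exception at a more specific node overrides anything that
# would otherwise be inherited from above.
isa = {"whale": "mammal", "mammal": "animal"}
props = {"animal": {"moves": True}, "mammal": {"legs": 4}}
exceptions = {"whale": {"legs": 0}}          # overrides the inherited value

def lookup(node, prop):
    while node is not None:
        if prop in exceptions.get(node, {}):
            return exceptions[node][prop]    # nearest exception wins
        if prop in props.get(node, {}):
            return props[node][prop]
        node = isa.get(node)                 # climb one IS-A link
    return None

assert lookup("whale", "legs") == 0   # exception beats inheritance
assert lookup("mammal", "legs") == 4
assert lookup("whale", "moves") is True
```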

I think the NETL language design stands on its own as a major
contribution to knowledge representation.  It raises fascinating
semantic problems, most of which remain to be solved.  The marker
propagation part doesn't look too promising, though.  Systems with
NETL-like semantics will almost certainly be built in the future, but
I predict they will be built on top of different parallel
architectures.

-- Dave Touretzky

------------------------------

Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NETL and hardware

        In Volume 40 of the AIList Alan Glasser asked about hardware
implementations using marker passing a la NETL.  The closest hardware I
am aware of is called the Connection Machine, and is being developed
at MIT by Alan Bawden, Dave Christman, and Danny Hillis (apologies if
I left someone out). The project involves building a model with about
2^10 processors. I'm not sure of its current status, though I have
heard that a company is forming to build and market prototype CM's.

        I have heard rumors of the SPICE project at CMU; though I am
not aware of any results pertaining to hardware, the project seems to
have some measure of priority at CMU.  Hopefully members of each of
these projects will also send notes to AIList...

David Rogers, DRogers@SUMEX-AIM

------------------------------

Date: Thu, 18 Aug 1983  22:01 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: NETL


I've only got time for a very quick response to Alan Glasser's query 
about NETL.  Since the book was published we have done the following:

1. Our group at CMU has developed several design sketches for
practical NETL machine implementations of about a million processing
elements.  We haven't built one yet, for reasons described below.

2. David B. McDonald has done a Ph.D. thesis on noun group
understanding (things like "glass wine glass") using a NETL-type
network to hold the necessary world knowledge.  (This is available as
a CMU Tech Report.)

3. David Touretzky has done a thorough logical analysis of NETL-style
inheritance with exceptions, and is currently writing up his thesis on
this topic.

4. I have been studying the fundamental strengths and limitations of 
NETL-like marker-passing compared to other kinds of massively parallel
computation.  This has gradually led me to prefer an architecture that
passes numbers or continuous values to the single-bit marker-passing of
NETL.

For the past couple of years, I've been putting most of my time into
the Common Lisp effort -- a brief foray into tool building that got
out of hand -- and this has delayed any plans to begin work on a NETL
machine.  Now that our Common Lisp is nearly finished, I can think
again about starting a hardware project, but something more exciting
than NETL has come along: the Boltzmann Machine architecture that I am
working on with Geoff Hinton of CMU and Terry Sejnowski of
Johns-Hopkins.  We will be presenting a paper on this at AAAI.

Very briefly, the Boltzmann machine is a massively parallel
architecture in which each piece of knowledge is distributed over many
units, unlike NETL in which concepts are associated with particular
pieces of hardware.  If we can make it work, this has interesting
implications for reliable large-scale implementation, and it is also a
much more plausible model for neural processing than is something like
NETL.

So that's what has happened to NETL.

-- Scott Fahlman (FAHLMAN@CMU-CS-C)

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 22-Aug-83 10:39:39
Date: Monday, August 22, 1983 10:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #45
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 45

Today's Topics:
  Language Translation - Lisp-to-Lisp,
  Programming Languages - Lisps on 68000s and SUNs
----------------------------------------------------------------------

Date: 19 Aug 1983 2113-PDT
From: VANBUER@USC-ECL
Subject: Lisp Interchange Standard

In response to your message sent Friday, August 19, 1983 5:26PM

On Lisp translation via a standard form:

I have used Interlisp Transor a fair amount both into and out of
Interlisp (even experimented with translation to C), and the kind of
thing which makes it very difficult, especially if you want to retain
some efficiency, are subtle differences in what seem to be fairly
standard functions:  e.g. in Interlisp (DREMOVE (CAR X) X) will be EQ
to X (though not EQUAL, of course) except when the result is
NIL; both CAR and CDR of the lead cell are RPLACed so that all
references to the value of X also see the DREMOVE as a side effect.
In Franz Lisp, the DREMOVE would have the value (CDR X) in most cases,
but no RPLACing is done.  In most cases this isn't a problem, but ....
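[The aliasing hazard described above is not Lisp-specific.  A rough
analogue of the EQ-preserving versus copying behaviors, with invented
function names, in Python:]

```python
def dremove_inplace(item, lst):
    """Rough analogue of Interlisp's DREMOVE: mutate the list in place,
    so every reference to the same list object sees the removal."""
    while item in lst:
        lst.remove(item)
    return lst            # the very same object that was passed in

def remove_copy(item, lst):
    """Rough analogue of the non-destructive case: build a fresh list,
    leaving the argument untouched."""
    return [x for x in lst if x != item]

x = [1, 2, 1, 3]
alias = x
assert dremove_inplace(1, x) is x      # identity preserved ("EQ")
assert alias == [2, 3]                 # aliases see the side effect

y = [1, 2, 1, 3]
alias2 = y
assert remove_copy(1, y) == [2, 3]
assert alias2 == [1, 2, 1, 3]          # the original is untouched
```

A translator that silently swaps one behavior for the other changes
what every alias of the list observes, which is why such code is hard
to port without losing either correctness or efficiency.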
In APL, at least the majority of the language has the same semantics
in all implementations.
        Darrel J. Van Buer, SDC

------------------------------

Date: 20 Aug 1983 1226-PDT
From: FC01@USC-ECL
Subject: Re: Language Translation

I like the APL person's [Shrager's] point of view on translation.
The problem seems to be that APL has all the things it needs in its
primitive functions.  Lisp implementers have seen fit to impurify
their language by adding so much fancy stuff, which they then depend
on heavily.  If every Lisp program were translated into Lisp 1.5 (or
so), it would be easy to port things, but it would end in
inefficient implementations.  I like APL; in fact, I like it so much
that I've begun maintaining it on our Unix system.  I've fixed several
bugs, and it now seems to work very well.  It has everything any
other APL has, but nobody seems to want to use it except me. I write
simulators in a day, adaptive networks in a week, and analyze
matrices in seconds.  So, at any rate: anyone who is interested in APL
on the VAX - especially for machine intelligence applications - please
get in touch with me.  It's not ludicrous, by the way; IBM does more
internal R+D in APL than in any other language!  That includes their
robotics programs where they do lots of ARM solutions (matrix
manipulation being built into APL has tremendous advantages in this
domain).

FLAME ON!
[I believe this refers to Stan the Leprechaun's submission in
V1 #43. -- KIL]

So if your language translation program is the last word in
translators, how come it's not in the journals? How come nobody knows 
that it solves all the problems of translation? How come you haven't
made a lot of money selling COBOL to PASCAL to C to APL to LISP to
ASSEMBLER to BASIC to ... translators in the open market? Is it that
it only works for limited cases? Is it that it only deals with
'natural' languages? Is it really as good as you think, or do you only
think it's really good?  How about sharing your (hopefully
non-NP-complete) solution to an NP-complete problem with the rest of us!
FLAME OFF!

[...]
                Fred

------------------------------

Date: Sat 20 Aug 83 15:18:13-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Lisp-to-Lisp translation

Some of the comments on Lisp-to-Lisp translation seem to be rather 
naive.  Translating code that works on pure S-expressions is usually 
not too difficult.  However, real Lisp programs are not pure Lisp.

I am presently translating some code from Interlisp to Zetalisp (from
a Dec-20 to a Symbolics 3600) and thought a few comments might be
appropriate.  First off, Interlisp has TRANSOR which is a package to
translate between Lisps and is programmable.  It isn't used often but
it does some of the basic translations.  There is an Interlisp
Compatibility Package (ILCP) on the 3600, which, when combined with a
CONVERT program to translate from Interlisp (running in Interlisp),
covers a fair amount of Interlisp.  (Unfortunately it is still early
in its development - I just rewrote all the I/O functions because they
didn't work for me.)

Even with these aids there are lots of problems.  Here are a few
examples I have come across:  In the source language, taking the CAR
of an atom did not cause an error.  Apparently laziness prevented the
author from writing code to check whether some input was an atom
(which was legal input) before seeing if the CAR of it was some
special symbol.

Since Interlisp-10 is short of cons-cell room, many relatively obscure
pieces of code were designed to use few conses.  Thus the author used 
and reused scratch lists and scratch strings.  The exact effect
couldn't be duplicated.  In particular, he would put characters into
specific spots in the scratch string and then would collect the whole
string.  (I'm translating this into arrays.)

All the I/O has to be changed around.  The program used screen control
characters to do fancy I/O on the screen.  It just printed the right
string to go to wherever it wanted.  You can't print a string on the
3600 to do that.  Also, whether you get an end-of-line character at
the end of input is different (so I have to hand patch code that did a
(RATOM) (READC)).  And of course file names (as well as the default
part of them, i.e., the directory) are all different.

Then there are little differences which the compatibility package can
take care of but introduce inefficiencies.  For instance, the function
which returns the first position of a character in a string is
different between the two lisps because the values returned are off by
1.  So, code where the author used that function just to determine
whether a character was in a string now computes the position and
then offsets it by 1.
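[A small sketch of that off-by-one shim; the function name is invented,
and the real functions differ between the two Lisps as described:]

```python
def source_strpos(ch, s):
    """Invented stand-in for the source Lisp's position function,
    which reports 1-based positions (or None when the character
    is absent)."""
    i = s.find(ch)                       # the target system searches 0-based...
    return None if i == -1 else i + 1    # ...so the shim pays for a +1 per call

# The original code only wanted a membership test, so after translation
# it computes a position and adjusts it by one, just to compare to None:
assert source_strpos("a", "cat") == 2
assert source_strpos("z", "cat") is None
assert (source_strpos("a", "cat") is not None) == ("a" in "cat")
```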

The ILCP does have a nice advantage of letting me use the Interlisp 
name for functions even though there is a similarly named, but
different, function in Zetalisp.

Unfortunately for me, this code will continue to be developed on the
Dec-20 while we want to get the same code up on the 3600.  So I have
to set things up so the translation can happen often rather than just
once.  That means going back to the Interlisp code and putting it into
shape so that a minimum of hand-patching is needed.

------------------------------

Date: 19 Aug 83 10:52:11-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: Lisps on 68000's
Article-I.D.: allegra.1760

A while ago I posted a query about Lisps on 68000's.  I got
essentially zero replies, so let me post what I know and see whether
anyone can add to it.

First, Franz Lisp is being ported from the VAX to 68000's.  However,
the ratio of random rumors to solid facts concerning this undertaking
seems the greatest since the imminent availability of NIL.  Moreover,
I don't really like Franz; it has too many seams showing (I've had too
many programs die without warning from segmentation errors and the
like).

Then there's T.  T sounds good, but the people who are saying it's
great are the same ones trying to sell it to me for several thousand
dollars, so I'd like to get some more disinterested opinions first.
The only person I've talked to said it was awful, but he admits he
used an early version.

I have no special knowledge of PSL, particularly of the user
environment or of how useful or standard its dialect looks, nor of the
status of its 68000 version.

As for an eventual Common Lisp on a 68000, well, who knows?

There are also numerous toy systems floating around, but none I would 
consider for serious work.

Well, that's about everything I know; can any correct me or add to the
list?

Cheers,
John ("Don't Wanna Program in C") DeTreville
Bell Labs, Murray Hill

[I will reprint some of the recent Info-Graphics discussion of SUNs
and other workstations as LISP-based graphics servers.  Several of
the comments relate to John's query.  -- KIL]

------------------------------

Date: Fri, 5 Aug 83 21:30:22 PDT
From: fateman%ucbkim@Berkeley (Richard Fateman)
Subject: SUNs, 3600s, and Lisp

         [Reprinted from the Info-Graphics discussion list.]

[...]

In answer to Fred's original query, (I replied to him personally
earlier ), Franz has been running on a SUN since January, 1983.  We
find it runs Lisp faster than a VAX 750, and with expected performance
improvements, may be close to a VAX 780. (= about 2.5 to 4 times
slower than a KL-10).  This makes running Franz on a VAX almost
irrelevant.  More specifically, in answer to FRD's question, Franz on
the SUN has full access to the graphics software on it, and one could
set up inter-process communication between a Franz on a VAX and
something else (e.g. Franz) on a SUN.  A system for shipping Smalltalk
pictures to SUNs runs at UCB.

  Franz runs on other 68000 UNIX workstations, including Pixel, Dual,
and Apple Lisa.  Both Interlisp-D and MIT LispMachine Lisp have more 
highly developed graphics stuff at the moment.

  As far as other lisps, I would expect PSL and T, which run on Apollo
Domain 68000 systems, to be portable towards the SUN, and I would not
be surprised if other systems turn up.  For the moment though, Franz
seems to be alone.  Most programs run on the SUN without change (e.g.
Macsyma).

------------------------------

Date: Sat 6 Aug 83 13:39:13-PDT
From: Bill Nowicki <NOWICKI@SU-SCORE.ARPA>
Subject: Re: LISP & SUNs ...

         [Reprinted from the Info-Graphics discussion list.]

You can certainly run Franz under Unix from SMI, but it is SLOW.  Most
Lisps are still memory hogs, so as was pointed out, you need a
$100,000 Lisp machine to get decent response.

If $100,000 is too much for you to spend on each programmer, you might
want to look at what we are doing on the fourth floor here at
Stanford.  We are running a small real-time kernel in a cheap, quiet,
diskless SUN, which talks over the network to various servers.  Bill
Yeager of Sumex has written a package which runs under Interlisp and
talks to our Virtual Graphics Terminal Service.  Interlisp can be run
on VAX/Unix or VAX/VMS systems, TOPS-20, or Xerox D machines.  The
cost/performance ratio is very good, since each workstation only needs
256K of memory, frame buffer, CPU, and Ethernet interface, while the 
DECSystem-20 or VAX has 8M bytes and incredibly fast system 
performance (albeit shared between 20 users).

We are also considering both PSL and T since they already have 68000
compilers.  I don't know how this discussion got on Info-Graphics.

        -- Bill

------------------------------

Date: 6 Aug 1983 1936-MDT
From: JW-Peterson@UTAH-20 (John W. Peterson)
Subject: Lisp Machines

         [Reprinted from the Info-Graphics discussion list.]

Folks who don't have >$60K to spend on a Lisp Machine may want to
consider Utah's Portable Standard Lisp (PSL) running on the Apollo 
workstation.  Apollo PSL has been distributed for several months now.
PSL is a full Lisp implementation, complete with a 68000 Lisp
compiler.  The standard distribution also comes with a wide range of
utilities.

PSL has been in use at Utah for almost a year now and is supporting 
applications in computer algebra (the Reduce system from Rand), VLSI
design, and computer-aided geometric design.

In addition, the Apollo implementation of PSL comes with a large and
easily extensible system interface package.  This provides easy,
interactive access to the resident Apollo window package, graphics
library, process communication system and other operating system
services.

If you have any questions about the system, feel free to contact me
via
        JW-PETERSON@UTAH-20 (arpa) or
        ...!harpo!utah-cs!jwp (uucp)

jw

------------------------------

Date: Sun, 7 Aug 83 12:08:08 CDT
From: Mike.Caplinger <mike.rice@Rand-Relay>
Subject: SUNs

         [Reprinted from the Info-Graphics discussion list.]

[...]

Lisp is available from UCB (ftp from ucb-vax) for the SUN and many
similar 68K-based machines.  We have it up on our SMI SUNs running
4.1c UNIX.  It seems about as good as Franz on the VAX, which, from a
graphics standpoint, is saying nothing at all.

By the way, the SUN graphics library, SUNCore, seems to be an OK 
implementation of the SIG Core standard.  It has some omissions and 
extensions, like every implementation.  I haven't used it extensively 
yet, and it has some problems, but it should get some good graphics 
programs going fairly rapidly.  I haven't yet seen a good graphics
demo for the SUN.  I hope this isn't indicative of what you can
actually do with one.

By the way, "Sun Workstation" is a registered trademark of Sun 
Microsystems, Inc.  You may be able to get a "SUN-like" system 
elsewhere.  I'm not an employee of Sun, I just have to deal with them
a lot...

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 23-Aug-83 11:00:05
Date: Tuesday, August 23, 1983 10:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #46
To: AIList@SRI-AI


AIList Digest            Tuesday, 23 Aug 1983      Volume 1 : Issue 46

Today's Topics:
  Artificial Intelligence - Prejudice & Frames & Turing Test & Evolution,
  Fifth Generation - Top-Down Research Approach
----------------------------------------------------------------------

Date: Thu 18 Aug 83 14:49:13-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Prejudice

The message from (I think .. apologies if wrong) Stan the Leprechaun,
which sets up "rational thought" as the opposite of "right-wingism"
and of "irascibility", disproves the contention in another message
that "bigotry and intelligence are mutually exclusive".  Indeed this
latter message is its own disproof, at least by my definition of
bigotry.  All of which leads me to believe that one or the other of them
*was* sent by an AI project Flamer-type program.  Good work.
                                                - Richard

------------------------------

Date: 22 Aug 83 19:45:38-EDT (Mon)
From: The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>
Subject: AI and Human Intelligence

[The following are excerpts from several interchanges with the author.
-- KIL]

Words mean not necessarily what I want them to mean nor what you want
them to mean, but what we all agree that they mean.  My point is that
we may very well have to consider emotions and ethics in any model we
care to construct of a "human" intelligence.  The ability to handle a
conversation, as is implied by the Turing test, is not sufficient in 
my eyes to classify something as "intelligent."  That is, what
*exactly* is intelligence?  Is it something measured by an IQ test?
I'm sure you realize that that particular point is a subject of much
conjecture.

If these discussion groups are for discussion of artificial
"intelligence," then I would like to see some thought given as to the
definition of "intelligence."  Is emotion part of intelligence?  Is
superstition part of intelligence?

FYI, I do not believe what I suggested -- that bigots are less than
human.  I made that suggestion to start some comments.  I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.

[...]

That brought to mind a second point -- what is human?  What is
intelligence?  Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry.  More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society.  Can those qualities be programmed into
an AI system?  [...]

My original submission to Usenet was intended to be a somewhat 
sarcastic remark about the nonsense that was going on in a few of the
newsgroups.  Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions.  For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.

--
  The soapbox of Gene Spafford
  CSNet:  Spaf @ GATech ARPA:  Spaf.GATech @ UDel-Relay
  uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 18 Aug 83 13:40:03-PDT (Thu)
From: decvax!linus!vaxine!wjh12!brh @ Ucb-Vax
Subject: Re: AI Projects on the Net
Article-I.D.: wjh12.299

        I realize this article was a while ago, but I'm just catching
up with my news reading, after vacation.  Bear with me.

        I wonder why folks think it would be so easy for an AI program
to "change its thought processes" in ways we humans can't.  I submit
that (whether it's an expert system, experiment in KR or what) maybe
the suggestion to 'not think about zebras' would have a similar
effect on an AI proj. as on a human.  After all, it IS going to have
to decipher exactly what you meant by the suggestion.  On the other
hand, might it not be easier for one of you humans .... we, I mean ...
to consciously think of something else, and 'put it out of your
mind'??

        Still an open question in my mind...  (Now, let's hope this
point isn't already in an article I haven't read...)

                        Brian Holt
                        wjh!brh

------------------------------

Date: Friday, 19 Aug 1983 09:39-PDT
From: turner@rand-unix
Subject: Prejudice and Frames, Turing Test


  I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world.  In
people, we also connect prejudism with the inability to correct
incorrect information in light of experiences which prove it to be wrong.

  Nothing in Minsky frames as opposed to any other theory is a
necessary condition for this.  In any understanding situation, the
thinker must call on background information, regardless of how that is
best represented.  If this background information is incorrect and not
corrected in light of new information, then we may have prejudism.

  Of course, this is a subtle line.  A scientist doesn't change his
theories just because a fact wanders by that seems to contradict
them.  If he is wise, he waits until a body of irrefutable
evidence builds up.  Is he prejudiced towards his current theories?
Yes, I'd say so, but in this case it is a useful prejudism.

  So prejudism is really related to the algorithm for modifying known 
information in light of new information.  An algorithm that resists
change too strongly results in prejudism.  The opposite extreme -- an
algorithm that changes too easily -- results in fadism, blowing the
way the wind blows and so on.

                        -----------

  Stan's point in I:42 about Zeno's paradox is interesting.  Perhaps
the mind cast forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?

  Clearly not.  It is a test of Human Mimicry Ability.  It rests on the
assumption that the ability to mimic a human requires intelligence.
This has been shown in the past not to be entirely true; ELIZA is an
example of a program that clearly has no intelligence and yet mimics a
human in a limited domain fairly well.

  A common theme in science fiction is "Alien Intelligence".  That is,
the sf writer bases his story on the idea:  "What if alien
intelligence wasn't like human intelligence?"  Many interesting
stories have resulted from this basis.  We face a similar situation
here.  We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence.  We really have little ground
for this belief.

  What we need is a better definition of intelligence, and a test
based on this definition.  In the Turing mind set, the definition of
intelligence is "acts like a human being" and that is clearly
insufficient.  The Turing test also leads one to think erroneously
that intelligence is a property with two states (intelligent and
non-intelligent) when even amongst humans there is a wide variance in
the level of intelligence.

  My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment.  The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are.  This means that a person may be
more intelligent in one area of life than in another.  He is, for
instance, a great businessman but a poor father.  This is no surprise.
We all recognize that people have different levels of competence in
different areas.

  Of course, this definition has problems.  If your goal is to lift
great weights, then your intelligence may be dependent on your
physical build.  That doesn't seem right.  Is a chess program more
intelligent when it runs on a faster machine?

  In the sense of this definition we already have many "intelligent"
programs in limited domains.  For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, no electronic entities.  In the domain of
human politics, no human entities (*ha*ha*).

  I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI.  It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.


                                        Scott Turner
                                        turner@randvax

------------------------------

Date: 21 Aug 83 13:01:46-PDT (Sun)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!smb @ Ucb-Vax
Subject: Hofstadter
Article-I.D.: ulysses.560

Douglas Hofstadter is the subject of today's N.Y. Times Magazine cover
story.  The article is worth reading, though not, of course,
particularly deep technically.  Among the points made:  that
Hofstadter is not held in high regard by many AI workers, because they
regard him as a popularizer without any results to back up his
theories.

------------------------------

Date: Tue, 23 Aug 83 10:35 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Program Genesis

After reading in the New York Times Sunday Magazine of August 21 about
Douglas Hofstadter's latest idea on artificial intelligence arising
from the interplay of lower levels, I was inspired to carry his
suggestion to the logical limit.  I wrote the following item partly in
jest, but the idea may have some merit, at least to stimulate
discussion.  It was also inspired by Stanislaw Lem's story "Non
Serviam".

------------------------------------------------------------------------


                            PROGRAM GENESIS

                A COMPUTER MODEL OF THE PRIMORDIAL SOUP


     The purpose of this program is to model the primordial soup that 
existed in the earth's oceans during the period when life first
formed.  The program sets up a workspace (the ocean) in which storage
space in memory and CPU time (resources) are available to
self-replicating modules of memory organization (organisms).
Organisms are sections of code and data which, when run, cause copies
of themselves to be written into other regions of the workspace and
then run.  Overproduction of species, competition for scarce
resources, and occasional copying errors, either accidental or
deliberately introduced, create all the conditions necessary for the
onset of evolutionary processes.  A diagnostic package provides an
ongoing picture of the evolving state of the system.  The goal of the
project is to monitor the evolutionary process and see what this might
teach us about the nature of evolution.  A possible long-range 
application is a novel method for producing artificial intelligence.
The novelty is, of course, not complete, since it has been done at
least once before.
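[A drastically reduced sketch of such a soup, with each "organism"
collapsed to a single replication probability; all constants and names
below are invented for illustration:]

```python
import random

random.seed(1)  # deterministic toy run

SIZE, TICKS, MUTATION = 64, 200, 0.05
soup = [None] * SIZE      # the "ocean": cells that may hold an organism
soup[0] = 0.5             # seed organism; its genome is just a copy rate

for _ in range(TICKS):
    for fitness in list(soup):            # snapshot: don't mutate mid-scan
        if fitness is not None and random.random() < fitness:
            # replicate into a random cell, with occasional copying errors
            child = fitness
            if random.random() < MUTATION:
                child += random.uniform(-0.1, 0.1)
            soup[random.randrange(SIZE)] = min(max(child, 0.0), 1.0)

alive = [f for f in soup if f is not None]
# Overproduction, scarce cells, and copying errors are all present, so
# faster replicators tend to crowd the soup over time.
```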

------------------------------

Date: 18 Aug 83 11:16:24-PDT (Thu)
From: decvax!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Japanese 5th Generation Effort
Article-I.D.: dciem.293

There seems to be an analogy between the 5th generation project and 
the ARPA-SUR project on automatic speech understanding of a decade
ago.  Both are top-down, initiated with a great deal of hope, and
dependent on solving some "nitty-gritty problems" at the bottom. The
result of the ARPA-SUR project was at first to slow down research in
ASR (automatic speech recognition) because a lot of people got scared
off by finding how hard the problem really is. But it did, as Robert
Amsler suggests the 5th generation project will, show just what
"nitty-gritty problems" are important. It provided a great step
forward in speech recognition, not only for those who continued to
work on projects initiated by ARPA-SUR, but also for those who have
come afterward. I doubt we would now be where we are in ASR if it had
not been for that apparently failed project ten years ago.
(Parenthetically, notice that a lot of the subsequent advances in ASR
have been due to the Japanese, and that European/American researchers
freely use those advances.)

Martin Taylor

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at 24-Aug-83 10:39:01
Date: Wednesday, August 24, 1983 10:34AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #47
To: AIList@SRI-AI


AIList Digest           Wednesday, 24 Aug 1983     Volume 1 : Issue 47

Today's Topics:
  Request - AAAI-83 Registration,
  Logic Programming - PARLOG & PROLOG & LISP Prologs
----------------------------------------------------------------------

Date: 22 Aug 83 16:50:55-PDT (Mon)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: AAAI-83 Registration
Article-I.D.: allegra.1777

Help!  I put off registering for AAAI-83 until too late, and now I
hear that it's overbooked!  (I heard 7000 would-be registrants and
1500 places, or some such.)  If you're registered but find you can't
attend, please let me know, or if you have any other suggestions, feel
free.

Cheers,
John ("Something Wrong With My Planning Heuristics") DeTreville
Bell Labs, Murray Hill

------------------------------

Date: 23 Aug 83  1337 PDT
From: Diana Hall <DFH@SU-AI>
Subject: PARLOG

                 [Reprinted from the SCORE BBoard.]

Parlog Seminar

Keith Clark will give a seminar on Parlog Thursday, Sept. 1 at 3 p.m.
in Room 252 MJH.



              PARLOG: A PARALLEL LOGIC PROGRAMMING LANGUAGE

                              Keith L. Clark

ABSTRACT

        PARLOG is a logic programming language in the sense that
nearly every definition and query can be read as a sentence of
predicate logic.  It differs from PROLOG in incorporating parallel
modes of evaluation.  For reasons of efficient implementation, it
distinguishes and separates and-parallel and or-parallel evaluation.
        PARLOG relations are divided into two types:  and-relations
and or-relations.  A sequence of and-relation calls can be evaluated
in parallel with shared variables acting as communication channels.
Only one solution to each call is computed.
        A sequence of or-relation calls is evaluated sequentially but
all the solutions are found by a parallel exploration of the different
evaluation paths.  A set constructor provides the main interface
between and-relations and or-relations.  This wraps up all the
solutions to a sequence of or-relation calls in a list.  The solution
list can be concurrently consumed by an and-relation call.
        The and-parallel definitions of relations that will only be
used in a single functional mode can be given using conditional
equations.  This gives PARLOG the syntactic convenience of functional
expressions when non-determinism is not required.  Functions can be
invoked eagerly or lazily; the eager evaluation of nested function
calls corresponds to and-parallel evaluation of conjoined relation
calls.
        This paper is a tutorial introduction and semi-formal
definition of PARLOG.  It assumes familiarity with the general
concepts of logic programming.
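[The set constructor's producer/consumer coupling can be modeled with a
queue standing in for the shared-variable channel; none of this is
PARLOG syntax, and the relation names are invented:]

```python
import queue
import threading

def or_relation(out):
    """Stream each 'evaluation path' solution into the channel as found."""
    for x in range(5):
        if x % 2 == 1:            # pretend the odd numbers are the solutions
            out.put(x)
    out.put(None)                 # end-of-solutions marker

def and_consumer(inp, results):
    """Consume solutions concurrently, as an and-relation call would."""
    while True:
        x = inp.get()
        if x is None:
            break
        results.append(x * x)     # do some work per solution

channel = queue.Queue()           # the shared variable acting as a channel
results = []
producer = threading.Thread(target=or_relation, args=(channel,))
consumer = threading.Thread(target=and_consumer, args=(channel, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
assert results == [1, 9]          # solutions consumed as they arrived
```

The consumer never waits for the full solution list to exist, which is
the point of wrapping the or-relation's solutions in a stream rather
than a finished set.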

------------------------------

Date: Thu 18 Aug 83 20:00:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: There are Prologs and Prologs ...

In the July issue of SIGART an article by Richard Wallace describes 
PiL, yet another Prolog in Lisp. The author claims that his 
interpreter shows that "it is easy to extend Lisp to do what Prolog 
does."

It is a useful pedagogical exercise for Lisp users interested in logic
programming to look at a simple, clean implementation of a subset of 
Prolog in Lisp. A particularly illuminating implementation and 
discussion is given in "Structure and Interpretation of Computer
Programs", a set of MIT lecture notes by Abelson and Sussman.

However, such simple interpreters (even the Abelson and Sussman one 
which is far better than PiL) are not a sufficient basis for the claim
that "it is easy to extend Lisp to do what Prolog does."  What Prolog
"does" is not just to make certain deductions in a certain order, but 
also MAKE THEM VERY FAST. Unfortunately, ALL Prologs in Lisp I know of
fail in this crucial aspect (by factors between 30 and 1000).

Why is speed such a crucial aspect of Prolog (or of Lisp, for that 
matter)? First, because the development of complex experimental 
programs requires MANY, MANY experiments, which just could not be done
if the systems were, say, 100 times slower than they are. Second, 
because a Prolog (Lisp) system needs to be written mostly in Prolog 
(Lisp) to support the extensibility that is a central aspect of modern
interactive computing environments.

The following paraphrase of Wallace's claim shows its absurdity: "[LiA
(Lisp in APL) shows] that it is easy to extend APL to do what Lisp
does."  Really?  All of what Maclisp does?  All of what ZetaLisp does?

Lisp and Prolog are different if related languages. Both have their 
supporters. Both have strengths and (serious) weaknesses. Both can be 
implemented with comparable efficiency.  It is educational to look
both at (sub)Prologs in Lisp and (sub)Lisps in Prolog.  Let's not claim
discoveries of philosopher's stones.

Fernando Pereira
AI Center
SRI International

------------------------------

Date: Wed, 17 Aug 1983  10:20 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: FOOLOG Prolog

                 [Reprinted from the PROLOG Digest.]

Here is a small Prolog ( FOOLOG = First Order Oriented LOGic )
written in Maclisp. It includes the evaluable predicates CALL,
CUT, and BAGOF. I will probably permanently damage my reputation
as a MacLisp programmer by showing it, but as an attempt to cut
the hedge, I can say that I wanted to see how small one could
make a Prolog while maintaining efficiency ( approx 2 pages; 75%
of the speed of the Dec-10 Prolog interpreter ).  It is actually
possible to squeeze Prolog into 16 lines.  If you are interested
in that one and in FOOLOG, I have a ( very ) brief report describing
them that I can send you.  Also, I'm glad to answer any questions
about FOOLOG. For me, the best is if you send messages by Snail Mail,
since I do not have a net connection.  If that is inconvenient, you
can also send messages via Ken Kahn, who forwards them.

My address is:

Martin Nilsson
UPMAIL
Computing Science Department
Box 2059
S-750 02 UPPSALA, Sweden


---------- Here is a FOOLOG sample run:

(load 'foolog)          ; Lower case is user type-in

; Loading DEFMAX 9844442.
(progn (defpred member  ; Definition of MEMBER predicate
         ((member ?x (?x . ?l)))
         ((member ?x (?y . ?l)) (member ?x ?l)))
       (defpred cannot-prove    ; and CANNOT-PROVE predicate
         ((cannot-prove ?goal) (call ?goal) (cut) (nil))
         ((cannot-prove ?goal)))
       'ok)
OK
(prove (member ?elem (1 2 3)) ; Find elements of the list
       (writeln (?elem is an element)))
(1. IS AN ELEMENT)
MORE? t                 ; Find the next solution
(2. IS AN ELEMENT)
MORE? nil               ; This is enough
(TOP)
(prove (cannot-prove (= 1 2))) ; The two cannot-prove cases
MORE? t
NIL
(prove (cannot-prove (= 1 1)))
NIL
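
[The cannot-prove predicate in the run above is negation as failure:
call the goal, and if it succeeds, cut and fail; otherwise the second
clause succeeds.  A minimal Python sketch of the same control pattern
(mine, not part of Nilsson's code; the trivial `solve` is a
hypothetical stand-in for a real prover) follows.  -- Ed.]

```python
def solve(goal, db):
    """Hypothetical stand-in prover: yields one (empty)
    substitution per matching ground fact."""
    for fact in db:
        if fact == goal:
            yield {}

def cannot_prove(goal, db):
    """Succeed exactly when `goal` has no proof, mirroring
    ((cannot-prove ?goal) (call ?goal) (cut) (nil))
    ((cannot-prove ?goal))."""
    for _ in solve(goal, db):
        return False   # goal proved: the cut commits, and (nil) fails
    return True        # goal unprovable: the second clause succeeds

db = [("=", 1, 1)]
print(cannot_prove(("=", 1, 2), db))  # True  -- like (cannot-prove (= 1 2))
print(cannot_prove(("=", 1, 1), db))  # False -- like (cannot-prove (= 1 1))
```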


---------- And here is the source code:

; FOOLOG Interpreter (c) Martin Nilsson  UPMAIL   1983-06-12

(declare (special *inf* *e* *v* *topfun* *n* *fh* *forward*)
         (special *bagof-env* *bagof-list*))

(defmacro defknas (fun args &rest body)
  `(defun ,fun macro (l)
     (cons 'progn (sublis (mapcar 'cons ',args (cdr l))
                          ',body))))

; ---------- Interpreter

(setq *e* nil *fh* nil *n* nil *inf* 0
      *forward* (munkam (logior 16. (logand (maknum 0) -16.))))
(defknas imm (m x) (cxr x m))
(defknas setimm (m x v) (rplacx x m v))
(defknas makrecord (n)
  (loop with r = (makhunk n) and c for i from 1 to (- n 2) do
        (setq c (cons nil nil))
        (setimm r i (rplacd c c)) finally (return r)))

(defknas transfer (x y)
  (setq x (prog1 (imm x 0) (setq y (setimm x 0 y)))))
(defknas allocate nil
  (cond (*fh* (transfer *fh* *n*) (setimm *n* 7 nil))
        ((setq *n* (setimm (makrecord 8) 0 *n*)))))
(defknas deallocate (on)
  (loop until (eq *n* on) do (transfer *n* *fh*)))
(defknas reset (e n) (unbind e) (deallocate n) nil)
(defknas ult (m x)
  (cond ((or (atom x) (null (eq (car x) '/?))) x)
        ((< (cadr x) 7)
         (desetq (m . x) (final (imm m (cadr x)))) x)
        ((loop initially (setq x (cadr x)) until (< x 7) do
               (setq x (- x 6)
                     m (or (imm m 7)
                           (imm (setimm m 7 (allocate)) 7)))
          finally (desetq (m . x) (final (imm m x)))
          (return x)))))
(defknas unbind (oe)
  (loop with x until (eq *e* oe) do
   (setq x (car *e*)) (rplaca x nil) (rplacd x x) (pop *e*)))
(defknas bind (x y n)
  (cond (n (push x *e*) (rplacd x (cons n y)))
        (t (push x *e*) (rplacd x y) (rplaca x *forward*))))
(lap-a-list '((lap final subr) (hrrzi 1 @ 0 (1)) (popj p) nil))
; (defknas final (x) (cdr (memq nil x))) ; equivalent
(defknas catch-cut (v e)
  (and (null (and (eq (car v) 'cut) (eq (cdr v) e))) v))

(defun prove fexpr (gs)
  (reset nil nil)
  (seek (list (allocate)) (list (car (convq gs nil)))))

(defun seek (e c)
  (loop while (and c (null (car c))) do (pop e) (pop c))
  (cond ((null c) (funcall *topfun*))
        ((atom (car c)) (funcall (car c) e (cdr c)))
        ((loop with rest = (cons (cdar c) (cdr c)) and
          oe = *e* and on = *n* and e1 = (allocate)
          for a in (symeval (caaar c)) do
          (and (unify e1 (cdar a) (car e) (cdaar c))
               (setq *inf* (1+ *inf*)
                     *v* (seek (cons e1 e)
                               (cons (cdr a) rest)))
               (return (catch-cut *v* e1)))
          (unbind oe)
          finally (deallocate on)))))

(defun unify (m x n y)
  (loop do
    (cond ((and (eq (ult m x) (ult n y)) (eq m n)) (return t))
          ((null m) (return (bind x y n)))
          ((null n) (return (bind y x m)))
          ((or (atom x) (atom y)) (return (equal x y)))
          ((null (unify m (pop x) n (pop y))) (return nil)))))

; ---------- Evaluable Predicates

(defun inst (m x)
  (cond ((let ((y x))
           (or (atom (ult m x)) (and (null m) (setq x y)))) x)
        ((cons (inst m (car x)) (inst m (cdr x))))))

(defun lisp (e c)
  (let ((n (pop e)) (oe *e*) (on *n*))
    (or (and (unify n '(? 2) (allocate) (eval (inst n '(? 1))))
             (seek e c))
        (reset oe on))))

(defun cut (e c)
  (let ((on (cadr e))) (or (seek (cdr e) c) (cons 'cut on))))

(defun call (e c)
  (let ((m (car e)) (x '(? 1)))
    (seek e (cons (list (cons (ult m x) '(? 2))) c))))

(defun bagof-topfun nil
  (push (inst *bagof-env* '(? 1)) *bagof-list*) nil)

(defun bagof (e c)
  (let* ((oe *e*) (on *n*) (*bagof-list* nil)
                  (*bagof-env* (car e)))
    (let ((*topfun* 'bagof-topfun)) (seek e '(((call (? 2))))))
    (or (and (unify (pop e) '(? 3) (allocate) *bagof-list*)
             (seek e c))
        (reset oe on))))

; ---------- Utilities

(defun timer fexpr (x)
  (let* ((*rset nil) (*inf* 0) (x (list (car (convq x nil))))
         (t1 (prog2 (gc) (runtime) (reset nil nil)
                    (seek (list (allocate)) x)))
         (t1 (- (runtime) t1)))
    (list (// (* *inf* 1000000.) t1) 'LIPS (// t1 1000.)
          'MS *inf* 'INF)))

(eval-when (compile eval load)
  (defun convq (t0 l0)
    (cond ((pairp t0) (let* (((t1 . l1) (convq (car t0) l0))
                             ((t2 . l2) (convq (cdr t0) l1)))
                        (cons (cons t1 t2) l2)))
          ((null (and (symbolp t0) (eq (getchar t0 1) '/?)))
           (cons t0 l0))
          ((memq t0 l0)
           (cons (cons '/? (cons (length (memq t0 l0))
                                 t0)) l0))
          ((convq t0 (cons t0 l0))))))

(defmacro defpred (pred &rest body)
  `(setq ,pred ',(loop for clause in body
                       collect (car (convq clause nil)))))

(defpred true    ((true)))
(defpred =       ((= ?x ?x)))
(defpred lisp    ((lisp ?x ?y) . lisp))
(defpred cut     ((cut) . cut))
(defpred call    ((call (?x . ?y)) . call))
(defpred bagof   ((bagof ?x ?y ?z) . bagof))
(defpred writeln
  ((writeln ?x) (lisp (progn (princ '?x) (terpri)) ?y)))

(setq *topfun*
      '(lambda nil (princ "MORE? ")
               (and (null (read)) '(top))))

------------------------------

Date: Wed, 17 Aug 1983  10:14 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: A Pure Prolog Written In Pure Lisp

                 [Reprinted from the PROLOG Digest.]

;; The following is a tiny Prolog interpreter in MacLisp
;; written by Ken Kahn.
;; It was inspired by other tiny Lisp-based Prologs by
;; Par Emanuelson and Martin Nilsson.
;; There are no side-effects anywhere in the implementation,
;; though it is, of course, very slow.

(defun Prolog (database) ;; a top-level loop for Prolog
  (prove (list (rename-variables (read) '(0)))
         ;; read a goal to prove
         '((bottom-of-environment)) database 1)
  (prolog database))

(defun prove (list-of-goals environment database level)
  ;; proves the conjunction of the list-of-goals
  ;; in the current environment
  (cond ((null list-of-goals)
         ;; succeeded since there are no goals
         (print-bindings environment environment)
          ;; the user answers "y" or "n" to "More?"
         (not (y-or-n-p "More?")))
        (t (try-each database database
                     (rest list-of-goals) (first list-of-goals)
                     environment level))))

(defun try-each (database-left database goals-left goal
                               environment level)
 (cond ((null database-left)
        ()) ;; fail since nothing left in database
       (t (let ((assertion
                 ;; level is used to uniquely rename variables
                 (rename-variables (first database-left)
                                   (list level))))
            (let ((new-environment
                   (unify goal (first assertion) environment)))
              (cond ((null new-environment) ;; failed to unify
                     (try-each (rest database-left)
                               database
                               goals-left
                               goal
                               environment level))
                    ((prove (append (rest assertion) goals-left)
                            new-environment
                            database
                            (add1 level)))
                    (t (try-each (rest database-left)
                                 database
                                 goals-left
                                 goal
                                 environment
                                 level))))))))

(defun unify (x y environment)
  (let ((x (value x environment))
        (y (value y environment)))
    (cond ((variable-p x) (cons (list x y) environment))
          ((variable-p y) (cons (list y x) environment))
          ((or (atom x) (atom y))
           (and (equal x y) environment))
          (t (let ((new-environment
                    (unify (first x) (first y) environment)))
               (and new-environment
                    (unify (rest x) (rest y)
                           new-environment)))))))

(defun value (x environment)
  (cond ((variable-p x)
         (let ((binding (assoc x environment)))
           (cond ((null binding) x)
                 (t (value (second binding) environment)))))
        (t x)))

(defun variable-p (x) ;; a variable is a list beginning with "?"
  (and (listp x) (eq (first x) '?)))

(defun rename-variables (term list-of-level)
  (cond ((variable-p term) (append term list-of-level))
        ((atom term) term)
        (t (cons (rename-variables (first term)
                                   list-of-level)
                 (rename-variables (rest term)
                                   list-of-level)))))

(defun print-bindings (environment-left environment)
  (cond ((rest environment-left)
         (cond ((zerop
                 (third (first (first environment-left))))
                (print
                 (second (first (first environment-left))))
                (princ " = ")
                (prin1 (value (first (first environment-left))
                              environment))))
         (print-bindings (rest environment-left) environment))))

;; a sample database:
(setq db '(((father jack ken))
           ((father jack karen))
           ((grandparent (? grandparent) (? grandchild))
            (parent (? grandparent) (? parent))
            (parent (? parent) (? grandchild)))
           ((mother el ken))
           ((mother cele jack))
           ((parent (? parent) (? child))
            (mother (? parent) (? child)))
           ((parent (? parent) (? child))
            (father (? parent) (? child)))))

;; the following are utilities

(defun first (x) (car x))
(defun rest (x) (cdr x))
(defun second (x) (cadr x))
(defun third (x) (caddr x))
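
[Kahn's design translates almost line for line into a modern language.
The sketch below is my own Python transliteration, not from the
message: variables are tuples beginning with '?', an environment is an
immutable association list, and clause variables are renamed with the
recursion depth, just as rename-variables does with level.  -- Ed.]

```python
def is_var(x):
    return isinstance(x, tuple) and x[:1] == ('?',)

def value(x, env):
    # Dereference a variable through the environment (cf. `value`).
    while is_var(x):
        bound = next((v for k, v in env if k == x), None)
        if bound is None:
            return x
        x = bound
    return x

def unify(x, y, env):
    # Return an extended environment, or None on failure (cf. `unify`).
    x, y = value(x, env), value(y, env)
    if is_var(x):
        return [(x, y)] + env
    if is_var(y):
        return [(y, x)] + env
    if not (isinstance(x, tuple) and isinstance(y, tuple)):
        return env if x == y else None
    if len(x) != len(y):
        return None
    for a, b in zip(x, y):
        env = unify(a, b, env)
        if env is None:
            return None
    return env

def rename(term, depth):
    # Append the depth to every variable (cf. `rename-variables`).
    if is_var(term):
        return term + (depth,)
    if isinstance(term, tuple):
        return tuple(rename(t, depth) for t in term)
    return term

def solve(goals, env, db, depth=1):
    # Yield one environment per proof of the conjunction of `goals`
    # (cf. `prove` and `try-each`, with Python generators standing in
    # for explicit backtracking; failed branches simply discard their
    # extended environments, so there are no side effects).
    if not goals:
        yield env
        return
    goal, rest = goals[0], goals[1:]
    for clause in db:
        head, *body = (rename(t, depth) for t in clause)
        new_env = unify(goal, head, env)
        if new_env is not None:
            yield from solve(tuple(body) + rest, new_env, db, depth + 1)

# The sample database from the message, in tuple form.
db = [
    (("father", "jack", "ken"),),
    (("father", "jack", "karen"),),
    (("grandparent", ("?", "g"), ("?", "gc")),
     ("parent", ("?", "g"), ("?", "p")),
     ("parent", ("?", "p"), ("?", "gc"))),
    (("mother", "el", "ken"),),
    (("mother", "cele", "jack"),),
    (("parent", ("?", "p"), ("?", "c")), ("mother", ("?", "p"), ("?", "c"))),
    (("parent", ("?", "p"), ("?", "c")), ("father", ("?", "p"), ("?", "c"))),
]

query = ("grandparent", ("?", "who"), "ken")
answers = [value(("?", "who"), e) for e in solve((query,), [], db)]
print(answers)  # ['cele']
```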

------------------------------

End of AIList Digest
********************
25-Aug-83 09:40:26-PDT,11511;000000000001
Mail-From: LAWS created at 25-Aug-83 09:37:30
Date: Thursday, August 25, 1983 9:14AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #48
To: AIList@SRI-AI


AIList Digest           Thursday, 25 Aug 1983      Volume 1 : Issue 48

Today's Topics:
  AI Literature - Journals & COMTEX & Online Reports,
  AI Architecture - The Connection Machine,
  Programming Languages - Scheme and Lisp Availability,
  Artificial Intelligence - Turing Test & Hofstadter Article
----------------------------------------------------------------------

Date: 20 Aug 1983 0011-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20>
Subject: Re: AI Journals

I would add one more journal to the list:

Cognition and Brain Theory
	Lawrence Erlbaum Associates, Inc.
	365 Broadway,
	Hillsdale, New Jersey 07642
	$18 Individual, $50 Institutional
	Quarterly
	Basic cognition, proposed models and discussion of
	consciousness and mental process, epistemology - from frames to
	neurons, as related to human cognitive processes. A "fringe"
	publication for AI topics, and a good forum for issues in cognitive
	science/psychology.

Also, I notice that the institutional rate was quoted for several of 
the journals cited.  Many of these journals can be had for less if you
convince them that you are a lone reader (individual) and/or a 
student.


[Noninstitutional members of AAAI can get the Artificial Intelligence
Journal for $50.  See the last page of the fall AI Magazine.

Another journal for which I have an ad is

New Generation Computing
	Springer-Verlag New York Inc.
	Journal Fulfillment Dept.
	44 Hartz Way
	Secaucus, NJ  07094
	A quarterly English-language journal devoted to international
	research on the fifth generation computer.  [It seems to be
	very strong on hardware and logic programming.]
	1983 - 2 issues - $52. (Sample copy free.)
	1984 - 4 issues - $104.

-- KIL]

------------------------------

Date: Sun 21 Aug 83 18:06:52-PDT
From: Robert Amsler <AMSLER@SRI-AI>
Subject: Journal listings

Computing Reviews, Nov. 1982, lists all the periodicals they receive 
and their addresses. Handy list of a lot of CS journals.

------------------------------

Date: Tue, 23 Aug 83 11:05 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: COMTEX and getting AI technical reports


There WAS a company which offered a service in which subscribers would
get copies of recent technical reports on all areas of AI research -
COMTEX.  The reports were to be drawn from universities and
institutions doing AI research.  The initial offering in the series
contained old Stanford and MIT memos.  The series was intended to
provide very timely access to current research in the participating
institutions.  COMTEX has decided to discontinue the AI series, however.
Perhaps if they perceive an increased demand for this series they will
reactivate it.

Tim

[There is a half-page Comtex ad for the MIT and Stanford memoranda in
the Fall issue of AI Magazine, p. 79.  -- KIL]

------------------------------

Date: 19 Aug 83 19:21:34 PDT (Friday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: On-line tech reports?

I raised this issue on Human-nets nearly two years ago and didn't seem
to get more than a big yawn for a response.

Here's an example of what I had to go through recently:  I saw an 
interesting-looking CMU tech report (Newell, "Intellectual Issues in
the History of AI") listed in SIGART News.  It looked like I could
order it from CMU.  No ARPANET address was listed, so I wrote -- I
even gave them my ARPANET address.  They sent me back a form letter
via US Snail referring me to NTIS.  So then I phoned NTIS.  I talked
to an answering machine and left my US Snail address and the order
number of the tech report.  They sent me back a postcard giving the
price, something like $7.  I sent them back their order form,
including my credit card#.  A week or so later I got back a moderately
legible document, probably reproduced from microfiche, that looks
suspiciously like a Bravo document that's probably on line somewhere,
if I only knew where.  I'm not picking on CMU -- this is a general
problem.

There's GOT to be a better way.  How about: (1) Have a standard 
directory at each major ARPA host, containing at least a catalog with 
abstracts of all recent tech reports, and info on how to order, and 
hopefully full text of at least the most recent and/or popular ones, 
available for FTP, perhaps at off-peak hours only.  (2) Hook NTIS into
ARPANET, so that folks could browse their catalogs and submit orders 
electronically.

RUTGERS used to have an electronic mailing list to which they 
periodically sent updated tech report catalogs, but that's about the 
only activity of this sort that I've seen.

We've got this terrific electronic highway.  Let's make it useful for 
more than mailing around collections of flames, like this one!

--Bruce

------------------------------

Date: 23 August 1983 00:22 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject: The Connection Machine

    Date: Thu 18 Aug 83 13:46:13-PDT
    From: David Rogers <DRogers at SUMEX-AIM.ARPA>

    The closest hardware I am aware of is called the Connection
    Machine, and is being developed at MIT by Alan Bawden, Dave
    Christman, and Danny Hillis ...

also Tom Knight, David Chapman, Brewster Kahle, Carl Feynman, Cliff
Lasser, and Jon Taft.  Danny Hillis provided the original ideas; his
is the name to remember.

    The project involves building a model with about 2^10 processors.

The prototype Connection Machine was designed to have 2^20 processors,
although 2^10 might be a good size to actually build to test the idea.

One way to arrive at a superficial understanding of the Connection
Machine would be to imagine augmenting a NETL machine with the ability
to pass addresses (or "pointers") as well as simple markers.  This
permits the Connection Machine to perform even more complex pattern
matching on semantic-network-like databases.  The detection of any
kind of cycle (find all people who are employed by their own fathers),
is the canonical example of something this extension allows.
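
[On a sequential machine that canonical query is a simple join over
two relations; the Connection Machine's point is that the per-node
work could happen in parallel, one processor per person, passing
employer pointers rather than simple markers.  The Python sketch and
its data below are illustrative inventions, not from the memo. -- Ed.]

```python
# Made-up relations: person -> father, person -> employer.
father = {"ken": "jack", "karen": "jack", "jack": "sam"}
employer = {"ken": "jack", "karen": "acme", "jack": "widgetco"}

# People employed by their own fathers: each "node" compares the
# pointer it holds for its employer with the one for its father.
self_employed_by_father = [p for p in father
                           if employer.get(p) == father[p]]
print(self_employed_by_father)  # ['ken']
```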

But that's only one way to program a Connection Machine.  In fact, the
thing seems to be a rather general parallel processor.

MIT AI Memo #646, "The Connection Machine" by Danny Hillis, is still a
perfectly good reference for the general principles behind the
Connection Machine, despite the fact that the hardware design has
changed a bit since it was written.  (The memo is currently being
revised.)

------------------------------

Date: 22 August 1983 18:20 EDT
From: Hal Abelson <HAL @ MIT-MC>
Subject: Lisps on 68000


At MIT we are working on a version of Scheme (a lexically scoped 
dialect of Lisp) that runs on the HP 9836 computer, which is a 68000 
machine.  Starting 3 weeks from now, 350 MIT students will be using 
this system on a full-time basis.

The implementation consists of a kernel written in 68000 assembler, 
with most of the system written in Scheme and compiled using a quick 
and dirty compiler, which is also written in Scheme.  The 
implementation sits inside of HP's UCSD-Pascal-clone operating system.
For an editor, we use NMODE, which is a version of EMACS written in 
Portable Standard Lisp. Thus our machines run, at present, with both 
Scheme and PSL resident, and consequently require 4 megabytes of main 
memory.  This will change when we get another editor, which will take
at least a few months.

The current system gives good performance for coursework, and is 
optimized to provide fast interpreted code, as well as a good 
debugging environment for student use.

Work will begin on a serious compiler as soon as the start-of-semester
panic is over.  There will also be a compatible version for the Vax.

Distribution policy has not yet been decided upon, but most likely we 
will give the system away (not the PSL part, which is not ours to 
give) to anyone who wants it, provided that people who get it agree to
return all improvements to MIT.

Please no requests for a few months, though, since we are still making
changes in the design and documentation.  Availability will be 
announced on this mailing list.

------------------------------

Date: 23 Aug 83 16:36:26-PDT (Tue)
From: harpo!seismo!rlgvax!cvl!umcp-cs!mark @ Ucb-Vax
Subject: Franz lisp on a Sun Workstation.
Article-I.D.: umcp-cs.2096

So what is the true story?  One person says it is almost as fast as
a single-user 780; another says it is an incredible hog.  These can't
both be right, as a Vax-780 IS at least as fast as a Lispmachine (not
counting the bitmapped screen).  It sounded to me like the person who
said it was fast had actually used it, but the person who said it was
slow was just working from general knowledge.  So maybe it is fast.
Wouldn't that be nice.
--
spoken: mark weiser
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet:  mark@umcp-cs
ARPA:   mark.umcp-cs@UDel-Relay

------------------------------

Date: Tue 23 Aug 83 14:43:50-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: in defense of Turing

        Scott Turner (AIList V1 #46) has some interesting points about
intelligence, but I felt compelled to defend Turing in his absence.  
The Turing article in Mind (must reading for any AIer) makes it clear
that the test is not proposed to *define* an intelligent system, or
even to *recognize* one; the claim is merely that a system which *can*
pass the test has intelligence. Perhaps this is a subtle difference, 
but it's as important as the difference between "iff" and "if" in
math.

        Scott bemoans the Turing test as testing for "Human Mimicking
Ability", and suggests that ELIZA has shown this to be possible 
without intelligence. ELIZA has fooled some people, though I would not
say it has passed anything remotely like the Turing test.  Mimicking
language is a far cry from mimicking intelligence.

        In any case, it may be even more difficult to detect 
intelligence without doing a comparison to human intellect; after all,
we're the only intelligent systems we know of...

Regards,

David

------------------------------

Date: Tue 23 Aug 83 19:23:00-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Hofstadter article

        Alas, after reading the article about Hofstadter in the
NYTimes, I realized that AI workers can be at least as closed-minded
as scientists in other fields have shown themselves to be. At bottom,
DH's basic feeling (that we have a long way to go before creating real
intelligence) seemed embarrassingly obvious. In the long run, the
false hopes raised by expectations of quick results can only hurt the
acceptance of AI in people's minds.

        (By the way, I thought the article was very well written, and
would encourage people to look it up. The report is spiced with
opinions from AI workers such as Alan Newell and Marvin Minsky, and it
was enjoyable to hear their candid comments about Hofstadter and AI in
general. Quite a step above the usual articles designed for general
consumption about AI...)

David R.

------------------------------

End of AIList Digest
********************
29-Aug-83 11:27:31-PDT,14742;000000000001
Mail-From: LAWS created at 29-Aug-83 11:24:09
Date: Monday, August 29, 1983 11:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #49
To: AIList@SRI-AI


AIList Digest            Monday, 29 Aug 1983       Volume 1 : Issue 49

Today's Topics:
  Conferences - AAAI-83 Registration,
  Bindings - Rog-O-Matic & Mike Mauldin,
  Artificial Languages - Loglan,
  Knowledge Representation & Self-Consciousness - Textnet,
  AI Publication - Corporate Constraints,
  Lisp Availability - PSL on 68000's,
  Automatic Translation - Lisp-to-Lisp & Natural Language
----------------------------------------------------------------------

Date: 23 Aug 83 11:04:22-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!arnold@Ucb-Vax
Subject: Re: AAAI-83 Registration
Article-I.D.: umcp-cs.2093


        If there will be over 7000 people attending AAAI-83,
        then there will be almost as many people as will
        attend the World Sci. Fic. Convention.

        I worked registration for AAAI-83 on Aug 22 (Monday).
        There were about 700 spaces available, along with about
        1700 people who pre-registered.

        [...]

                --- A Volunteer

------------------------------

Date: 26 Aug 83 2348 EDT
From: Rudy.Nedved@CMU-CS-A
Subject: Rog-O-Matic & Mike Mauldin

Apparently people want something related to Rog-O-Matic and are 
sending requests to "Maudlin".  If you look closely, that is not how
his name is spelled: people are transposing the "L" and the "D".
Hopefully this message will help the many people who are trying to
send Mike mail.

If you still can't get his mailing address right, try
"mlm@CMU-CS-CAD".

-Rudy
A CMU Postmaster

------------------------------

Date: 28 August 1983 06:36 EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Loglan

I've been interested in Loglan since Heinlein's GULF, which was in
part devoted to it.  Alas, nothing seems to happen that I can use; is
the institute about to publish new materials?  Is there anything in
machine-readable form using Loglan?  Information appreciated.  JEP

------------------------------

Date: 25-Aug-83 10:03 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

A few issues back, Randy Trigg mentioned his "Textnet" thesis
project, which combines hypertext and NLS/Augment structures.  He
makes a strong statement about distributed Textnet on worldnet:

   There can be no mad dictator in such an information network.

I am interested in building a testing ground for statements such as
that.  It would contain a model that would simulate the global effects
of technologies such as publishing on-line.  Here is what may be of
interest to the AI community.  The simulation would be a form of
"augmented global self-consciousness" in that it models its own
viability as a service published on-line via worldnet.  If you have
heard of any similar project or might be interested in collaborating
on this one, let me know.

 -- kirk

------------------------------

Date: 25 Aug 83 15:47:19-PDT (Thu)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.475

OK, you turned your flame-thrower on, now prepare for mine!  You want
to know why things don't get published -- take a look at your address
and then at mine.  You live (I hope I'm not talking to an AI Project)
in the academic community; believe it or not there are those of us
who work in something euphemistically referred to as industry where
the rule is not publish or perish, the rule is keep quiet and you are
less likely to get your backside seared!  Come on out into the 'real'
world where technical papers must be reviewed by managers that don't
know how to spell AI, let alone understand what language translation
is all about.  Then watch as two of them get into a moebius argument,
one saying that there is nothing classified in the paper but there is
proprietary information, while the other says no proprietary but it
definitely is classified!  All the while this is going on the
deadline for submission to three conferences passes by like the
perennial river flowing to the sea.  I know reviews are not unheard
of in academia, and that professors do sometimes get into arguments,
but I've no doubt that they would be more generally favorable to
publication than managers who are worried about the next
stockholder's meeting.

It ain't all that bad, but at least you seem to need a wider
perspective.  Perhaps the results haven't been published; perhaps the
claims appear somewhat tentative; but the testing has been critical,
and the only thing left is primarily a matter of drudgery, not
innovative research.  I am convinced that we may certainly find a new
and challenging problem awaiting us once that has been done, but at
least we are not sitting around for years on end trying to paste
together a grammar for a context-sensitive language!!

Ted Jardine
TJ (with Amazing Grace) The Piper
ssc-vax!tjj

------------------------------

Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626

I played with a version of PSL on an HP 9845 for several hours one
day.  The environment was just like running FranzLisp under Emacs in
"electric-lisp" mode. (However, the editor is written in PSL itself,
so it is potentially much more powerful than the emacs on our VAX,
with its screwy c/mock-lisp implementation.) The language is in the
style of Maclisp (rather than INTERLISP) and uses standard scoping
(rather than the lexical scoping of T). The machine has 512 by 512
graphics and a 2.5 dimensional window system, but neither are as
fully integrated into the programming environment as on a Xerox
Dolphin. Although I have no detailed benchmarks, I did port a
context-free chart parser to it. The interpreter speed was not
impressive, but was comparable with interpreted Franz on a VAX.
However, the speed of compiled code was very impressive. The compiler
is incremental, and built-in to the lisp system (like in INTERLISP),
and caused about a 10-20 times speedup over interpreted code (my
estimate is that both the Franz and INTERLISP-d compilers only net
2-5 times speedup).  As a result, the compiled parser ran much faster
on the 68000 than the same compiled program on a Dolphin.

I think PSL is definitely a superior lisp for the 68000, but I have
no idea whether it will be available for non-HP machines...


Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 24 Aug 83 16:20:12-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Lisp-to-Lisp translation
Article-I.D.: ssc-vax.468

These problems just go to show what AI people have known for years 
(ever since the first great bust of machine translation) - ya can't 
translate without understanding what yer translating.  Optimizing 
compilers are often impressive encodings of expert coders' knowledge, 
and they are for very simple languages - not like Interlisp or
English.

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 24 Aug 83 16:12:59-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.467

You have heard of my parser.  It's a variant on Berkeley's PHRAN, but 
has been improved to handle arbitrarily ambiguous sentences.  I
submitted a paper on it to AAAI-83, but it was rejected (well, I did
write it in about 3 days - wasn't very good).  A paper will be
appearing at the AIAA Computers in Aerospace conference in October.
The parser is only a *basic* solution - I suppose I should have made
that clearer.  Since it is knowledge-based, it needs **lots** of
knowledge.  Right now we're working on ways to acquire linguistic
knowledge automatically (Selfridge's work is very interesting).  The
knowledge base is woefully small, but we don't anticipate any problems
expanding it (famous last words!).

The parser has just been released for use within Boeing ("just"
meaning two days ago), and it may be a while before it becomes
available elsewhere (sorry).  I can mail details on it though.

As for language analysis being NP-complete, yes you're right.  But are
you sure that humans don't brute-force the process, and that computers
won't have to do the same?

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

ps if IBM is using APL, that explains a lot (I'm a former MVS victim)

------------------------------

Date: 24 Aug 83 15:47:11-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: So the language analysis problem has been solved?!?
Article-I.D.: ssc-vax.466

Heh-heh.  Thought that'd raise a few hackles (my boss didn't approve 
of the article; oh well.  I tend to be a bit fiery around the edges).

The claim is that we have "basically" solved the problem.  Actually, 
we're not the only ones - the APE-II parser by Pazzani and others from
the Schank school has also done the same thing.  Our parser can
handle arbitrarily ambiguous sentences, generating *all* the possible
meanings, limited only by the size of its knowledge base.  We have the
capability to do any sort of idiom, and mix any number of natural
languages.  Our problems are really concerned with the acquisition of
linguistic knowledge, either by having nonspecialists put it in by
hand (*everyone* is an expert on their native language) or by having the
machine acquire it automatically.  We can mail out some details if
anyone is interested.

One advantage we had was starting from ground zero, so we had very few
preconceptions about how language analysis ought to be done, and
scanned the literature.  It became apparent that since we were
required to handle free-form input, any kind of grammar would
eventually become less than useful and possibly a hindrance to
analysis.  Dr. Pereira admits as much when he says that grammars only
reflect *some* aspects of language.  Well, that's not good enough.  Us
folks in applied research can't always afford the luxury of theorizing
about the most elegant methods.  We need something that models human
cognition closely enough to make sense to knowledge engineers and to
users.  So I'm sort of in the Schank camp (folks at SRI hate 'em)
although I try to keep my thinking as independent as possible (hard
when each camp is calling the other ones charlatans; I'll post
something on that pernicious behavior eventually).

Parallel production systems I'll save for another article...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I *did* read an article of Dr. Pereira's - couldn't understand the
point.  Sorry.  (perhaps he would be so good as to explain?)

[Which article? -- KIL]

------------------------------

Date: 26 Aug 83 11:19-EST (Fri)
From: Steven Gutfreund <gutfreund%umass-cs@UDel-Relay>
Subject: Musings on AI and intelligence

Spafford's musings on intelligent communications reminded me of an
article I read several years ago by John Thomas (then at T.J. Watson,
now at White Plains, a promotion as IBM sees it).

In the paper he distinguishes between two distinct approaches (or
philosophies) to raising the level of man/machine communication.

[Natural language recognition is one example of this problem. Here the
machine is trying to "decipher" the user's natural prose to determine
the desired action. Another application is intelligent interfaces
that attempt to decipher user "intentions"]

The Human Approach -

Humans view communication as inherently goal-based. When one
communicates with another human being, there is an explicit goal: to
induce a cognitive state in the OTHER. This cognitive state is usually
some function of the communicator's own cognitive state (usually the
identity function, since one wants the OTHER to understand what one is
thinking). In this approach the media of communication (words, art,
gesticulations) are not the items being communicated; they are
abstractions meant to key certain responses in the OTHER so as to
arrive at the desired goal.

The Mechanistic Approach

According to Thomas this is the approach taken by the natural language
recognition people. Communication is the application of a decryption
function to the prose the user employed. This approach is inherently
flawed, according to Thomas, since the actual words and prose do not
contain meaning in themselves but are tools for effecting cognitive
change.  Thus, the text of one of Goebbels's propaganda speeches
cannot be examined in itself to determine what it means; one needs an
awareness of the cognitive models, metaphors, and prejudices of the
speaker and listeners.  Capturing this sort of real-world knowledge
(biases, prejudices, intuitive feelings) is not a strong point of
current AI systems.  Yet the extent to which certain words move a
person may depend far more on, say, his Catholic upbringing than on
the words themselves.

If you doubt the above thesis, I encourage you to read Thomas
Kuhn's book "The Structure of Scientific Revolutions" and see how
culture can affect the interpretation of supposedly hard scientific
facts and observations.

Perhaps the thing that best brings this out is an essay (I believe it
was by Smullyan) in "The Mind's I" (Dennett and Hofstadter). In this
essay a homunculus is set up with the basic tools of one of Schank's
language understanding systems (scripts, text, rules, etc.). He then
goes about the translation of the text from one language to another,
applying a set of mechanistic transformation rules. Given that the
homunculus knows nothing of either the source language or the target
language, can you say that it has any understanding of what the script
was about? How does this differ from today's NLU systems?


                                        - Steven Gutfreund
                                          Gutfreund.umass@udel-relay

------------------------------

End of AIList Digest
********************
30-Aug-83 10:36:00-PDT,11207;000000000001
Mail-From: LAWS created at 30-Aug-83 10:34:13
Date: Tuesday, August 30, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #50
To: AIList@SRI-AI


AIList Digest            Tuesday, 30 Aug 1983      Volume 1 : Issue 50

Today's Topics:
  AI Literature - Bibliography Request,
  Intelligence - Definition & Turing Test & Prejudice & Flamer
----------------------------------------------------------------------

Date: 29 Aug 1983 11:05:14-PDT
From: Susan L Alderson <mccarty@Nosc>
Reply-to: mccarty@Nosc
Subject: Help!


We are trying to locate any and all bibliographies, in electronic
form, of AI and Robotics.  I know that this covers a broad spectrum,
but we would rather have too many things to choose from than none at
all.  Any help or leads on this would be greatly appreciated.

We are particularly interested in:

    AI Techniques
    Vision Analysis
    AI Languages
    Robotics
    AI Applications
    Speech Analysis
    AI Environments
    AI Systems Support
    Cybernetics

This is not a complete list of our interests, but a good portion of
the high spots!

susie (mccarty@nosc-cc)


[Several partial bibliographies have been published in AIList; more
would be most welcome.  Readers able to provide pointers should reply
to AIList as well as to Susan.

Many dissertation and report abstracts have been published in the
SIGART newsletter; online copies may exist.  Individual universities
and corporations also maintain lists of their own publications; CMU,
MIT, Stanford, and SRI are among the major sources in this country.
(Try Navarro@SRI-AI for general AI and CPowers@SRI-AI for robotics
reports.)

One of the fastest ways to compile a bibliography is to copy authors'
references from the IJCAI and AAAI conference proceedings.  The AI
Journal and other AI publications are also good.  Beware of straying
too far from your main topics, however.  Rosenfeld's vision and image
processing bibliographies in CVGIP (Computer Vision, Graphics, and
Image Processing) list over 700 articles each year.

-- KIL]

------------------------------

Date: 25 Aug 1983 1448-PDT
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...

  An intelligence must have at least three abilities: to act; to
perceive, and to classify (as one of: better, the same, worse) the
results of its actions, or the environment after the action; and
lastly to change its future actions in light of what it has perceived,
in an attempt to maximize "goodness" and avoid "badness".  My views
are very obviously flavored by behaviorism.

  In defense against objections I hear coming...  To act is necessary
for intelligence, since it is pointless to call a rock intelligent
when there seems to be no way to detect its intelligence.  To perceive
is necessary for intelligence, since otherwise projectiles, simple
chemicals, and other things that act by following a set of rules would
be classified as intelligent.  To change future actions is the most
important, since a toaster could perceive that it was overheating,
oxidizing its heating elements, and thus dying, but would be unable to
stop toasting until it suffered a breakdown.

  In summary, (NOT (AND actp perceivep evolvep)) -> (NOT intelligent);
that is, Action, Perception, and Evolution based upon perception are
necessary for intelligence.  I *believe* that these conditions are
also sufficient for intelligence.
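Jay's three conditions can be put in executable form. The following is a minimal sketch of my own (not code from the original message); the predicate names follow the Lisp expression above:

```python
# Sketch of Jay's claim: an entity is "intelligent" only if it acts,
# perceives (classifying results as better/same/worse), and evolves
# its future actions based on what it perceived.

def intelligent_p(actp, perceivep, evolvep):
    """(NOT (AND actp perceivep evolvep)) -> (NOT intelligent)."""
    return actp and perceivep and evolvep

# A rock neither acts nor perceives; a toaster acts and (via its
# thermostat) perceives, but cannot change its behavior:
rock = intelligent_p(False, False, False)
toaster = intelligent_p(True, True, False)
agent = intelligent_p(True, True, True)
```

Whether the three conditions are also *sufficient*, as Jay believes, is of course not something a one-line conjunction can settle.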

awaiting flames,

j'

PS. Yes, the earth's bio-system IS intelligent.

------------------------------

Date: 25 Aug 83 2:00:58-PDT (Thu)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: pyuxll.403

The characterization of prejudice as an unwillingness/inability
to adapt to new (contradictory) data is an appealing one.
Perhaps this belongs in net.philosophy, but it seems to me that a
requirement for becoming a fully functional intelligence (human
or otherwise) is to abandon the search for compact, comfortable
"truths" and view knowledge as an approximation and learning as
the process of improving those approximations.

There is nothing wrong with compact generalizations: they reduce
"overhead" in routine situations to manageable levels. It is when
they are applied exclusively and/or inflexibly that
generalizations yield bigotry and the more amusing conversations
with Eliza et al.

As for the Turing test, I think it may be appropriate to think of
it as a "razor" rather than as a serious proposal.  When Turing
proposed the test there was a philosophical argument raging over
the definition of intelligence, much of which was outright
mysticism. The famous test cuts the fog nicely: a device needn't
have consciousness, a soul, emotions -- pick your own list of
nebulous terms -- in order to function "intelligently."  Forget
whether it's "the real thing"; it's performance that counts.

I think Turing recognized that, no matter how successful AI work
was, there would always be those (bigots?) who would rip the back
off the machine and say, "You see? Just mechanism, no soul,
no emotions..." To them, the Turing test replies, "Who cares?"

=Ned=

------------------------------

Date: 25 Aug 83 13:47:38-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!uw-june!emma @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: uw-june.549

I don't think I can accept some of the comments being bandied about 
regarding prejudice.  Prejudice, as I understand the term, refers to 
prejudging a person on the basis of class, rather than judging that 
person as an individual.  Class here is used in a wider sense than 
economic.  Examples would be "colored folk got rhythm" or "all them
white saxophonists sound the same to me"-- this latter being a quote
from Miles Davis, by the way.  It is immediately apparent that
prejudice is a natural result of making generalizations and
extrapolating from experience.  This is a natural, and I would suspect
inevitable, result of a knowledge acquisition process which
generalizes.

Bigotry, meanwhile, refers to inflexible prejudice.  Miles has used a
lot of white saxophonists, as he recognizes that they don't all sound
the same.  Were he bigoted, rather than prejudiced, he would refuse to
acknowledge that.  The problem lies in determining at what point an
apparent counterexample should modify a conception.  Do we decide that
gravity doesn't work for airplanes, or that gravity always works but 
something else is going on?  Do we decide that a particular white sax 
man is good, or that he's got a John Coltrane tape in his pocket?
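The update question posed above can be sketched as a belief-revision rule. This is my illustration, not anything from the post; the numbers are arbitrary:

```python
# Sketch: prejudice as a class-level generalization that counterexamples
# gradually weaken; bigotry as the same generalization with the learning
# rate set to zero, so no counterexample ever modifies the conception.

def revise(belief_strength, counterexamples, learning_rate):
    """Weaken a generalization multiplicatively per counterexample."""
    for _ in range(counterexamples):
        belief_strength *= (1.0 - learning_rate)
    return belief_strength

prejudiced = revise(1.0, counterexamples=5, learning_rate=0.3)  # fades
bigoted = revise(1.0, counterexamples=5, learning_rate=0.0)     # fixed
```

The hard part, as the paragraph says, is deciding when an apparent counterexample should count at all (the good white sax man vs. the Coltrane tape in his pocket) -- i.e., choosing the learning rate, not applying it.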

In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a 
result of our knowledge acquisition process rather than used to 
classify people as sub-human.

-Joe P.

------------------------------

Date: 25 Aug 83 11:53:10-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!utcsstat!laura@Ucb-Vax
Subject: AI and Human Intelligence [& Editorial Comment]

Goodness, I stopped reading net.ai a while ago, but had an ai problem
to submit and decided to read this in case the question had already
been asked and answered. News here only lasts for 2 weeks, but things
have changed...

At any rate, you are all discussing here what I am discussing in mail 
to AI types (none of whom mentioned that this was going on here, the 
cretins! ;-) ). I am discussing bigotry by mail to AI folk.

I have a problem in furthering my discussion. When I mentioned it I
got the same response from 2 of my 3 AI folk, and am waiting for the
same one from the third.  I gather it is a fundamental AI sort of
problem.

I maintain that 'a problem' and 'a description of a problem' are not
the same thing. Thus 'discrimination' is a problem, but the word
'nigger' is not. 'Nigger' is a word which describes the problem of
discrimination. One may decide not to use the word 'nigger', but
abolishing the word only gets rid of one description of the problem,
not the problem itself.

If there were no words to express discrimination, and discrimination 
existed, then words would be created (or existing words would be 
perverted) to express discrimination. Thus language can be counted 
upon to reflect the attitudes of society, but changing the language is
not an effective way to change society.


This position is not going over very well. I gather that there is some
section of the AI community which believes that language (the
description of a problem) *is* the problem.  I am thus reduced to
saying, "oh no it isn't, you silly person," but am left holding the
bag when they start quoting from texts. I can bring out anthropology
and linguistics and they can get out some epistemology and Knowledge
Representation, but the discussion isn't going anywhere...

can anybody out there help?

laura creighton
utzoo!utcsstat!laura


[I have yet to be convinced that morality, ethics, and related aspects
of linguistics are of general interest to AIList readers.  While I
have (and desire) no control over the net.ai discussion, I am
responsible for what gets passed on to the Arpanet.  Since I would
like to screen out topics unrelated to AI or computer science, I may
choose not to pass on some of the net.ai submissions related to
bigotry.  Contact me at AIList-Request@SRI-AI if you wish to discuss
this policy. -- KIL]

------------------------------

Date: 25 Aug 1983 1625-PDT
From: Jay <JAY@USC-ECLC>
Subject: [flamer@ida-no: Re:  Turing Test; Parry, Eliza, and Flamer]

Is this a human response??

j'
                ---------------

  Return-path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Received: from UDEL-RELAY by USC-ECLC; Thu 25 Aug 83 16:20:32-PDT
  Date:     25 Aug 83 18:31:38 EDT  (Thu)
  From: flamer@ida-no
  Return-Path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Subject:  Re:  Turing Test; Parry, Eliza, and Flamer
  To: jay@USC-ECLC
  In-Reply-To: Message of Tue, 16-Aug-83 17:37:00 EDT from
      JAY%USC-ECLC@sri-unix.UUCP <4325@sri-arpa.UUCP>
  Via:  UMCP-CS; 25 Aug 83 18:55-EDT

        From: JAY%USC-ECLC@sri-unix.UUCP

        . . . Flamer would read messages from the net and then
        reply to the sender/bboard denying all the person said,
        insulting him, and in general making unsupported statements.
        . . .

  Boy! Now that's the dumbest idea I've heard in a long time. Only an
  idiot such as yourself, who must be totally out of touch with reality,
  could come up with that. Besides, what would it prove?  It's not much
  of an accomplishment to have a program which is stupider than a human.
  The point of the Turing test is to demonstrate a program that is as
  intelligent as a human. If you can't come up with anything better,
  stay off the net!

------------------------------

End of AIList Digest
********************
30-Aug-83 16:45:57-PDT,17461;000000000001
Mail-From: LAWS created at 30-Aug-83 16:44:54
Date: Tuesday, August 30, 1983 4:30PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #51
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 51

Today's Topics:
  Expert Systems - Availability & Dissent,
  Automatic Translation - State of the Art,
  Fifth Generation - Book Review & Reply
----------------------------------------------------------------------

Date: 26 Aug 83 17:00:18-PDT (Fri)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: Expert Systems
Article-I.D.: dcdwest.216

I would like to know whether there are commercial expert
systems available for sale.  In particular, I would like to
know about systems like the Programmer's Apprentice, or other
such programming aids.

Thanks in advance,

Peter Benson
!decvax!ittvax!dcdwest!benson

------------------------------

Date: 26 Aug 83 11:12:31-PDT (Fri)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: bulstars
Article-I.D.: mit-eddi.656

from AP (or NYT?)


       COMPUTER TROUBLESHOOTER:
       'Artificially Intelligent' Machine Analyses Phone Trouble

           WASHINGTON - Researchers at Bell Laboratories say
       they've developed an ''artificially intelligent'' computer
       system that works like a highly trained human analyst to
       find troublespots within a local telephone network. Slug
       PM-Bell Computer. New, will stand. 670 words.

Oh, looks like we beat the Japanese :-( Why weren't we told that
'artificial intelligence' was about to exist?  Does anyone know if
this is the newspaper's fault, or if the guy they talked to just
wanted more attention???


-- Randwulf
(Randy Haskins);
Path= genrad!mit-eddie!rh
or... rh@mit-ee (via mit-mc)

------------------------------

Date: Mon 29 Aug 83 21:36:04-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: claims about "solving NLP"

I have never been impressed with claims about "solving the Natural
Language Processing problem" based on `solutions' for 1-2 paragraphs
of [usu. carefully (re)written] text.  There are far too many scale-up
problems for such claims to be taken seriously.  How many NLP systems
are there that have been applied to even 10 pages of NATURAL text,
with the full intent of "understanding" (or at least "treating in the
identical fashion") ALL of it?  Very few.  Or 100 pages?  Practically
none.  Schank & Co.'s "AP wire reader," for example, was NOT intended
to "understand" all the text it saw [and it didn't!], but only to 
detect and summarize the very small proportion that fell within its
domain -- a MUCH easier task, esp. considering its minuscule domain
and microscopic dictionary.  Even then, its performance was -- at best
-- debatable.

And to anticipate questions about the texts our MT system has been
applied to:  about 1,000 pages to date -- NONE of which was ever
(re)written, or pre-edited, to affect our results.  Each experiment
alluded to in my previous msg about MT was composed of about 50 pages
of natural, pre-existing text [i.e., originally intended and written
for HUMAN consumption], none of which was ever seen by the project
linguists/programmers before the translation test was run.  (Our 
dictionaries, by the way, currently comprise about 10,000 German
words/phrases, and a similar number of English words/phrases.)

We, too, MIGHT be subject to further scale-up problems -- but we're a
damned sight farther down the road than just about any other NLP
project has been, and have good reason to believe that we've licked
all the scale-up problems we'll ever have to worry about.  Even so, we
would NEVER be so presumptuous as to claim to have "solved the NLP
problem," needing only a large collection of `linguistic rules' to
wrap things up!!!  We certainly have NOT done so.

REALLY, now...

------------------------------

Date: Mon 29 Aug 83 17:11:26-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Machine Translation - a very short tutorial

Before proclaiming the impossibility of automatic [i.e., computer]
translation of human languages, it's perhaps instructive to know
something about how human translation IS done -- and is not done -- at
least in places where it's taken seriously.  It is also useful,
knowing this, to propose a few definitions of what may be counted as
"translation" and -- more to the point -- "useful translation."
Abbreviations: MT = Machine Translation; HT = Human Translation.

To start with, the claim that "a real translator reads and understands
a text, and then generates [the text] in the [target] language" is
empty.  First, NO ONE really has anything like a good idea of HOW
humans translate, even though there are schools that "teach
translation."  Second, all available evidence indicates that (point #1
notwithstanding), different humans do it differently.  Third, it can
be shown (viz simultaneous interpreters) that nothing as complicated
as "understanding" need take place in all situations.  Fourth, 
although the contention that "there generally aren't 1-1
correspondences between words, phrases..."  sounds reasonable, it is
in fact false an amazing proportion of the time, for languages with
similar derivational histories (e.g., German & English, to say nothing
of the Romance languages).  Fifth, it can be shown that highly
skilled, well-respected technical-manual translators do not always (if
ever) understand the equipment for which they're translating manuals
[and cannot, therefore, be argued to understand the original texts in 
any fundamentally deep sense] -- and must be "understanding" in a
shallower, probably more "linguistic" sense (one perhaps more
susceptible to current state-of-the-art computational treatment).

Now as to how translation is performed in practice.  One thing to
realize here is that, at least outside the U.S. [i.e., where
translation is taken seriously and where almost all of it is done], NO
HUMAN performs "unrestricted translation" -- i.e., human translators
are trained in (and ONLY considered competent in) a FEW AREAS.
Particularly in technical translation, humans are trained in a limited
number of related fields, and are considered QUITE INCOMPETENT outside
those fields.  Another thing to realize is that essentially ALL
TRANSLATIONS ARE POST-EDITED.  I refer here not to stylistic editing,
but to editing by a second translator of superior skill and
experience, who NECESSARILY refers to the original document when
revising his subordinate's translation.  The claim that MT is
unacceptable IF/BECAUSE the results must be post-edited falls to the
objection that HT would be unacceptable by the identical argument.
Obviously, HT is not considered unacceptable for this reason -- and
therefore, neither should MT.  All arguments for acceptability then
devolve upon the question of HOW MUCH revision is necessary, and HOW
LONG it takes.

Happily, this is where we can leave the territory of pontifical
pronouncements (typically uttered by the un- or ill-informed), and
begin to move into the territory of facts and replicable experiments.
Not entirely, of course, since THERE IS NO SUCH THING AS A PERFECT
TRANSLATION and, worse, NO ONE CAN DEFINE WHAT CONSTITUTES A GOOD
TRANSLATION.  Nevertheless, professional post-editors are regularly
saddled with the burden of making operational decisions about these
matters ("Is this sufficiently good that the customer is likely to 
understand the text?  Is it worth my [company's] time to improve it
further?").  Thus we can use their decisions (reflected, e.g., in
post-editing time requirements) to determine the feasibility of MT in
a more scientific manner; to wit: what are the post-editing
requirements of MT vs. HT?  And in order to assess the economic
viability of MT, one must add: taking all expenses into account, is MT
cost-effective [i.e., is HT + human revision more or less expensive
than MT + human revision]?

Re: these last points, our experimental data to date indicate that (1)
the absolute post-editing requirements (i.e., something like "number
of changes required per sentence") for MT are increased w.r.t. HT
[this is no surprise to anyone]; (2) paradoxically, post-editing time
requirements of MT are REDUCED w.r.t. HT [surprise!]; and (3) the
overall costs of MT (including revision) are LESS than those for HT
(including revision) -- a significant finding.
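The economic comparison behind findings (1)-(3) is simple arithmetic. A toy sketch with hypothetical figures of my own (not project data -- Slocum reports none here):

```python
# Sketch of the MT-vs-HT cost question: total cost is the raw
# translation cost plus the post-editing time, priced at an editor's
# hourly rate. All numbers below are invented for illustration.

def total_cost(pages, translate_per_page, postedit_hours_per_page,
               editor_rate):
    """Cost of translating and then revising a document."""
    return pages * (translate_per_page +
                    postedit_hours_per_page * editor_rate)

# 50-page test document; assumed figures: human translation is costly
# up front, machine translation cheap up front but still post-edited.
ht = total_cost(50, translate_per_page=30.0,
                postedit_hours_per_page=0.5, editor_rate=20.0)
mt = total_cost(50, translate_per_page=5.0,
                postedit_hours_per_page=0.4, editor_rate=20.0)
```

The apparent paradox in the findings fits this model: MT can need *more* changes per sentence (finding 1) yet *less* post-editing time per page (finding 2), and it is the time, times the editor's rate, that drives the overall cost comparison (finding 3).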

We have run two major experiments to date [with our funding agency
collecting the data, not the project staff], BOTH of which produced
these results; the more recent one naturally produced better results
than the earlier one, and we foresee further improvements in the near
future.  Our finding (2) above, which SEEMS inconsistent with finding
(1), is explainable with reference to the sociology of post-editing
when the original translator is known to be human, and when he will
see the results (which probably should, and almost always does,
happen).  Further details will appear in the literature.

So why haven't you heard about this, if it's such good news?  Well,
you just did!  More to the point, we have been concentrating on
producing this system more than on writing papers about it [though I
have been presenting papers at COLING and ACL conferences], and
publishing delays are part of the problem [one reason for having
conferences].  But more papers are in the works, and the secret will
be out soon enough.

------------------------------

Date: 26 Aug 83  1209 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

14 Aug 83
by Steven Schlossstein
(c) 1983 Dallas Morning News (Independent Press Service)

    THE FIFTH GENERATION: Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward Feigenbaum and Pamela McCorduck 
(Addison-Wesley, $15.55).

    (Steven Schlossstein lived and worked in Japan with a major Wall 
Street firm for more than six years. He now runs his own Far East 
consulting firm in Princeton, N.J. His first novel, ''Kensei,'' which
deals with the Japanese drive for industrial supremacy in the high 
tech sector, will be published by Congdon & Weed in October).

    ''Fukoku Kyohei'' was the rallying cry of Meiji Japan when that 
isolated island country broke out of its self-imposed cultural cocoon 
in 1868 to embark upon a comprehensive plan of modernization to catch 
up with the rest of the world.
    ''Rich Country, Strong Army'' is literally what it means.
Figuratively, however, it represented Japan's first experimentation 
with a concept called industrial policy: concentrating on the 
development of strategic industries - strategic whether because of 
their connection with military defense or because of their importance 
in export industries intended to compete against foreign products.
    Japan had to apprentice herself to the West for a while to bring
it off.
    The military results, of course, were impressive. Japan defeated 
China in 1895, blew Russia out of the water in 1905, annexed Korea and
Taiwan in 1911, took over Manchuria in 1931, and sat at the top of the
Greater East Asia Co-Prosperity Sphere by 1940. This from a country
previously regarded as barbarian by the rest of the world.
    The economic results were no less impressive. Japan quickly became
the world's largest shipbuilder, replaced England as the world's 
leading textile manufacturer, and knocked off Germany as the premier 
producer of heavy industrial machinery and equipment. This from a 
country previously regarded as barbarian by the rest of the world.
    After World War II, the Ministry of Munitions was defrocked and 
renamed the Ministry of International Trade and Industry (MITI), but 
the process of strategy formulation remained the same.
    Only the postwar rendition was value-added, and you know what 
happened. Japan is now the world's No. 1 automaker, produces more 
steel than anyone else, manufactures over half the TV sets in the 
world, is the only meaningful producer of VTRs, dominates the 64K 
computer chip market, and leads the way in one branch of computer 
technology known as artificial intelligence (AI). All this from a 
country previously regarded as barbarian by the rest of the world.
    What next for Japan? Ed Feigenbaum, who teaches computer science
at Stanford and pioneered the development of AI in this country, and 
Pamela McCorduck, a New York-based science writer, write that Japan is
trying to dominate AI research and development.
    AI, the fifth generation of computer technology, is to your
personal computer as your personal computer is to pencil and paper. It
is based on processing logic, rather than arithmetic, deals in 
inferences, understands language and recognizes pictures. Or will. It 
is still in its infancy. But not for long; last year, MITI established
the Institute for New Generation Computer Technology, funded it
aggressively, and put some of the country's best brains to work on AI.
    AI systems consist of three subsystems: a knowledge base needed
for problem solving and understanding, an inference subsystem that 
determines what knowledge is relevant for solving the problem at hand,
and an interaction subsystem that facilitates communication between
the overall system and its user - between man and machine.
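The three subsystems the review names can be sketched in miniature. This is my illustration of the general architecture, not anything from the book; the rules and facts are invented:

```python
# Sketch of the three subsystems: a knowledge base of if-then rules,
# an inference subsystem that applies whichever rules are relevant
# (here, simple forward chaining), and an interaction subsystem that
# mediates between the user and the system.

knowledge_base = [            # hypothetical (premise, conclusion) rules
    ("fever", "infection"),
    ("infection", "prescribe antibiotics"),
]

def infer(facts):
    """Inference subsystem: forward-chain over the knowledge base."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in knowledge_base:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def interact(user_report):
    """Interaction subsystem: user's facts in, conclusions out."""
    return sorted(infer(set(user_report)))
```

For example, `interact(["fever"])` chains through both rules, returning the intermediate and final conclusions along with the original fact.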
    Now America does not have a MITI, does not like industrial policy,
has not created an institute to work on AI, and is not even convinced 
that AI is the way to go. But Feigenbaum and McCorduck argue that even
if the Japanese are not successful in developing the fifth generation,
the spin-off from this 10-year project will be enormous, with
potentially wide applications in computer technology, 
telecommunications, industrial robotics, and national defense.
    ''The Fifth Generation'' walks you through AI, how and why Japan 
puts so much emphasis on the project, and how and why the Western 
nations have failed to respond to the challenge. National defense 
implications alone, the authors argue, are sufficient to justify our 
taking AI seriously.
    Smart bombs and laser weapons are but advanced wind-up toys
compared with the AI arsenal of the future. The Pentagon has a little
project called ARPA - the Advanced Research Projects Agency - which has
been supporting AI on a small scale, but not with the people or funding
the authors feel are meaningful.
    Unfortunately, ''The Fifth Generation'' suffers from some 
organizational defects. You don't really get into AI and how its 
complicated systems operate until you're almost halfway through the 
book. And the chapter on industrial policy - from which all 
technological blessings flow - is only three pages long. It's also at 
the back of the book instead of up front, where it belongs.
    But the issues are highlighted well by experts who are not only 
knowledgeable about AI but who are concerned about our lack of 
response to yet another challenge from Japan. The authors' depiction 
of the drivenness of the Japanese is especially poignant. It all boils
down to national survival.
    Japan no longer is in a position of apprenticeship to the West.
                       [garbled text omitted]
Can America mount an effective response to the Japanese challenge? ''The
Fifth Generation'' doesn't think so, and for compelling reasons. Give
it a read.

------------------------------

Date: Fri 26 Aug 83 15:40:16-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

Anybody who says the Japanese are *leading* in "one branch of computer
technology known as artificial intelligence" is out to lunch.  And by
what standards is DARPA describable as small?  And what is all this
BirdSong about other countries failing to "respond to the challenge"?
Hasn't this turkey read the Alvey report?  Hasn't he noticed France's
vigorous encouragement of their domestic computer industry?  Who in
America is not "convinced that AI is the way to go" (this was true of
the leadership in Britain until the Alvey report came out, I admit)
and what are they doing to hinder AI work?  Does he think 64k RAMs are
the only things that go into computers?  Does he, incidentally, know
that AI has had plenty of pioneers outside of the HPP?

More to the point, most of you know about the wildly over-optimistic
promises that were made in the 60's on behalf of AI, and what happened
in their wake.  Whipping up public hysteria is a dangerous game,
especially when neither John Q. Public nor Malcolm Forbes himself can
do very much about the 5GC project, except put pressure on the local
school board to teach the kids some math and science.
                                                        - Richard

------------------------------

End of AIList Digest
********************
31-Aug-83 14:18:14-PDT,20720;000000000001
Mail-From: LAWS created at 31-Aug-83 14:17:06
Date: Wednesday, August 31, 1983 2:12PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #52
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 52

Today's Topics:
  Bibliography - Vision
----------------------------------------------------------------------

Date: Tue, 30 Aug 83 15:26:12 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Vision Bibliography

I have two hundred references from DTIC and NTIS on vision.  The list 
is not complete by any means since I am looking at scene analysis and 
algorithms.  References are more or less from the last ten years with 
few 1982-83 items.  Shown are title, authors, AD number, and 
publication date.  Hope this helps some.  Mort.

[I have reformatted the entries and sorted them by author.  For files
of this size (about 20K characters), I find that it hassles the fewest
people if I just send it out instead of sending FTP instructions.
-- KIL]


GJ Agin, Representation and Description of Curved Objects, AD755139, 
Oct 72.

N Ahuja & A Rosenfeld & RM Haralick, Neighbor Gray Levels as Features 
in Pixel Classification, , 80.

N Ahuja, Mosaic Models for Image Analysis and Synthesis, ADA050100, 
Nov 77.

JO Amoss, A Syntax-Directed Method of Extracting Topological Regions 
from a Silhouette, ADA045944, Jul 77.

HC Andrews (Project Director), Image Understanding Research, 
ADA054091, Mar 78.

HC Andrews (Project Director), Image Understanding Research, 
ADA046214, Sep 77.

Anonymous, Annual Report 1980, N81-27841, Jan 81.

Anonymous, Automatic Scene Analysis, N81-12776, Nov 79.

Anonymous, Optical Array Processor, ADA118371, Jul 82.

K Arbter, Erkennung und Vermessung von Konturen mit Hilfe der 
Fouriertransformation, ADB061321, Sep 81.

A Baldwin & R Greenblatt & J Holloway & T Knight & D Moon & D Weinreb,
LISP Machine Progress Report, ADA062178, Aug 77.

DH Ballard, Parameter Networks: Towards a Theory of Low-Level Vision, 
ADA101216, Apr 81.

AG Barto & RS Sutton, Goal Seeking Components for Adaptive 
Intelligence: An Initial Assessment, ADA101476, Apr 81.

LS Baumann ed., Image Understanding, ADA052900, Apr 77.

LS Baumann ed., Image Understanding, ADA084764, Apr 80.

LS Baumann ed., Image Understanding, ADA098261, Apr 81.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA052902, 
May 78.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA064765, 
Nov 78.

BL Bean & WL Flowers & WM Gutman & AV Jeliek & RL Spellicy, Laser, IR 
and NMMW Propagation Measurements and Analyses, ADB055523L, Feb 80.

B Bhanu, Shape Matching and Image Segmentation Using Stochastic 
Labeling, ADA110033, Aug 81.

GA Biecker & DS Paden & JL Potter, Feature Tagging, ADA091691, Apr 80.

HD Block & NJ Nilsson & RW Duda, Determination and Detection of 
Features in Patterns, AD427840, Dec 63.

M Brady, Computational Approaches to Image Understanding, ADA108191, 
Oct 81.

A Broder & A Rosenfeld, Gradient Magnitude as an Aid in Color Pixel 
Classification, ADA091995, Jun 80.

RA Brooks, Symbolic Reasoning Among 3-D Models and 2-D Images, 
ADA110316, Jun 81.

J Bryant & LF Guseman Jr., Basic Research in Mathematical Pattern 
Recognition and Image Analysis, N81-23561, Jan 81.

BL Bullock, Unstructured Control and Communication Processes in Real 
World Scene Analysis, ADA049458, Oct 77.

GJ Burton, Contrast Discrimination by the Human Visual System, 
ADA104181, May 81.

B Carrigan, Pattern Recognition and Image Processing: Citations from 
NTIS Aug 77 - Jul 79, PB80814221, Aug 80.

R Cederberg, Chain-Link Coding and Segmentation for Raster Scan 
Devices, N79-17129, Nov 78.

A Celmins, A Manual for General Least Squares Model Fitting, 
ADB040229L, Jun 79.

I Chakravarty, A Generalized Line and Junction Labelling Scheme with 
Applications to Scene Analysis, PB278073, Dec 77.

I Chakravarty, A Survey of Current Techniques for Computer Vision, 
PB268385, Jan 77.

R Chellappa, On an Estimation Scheme for Gauss Markov Random Field 
Models, ADA102057, Apr 81.

CH Chen, A Comparative Evaluation of Statistical Image Segmentation 
Techniques, ADA094237, Jan 81.

CH Chen, Image Processing, ADA095552, Feb 81.

CH Chen, Research Progress on Image Segmentation, ADA101827, Jul 81.

CH Chen, Some New Results on Image Processing and Recognition, 
ADA055862, Jun 78.

PW Cheng, A Psychophysical Approach to Form Perception:  
Incompatibility as an Explanation of Integrality, ADA087607, Jul 80.

LS Coles & B Raphael & RO Duda & CA Rosen & TD Garvey & RA Yates & JH 
Munson, Application of Intelligent Automata to Reconnaissance, 
AD868871, Nov 69.

SA Cook & TP Harrington & H Toffer, Digital-Image Processing Improves 
Man-Machine Communication at a Nuclear Reactor, UNI-SA-98, Aug 82.

JL Crowley, A Representation for Visual Information, ADA121443, Nov 
81.

S Cushing and L Vaina, Further Progress in Knowledge Representation 
for Image Understanding, ADA098416, Mar 81.

DARPA, Proceedings: Image Understanding Workshop, ADA052901, Oct 77.

SM Dunn, Generalized Blomqvist Correlation, ADA102058, Apr 81.

CR Dyer, Memory-Augmented Cellular Automata for Image Analysis, 
ADA065328, Nov 78.

JO Eklundh, Studies of Some Algorithms for Digital Picture Processing,
N81-14656, 81.

J Fain & D Gorlin & F Hayes-Roth & S Rosenschein & H Sowizral & D 
Waterman, The ROSIE Language Reference Manual, ADA111025, Dec 81.

JJ Fasano & TS Huang, Feature Dimensionality Reduction Through Use of 
the Karhunen-Loève Transform in a Multisensor Pattern Recognition 
System, ADB057184, May 81.

CL Forgy, OPS5 User's Manual, ADA106558, Jul 81.

G Fowler & RM Haralick & FG Gray & C Feustel & C Grinstead, Efficient 
Graph Automorphism by Vertex Partitioning, , 83.

MS Fox, Reasoning with Incomplete Knowledge in a Resource-Limited 
Environment: Integrating Reasoning and Knowledge Acquisition, 
ADA102285, Mar 81.

H Freeman, Shape Description Via the Use of Critical Points, 
ADA040273, Jun 77.

BR Frieden, Image Processing, ADA095075, Feb 81.

DD Garber, Computational Models for Texture Analysis and Synthesis, 
ADA102470, May 81.

Geo-Centers, Inc., A Review of Three-Dimensional Vision for Robotics, 
ADA118055, May 82.

AP Ginsburg, Perceptual Capabilities, Ambiguities and Artifacts in Man
and Machine, ADA109864, 81.

RC Gonzalez, Evaluation of the Chitra Character Recognition System and
Development of Feature Extraction Algorithms, ADB059991L, May 80.

GD Hadden, A Cellular Automata Approach to Computer Vision and Image 
Processing, ADA096569, Sep 80.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118732, Jun 82.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118733, Jun 82.

EL Hall & RC Gonzalez, Multi-Sensor Scene Synthesis and Analysis, 
ADA110812, Sep 81.

EL Hall & W Frei & RY Wong, Scene Content Analysis Program - Phase II,
ADA045624, Jul 77.

RM Haralick & D Queeney, Understanding Engineering Drawings, , 82.

RM Haralick & GL Elliott, Increasing Tree Search Efficiency for 
Constraint Satisfaction Problems, , 80.

RM Haralick & LG Shapiro, Decomposition of Polygonal Shapes by 
Clustering, , .

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part I, , 
Apr 79.

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part II, , 
May 80.

RM Haralick & LT Watson, A Facet Model for Image Data, , 81.

RM Haralick & LT Watson & TJ Laffey, The Topographic Primal Sketch, , 
83.

RM Haralick, An Interpretation for Probabilistic Relaxation, , 83.

RM Haralick, Edge and Region Analysis for Digital Image Data, , 80.

RM Haralick, Ridges and Valleys on Digital Images, , 83.

RM Haralick, Scene Analysis, Homomorphism, and Consistent Labeling 
Problem Algorithms, ADA082058, Jan 80.

RM Haralick, Some Neighborhood Operators, , 81.

RM Haralick, Statistical and Structural Approaches to Texture, , May 
79.

RM Haralick, Structural Pattern Recognition, Arrangements and Theory 
of Covers, , .

RM Haralick, Using Perspective Transformations in Scene Analysis, , 
80.

F Hayes-Roth & D Gorlin & S Rosenschein & H Sowizral & D Waterman, 
Rationale and Motivation for ROSIE, ADA111018, Nov 81.

CA Hlavka & RM Haralick & SM Carlyle & R Yokoyama, The Discrimination 
of Winter Wheat Using a Growth-State Signature, , 80.

YC Ho and AK Agrawala, On Pattern Classification Algorithms - 
Introduction and Survey, AD667728, Mar 68.

JM Hollerbach, Hierarchical Shape Description of Objects by Selection 
and Modification of Prototypes, ADA024970, Nov 75.

BR Hunt, Automation of Image Processing, ADA111029, May 81.

NE Huston Jr., Shift and Scale Invariant Preprocessor, ADA114519, Dec 
81.

RA Jarvis, Computer Image Segmentation: First Partitions Using Shared 
Near Neighbor Clustering, PB277929, Dec 77.

RA Jarvis, Computer Image Segmentation: Structured Merge Strategies, 
PB277930, Dec 77.

HA Jenkinson, Image Processing Techniques for Automatic Target 
Detection, ADB055686L, Mar 81.

LN Kanal, Pattern Analysis & Modeling, ADA070961, Apr 79.

MD Kelly, Visual Identification of People by Computer, AD713252, Jul 
70.

CE Kim, On Cellular Straight Line Segments, ADA089511, Jul 80.

CE Kim, Three-Dimensional Digital Line Segments, ADA106813, Aug 81.

RL Kirby & A Rosenfeld, A Note on the Use of (Gray Level, Local 
Average Gray Level) Space as an Aid in Threshold Selection, ADA065695,
Jan 79.

L Kitchens & A Rosenfeld, Edge Evaluation Using Local Edge Coherence, 
ADA109564, Dec 80.

AH Klopf, Evolutionary Pattern Recognition Systems, AD637492, Nov 65.

WA Kornfeld, The Use of Parallelism to Implement a Heuristic Search, 
ADA099184, Mar 81.

E Kowler, Eye Movement and Visual Information Processing, ADA112399, 
Dec 81.

S Krusemark & RM Haralick, An Operating System Interface for 
Transportable Image Processing Software, , 83.

FP Kuhl & CR Giardina & OR Mitchell & DJ Charpentier, 
Three-Dimensional Object Recognition Using N-Dimensional Chain Codes, 
ADA119011, Mar 82.

R LaPado & C Reader & L Hubble, Image Processing Displays: A Report on
Commercially Available State-of-the-Art Features, ADA097226, Aug 78.

BA Lambird & D Lavine & LN Kanal, Interactive Knowledge-Based 
Cartographic Feature Extraction, ADB061479L, Oct 81.

BA Lambird & D Lavine & GC Stockman & KC Hayes & LN Kanal, Study of 
Digital Matching of Dissimilar Images, ADA102619, Nov 80.

M Lebowitz, Generalization and Memory in an Integrated Understanding 
System, ADA093083, Oct 80.

T Lozano-Perez, Spatial Planning: A Configuration Space Approach, 
ADA093934, Dec 80.

AV Luizov & NS Fedorova, Illumination and Visual Information, 
ADB056076L, Mar 81.

WI Lundgren, Scene Analysis, ADA115603, Dec 81.

D Marr and HK Nishihara, Representation and Recognition of the Spatial
Organization of Three Dimensional Shapes, ADA031882, Aug 76.

D Marr and S Ullman, Directional Selectivity and Its Use in Early 
Visual Processing, ADA078054, Jun 79.

D Marr, The Low-Level Symbolic Representation of Intensity Changes in 
an Image, ADA013669, Dec 74.

WN Martin and JK Aggarwal, Dynamic Scene Analysis: The Study of Moving
Images, ADA042124, Jan 77.

WN Martin and JK Aggarwal, Survey: Dynamic Scene Analysis, ADA060536, 
78.

J McCarthy & T Binford & C Green & D Luckham & Z Manna ed L Earnest, 
Recent Research in Artificial Intelligence and Foundations of 
Programming, ADA066562, Sep 78.

JL McClelland & DE Rumelhart, An Interactive Activation Model of the 
Effect of Context in Perception Part II, ADA090189, Jul 80.

C McCormick, Strategies for Knowledge-Based Image Interpretation, 
ADA115914, May 82.

KG Mehrotra, Some Observations in Pattern Recognition, ADA113382, Feb 
82.

DL Milgram & A Rosenfeld & T Willett & G Tisdale, Algorithms and 
Hardware Technology for Image Recognition, ADA057191, Mar 78.

DL Milgram & DJ Kahl, Recursive Region Extraction, ADA049591, Dec 77.

DL Milgram, Region Extraction Using Convergent Evidence, ADA061591, 
Jun 78.

M Minsky, K-Lines: A Theory of Memory, ADA078116, Jun 79.

OR Mitchell & FP Kuhl & TA Grogan & DJ Charpentier, A Shape Extraction
and Recognition System, , Mar 82.

CB Moler & GW Stewart, An Efficient Matrix Factorization for Digital 
Image Processing, LA-7637-MS, Jan 79.

MG Moran, Image Analysis, ADA066732, Mar 79.

JL Muerle, Project PARA: Perceiving and Recognition Automata, AD33137,
Dec 63.

GK Myers & RE Twogood, An Algorithm for Enhancing Low-Contrast Details
in Digital Images, UCID-18015, Nov 78.

NTIS, Pattern Recognition and Image Processing Aug 1980-Nov 1981, 
PB82803453, Jan 82.

PM Narendra & BL Westover, Advanced Pattern-Matching Techniques for 
Autonomous Acquisition, ADB059773L, Jan 81.

WP Nelson, Learning Game Evaluation Functions with a Compound Linear 
Machine, ADA085710, Mar 80.

NJ Nilsson & B Raphael & S Wahlstrom, Application of Intelligent 
Automata to Reconnaissance, AD841509, Jun 68.

NJ Nilsson & CA Rosen & B Raphael et al., Application of Intelligent 
Automata to Reconnaissance, AD849872, Feb 69.

NJ Nilsson, A Framework for Artificial Intelligence, ADA068188, Mar 
79.

S Nyberg, On Image Restoration and Noise Reduction with Respect to 
Subjective Criteria, N81-30847, 81.

JV Oldfield, A Special-Purpose Processor for an Automatic Feature 
Extraction System, ADA090789, Aug 80.

JS Ostrem & HD Crane, Automatic Handwriting Verification (AHV), 
ADA111329, Nov 81.

CC Parma & AR Hanson & EM Riseman, Experiments in Schema-Driven 
Interpretation of a Natural Scene, ADA085780, Apr 80.

WA Pearlman, A Visual System Model and a New Distortion Measure in the
Context of Image Processing, PB274534, Jul 77.

T Peli, An Algorithm for Recognition and Localization of Rotated and 
Scaled Objects, ADA102920, Jul 80.

M Pietikainen & A Rosenfeld, Edge-Based Texture Measures, ADA102060, 
May 81.

LJ Pinson & JP Lankford, Research on Image Enhancement Algorithms, 
ADA103216, May 81.

T Poggio & HK Nishihara & KRK Nielsen, Zero-Crossing and 
Spatiotemporal Interpolation in Vision: Aliasing and Electric Coupling
Between Sensors, ADA117608, May 82.

T Poggio, Marr's Approach to Vision, ADA104198, Aug 81.

JM Prager, Extracting and Labelling Boundary Segments in Natural 
Scenes (Revised and Updated), ADA060042, Sep 78.

RC Prather and LM Uhr, Discovery and Learning Techniques for Pattern 
Recognition, AD610725, Nov 64.

R Reddy and A Rosenfeld, Final Report on Workshop on Control 
Structures and Knowledge Representation for Image and Speech 
Understanding, ADA076563, Apr 79.

WC Rice & JS Shipman & RJ Spieler, Interactive Digital Image 
Processing Investigation Phase II, ADA087518, Apr 80.

W Richards & K Dismukes, Vision Research for Flight Simulation, 
ADA118721, Jul 82.

W Richards & KA Stevens, Efficient Computations and Representations of
Visual Surfaces, ADA089832, Dec 79.

CA Rosen and NJ Nilsson, Application of Intelligent Automata to 
Reconnaissance, AD820989, Sep 67.

S Rosenberg, Understanding in Incomplete Worlds, ADA062364, May 78.

A Rosenfeld & DL Milgram, Algorithms and Hardware Technology for Image
Recognition, ADA041906, Jul 77.

A Rosenfeld, Cellular Architectures for Pattern Recognition, 
ADA117049, Apr 82.

A Rosenfeld, Image Understanding Using Overlays, ADA086513, May 80.

A Rosenfeld, On Connectivity Properties of Grayscale Pictures, 
ADA108602, Sep 81.

A Rosenfeld, Pebble, Pushdown, and Parallel-Sequential Picture 
Acceptors, ADA051857, Feb 78.

JM Rubin & WA Richards, Color Vision and Image Intensities: When Are 
Changes Material?, ADA103926, May 81.

W Rutkowski, Shape Completion, ADA047682, Aug 77.

EC Seed & HJ Siegel, The Use of Database Techniques in the 
Implementation of a Syntactic Pattern Recognition Task on a Parallel 
Reconfigurable Machine, ADA113934, Dec 81.

S Seeman, FIPS Software for Fast Fourier Transform, Filtering and 
Image Rotation, N79-17594, Oct 78.

LG Shapiro & RM Haralick, A Spatial Data Structure, , 80.

LG Shapiro & RM Haralick, Organization of Relational Models for Scene 
Analysis, , Nov 82.

LG Shapiro & RM Haralick, Structural Descriptions and Inexact 
Matching, , Sep 81.

JE Shore & RM Gray, Minimum Cross-Entropy Pattern Classification and 
Cluster Analysis, ADA086158, Apr 80.

DW Small, Image Processing Program Completion Report, ADA061597, Aug 
78.

DA Smith, Using Enhanced Spherical Images for Object Representation, 
ADA078065, May 79.

DR Smith, On the Computational Complexity of Branch and Bound Search 
Strategies, ADA081608, Nov 79.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060849, Jun 78.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060850, Sep 78.

AJ Stenger & TA Zimmerlin & JP Thomas & M Braunstein, Advanced 
Computer Image Generation Techniques Exploiting Perceptual 
Characteristics, ADA103365, Aug 81.

KA Stevens, Surface Perception from Local Analysis of Texture and 
Contour, ADA084803, Feb 80.

GC Stockman & BA Lambird & D Lavine & LN Kanal, Knowledge-Based Image 
Analysis, ADA101319, Apr 81.

GC Stockman & SH Kopstein, The Use of Models in Image Analysis, 
ADA067166, Jan 79.

TM Strat, A Numerical Method for Shape-From-Shading from a Single 
Image, ADA063071, Jan 79.

LT Suminski Jr. & PH Hulin, Computer Generated Imagery (CGI) Current 
Technology and Cost Measures Feasibility Study, ADA091636, Sep 80.

P Szolovits & WA Martin, Brand X Manual, ADA093041, Nov 80.

J Taboada, Coherent Optical Methods for Applications in Robot Visual 
Sensing, ADA110107, 81.

JM Tenenbaum & MA Fischler & HC Wolf, A Scene Analysis Approach to 
Remote Sensing, N79-13438, Jun 78.

U Maryland, Algorithms and Hardware Technology for Image Recognition, 
ADA049590, Oct 77.

S Ullman, The Interpretation of Structure from Motion, ADA062814, Oct 
76.

SA Underwood et al., Visual Learning and Recognition by Computer, 
AD752238, Apr 72.

L Vaina & S Cushing, Foundation of a Knowledge Representation System 
for Image Understanding, ADA095992, Oct 80.

FMDA Vilnrotter, Structural Analysis of Natural Textures, ADA110032, 
Sep 81.

HF Walker, The Mean-Square Error Optimal Linear Discriminant Function 
and Its Application to Incomplete Data Vectors, N79-21827, Feb 79.

S Wang & AY Wu & A Rosenfeld, Image Approximation from Grayscale 
"Medial Axes", ADA091993, May 80.

S Wang & DB Elliott & JB Campbell & RW Erich & RM Haralick, Spatial 
Reasoning in Remotely Sensed Data, , Jan 83.

LT Watson & RM Haralick & OA Zuniga, Constrained Transform Coding and 
Surface Fitting, , May 83.

OA Wehmanen, Pure Pixel Classification Software, N81-11689, Jul 80.

D Weinreb & D Moon, Flavors: Message Passing in the LISP Machine, 
ADA095523, Nov 80.

R Weyhrauch, Prolegomena to a Theory of Formal Reasoning, ADA065698, 
Dec 78.

TD Williams, Computer Interpretation of a Dynamic Image from a Moving 
Vehicle, ADA107565, May 81.

PH Winston & RH Brown editors, Progress in Artificial Intelligence 
1978 Volume 1, ADA068838, 79.

PH Winston & RH Brown eds., Progress in Artificial Intelligence 1978 
Volume 2, ADA068839, 79.

JW Woods, Markov Image Modeling, ADA066078, Oct 78.

AY Wu & T Hong & A Rosenfeld, Threshold Selection Using Quadtrees, 
ADA090245, Mar 80.

VA Yakubovich, Machines That Can Learn to Recognize Patterns, 
AD618643, 63.

JK Yan & DJ Sakrison, Encoding of Images Based on a Two-Component 
Source Model, ADA051033, Nov 77.

Y Yasuoka & RM Haralick, Peak Noise Removal by a Facet Model, , 83.

C Yen, An Image Processing Software Package, ADA101072, Jun 81.

C Yen, On the Use of Fisher's Linear Discriminant for Image 
Segmentation, ADA091591, Nov 80.

R Yokoyama & RM Haralick, Texture Pattern Image Generation by Regular 
Markov Chain, , 79.

LA Zadeh, Theory of Fuzziness and Its Application to Information 
Processing and Decision-Making, ADA064598, Oct 76.

AL Zobrist and WB Thompson, Building a Distance Function for Gestalt 
Grouping, ADA015435, 75.

------------------------------

End of AIList Digest
********************
