From csnet_gateway Tue Sep 16 06:45:13 1986
Date: Tue, 16 Sep 86 06:45:08 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #180
Status: R


AIList Digest            Tuesday, 16 Sep 1986     Volume 4 : Issue 180

Today's Topics:
  Administrivia - Resumption of Service,
  AI Tools - C,
  Expert Systems - Matching,
  Philosophy - Argumentation Style & Sports Analogy,
  Physiology - Rate of Tissue Replacement

----------------------------------------------------------------------

Date: Tue 16 Sep 86 01:28:28-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Resumption of Service

I'm back from vacation and almost have the mail streams under control
again.  This issue clears out "old business" messages relating to the
discussions in early August.  I'll follow with digests flushing the
accumulated queries, Usenet replies, news items, conference and
seminar abstracts, and bibliographic citations -- spread out a bit so
that I'm not deluged with mailer bounce messages from readers who have
dropped without notification.  Incidentally, about 30 people signed up
for direct distribution this month despite the inactivity of the list.
(Most of the additions for the last year have been on BITNET, often in
clusters as new universities join the net or become aware of the
Arpanet digests.  Most Arpanet and CSNet sites are now using bboards
and redistribution lists or are making use of the Usenet mod.ai/net.ai
distribution.)

I plan to pass along only an abbreviated announcement for conferences
that have already been announced in the NL-KR, IRList, or Prolog lists
-- you can contact the message author if you need the full text.
(Note that this may reduce the yield of keyword searches through the
AIList archive; future historians will have to search the other lists
to get a full picture of AI activity.  Anyone building an intelligent
mail-screening system should also incorporate cross-list linkages.
Any such screening system that can understand and coordinate these
message streams deserves a Turing award.)

                                        -- Ken Laws

------------------------------

Date: Wed, 20 Aug 86 10:07:49 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: Reimplementing in C

> I've been hearing and seeing something for the past couple years,
> something that seems to be becoming a folk theorem.  The theorem goes
> like this:
>       Many expert systems are being reimplemented in C.
> I'm curious what the facts are.

  [I program in C, and have reached the conclusion that most AI
  programming could be done in that language as easily as in LISP
  if libraries of list-oriented subroutines were available.  (They
  needn't be consed lists -- I use dynamically allocated arrays.)
  You do have to worry about storage deallocation, but that buys you
  considerable run-time efficiency.  You also lose the powerful
  LISP debugging environment, so fill your code with lots of
  argument checks and ASSERTs.  Tail recursion isn't optimized,
  so C code should use iteration rather than recursion for most
  array-based list traversals.  Data-driven and object-oriented
  coding are easy enough, but you can't easily build run-time
  "active objects" (i.e., procedures to be applied to message
  arguments); compiled subroutines have to do the work, and dynamic
  linking is not generally worth the effort.  I haven't tried much
  parsing or hierarchy traversal, but programs such as LEX, YACC,
  and MAKE show that it can be done.  -- KIL]
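
For concreteness, the array-based list idea in the note above might be sketched as below.  All of the names (dlist, dlist_push, and so on) are invented for illustration, not taken from any existing library, and a real system would want fancier error recovery than bare ASSERTs:

```c
/* A minimal sketch of the list-as-dynamic-array idea: grow-on-demand
   storage plus explicit deallocation in place of a LISP GC. */
#include <assert.h>
#include <stdlib.h>

typedef struct {
    void **item;   /* the elements, as generic pointers */
    int    len;    /* number of elements in use */
    int    cap;    /* allocated slots */
} dlist;

dlist *dlist_new(void)
{
    dlist *l = malloc(sizeof *l);
    assert(l != NULL);                 /* lots of ASSERTs, as suggested */
    l->len = 0;
    l->cap = 8;
    l->item = malloc(l->cap * sizeof *l->item);
    assert(l->item != NULL);
    return l;
}

void dlist_push(dlist *l, void *p)
{
    if (l->len == l->cap) {            /* grow by doubling */
        l->cap *= 2;
        l->item = realloc(l->item, l->cap * sizeof *l->item);
        assert(l->item != NULL);
    }
    l->item[l->len++] = p;
}

void dlist_free(dlist *l)              /* the price of skipping the GC:
                                          you must free storage yourself */
{
    free(l->item);
    free(l);
}
```

Traversal is then a plain loop, for (i = 0; i < l->len; i++) ..., which sidesteps the tail-recursion problem mentioned in the note.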


Well, now, I don't know about re-implementing in C, but I myself
have been doing a fair amount of what might be called "expert
systems" programming in C, and pretty much out of necessity.

This is because I've been working in the up-and-coming world
of networks and "intelligent" communication devices.  These
show much promise for the future; unfortunately they also
add a very "interesting" aspect to the job of an application
(much less a system) programmer.

The basic problem is that such comm devices act like black
boxes with a very large number of internal states; the states
aren't completely documented; those that are documented are
invariably misunderstood by anyone but the people who built
the boxes; and worst of all, there is usually no reliable
way to get the box into a known initial state.

As a result, there is usually no way to write a simple,
straightforward routine to deal with such gadgets.  Rather,
you are forced to write code that tries to determine 1)
what states a given box can have; 2) what state it appears
to be in now; and 3) what sort of command will get it from
state X to state Y.  The debugging process involves noting
unusual responses of the box to a command, discussing the
"new" behavior with the experts (the designers if they are
available, or others with experience with the box), and
adding new cases to your code to handle the behavior when
it shows up again.

One of the simplest examples is an "intelligent ACU", which
we used to call a "dial-out modem".  These now contain their
own processor, plus enough ROM and RAM to amount
to small computer systems of their own.  Where such boxes
used to have little more than a status line to indicate the
state of a line (connected/disconnected), they now have an
impressive repertoire of commands, with a truly astonishing
list of responses, most of which you hope never to see.  But
your code will indeed see them.  When your code first talks
to the ACU, the responses may include any of:
        1. Nothing at all.
        2. Echo of the prompt.
        3. Command prompt (different for each ACU).
        4. Diagnostic (any of a large set).
Or the ACU may have been in a "connected" state, in which
case your message will be transmitted down the line, to be
interpreted by whatever the ACU was connected to by the most
recent user.  (This recursive case is really fun!:-)
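
A first cut at telling those four cases apart might look like the C fragment below.  The names and the prompt heuristic are hypothetical; in practice each box type needs its own table of prompts and diagnostics:

```c
/* Classify a first-contact reply from an unknown ACU into the four
   categories listed above.  The '>' / ':' prompt heuristic is purely
   illustrative. */
#include <string.h>

enum acu_reply { R_SILENCE, R_ECHO, R_PROMPT, R_DIAGNOSTIC };

enum acu_reply classify(const char *sent, const char *got)
{
    if (got == NULL || got[0] == '\0')
        return R_SILENCE;                 /* 1. nothing at all */
    if (strcmp(got, sent) == 0)
        return R_ECHO;                    /* 2. echo of the prompt */
    if (strchr(got, '>') || strchr(got, ':'))
        return R_PROMPT;                  /* 3. some command prompt */
    return R_DIAGNOSTIC;                  /* 4. assume a diagnostic */
}
```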

The last point is crucial:  In many cases, you don't know
who is responding to your message.  You are dealing with
chains of boxes, each of which may respond to your message
and/or pass it on to the next box.  Each box has a different
behaviour repertoire, and even worse, each has a different
syntax.  Furthermore, at any time, for whatever reason
(such as power glitches or commands from other sources),
any box may reset its internal state to any other state.
You can be talking to the 3rd box in a chain, and suddenly
the 2nd breaks in and responds to a message not intended
for it.

The best way of handling such complexity is via an explicit
state table that says what was last sent down the line, what
the response was, what sort of box we seem to be talking to,
and what its internal state seems to be.  The code to use such
info to elicit a desired behavior rapidly develops into a real
piece of "expert-systems" code.
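
As a concrete (and entirely hypothetical) sketch of such a table in C: one record summarizing what was last sent, what came back, and the current best guess about the box, plus a single sample rule choosing the next probe.  A real system accumulates many such rules as new behaviors are observed:

```c
/* Explicit state model for one comm line, as described above.
   All names, box kinds, and probe strings are invented examples. */
#include <string.h>

enum box_kind  { BOX_UNKNOWN, BOX_ACU, BOX_MUX, BOX_HOST };
enum box_state { ST_UNKNOWN, ST_IDLE, ST_CONNECTED, ST_ERROR };

struct line_model {
    char           last_sent[64];   /* last command down the line */
    char           last_reply[64];  /* what the line answered */
    enum box_kind  kind;            /* what we think we're talking to */
    enum box_state state;           /* what state it seems to be in */
};

/* One "expert rule": given the current model, pick the next probe. */
const char *next_probe(const struct line_model *m)
{
    if (m->kind == BOX_UNKNOWN)
        return "\r";                /* a bare CR often elicits a prompt */
    if (m->state == ST_CONNECTED)
        return "+++";               /* try to regain command attention */
    return "AT\r";                  /* otherwise, a generic wake-up */
}
```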

So far, there's no real need for C; this is all well within the
powers of Lisp or Smalltalk or Prolog.  So why C?  Well, when
you're writing comm code, you have one extra goodie.  It's very
important that you have precise control over every bit of every
character.  The higher-level languages always seem to want to
"help" by tokenizing the input and putting the output into some
sort of standard format.  This is unacceptable.

For instance, the messages transmitted often don't have any
well-defined terminators.  Or, rather, each box has its own
terminator(s), but you don't know beforehand which box will
respond to a given message.  They often require nulls.  It's
often very important whether you use CR or LF (or both, in
a particular order).  And you have to timeout various inputs,
else your code just hangs forever.  Such things are very awkward,
if not impossible, to express in the typical AI languages.
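
For the timeout problem in particular, here is one way it comes out in C on a 4.2BSD-style system, using select() to bound a raw read.  The function name timed_read is mine, not a system call:

```c
/* Read up to `len` raw bytes from descriptor `fd`, giving up after
   `secs` seconds so a dead box can't hang us.  Returns the byte
   count, 0 on timeout, -1 on error. */
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

int timed_read(int fd, char *buf, int len, int secs)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec  = secs;
    tv.tv_usec = 0;

    if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0;                 /* timeout (or error): no bytes */
    return read(fd, buf, len);
}
```

On the output side, write(fd, "\r", 1) sends exactly one CR, with no helpful translation to CRLF or tokenizing in between.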

This isn't to say that C is the world's best AI language; quite
the contrary.  I'd love to get a chance to work on a better one.
(Hint, hint....)  But given the languages available, it seems
to be the best of a bad lot, so I use it.

If you think doing it in C is weird, just wait 'til
you see it in Ada....

------------------------------

Date: 2 Sep 86 08:31:00 EST
From: "CLSTR1::BECK" <beck@clstr1.decnet>
Reply-to: "CLSTR1::BECK" <beck@clstr1.decnet>
Subject: matching

Mr. Rosa is correct in saying that "the obstacles to implementation are not
technological," since this procedure is currently being implemented.  See
"Matches Hit Civil Servants Hardest" in the August 15, 1986, GOVERNMENT COMPUTER
NEWS.  "Computer matching" is defined there as "searching the available data for
addresses, financial information, specific personal identifiers and various
irregularities."  The Congressional Office of Technology Assessment has recently
issued a report, "Electronic Record Systems and Individual Privacy," that
discusses matching.

My concern with this is how the conflicting rules of society will be reconciled
to treat the individual fairly.  Maybe the cash society and anonymous logins
will become prevalent.  Do you think that the falling cost of data will force
data keepers to do more searches to justify their existence?  Has there been
any discussion of this topic?

peter beck     <beck@ardec-lcss.arpa>

------------------------------

Date: Tue, 12 Aug 86 13:20:20 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: philosophy articles

> >Out of curiosity I hunted up [Jackson, "What Mary Didn't Know," _J.
> >of Philosophy_ 83(1986) 291-295] on the way back from lunch.
> >It's aggressive and condescending; any sympathy I might have felt for
> >the author's argument was repulsed by his sophomoric writing.  I hope it's
> >not typical of the writing in philosophy journals.
>
> I don't quite understand what "aggressive and condescending" or
> "sophomoric writing" have to do with philosophical argumentation.
> One thing that philosophers try not to do is give ad hominem arguments.
> A philosophical arguement stands or falls on its logical merits, not its
> rhetoric.

That's an automatic reaction, and I think it's unsound.  Since we're
not in net.philosophy, I'll be brief.

Philosophers argue about logic, terminology, and their experience of
reality.  There isn't really much to argue about where logic is
concerned:  we all know the principles of formal logic, and we're
all writing sincerely about reality, which has no contradictions in
itself.  What we're really interested in is the nature of our
existence; the logic of how we describe it doesn't matter.

One reason that Jackson's article irritated me is that he uses formal
logic, of the sort "Either A or B, but not A, therefore B." This kind
of argument insults the reader's intelligence.  Jackson ought to know
that nobody is going to question the soundness of such logic, but that
all his opponents will question his premises and his definitions.
Moreover, he appears to regard his premises and definitions as
unassailable.
I call that stupid philosophizing.

Ad-hominem attacks may well help to discover the truth.  When the man
with jaundice announces that everything is fundamentally yellow, you
must attack the man, not the logic.  So long as he's got the disease,
he's right!

------------------------------

Date: Tue, 12 Aug 86 12:53:07 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: talk to the medium (from Risks Digest)

> Whether he was talking about the broadcast or the computer industry, he
> got the analogy wrong.

Of course--that's what makes the analogy "stick."

> If the subject is broadcasting, the sports analogy to a "programmer"
> is the guy that makes the play schedules.

Not exactly.  McLuhan's "programmer" is the man who selects the content
of the medium, not what computer people call a programmer.

>                                       ... But still, in computing,
> a programmer bears at least partial responsibility for the computer's
> (mis)behaviour.

I agree.  McLuhan is writing not about responsibility but about responsiveness.
Last Saturday I visited an apartment where a group of men and kids were
shouting at the TV set during a football game.  It's a natural response,
and it would have been effective if TV were an interactive medium.

If you dislike this posting, will you complain to the moderator?  To
the people who programmed netnews?  To the editor of the New York
_Times?_ Of course not; you must talk to the medium, not to the
programmer!

------------------------------

Date: Wed, 20 Aug 86 10:06:56 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: "Proper" Study of Science, Conservation of Info


  [The following hasn't any obvious AI, but it's interesting enough
  to pass along.  Commonsense reasoning at work.  -- KIL]


> The ability to quantify and measure ... has profound implications ...
>
>               ...  A decade from now it's likely that none of our bodies
> will contain EVEN A SINGLE ATOM now in them.  Even bones are fluid in
> biological organisms; ...

OK, let's do some BOTE (Back Of The Envelope) calculations.
According to several bio and med texts I've read over the
years, a good estimate of the half-life residency of an atom
in the soft portions of a mammal's body is 1/2 year; in the
bones it is around 2 years.  The qualifications are quite
obvious and irrelevant here; we are going for order-of-magnitude
figures.

For those not familiar with the term, "half-life residency"
means the time to replace half the original atoms.  This
doesn't mean that you replace half your soft tissues in
6 months, and the other half in the next six months.  What
happens is exponential:  in one year, 1/4 of the original
are left; in 18 months, 1/8 are left, and so on.

Ten years is about 5 half-lives for the bones, and 20 for the
soft tissues.  A human body masses about 50 Kg, give or take
a factor of 2.  The soft tissues are primarily water (75%)
and COH2; we can treat it all as water for estimating the
number of atoms.  This is about (50 Kg) * (1000 g/Kg) / (18
g/mole) = 3000 moles, times 6*10^23 gives us about 2*10^27
molecules; counting the 3 atoms per water molecule would only
raise this by a factor lost in the noise here.  The bones are
a bit denser (with fewer atoms per gram); the rest is a bit
less dense (with more atoms per gram), but it's about right.
For order-of-magnitude estimates, we would have roughly 10^27
atoms in each kind of tissue.

In 5 half-lives, we would divide the bones' 10^27 by 2^5 = 32
to get the number of original atoms, giving us about 3*10^25
atoms of the bones left.  For the soft tissues, we divide by
2^20, which is about 10^6, giving us roughly 10^21 of the
original atoms.

Of course, although these are big numbers, they don't amount to
much mass, especially for the soft tissues.  But they are a lot
more than a single atom, even if they are off by an order of
magnitude.

Does anyone see any serious errors in these calculations?  Remember
that these are order-of-magnitude estimates; quibbling with anything
other than the first significant digit and the exponent is beside
the point.  The only likely source of error is in the half-life
estimate, but the replacement would have to be much faster than a
half-year to stand a chance of eliminating every atom in a year.

In fact, with the exponential decay at work here, it is easy
to see that it would take about 90 half-lives (2*10^27 is about
2^91) to replace the last atom with better than 50% probability.
For 10 years, this would mean a half-life residency of about
6 weeks, which may be true for a mouse or a sparrow, but I've
never seen any hint that human bodies might replace themselves
nearly this fast.

In fact, we can get a good upper bound on how fast our atoms
could be replaced, as well as a good cross-check on the above
rough calculations, by considering how much we eat.  A normal
human diet is roughly a single Kg of food a day.  (The air
breathed isn't relevant; very little of the oxygen ends up
incorporated into tissues.) In 6 weeks, this would add up to
about 50 Kg.  So it would require using very nearly all the
atoms in our food as replacement atoms to do the job required.
This is clearly not feasible; it is almost exactly the upper
bound, and the actual figure has to be lower.  A factor of 4
lower would give us the above estimate for the soft tissues,
which seems feasible.

There's one more qualification, but it works in the other
direction.  The above calculations are based on the assumption
that incoming atoms are all 'new'.  For people in most urban
settings, this is close enough to be treated as true.  But
consider someone whose sewage goes into a septic tank and
whose garbage goes into a compost pile, and whose diet is
based on produce of their garden, hen-house, etc.  The diet
of such people will contain many atoms that have been part
of their bodies in previous cycles, especially the C and N
atoms, but also many of the O and H atoms.  Such people could
retain a significantly larger fraction of original atoms
after a decade.

Please don't take this as a personal attack.  I just couldn't
resist the combination of the quoted lines, which seemed to
be a clear invitation to do some numeric calculations.  In
fact, if someone has figures good to more places, I'd like
to see them.

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Sep 19 19:08:39 1986
Date: Fri, 19 Sep 86 19:08:30 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #181
Status: R


AIList Digest           Wednesday, 17 Sep 1986    Volume 4 : Issue 181

Today's Topics:
  Conferences - ACM Office Information Systems &
    IEEE Symposium on Logic Programming '86

----------------------------------------------------------------------

Date: Mon, 15 Sep 86 23:14:51 edt
From: rba@petrus.bellcore.com (Robert B. Allen)
Subject: Conference on Office Information Systems - Brown U.


          ACM CONFERENCE ON OFFICE INFORMATION SYSTEMS
               October 6-8, 1986, Providence, R.I.

Conference Chair:        Carl Hewitt, MIT
Program Chair:           Stan Zdonik, Brown University
Keynote Speaker:         J.C.R. Licklider, MIT
Distinguished Lecturer:  A. van Dam, Brown University

COIS is a major research conference on the design and use of
computing systems for professional and knowledge workers.

At this meeting, sessions and panels emphasize AI and
organizational models of offices as sites for distributed information
processing.  Other themes include user interfaces,
graphics, group cooperation, and object-oriented systems.

For more information, call the Conference Registrar at Brown U.
(401-813-1839), or send electronic mail to mhf@brown.CSNET.

------------------------------

Date: Tue, 9 Sep 86 23:50:34 MDT
From: keller@utah-cs.ARPA (Bob Keller)
Subject: Conference - SLP '86

We have requested, and the IEEE has agreed, that
Symposium registrations be accepted at the "early" fee for a
couple of more days, so please act immediately
if you wish to exploit this rate.

  [Sorry for the delay -- AIList doesn't always function in
  real time.  -- KIL]


Hotel Reservations: phone 801-531-1000, telex 389434

The (nearly) final schedule:
                             SLP '86

                     Third IEEE Symposium on

                        LOGIC PROGRAMMING

                      September 21-25, 1986
                        Westin Hotel Utah
                      Salt Lake City, Utah

SUNDAY, September 21

19:00 - 22:00   Symposium and tutorial registration


MONDAY, September 22

08:00 - 09:00   Symposium and tutorial registration

09:00 - 17:30 TUTORIALS (concurrent) Please see abstracts later.

George Luger            Introduction to AI Programming in Prolog
University of New Mexico

David Scott Warren              Building Prolog Interpreters
SUNY, Stony Brook

John Conery           Theory of Parallelism, with Applications to
University of Oregon                   Logic Programming


12:00 - 17:30   Exhibit set up time

18:00 - 22:00   Symposium registration

20:00 - 22:00   Reception


TUESDAY, September 23

08:00 - 12:30   Symposium registration

09:00           Exhibits open

09:00 - 09:30   Welcome and announcements

09:30 - 10:30   INVITED SPEAKER:
                        W. W. Bledsoe, MCC
                 Some Thoughts on Proof Discovery


11:00 - 12:30   SESSION 1: Applications
                           (Chair: Harvey Abramson)

The Logic of Tensed Statements in English -
an Application of Logic Programming
Peter Ohrstrom, University of Aalborg
Nils Klarlund, University of Aarhus

Incremental Flavor-Mixing of Meta-Interpreters for
Expert System Construction
Leon Sterling and Randall D. Beer
Case Western Reserve University

The Phoning Philosopher's Problem or
Logic Programming for Telecommunications Applications
J.L. Armstrong, N.A. Elshiewy, and R. Virding
Ericsson Telecom


14:00 - 15:30   SESSION 2: Secondary Storage
                           (Chair: Maurice Bruynooghe)

EDUCE - A Marriage of Convenience:
Prolog and a Relational DBMS
Jorge Bocca, ECRC, Munich

Paging Strategy for Prolog Based Dynamic Virtual Memory
Mark Ross, Royal Melbourne Institute of Technology
K. Ramamohanarao, University of Melbourne

A Logical Treatment of Secondary Storage
Anthony J. Kusalik, University of Saskatchewan
Ian T. Foster, Imperial College, London


16:00 - 17:30   SESSION 3: Compilation
                           (Chair: Richard O'Keefe)

Compiling Control
Maurice Bruynooghe, Danny De Schreye, Bruno Krekels
Katholieke Universiteit Leuven

Automatic Mode Inference for Prolog Programs
Saumya K. Debray, David S. Warren
SUNY at Stony Brook

IDEAL: an Ideal DEductive Applicative Language
Pier Giorgio Bosco, Elio Giovannetti
C.S.E.L.T., Torino

17:30 - 19:30   Reception

20:30 - 22:30   Panel (Wm. Kornfeld, moderator)
                Logic Programming for Systems Programming
                Panelists:  Steve Taylor, Weizmann Institute
                            Steve Gregory, Imperial College
                            Bill Wadge
                            A researcher from ICOT
                            (sorry this is incomplete)

WEDNESDAY, September 24

09:00 - 10:00   INVITED SPEAKER:
                        Sten Ake Tarnlund, Uppsala University
                          Logic Programming - A Logical View


10:30 - 12:00   SESSION 4: Theory
                           (Chair: Jean-Louis Lassez)

A Theory of Modules for Logic Programming
Dale Miller
University of Pennsylvania

Building-In Classical Equality into Prolog
P. Hoddinott, E.W. Elcock
The University of Western Ontario

Negation as Failure Using Tight Derivations
for General Logic Programs
Allen Van Gelder
Stanford University


13:30 - 15:00   SESSION 5: Control
                           (Chair: Jacques Cohen)

Characterisation of Terminating Logic Programs
Thomas Vasak, The University of New South Wales
John Potter, New South Wales Institute of Technology

An Execution Model for Committed-Choice
Non-Deterministic Languages
Jim Crammond
Heriot-Watt University

Timestamped Term Representation in Implementing Prolog
Heikki Mannila, Esko Ukkonen
University of Helsinki


15:30 - 22:00   Excursion


THURSDAY, September 25


09:00 - 10:30   SESSION 6: Unification
                           (Chair: Uday Reddy)

Refutation Methods for Horn Clauses with Equality
Based on E-Unification
Jean H. Gallier and Stan Raatz
University of Pennsylvania

An Algorithm for Unification in Equational Theories
Alberto Martelli, Gianfranco Rossi
Universita' di Torino

An Implementation of Narrowing: the RITE Way
Alan Josephson and Nachum Dershowitz
University of Illinois at Urbana-Champaign


11:00 - 12:30   SESSION 7: Parallelism
                           (Chair: Jim Crammond)

Selecting the Backtrack Literal in the
AND Process of the AND/OR Process Model
Nam S. Woo and Kwang-Moo Choe
AT & T Bell Laboratories

Distributed Semi-Intelligent Backtracking for a
Stack-based AND-parallel Prolog
Peter Borgwardt, Tektronix Labs
Doris Rea, University of Minnesota

The Sync Model for Parallel Execution of Logic Programming
Pey-yun Peggy Li and Alain J. Martin
California Institute of Technology


14:00 - 15:30   SESSION 8: Performance

Redundancy in Function-Free Recursive Rules
Jeff Naughton
Stanford University

Performance Evaluation of a Storage Model for
OR-Parallel Execution
Andrzej Ciepelewski and Bogumil Hausman
Swedish Institute of Computer Science (SICS)

MALI: A Memory with a Real-Time Garbage Collector
for Implementing Logic Programming Languages
Yves Bekkers, Bernard Canet, Olivier Ridoux, Lucien Ungaro
IRISA/INRIA Rennes


16:00 - 17:30   SESSION 9: Warren Abstract Machine
                           (Chair: Manuel Hermenegildo)

A High Performance LOW RISC Machine
for Logic Programming
J.W. Mills
Arizona State University

Register Allocation in a Prolog Machine
Saumya K. Debray
SUNY at Stony Brook

Garbage Cut for Garbage Collection of Iterative Programs
Jonas Barklund and Hakan Millroth
Uppsala University


EXHIBITS:

An exhibit  area  including  displays  by  publishers,  equipment
manufacturers, and software houses will accompany the  Symposium.
The list of exhibitors includes: Arity, Addison-Wesley, Elsevier,
Expert Systems, Logicware, Overbeek Enterprises, Prolog  Systems,
and Quintus.  For more information, please contact:

                Dr. Ross A. Overbeek
                Mathematics and Computer Science Division
                Argonne National Laboratory
                9700 South Cass Ave.
                Argonne, IL 60439
                312/972-7856


ACCOMMODATIONS:

The Westin Hotel Utah is a gracious turn-of-the-century  hotel
with Mobil  4-Star and  AAA 5-Star  ratings.  The  Temple  Square
Hotel, located one  city block  away, offers  basic comforts  for
budget-conscious attendees.


MEALS AND SOCIAL EVENTS:

Symposium registrants  (excluding students  and retired  members)
will receive tickets  for lunches  on September 23,  24, and  25,
receptions on September 22 and 23, and an excursion the afternoon
of September 24.  The excursion will comprise a steam train  trip
through scenic  Provo  Canyon,  and a  barbeque  at  Deer  Valley
Resort, Park City, Utah.

Tutorial registrants will receive lunch tickets for September 22.


TRAVEL:

The Official  Carrier for  SLP '86  is United  Airlines, and  the
Official Travel Agent is Morris Travel (361 West Lawndale  Drive,
Salt Lake  City,  Utah  84115,  phone  1-800-621-3535).   Special
airfares are  available to  SLP  '86 attendees.   Contact  Morris
Travel for details.

A courtesy limousine  is available from  Salt Lake  International
Airport to both  symposium hotels, running  every half hour  from
6:30 to 23:00.  The taxi fare is approximately $10.

CLIMATE:

Salt Lake City generally has warm weather in September,  although
evenings may be cool.   A warm jacket should  be brought for  the
excursion.  Some rain is normal this time of year.


SLP '86 Symposium and Tutorial Registration Coupon:

Advance symposium and  tutorial registration  is available  until
September 1, 1986.  No refunds will be made after that date. Send
a check or money order (no currency will be accepted) payable  to
"Third IEEE Symposium on Logic Programming" to:

        Third IEEE Symposium on Logic Programming
        IEEE Computer Society
        1730 Massachusetts Avenue, N.W.
        Washington, D.C. 20036-1903

[...]

Symposium Registration:         Advance On-Site

IEEE Computer Society members   $185    $215
Non-members                     $230    $270
Full-time student members       $ 50    $ 50
Full-time student non-members   $ 65    $ 65
Retired members                 $ 50    $ 50

Tutorial Registration:
        ("Luger", "Warren", or "Ostlund")

                                Advance On-Site

IEEE Computer Society members   $140    $170
Non-members                     $175    $215


SLP '86 Hotel Reservation:

        Mail or Call:   phone 801-531-1000, telex 389434

                                Westin Hotel Utah
                                Main and South Temple Streets
                                Salt Lake City, UT 84111

A deposit  of  one  night's  room or  credit  card  guarantee  is
required for arrivals after 6pm.

Room Rates:
                Westin Hotel Utah       Temple Square Hotel

single room             $60             $30
double room             $70             $36

Reservations must be made mentioning  SLP '86 by August 31,  1986
to guarantee these special rates.


                   SLP '86 TUTORIAL ABSTRACTS



       IMPLEMENTATION OF PROLOG INTERPRETERS AND COMPILERS

                       DAVID SCOTT WARREN

                       SUNY AT STONY BROOK

Prolog is  by far  the  most used  of various  logic  programming
languages that have been  proposed.  The reason  for this is  the
existence of very efficient implementations.  This tutorial  will
show in detail how this efficiency is achieved.

The first  half  of  this tutorial  will  concentrate  on  Prolog
compilation.  The approach  is first to  define a Prolog  Virtual
Machine (PVM), which can  be implemented in software,  microcode,
hardware, or  by  translation  to the  language  of  an  existing
machine.  We will describe  in detail the  PVM defined by  D.H.D.
Warren (SRI Technical Note 309) and discuss how its data  objects
can be represented  efficiently.  We  will also  cover issues  of
compilation  of  Prolog  source   programs  into  efficient   PVM
programs.



               ARTIFICIAL INTELLIGENCE AND PROLOG:
                 AN INTRODUCTION TO THEORETICAL
                ISSUES IN AI WITH PROLOG EXAMPLES

                         GEORGE F. LUGER

                    UNIVERSITY OF NEW MEXICO

This tutorial is intended to introduce the important concepts  of
both  Artificial   Intelligence   and  Logic   Programming.    To
accomplish this  task,  the  theoretical issues  involved  in  AI
problem solving are  presented and discussed.   These issues  are
exemplified with programs  written in Prolog  that implement  the
core ideas.   Finally,  the design  of  a Prolog  interpreter  as
Resolution Refutation system is presented.

The main  ideas  from  AI  problem  solving  that  are  presented
include: 1) An introduction of  AI as representation and  search.
2)  An  introduction  of  the  Predicate  Calculus  as  the  main
representation formalism for Artificial Intelligence.  3)  Simple
examples  of  Predicate  Calculus  representations,  including  a
relational data  base.   4)  Unification and  its  role  both  in
Predicate  Calculus  and  Prolog.   5)  Recursion,  the   control
mechanism for searching trees and graphs, 6) The design of search
strategies, especially depth first, breadth first and best  first
or "heuristic" techniques, and 7)  The Production System and  its
use both for organizing search in a Prolog data base, as well  as
the basic data structure for "rule based" Expert Systems.

The  above  topics  are  presented  with  simple  Prolog  program
implementations,  including   a   Production  System   code   for
demonstrating search strategies.  The final topic presented is an
analysis of  the  Prolog  interpreter and  an  analysis  of  this
approach  to  the  more  general  issue  of  logic   programming.
Resolution is considered as an inference strategy and its use  in
a refutation system for  "answer extraction" is presented.   More
general issues in  AI problem  solving, such as  the relation  of
"logic" to "functional" programming are also discussed.



                PARALLELISM IN LOGIC PROGRAMMING

                          JOHN CONERY
                     UNIVERSITY OF OREGON

The fields of parallel processing and logic programming have
independently attracted great interest among computing
professionals recently, and there is currently considerable
activity at their interface, i.e., in applying the concepts of
parallel computing to logic programming and, more specifically
still, to Prolog.  The application of parallelism to Logic
Programming takes two basic but related directions.  The first
involves leaving the semantics of sequential programming, say
ordinary Prolog, as intact as possible, and using parallelism,
hidden from the programmer, to improve execution speed.  This has
traditionally been a difficult problem requiring very intelligent
compilers.  It may be easier with logic programming, since the
parallelism has not been artificially sequentialized, as it is in
many applications expressed in procedural languages.  The second
direction involves adding new parallel programming primitives to
Logic Programming so that the programmer can explicitly express
the parallelism in an application.
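To make the first direction concrete: the classic opportunity is
AND-parallelism, in which conjunctive goals sharing no unbound variables
are solved simultaneously.  A rough sketch using threads (a hypothetical
modern illustration; the proposals surveyed in this tutorial use
committed-choice logic languages, not thread pools):

```python
from concurrent.futures import ThreadPoolExecutor

# Two independent "goals" over a tiny hypothetical database.  Because they
# share no unbound variables, they may be evaluated in either order -- or
# at the same time -- without changing the set of answers.
def goal_parent(x):
    return [(x, c) for c in {"tom": ["bob", "liz"]}.get(x, [])]

def goal_age(x):
    return [(x, {"bob": 30, "liz": 28}.get(x))]

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(goal_parent, "tom")   # both goals run concurrently
    f2 = pool.submit(goal_age, "bob")
    answers = f1.result() + f2.result()
print(answers)
```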

This tutorial will assume a basic knowledge of Logic Programming,
but  will  describe   current  research   in  parallel   computer
architectures, and will survey many of the new parallel machines,
including shared-memory  architectures  (RP3,  for  example)  and
non-shared-memory   architectures   (hypercube   machines,    for
example).  The tutorial  will then describe  many of the  current
proposals for parallelism in  Logic Programming, including  those
that allow the  programmer to express  the parallelism and  those
that hide the parallelism from the programmer.  Included will  be
such proposals as Concurrent Prolog, Parlog, Guarded Horn Clauses
(GHC), and Delta-Prolog.  An attempt will be made to partially
evaluate many of these proposals for parallelism in Logic
Programming, both from a pragmatic architectural viewpoint and
from a semantic viewpoint.

                     Conference Chairperson
               Gary Lindstrom, University of Utah

                       Program Chairperson
              Robert M. Keller, University of Utah

                 Local Arrangements Chairperson
             Thomas C. Henderson, University of Utah

                      Tutorials Chairperson
             George Luger, University of New Mexico

                      Exhibits Chairperson
              Ross Overbeek, Argonne National Lab.

                        Program Committee

                     Francois Bancilhon, MCC
                    John Conery, U. of Oregon
                    Al Despain, U.C. Berkeley
                  Herve Gallaire, ECRC, Munich
                  Seif Haridi, SICS, Stockholm
                     Lynette Hirschman, SDC
                     Peter Kogge, IBM, Owego
                William Kornfeld, Quintus Systems
               Gary Lindstrom, University of Utah
             George Luger, University of New Mexico
                   Rikio Onai, ICOT/NTT, Tokyo
              Ross Overbeek, Argonne National  Lab.
                 Mark Stickel, SRI International
              Sten Ake Tarnlund, Uppsala University

------------------------------

End of AIList Digest
********************

From csnet_gateway Thu Sep 18 06:51:39 1986
Date: Thu, 18 Sep 86 06:51:32 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #182
Status: R


AIList Digest           Wednesday, 17 Sep 1986    Volume 4 : Issue 182

Today's Topics:
  Conference - ISMIS'86 program
----------------------------------------------------------------------

Date: Thu, 11 Sep 86 17:03 EST
From: ZEMANKOVA%tennessee.csnet@CSNET-RELAY.ARPA
Subject: Conference - ISMIS'86 program


      PRELIMINARY PROGRAM


      INTERNATIONAL SYMPOSIUM ON METHODOLOGIES FOR

      INTELLIGENT SYSTEMS


      October 22 - 25, 1986
      Hilton Hotel
      Knoxville, Tennessee

                     Sponsored by

      * ACM Special Interest Group on Artificial Intelligence

                  in cooperation with

      *  University of Tennessee at Knoxville
      *  The Data Systems Research and Development Program
         of Martin Marietta Energy Systems, and
         Oak Ridge National Laboratory
      *  University of North Carolina at Charlotte

                     and hosted by

      *  The Procter and Gamble Company


                   CHAIRPERSONS
         Zbigniew W. Ras (UTK and UNCC)
         Maria Zemankova (UTK and UNCC)

               SYMPOSIUM COORDINATOR
         J. Robin B. Cockett (UTK)

               ORGANIZING COMMITTEE
         S. Chen (IUPUI)                M. Emrich (ORNL)
         G. Epstein (UNCC & Indiana)    K. O'Kane (UTK)
         J. Poore (Georgia Tech.& UTK)  R. Yager (Iona)

               PROGRAM COMMITTEE
         P. Andrews (Carnegie-Mellon)
         J. Bourne (Vanderbilt)
         M. Fitting (CUNY)
         B. Gaines (Calgary, Canada)
         M. Gupta (Saskatchewan, Canada)
         M. Karpinski (Bonn, West Germany)
         E. Knuth (Budapest, Hungary)
         S. Kundu (LSU)
         W. Marek (Kentucky)
         R. Michalski (Illinois-Urbana)
         C. Negoita (CUNY)
         R. Nelson (Case Western Reserve)
         Z. Pawlak (Warsaw, Poland)
         A. Pettorossi (Rome, Italy)
         E. Sandewall (Linkoping, Sweden)
         G. Shafer (Kansas)
         M. Shaw (Calgary, Canada)
         J. Tou (Florida)


              PURPOSE OF THE SYMPOSIUM

       This Symposium is intended to attract researchers
       who are actively engaged in both theoretical and
       practical aspects of intelligent systems. The goal
       is to provide a platform for a useful exchange
       between theoreticians and practitioners, and to
       foster the cross-fertilization of ideas in the
       following areas:

             * Expert Systems
             * Knowledge Representation
             * Logic for Artificial Intelligence
             * Learning and Adaptive Systems
             * Intelligent Databases
             * Approximate Reasoning

       There will be an exhibit of A.I. hardware and software
       and of A.I. literature.

       Symposium Proceedings will be published by ACM Press.



ISMIS 86 Symposium Schedule

Tuesday, October 21, 1986
=========================
   6:00 pm - 9:00 pm   Symposium Registration
   7:00 pm - 9:00 pm   Reception (Cash Bar)
   6:00 pm - 9:00 pm   Exhibits


Wednesday, October 22, 1986
===========================
   8:00 am - 12:00 pm  Symposium Registration

ISMIS'86 Opening Session
   9:00 am - 9:20 am

Session 1: Expert Systems
I1: Invited Papers
    Chair: M. Emrich (ORNL)

   9:20am - 10:05am
        "Recent Developments in Expert Systems"
           B. Buchanan (Stanford Univ.)
   10:05am - 10:50am
        "Generic Tasks in Artificial Intelligence and Mentalese"
           B. Chandrasekaran (Ohio State Univ.)

A1: Contributed Papers
    Chair: R. Cockett (UT Knoxville)

   11:15am - 11:40am
        "The Frame-Definition Language for Customizing the
         Raffaello Structure-Editor in Host Expert Systems"
           E. Nissan (Ben-Gurion, Israel)
   11:40am - 12:05pm
        "Knowledge Base Organization in Expert Systems"
           S. Frediani, L. Saitta (Torino, Italy)
   12:05pm - 12:30pm
        "NESS: A Coupled Simulation Expert System"
           K. Kawamura, G. Beale, J. Rodriguez-Moscoso, B.J. Hsieh,
           S. Padalkar (Vanderbilt)

B1: Contributed Papers
    Chair: J. Bourne (Vanderbilt)

   11:15am - 11:40am
        "Design of an Expert System for Utilization Research"
           A. Zvieli, S.K. MacGregor, J.Z. Shapiro (LSU)
   11:40am - 12:05pm
        "An Expert System for Dynamic Scheduling"
           S. Floyd, D. Ford (Huntsville, Alabama)
   12:05pm - 12:30pm
        "Beginners' Strategies in Example Based Expert Systems"
           T. Whalen, B. Schott (Atlanta, Georgia)

   12:30 pm - 2:00 pm  Exhibits


Session 2: Intelligent Databases
I2: Invited Papers
    Chair: W. Marek (UK Lexington)

   2:00pm - 2:45pm
        "Using Knowledge Representation for the Development
         of Interactive Information Systems"
           J. Mylopoulos (Toronto, Canada)
   2:45pm - 3:30pm
        "Acquisition of Knowledge from Data"
           G. Wiederhold (Stanford Univ.)

A2: Contributed Papers
    Chair: S. Kundu (LSU)

   3:50pm - 4:15pm
        "A Decidable Query Answering Algorithm for Circumscriptive
         Theories"
           T. Przymusinski (El Paso, Texas)
   4:15pm - 4:40pm
        "Fuzzy Knowledge Engineering Techniques in Scientific Document
         Classification"
           R. Lopez de Mantaras (Barcelona, Spain)
   4:40pm - 5:05pm
        "A Semantic and Logical Front-end to a Database System"
           M. Rajinikanth, P.K. Bose (Texas Instruments, Dallas)
   5:05pm - 5:30pm
        "A Knowledge-Based Approach to Online Document Retrieval
         System Design"
           G. Biswas, J.C. Bezdek, R.L. Oakman (Columbia, S.C.)
   5:30pm - 5:55pm
        "Towards an Intelligent and Personalized Information Retrieval
         System"
           S.Myaeng, R.R. Korfhage (Southern Methodist, Texas)


   6:00 pm - 7:30 pm   Exhibits


   7:30 pm - 10:00 pm  Dinner Theatre
                       Karel Capek, R.U.R.


Thursday, October 23, 1986
==========================

Session 3: Approximate Reasoning
I3: Invited Papers
    Chair: M. Zemankova (UT Knoxville)

   9:00am - 9:45am
        "Inductive Models under Uncertainty"
           P. Cheeseman (NASA AMES and SRI)
   9:45am - 10:30am
        "The Concept of Generalized Assignment Statement and its
         Application to Knowledge Representation in Fuzzy Logic"
           L.A. Zadeh (Berkeley)

A3: Contributed Papers
    Chair: B. Bouchon (Paris, France)

   10:50am - 11:15am
        "Expert System on a Chip: An Engine for Real-Time Approximate
         Reasoning"
           M. Togai (Rockwell International),
           H. Watanabe (AT&T Bell Lab, Holmdel)
   11:15am - 11:40am
        "Selecting Expert System Frameworks within the Bayesian Theory"
           S.W. Norton (PAR Government Systems Co., New Hartford)
   11:40am - 12:05pm
        "Inference Propagation in Emitter, System Hierarchies"
           T. Sudkamp (Wright State)
   12:05pm - 12:30pm
        "Estimation of Minimax Values"
           P. Purdom (Indiana), C.H. Tzeng (Ball State Univ.)

B3: Contributed Papers
    Chair: E. Nissan (Ben-Gurion, Israel)

   10:50am - 11:15am
        "Aggregating Criteria with Quantifiers"
           R.R. Yager (Iona College)
   11:15am - 11:40am
        "Approximating Sets with Equivalence Relations"
           W. Marek (Kentucky), H. Rasiowa (Warsaw, Poland)
   11:40am - 12:05pm
        "Evidential Logic and Dempster-Shafer Theory"
           S. Chen (UNC-Charlotte)
   12:05pm - 12:30pm
        "Propagating Belief Functions with Local Computations"
           P.P. Shenoy, G. Shafer (Lawrence, Kansas)


   12:30 pm - 2:00 pm  Exhibits


Session 4: Logics for Artificial Intelligence
I4: Invited Papers
    Chair: M. Fitting (CUNY)

   2:00pm - 2:45pm
        "Automated Theorem Proving: Mapping Logic into A.I."
           D.W. Loveland (Duke Univ.)
   2:45pm - 3:30pm
        "Extensions to Functional Programming in Scheme"
           D.A. Plaisted, J. W. Curry (UNC Chapel Hill)

A4: Contributed Papers
    Chair: G. Epstein (UNC Charlotte)

   3:50pm - 4:15pm
        "Logic Programming Semantics using a Compact Data Structure"
           M. Fitting (CUNY)
   4:15pm - 4:40pm
        "On the Relationship between Autoepistemic Logic and Parallel
         Circumscription"
           M. Gelfond, H. Przymusinska (El Paso, Texas)
   4:40pm - 5:05pm
        "A Preliminary Excursion Into Step-Logics"
           J. Drapkin, D. Perlis (College Park, Maryland)
   5:05pm - 5:30pm
        "Tree Resolution and Generalized Semantic Tree"
           S. Kundu (LSU)
   5:30pm - 5:55pm
        "An Inference Model for Inheritance Hierarchies with
         Exceptions"
           K. Whitebread (Honeywell, Minneapolis)


   6:00 pm - 7:30 pm   Exhibits


   7:30 pm - 9:30 pm   Symposium Banquet
                       Keynote Speaker: Brian Gaines (Calgary, Canada)


Friday, October 24, 1986
========================

Session 5: Learning and Adaptive Systems
I5: Invited Papers
    Chair: Z. Ras (UT Knoxville)

   8:45am - 9:30am
        "Analogical Reasoning in Planning and Decision Making"
           J. Carbonell (Carnegie-Mellon Univ.)
   9:30am - 10:15am
        "Emerging Principles in Machine Learning"
           R. Michalski (Univ. of Illinois at Urbana)

A5: Contributed Papers
    Chair: D. Perlis (Maryland)

   10:35am - 11:00am
        "Memory Length as a Feedback Parameter in Learning Systems"
           G. Epstein (UNC-Charlotte)
   11:00am - 11:25am
        "Experimenting and Theorizing in Theory Formation"
           B. Koehn, J.M. Zytkow (Wichita State)
   11:25am - 11:50am
        "On Learning and Evaluation of Decision Rules in the Context
         of Rough Sets"
           S.K.M. Wong, W. Ziarko (Regina, Canada)
   11:50am - 12:15pm
        "Taxonomic Ambiguities in Category Variations Needed to Support
         Machine Conceptualization"
           L.J. Mazlack (Berkeley)
   12:15pm - 12:40pm
        "A Model for Self-Adaptation in a Robot Colony"
           T.V.D.Kumar, N. Parameswaran (Madras, India)


   12:45 pm - 2:00 pm  Symposium Luncheon
                       Keynote Speaker: Joseph Deken (NSF)
                       "Viable Inference Systems"


Session 6: Knowledge Representation
I6: Invited Papers
    Chair: S. Chen (UNC Charlotte)

   2:15pm - 3:00pm
        "Self-Improvement in Problem-Solving"
           R.B. Banerji (St. Joseph's Univ.)
   3:00pm - 3:45pm
        "Logical Foundations for Knowledge Representation in
         Intelligent Systems"
           B.R. Gaines (Calgary, Canada)

A6: Contributed Papers
    Chair: M. Togai (Rockwell International)

   4:00pm - 4:25pm
        "Simulations and Symbolic Explanations"
           D.H. Helman, J.L. Bennett, A.W. Foster (Case Western Reserve)
   4:25pm - 4:50pm
        "Notes on Conceptual Representations"
           E. Knuth, L. Hannak, A. Hernadi (Budapest, Hungary)
   4:50pm - 5:15pm
        "Spaceprobe: A System for Representing Complex Knowledge"
           J. Dinsmore (Carbondale, Ill)
   5:15pm - 5:40pm
        "Challenges in Applying Artificial Intelligence Methodologies
         to Military Operations"
           L.F. Arrowood, M.L. Emrich, M.R. Hilliard, H.L. Hwang
           (Oak Ridge National Lab.)

B6: Contributed Papers
    Chair: L. de Mantaras (Barcelona, Spain)

   4:00pm - 4:25pm
        "Knowledge-Based Processing/Interpretation of Oceanographic
         Satellite Data"
           M.G. Thomason, R.E. Blake (UTK), M. Lybanon (NTSL)
   4:25pm - 4:50pm
        "A Framework for Knowledge Representation and use in Pattern
         Analysis"
           F. Bergadano, A. Giordana (Torino, Italy)
   4:50pm - 5:15pm
        "Algebraic Properties of Knowledge Representation Systems"
           J.W. Grzymala-Busse (Lawrence, Kansas)
   5:15pm - 5:40pm
        "Prime Rule-based Methodologies Give Inadequate Control"
           J.R.B. Cockett, J. Herrera (UTK)


ISMIS'86 Closing Session
   5:45pm - 6:00pm


Saturday, October 25, 1986
==========================

   9:00 am - 12:30 pm  Colloquia (parallel sessions)

   1:30 pm - 7:30 pm   Trip to the Smoky Mountains



   SYMPOSIUM FEES

   Advance Symposium Registration
   Received by September 15, 1986
   Member of ACM                  $220.00
   Non-member                     $250.00
   Student*                       $ 30.00

   Late or On-Site Registration
   Member of ACM                  $265.00
   Non-member                     $295.00
   Student*                       $ 40.00

   Additional Tickets
   Reception                      $  5.00
   Dinner Theatre                 $ 25.00
   Symposium Banquet              $ 25.00
   Symposium Luncheon             $ 10.00
   Trip to Smoky Mountains        $ 25.00

   Symposium registration fee includes the Proceedings (available at
   the Symposium), continental breakfasts, reception, dinner theatre,
   symposium banquet, symposium luncheon, and coffee breaks.

   *  Student registration includes only coffee breaks.  Student
      registration is limited, hence students should register early.


   ACCOMMODATIONS:

   A block of rooms has been reserved for the symposium at the
   Hilton Hotel.  The ISMIS 86 rate for a single occupancy is $47.00
   and double occupancy $55.00.  To reserve your room, contact the
   Hilton Hotel, 501 Church Avenue, S.W., Knoxville, TN 37902-2591,
   telephone 615-523-2300 by September 30, 1986.  The Hilton Hotel
   will continue to accept reservations after this date on a space
   availability basis at the ISMIS 86 rates.  However, you are
   strongly encouraged to make your reservations by the cutoff date
   of September 30.

   Reservations must be accompanied by a deposit of one night's room
   rental.


   TRANSPORTATION:

   The Hilton Hotel provides free limousine service to and from
   the airport.

   Overnight guests arriving in their own vehicles receive free
   parking.


   SPECIAL AIRFARE RATES:

   DELTA Airlines has been designated as the official carrier for the
   Symposium. Attendees arranging flights with DELTA will receive a
   35% discount off the regular coach fare to Knoxville. To take
   advantage of this special rate call (toll-free) 1-800-241-6760,
   referring to FILE #J0170. This number is staffed from 8:00 a.m.
   to 8:00 p.m. EDT, seven days per week.


   GENERAL INFORMATION:

   Knoxville is located in East Tennessee, an area noted for its
   abundant water reservoirs, rivers, mountains, hardwood forests, and
   wildlife refuges.  The Great Smoky Mountains National Park, the
   Cumberland Mountains, the resort city of Gatlinburg, and the Oak
   Ridge Museum of Science and Energy are all within an hour's drive
   of the downtown area.  The fall season offers spectacular views
   of radiant colors within the city and the surrounding countryside.
   Interstates 40 and 75 provide access into Knoxville.


   REGISTRATION FORM:

   For the registration form, please write to

       UTK Department of Conferences
       2014 Lake Avenue
       Knoxville, TN 37996-3910



   FURTHER INFORMATION:

   Further information can be obtained from:

      Zbigniew W. Ras                   Maria Zemankova
      Dept. of Computer Science         Dept. of Computer Science
      University of North Carolina      University of Tennessee
      Charlotte, NC 28223               Knoxville, TN 37996-1301
      (704) 597-4567                    (615) 974-5067
      ras%unccvax@mcnc.CSNET            zemankova@utenn.CSNET

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Sep 19 19:08:51 1986
Date: Fri, 19 Sep 86 19:08:43 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #183
Status: R


AIList Digest           Wednesday, 17 Sep 1986    Volume 4 : Issue 183

Today's Topics:
  Queries - Space/Military Expert Systems & Communications/Control ES &
    Structured Analysis References & NuBus-to-VME Adapter &
    Robotic Cutting Arm & Schematics Drafter & Mechanical Engineering ES &
    OPS5 & IJCAI-87 Net Address & Common Lisp Flavors &
    Looping in Belief Revision System & 2-D Math Editor

----------------------------------------------------------------------

Date: 16 Aug 86 15:39:32 GMT
From: mcvax!ukc!reading!brueer!ckennedy@seismo.css.gov  (C.M.Kennedy )
Subject: CONTACT REQUIRED: SPACE OR MILITARY EXPERT SYSTEMS


CONTACT REQUIRED: SPACE OR MILITARY EXPERT SYSTEMS


I wish to contact someone (reasonably senior) who has worked on an expert
system in one of the following areas:


       1. Space technology - monitoring, control, planning

       2. Military Science - of particular interest is:

               - prediction, e.g. modelling behaviour of states or terrorist
                 organisations and making predictions based on available
                 knowledge

               - interpretation of sensor data, i.e. integrating raw data
                 from multiple sensors and giving a high-level "user-
                 friendly" interpretation of what is going on.



I wish to obtain the following information:


       1. Postal address and telephone number (along with email address).
          If possible: Times of day (and days of the week) when telephone
          contact is convenient.


       2. Details of how to obtain the following documentation (or better
          still a direct mailing of it if this is convenient):

               - TECHNICAL papers describing the architecture, knowledge
                 representation, inference engine, tools, language, machine
                 etc.

               - papers giving the precise REQUIREMENTS of the system. If
                 this is not possible, a short summary will do.


       3. Was the project successful? Were all the original requirements
          satisfied? Has the system been used successfully in an operational
          environment?


       4. What were the problems encountered and what has been learned from
          the project?



 I would also be interested to hear from someone who has done RESEARCH on
 any of the above (or knows of someone who has).


       Catriona Kennedy

       Mail address: ckennedy@ee.brunel.ac.uk

------------------------------

Date: Mon 25 Aug 86 12:18:41-EDT
From: CAROZZONI@RADC-TOPS20.ARPA
Subject: Cooperative Expert System

The Decision Aids Section at Rome Air Development Center is performing an
in-house study to establish a technical baseline in support of an upcoming
(FY 87) procurement effort related to the design of a "cooperative" expert
system - i.e., one which supports both communication and more extensive,
knowledge-based cooperation between existing systems.

We are particularly interested in hearing about any work related to expert
system design, distributed AI, and models for communication and cooperation
that may be relevant to this effort.  Please respond by net to
Hirshfield@RADC-multics, or write to Hirshfield at RADC/COAD, Griffiss Air
Force Base, NY 13441.

------------------------------

Date: Wed 10 Sep 86 15:49:36-CDT
From: Werner Uhrig  <CMP.WERNER@R20.UTEXAS.EDU>
Subject: Communications Expert System - does anyone know more ?

[ from InfoWorld, Sep 8, page 16 ]

COMMUNICATIONS PROGRAM TO HELP NOVICES, EXPERTS

Smyrna, Ga - A communications software publisher said it will sell an on-line
expert system that helps computer users solve data communications problems and
work out idiosyncrasies in the interaction of popular communications hardware
and software.

Line Expert, which will sell for $49.95 when it is released October 1, will ask
users questions about their particular configuration and suggest solutions,
according to Nat Atwell, director of marketing for publisher Concept
Development Systems.

..........

------------------------------

Date: Mon, 8 Sep 86 22:25:30 cdt
From: Esmail Bonakdarian <bonak%cs.uiowa.edu@CSNET-RELAY.ARPA>
Subject: Expert Systems and Data Communication


I am working on my M.S. thesis which deals with the use of Expert Systems
in the area of Data Communications (e.g. help diagnose sources of
communication problems, help to "configure" components [DTE's and DCE's]
correctly, etc).  I am curious to find out what knowledge based systems
(if any) exist that deal with this problem domain.  I would very much
appreciate any pointers to literature or persons doing work in this area.

Thanks,
Esmail

------------------------------

Date: Wed, 20 Aug 86 9:45:15 EDT
From: Marty Hall <hall@hopkins-eecs-bravo.ARPA>
Subject: Wanted: References on Structured Analysis Inadequacies


We are looking for references that point out some of the inadequacies
of Structured Analysis methods (ala Yourdon, for instance) in a
Software Development Process for AI software.  We have a couple of
references vouching for the utility of Rapid Prototyping and
Exploratory Programming (thanks, by the way, for those who pointed
me to some of these references), but not explicitly contrasting this
with the more traditional Structured Design/Analysis methods.
These references are needed by our AI group for a "Convince the Software
Managers" session.  :-)

Any help greatly appreciated!

                                        - Marty Hall

Arpa: hall@hopkins                           AI and Simulation Dept, MP E-315
UUCP: seismo!umcp-cs!aplcen!jhunix!ins_amrh  Martin Marietta Baltimore Aerospace
                                             103 Chesapeake Park Plaza
                                             Baltimore, MD  21220
                                             (301) 682-0917

------------------------------

Date: 15 Aug 86 14:32:00 GMT
From: pyrnj!mirror!datacube!berger@CAIP.RUTGERS.EDU
Subject: NuBus to VME adapter?


I figured this would be as good a place as any for the following question:

Anyone know of a NuBus to VMEbus adapter?  Something to allow VMEbus
boards to plug into a NuBus?  We want to be able to connect our
Image Processing boards to things like the TI Explorer and LMI machines.

                        Bob Berger

Datacube Inc. 4 Dearborn Rd. Peabody, Ma 01960  617-535-6644

ihnp4!datacube!berger
{seismo,cbosgd,cuae2,mit-eddie}!mirror!datacube!berger

------------------------------

Date: Thu, 11 Sep 86 10:44 MST
From: McGuire@HIS-PHOENIX-MULTICS.ARPA
Subject: robotics query: cutting arm

Could anyone give possible sources for a robotic arm, to be attached to
a CAD/CAM system (such as Auto-Cad) and driven by a micro, such as a PC/AT?
This arm would be used to cut stencils, maximum 3 feet in diameter, so it
would have to be very strong or complex.  Canadian sources preferred.
Thanks.  M. McGuire, Calgary, Alberta.

------------------------------

Date: 19 Aug 86 12:41:39 edt
From: Raul Valdes-Perez <valdes@ht.ai.mit.edu>
Subject: schematics drafting request

I have designed and programmed a non-rule-based KBES that drafts
the schematic of a digital circuit (actually only the placement
part).  To have an objective measure of the ability of this program,
I would like to compare its output with that of any other (perhaps
algorithmic) schematics drafter.  I expect that a large CAD circuit
design package would have something like this.

Can anyone help me obtain access to such a drafter?  (Please note
that this has little to do with a schematic *entry* program or
with a VLSI *layout* program.)

Thanks in advance.

Raul E. Valdes-Perez     or   (valdes@mit-htvax.arpa)
MIT AI Lab, Room 833
545 Technology Square
Cambridge, MA 02139

------------------------------

Date: Wed, 3 Sep 86 08:16 CDT
From: Bennett@HI-MULTICS.ARPA
Subject: Looking for Expert Systems for Mechanical Engineering



  A friend of mine is looking for pointers to work done in Expert Systems
  for Mechanical Engineering ---   specifically in the area of
  Mechanical Design.

  If anyone has any information that would help please send it directly
  to me as Bennett at HI-Multics.

  Bonnie Bennett (612)782-7381

------------------------------

Date: Mon, 25 Aug 86 22:54 EDT
From: EDMUNDSY%northeastern.edu@CSNET-RELAY.ARPA
Subject: Any OPS5 in PC ?

Does anyone know whether there is an OPS5 software package available for
the PC?  I would like to know where I can find it.  Thanks!!!

------------------------------

Date: 27 Aug 86 10:49:18 GMT
From: ulysses!mhuxr!aluxp!prieto@ucbvax.Berkeley.EDU  (PRIETO)
Subject: ATT 3B2/400, 3B5 CMU - OPS5

What are the OPS5 requirements for use on small machines like the
3B2/400 and 3B5 vs. a VAX 11/780?  Storage, memory, etc.  Is there OPS5
software running on these types of machines?  Can software development
for an expert system application be done on the smaller machines, or is
a VAX needed?

aluxp!prieto
(215)770-3285

ps. I am interested in getting OPS 5 - where could I obtain it?

------------------------------

Date: 13 Aug 86 04:48:39 GMT
From: ucbcad!nike!lll-crg!micropro!ptsfa!jeg@ucbvax.berkeley.edu (John Girard)
Subject: IJCAI-87 ... usenet contacts


I am looking for a usenet or usenet compatible connection by which
I can inquire about the IJCAI program, ground rules and deadlines.

Please respond to

             [ihnp4,dual,pyramid,cbosgd,bellcore,qantel]ptsfa!jeg
             John Girard
             USA:  415-823-1961   415-449-5745

------------------------------

Date: Mon, 8 Sep 86 18:33:09 -0100
From: mcvax!csinn!solvay@seismo.CSS.GOV (Jean Philippe Solvay)
Subject: flavors and Common Lisp

Hi Kenneth,

Do you know if there is any implementation of flavors in Common Lisp currently
available (public domain, if possible)?

Thanks in advance,

Jean-Philippe Solvay.
inria!csinn!solvay@mcvax.UUCP

------------------------------

Date: Mon, 08 Sep 86 16:48:15 -0800
From: Don Rose <drose@CIP.UCI.EDU>
Subject: TMS, DDB and infinite loops

Does anyone know whether the standard algorithms for belief revision
(e.g. dependency-directed backtracking in TMS-like systems) are
guaranteed to halt? That is, is it possible for certain belief networks
to be arranged such that no set of mutually consistent beliefs can be found
(without outside influence)? --Donald Rose
                               drose@ics.uci.edu
                               ICS Dept
                               Irvine CA 92717
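[A minimal illustration of the concern (a hypothetical sketch, tied to no
particular TMS): a node whose only justification requires that same node
to be OUT forms an odd loop, and brute-force enumeration confirms that no
consistent labeling exists, so a revision procedure must detect the loop
rather than search forever:

```python
from itertools import product

# justifications[n] lists (in_list, out_list) supports for node n.  A
# labeling is consistent iff each node is IN exactly when some support
# has all of its in_list nodes IN and all of its out_list nodes OUT.
def consistent_labelings(nodes, justifications):
    found = []
    for bits in product([True, False], repeat=len(nodes)):
        label = dict(zip(nodes, bits))
        ok = all(
            label[n] == any(
                all(label[i] for i in ins) and not any(label[o] for o in outs)
                for ins, outs in justifications.get(n, [])
            )
            for n in nodes
        )
        if ok:
            found.append(label)
    return found

# Odd loop: A is supported exactly by A's being OUT -- no labeling works.
print(consistent_labelings(["A"], {"A": [([], ["A"])]}))
```

An even loop, A supported by OUT(B) and B by OUT(A), instead has two
consistent labelings. ]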

------------------------------

Date: 0  0 00:00:00 PDT
From: "LLLASD::GARBARINI" <garbarini%lllasd.DECNET@lll-crg.arpa>
Reply-to: "LLLASD::GARBARINI" <garbarini@lllasd.decnet>
Subject: Availability of interactive 2-d math editing interfaces...

I am working with a number of other people on a project called Automatic
Programming for Physics.  The goal is to build an AI-based automatic
programming system to aid scientists in building numerical
simulations of physical systems.

In the user interface to the system we would like to have interactive
editing of mathematical expressions in two-dimensional form.

It seems a number of people have recently made much progress in this
area. (See C. Smith and N. Soiffer, "MathScribe: A User Interface for
Computer Algebra Systems," Conference Proceedings of Symsac 86, (July,
1986) and B. Leong, "Iris: Design of a User Interface Program for
Symbolic Algebra," Proc. 1986 ACM-SIGSAM Symposium on Symbolic and
Algebraic Manipulation, July 1986.)

Not wishing to reinvent the wheel, I'd appreciate receiving information
regarding the availability of any such interface.

Joe P. Garbarini Jr.
Lawrence Livermore National Lab
P. O. Box 808 , L-308
7000 East Avenue
Livermore Ca. , 94550
(415)-423-2808

arpanet address: GARBARINI%LLLASD.DECNET@LLL-CRG.ARPA

------------------------------

End of AIList Digest
********************

From csnet_gateway Sat Sep 20 00:54:04 1986
Date: Sat, 20 Sep 86 00:53:57 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #184
Status: R


AIList Digest           Thursday, 18 Sep 1986     Volume 4 : Issue 184

Today's Topics:
  Correction - Conference on Office Information Systems,
  AI Tools - Interlisp vs. C,
  Queries - NL Grammar & Unix Software,
  Education - AI Schools,
  AI Tools - Turbo Prolog

----------------------------------------------------------------------

Date: Wed, 17 Sep 86 14:13:52 cdt
From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece)
Subject: Correction - Conference on Office Information Sys

> From: rba@petrus.bellcore.com (Robert B. Allen)
> Subject: Conference on Office Information Systems - Brown U.
>
>
>           ACM CONFERENCE ON OFFICE INFORMATION SYSTEMS
>                October 6-8, 1968, Providence, R.I.
                               ^
Gee, I didn't join the ACM until 1970, but I didn't think
they had invented "Office Information Systems" then...

--
scott preece
gould/csd - urbana
uucp:   ihnp4!uiucdcs!ccvaxa!preece
arpa:   preece@gswd-vms

------------------------------

Date: 16 Sep 86 13:07 EDT
From: Denber.wbst@Xerox.COM
Subject: Re: Reimplementing in C

        "Such things are very awkward, if not impossible to express in the
typical AI languages"

Well, maybe I've been using an atypical AI language, but Interlisp-D has
all that stuff - byte I/O, streams, timers, whatever.  It's real e-z to
use.  Check it out.

                        - Michel

------------------------------

Date: Thu, 14 Aug 86 11:58 EDT
From: EDMUNDSY%northeastern.edu@CSNET-RELAY.ARPA
Subject: Looking for Production Rules for English Grammar

Does anyone know where I can find information (or existing results) on
transforming an English grammar (or a simplified subset) into production rules
of a regular, context-free, or context-sensitive grammar? For example,
    Sentence --> Noun Verb Noun        etc.
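A rule set of this kind (a hypothetical toy fragment, nowhere near a real
English grammar) can be written down as rewrite rules and exercised directly;
a minimal sketch:

```python
import random

# Hypothetical toy production rules (illustration only, not a real
# English grammar).  Keys are nonterminals; values list the
# alternative right-hand sides.
RULES = {
    "Sentence":   [["NounPhrase", "Verb", "NounPhrase"]],
    "NounPhrase": [["Noun"], ["Article", "Noun"]],
    "Article":    [["the"], ["a"]],
    "Noun":       [["dog"], ["cat"], ["robot"]],
    "Verb":       [["sees"], ["chases"]],
}

def generate(symbol):
    """Rewrite a symbol until only terminal words remain."""
    if symbol not in RULES:                 # terminal: emit the word
        return [symbol]
    words = []
    for part in random.choice(RULES[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate("Sentence")))       # e.g. "the dog chases a cat"
```

Parsing (recognizing a given sentence against the rules) is the harder inverse
problem, but the rule format is the same.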
If you have any information on this, I would appreciate a pointer to it.
Thanks!! I can be contacted by any of the
following means:
NET: EDMUNDSY@NORTHEASTERN.EDU
ADD: Sy, Bon-Kiem
     Northeastern University
     Dept. ECE DA 409
     Boston, MA 02115
Phone: (617)-437-5055
                                Bon Sy

------------------------------

Date: Mon, 8 Sep 86 18:32:06 edt
From: brant@linc.cis.upenn.edu (Brant A. Cheikes)
Subject: Unix Consultant references?

I'm looking for the most recent reports by the group working
on the Unix Consultant project at UC Berkeley.  Does anybody
know what that is, and is there a network address to which
report requests can be sent?  The ref I was given was UCB
report CSD 87/303, but I'm not sure if it's available or even
recent.  Any information in this vein would be appreciated.

------------------------------

Date: 12 Sep 86 00:32:34 GMT
From: micropro!ptsfa!jeg@lll-crg.arpa  (John Girard)
Subject: AI tools/products in UNIX


Greetings,

I am looking for any information I can get on Artificial Intelligence
tools and products in the UNIX environment.  I will compile and publish
the results in net.ai.  Please help me out with any of the following:

versions of LISP and PROLOG running in unix

expert system shells available in unix

expert system and natural language products that have been developed
in the unix environment, both available now and in R&D, especially ones
that relate to unix problem domains (sys admin, security).

Reply to:  John Girard
            415-823-1961
            [ihnp4,dual,cbosgd,nike,qantel,bellcore]!ptsfa!jeg

P.S. Very interested in things that run on less horsepower than a SUN.

------------------------------

Date: Sat, 13 Sep 86 13:44:47 pdt
From: ucsbcsl!uncle@ucbvax.Berkeley.EDU
Subject: request for core nl system code


We are looking for a core nl system which we can tailor and
extend.  There is as yet little comp.ling activity at UCSB,
so we have no local sources.  We are interested in developing
a system which can be used in foreign language education, hence
we would need a system in which the "syntactic components"
are such that we could incrementally mung the system into
speaking german or french or russian without having to
redesign the system.  My knowledge in this area is fuzzy
(not 'Fuzzy(tm)', just fuzzy!).

I have read a little about systems such as the Phran component of the
Wilensky et al. project called unix-consultant, and I
understand that the approach taken there generalizes
to other languages by entering a new
database of pattern-action pairs (i.e., an EXACT parse of
a syntactically admissible sentence is not required).  Unfortunately,
Berkeley CS is not currently giving access to components of that system.

Does anyone have pointers to available code for systems
that fall into that part of the syntax-semantics spectrum?
Is it, in fact, reasonable for us to seek such a system as
a tool, or are we better advised to start with car and cdr ????

------------------------------

Date: 19 Aug 86 19:29:25 GMT
From: decvax!dartvax!kapil@ucbvax.Berkeley.EDU  (Kapil Khetan)
Subject: Where can one do an off-campus Ph.D. in AI/ES


        After graduating from Dartmouth, with an MS in
Computer & Information Science, I have been residing and working
in New York City.

        I am interested in continuing education and think
Expert Systems is a nice field to learn more about.  I took
a ten week course in which we dabbled in Prolog and M1.

        If any of you know of a college in the area (Columbia,
NYU, PACE) which has something like it, or any other college
anywhere else which has an off-campus program, please hit the 'r' key.

        Thank-you.

                                Kapil Khetan

Chemical Bank, 55 Water St., New York, NY 10041

------------------------------

Date: 25 Aug 86 18:27:08 GMT
From: ihnp4!gargoyle!sphinx!bri5@ucbvax.Berkeley.EDU  (Eric Brill)
Subject: Grad Schools

Hello.  I am planning on entering graduate school next year.  I was wondering
what schools are considered the best in Artificial Intelligence (specifically
in language comprehension and learning).  I would be especially interested
in your opinions as to which schools would be considered the top 10.
Thank you very much.

Eric Brill

ps, if there is anybody else out there interested in the above, send me mail,
and I will forward all interesting replies.

------------------------------

Date: Fri, 12 Sep 86 15:35 CDT
From: PADIN%FNALB.BITNET@WISCVM.WISC.EDU
Subject: ADVICE ON ENTERING THE AI COMMUNITY


        As a newcomer to the AI arena, I am compelled to ask some
fundamentally novice (and, as such, sometimes ridiculous-sounding)
questions.  Nonetheless, here goes.

        If one were to attempt to enter the AI field, what are the
   basic requirements; what are some special requirements?
   With a BS in physics, is further schooling mandatory? Are there
   particular schools which I should consider or ones I should
   avoid?  Are there books which I MUST read!!? As a 29 year old
   with a Math and Physics background, am I hopelessly over-the-hill
   for such musings to become a reality? Are there questions which I
   should be asking?

   If you care to answer in private I can be reached at:


                        PADIN@FNALB.BITNET

------------------------------

Date: 2 Sep 86 21:44:00 GMT
From: pyrnj!mirror!prism!mattj@CAIP.RUTGERS.EDU
Subject: Re: Grad Schools


Eric Brill:

Here is my own personal ranking of general AI programs:

                Stanford
                MIT
                Carnegie-Mellon
                UIllinois@Urbana
                URochester

Also good: UMaryland, Johns Hopkins, UMass@Amherst, ... can't think now.

[...]
                                - Matthew Jensen

------------------------------

Date: 6 Sep 86 02:52:06 GMT
From: ubc-vision!ubc-cs!andrews@UW-BEAVER.ARPA  (Jamie Andrews)
Subject: Re: Grad Schools (Rochester?)


     I've heard that Rochester has quite a good AI / logic
programming program, and it definitely has some good people...
but can anyone tell me what it's like to LIVE in Rochester?
Or is the campus far enough from Rochester that it doesn't
matter?  Please respond (r or R) to me rather than to the net.

Adv(merci)ance,
--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Hundred million bottles washed up on the shore"

------------------------------

Date: 3 Sep 86 21:20:31 GMT
From: mcvax!prlb2!lln-cs!pv@seismo.css.gov  (Patrick Vandamme)
Subject: Bug in Turbo Prolog

I am testing the `famous' Turbo Prolog software and, after all the good things
I had heard about it, I was very surprised to have problems with the first
large program I tried. The program is given below. It finds all the relations
between a person and his family. But for some people, it answers with a lot
of strange characters. I think there must be a dangling pointer somewhere.
Note that this happens only with large programs!

Has anyone seen the same result?

(For the strange characters, try with "veronique".)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/*
     +-----------------------------------------------------+
     |        Program for managing a database of           |
     |               family relationships.                 |
     +-----------------------------------------------------+
          P. Vandamme - Unite d'Informatique - UCL - August 1986
*/

  [Deleted due to length.  See following message for an explanation
  of the problem.  -- KIL]

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
Patrick Vandamme

Unite d'Informatique              UUCP : (prlb2)lln-cs!pv
Universite Catholique de Louvain  Phone: +32 10 43 24 15
Place Sainte-Barbe, 2             Telex: 59037 UCL B
B-1348 Louvain-La-Neuve           Eurokom: Patrick Vandamme UCL
Belgium                           Fax  : +32 10 41 56 47

------------------------------

Date: 8 Sep 86 23:08:05 GMT
From: clyde!cbatt!cbuxc!cbuxb!cbrma!clh@CAIP.RUTGERS.EDU  (C.Harting)
Subject: Re: Bug in Turbo Prolog


I purchased Turbo Prolog Friday night, and immediately tried to compile the
GeoBase program on my Tandy 1000 (384K).  I could not even create a .OBJ file
on my machine, so I compiled it on a 640K AT&T PC6300.  Caveat No. 1: large
programs need large amounts of memory.  I compiled Patrick's "programme de
gestion" to disk and it ran flawlessly (I think -- this is my first lesson in
French!).  BUT when compiled to memory, I got the same errors as Patrick.
Caveat No. 2: compile large programs to disk and run standalone.  And, Caveat
No. 3: leave out as many memory-resident programs as you can stand when
booting the machine to run Turbo Prolog.
'Nuff said?

===============================================================================
Chris Harting                           "Many are cold, few are frozen."
AT&T Network Systems                    Columbus, Ohio
The Path (?!?):                         cbosgd!cbrma!clh

------------------------------

Date: 11 Sep 86 18:40:05 GMT
From: john@unix.macc.wisc.edu  (John Jacobsen)
Subject: Re: Re: Bug in Turbo Prolog

> Xref: uwmacc net.ai:1941 net.lang.prolog:528
> Summary: How to get around it.


I got the "Programme de Gestion de Base de Donnees" to work fine... on an
AT with a meg of memory.  I think Patrick Vandamme just ran out of memory,
because his code is immaculate.

John E. Jacobsen
University of Wisconsin -- Madison Academic Computing Center

------------------------------

Date: Tue, 16 Sep 86 17:26 PDT
From: jan cornish <cornish@RUSSIAN.SPA.Symbolics.COM>
Subject: Turbo Prolog


I've heard some chilling things about Turbo Prolog, such as:

1) The programmer must not only declare each predicate, but also whether
each parameter to the predicate (not correct terminology) is input
or output. This means you can't write relational predicates like
grandfather.

2) The backtracking is not standard.

3) "You can do any thing in Turbo Prolog that you can do in Turbo Pascal"

I want to hear from the LP community on Turbo Prolog as to its ultimate
merit, something beyond the dismissive flames.

Thanks in advance,

Jan

------------------------------

End of AIList Digest
********************

From csnet_gateway Sat Sep 20 00:53:32 1986
Date: Sat, 20 Sep 86 00:53:26 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #185
Status: R


AIList Digest            Friday, 19 Sep 1986      Volume 4 : Issue 185

Today's Topics:
  Query - Connectionist References,
  Cognitive Psychology - Connectionist Learning,
  Review - Notes on AAAI '86

----------------------------------------------------------------------

Date: 21 Aug 86 12:11:25 GMT
From: lepine@istg.dec.com@decwrl.dec.com  (Normand Lepine 225-6715)
Subject: Connectionist references

I am interested in learning about the connectionist model and would appreciate
any pointers to papers, texts, etc. on the subject.  Please mail references to
me and I will compile and post a bibliography to the net.

Thanks for your help,
Normand Lepine

uucp: ...!decwrl!cons.dec.com!lepine
ARPA: lepine@cons.dec.com
      lepine%cons.dec.com@decwrl.dec.com (without domain servers)

------------------------------

Date: 22 Aug 86 12:04:30 GMT
From: mcvax!ukc!reading!brueer!ckennedy@seismo.css.gov  (C.M.Kennedy )
Subject: Re: Connectionist Expert System Learning

The following is a list of the useful replies received so far:


Date: Wed, 30 Jul 86 8:56:08 BST
From: Ronan Reilly <rreilly%euroies@reading.ac.uk>
Sender: rreilly%euroies@reading.ac.uk
Subject: Re: Connectionist Approaches To Expert System Learning

Hi,

What you're looking for, effectively, are attempts to implement
production systems within a connectionist framework.  Researchers
are making progress, slowly but surely, in that direction.  The
most recent paper I've come across in the area is:

Touretzky, D. S. & Hinton, G. E. (1985).  Symbols among the neurons:
        details of a connectionist inference architecture.  In
        Proceedings IJCAI '85, Los Angeles.

I've a copy of this somewhere.  So if the IJCAI proceedings don't come
to hand, I'll post it onto you.

There are two books which are due to be published this year, and they
are set to be the standard reference books for the area:

Rumelhart, D. E. & McClelland, J. L. (1986).  Parallel distributed
        processing: Explorations in the microstructure of cognition.
        Vol. 1: Foundations.  Cambridge, MA: Bradford Books.

Rumelhart, D. E. & McClelland, J. L. (1986).  Parallel distributed
        processing: Explorations in the microstructure of cognition.
        Vol. 2: Applications.  Cambridge, MA: Bradford Books.

Another good source of information on the localist school of
connectionism is the University of Rochester technical report series.
They have one report which lists all their recent connectionist
reports.  The address to write to is:

        Computer Science Department
        The University of Rochester
        Rochester, NY 14627
        USA

I've implemented a version of the Rochester ISCON simulator in
Salford Lisp on our Prime 750.  The simulator is a flexible system
for building and testing connectionist models.  You're welcome to
a copy of it.  Salford Lisp is a Maclisp variant.

Regards,
Ronan
...mcvax!euroies!rreilly


Date: Sat, 2 Aug 86 09:33:46 PDT
From: Mike Mozer <mozer%ics.ucsd.edu@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning

I've just finished a connectionist expert system paper, which I'd be glad
to send you if you're interested (need an address, though).

Here's the abstract:

RAMBOT:  A connectionist expert system that learns by example

Expert systems seem to be quite the rage in Artificial Intelligence, but
getting expert knowledge into these systems is a difficult problem.  One
solution would be to endow the systems with powerful learning procedures
which could discover appropriate behaviors by observing an expert in action.
A promising source of such learning procedures
can be found in recent work on connectionist networks, that is, massively
parallel networks of simple processing elements.  In this paper, I discuss a
Connectionist expert system that learns to play a simple video game by
observing a human player.  The game, Robots, is played on a two-dimensional
board containing the player and a number of computer-controlled robots.  The
object of the game is for the player to move around the board in a
manner that will force all of the robots to collide with one another
before any robot is able to catch the player.  The connectionist system
learns to associate observed situations on the board with observed
moves.  It is capable not only of replicating the performance of the
human player, but of learning generalizations that apply to novel
situations.
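The input/output task the abstract describes (watch an expert, then map
observed situations to moves) can be mocked up in a few lines.  Note this
sketch is a plain nearest-neighbor lookup table, not the connectionist
learner the paper presents, and the board encoding and moves here are
hypothetical:

```python
# Toy "learn by watching" sketch: store observed (situation, move)
# pairs and answer a novel situation with the move whose recorded
# situation is closest in Hamming distance.  Illustration only; the
# RAMBOT system described above is connectionist, not a lookup table.

def hamming(a, b):
    """Number of positions where two board vectors differ."""
    return sum(x != y for x, y in zip(a, b))

class WatchAndPlay:
    def __init__(self):
        self.memory = []                    # list of (board, move) pairs

    def observe(self, board, move):
        self.memory.append((board, move))

    def choose(self, board):
        # Pick the move recorded for the most similar observed board.
        return min(self.memory, key=lambda m: hamming(m[0], board))[1]

player = WatchAndPlay()
player.observe((1, 0, 0, 1), "up")          # hypothetical 4-cell boards
player.observe((0, 1, 1, 0), "left")
print(player.choose((1, 0, 1, 1)))          # prints "up"
```

The interesting part of the connectionist version is precisely what this
sketch lacks: generalization beyond remembering the training examples.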

Mike Mozer
mozer@nprdc.arpa


Date: Fri, 8 Aug 86 18:53:57 edt
From: Tom Frauenhofer <tfra%ur-tut@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning
Organization: U. of Rochester Computing Center

Catriona,

I am (slightly) familiar with a thesis by Gary Cottrell of the U of R here
that dealt with a connectionist approach to language understanding.  I believe
he worked closely with a psychologist to figure out how people understand
language and words, and then tried to model the behavior in a connectionist
framework.  You should be able to get a copy of the thesis from the Computer
Science Department here.  It's not expert systems, but it is fascinating.

- Tom Frauenhofer
...!seismo!rochester!ur-tut!tfra


From sandon@ai.wisc.edu Sat Aug  9 17:25:29 1986
Date: Fri, 8 Aug 86 11:38:43 CDT
From: Pete Sandon <sandon%ai.wisc.edu@reading.ac.uk>
Subject: Connectionist Learning


Hi,

   You may have already received this information, but I will pass it
along anyway. Steve Gallant, at Northeastern University, has done some
work on using a modified perceptron learning algorithm for expert
system knowledge acquisition. He has written a number of tech reports
in the last few years. His email address is: sig@northeastern.csnet.
His postal address is:    Steve Gallant
                          College of Computer Science
                          Northeastern University
                          Boston, MA 02115

--Pete Sandon
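As a rough sketch of the general idea behind such perceptron-based knowledge
acquisition (this uses the plain perceptron update rule; Gallant's reports
describe a modified variant, so treat it purely as an illustration), a simple
classification rule can be induced from labeled examples:

```python
# Plain perceptron learning from classified examples (illustration
# only; not Gallant's modified algorithm).

def train_perceptron(examples, epochs=20):
    """examples: list of (feature_vector, label) with label in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                  # last entry is the bias weight
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
            predicted = 1 if activation > 0 else -1
            if predicted != y:           # misclassified: nudge the weights
                for i, xi in enumerate(x + [1.0]):
                    w[i] += y * xi
    return w

def classify(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else -1

# Learn a simple AND-like rule from four examples.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w = train_perceptron(data)
print([classify(w, x) for x, _ in data])   # prints [-1, -1, -1, 1]
```

The appeal for expert systems is that the "rule" is never written by hand;
it is extracted from examples of an expert's decisions.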

------------------------------

Date: 17 Aug 86 22:08:30 GMT
From: ix133@sdcc6.ucsd.EDU (Catherine L. Harris)
Subject: Q:  How can structure be learned?  A:  PDP

  [Excerpted from the NL-KR Digest by Laws@SRI-STRIPE.]
  [Forwarded from USENET net.nlang]


  [... The following portion discusses connectionist learning.  -- KIL]

        One Alternative to the Endogenous Structure View

Jeffrey Goldberg says (in an immediately preceding article) [in net.nlang -B],

> Chomsky has set himself up asking the question:  "How can children,
> given a finite amount of input, learn a language?"  The only answer
> could be that children are equipped with a large portion of language to
> begin with.  If something is innate then it will show up in all
> languages (a universal), and if something is unlearnable then it, too,
> must be innate (and therefore universal).

The important idea behind the nativist and language-modularity
hypotheses is that language structure is too complex, time is too
short, and the form of the input data (i.e., parents' speech to
children) is too degenerate for the target grammar to be learned.
Several people (e.g., Steven Pinker of MIT) have bolstered this
argument with formal "learnability" analyses:  you make an estimate of
the power of the learning mechanism, make assumptions about factors in
the learning situation (e.g., no negative feedback) and then
mathematically prove that a given grammar (a transformational grammar,
or a lexical functional grammar, or whatever) is unlearnable.

My problem with these analyses  -- and with nativist assumptions in
general -- is that they aren't considering a type of learning mechanism
that may be powerful enough to learn something as complex as a grammar,
even under the supposedly impoverished learning environment a child
encounters.  The mechanism is what Rumelhart and McClelland (of UCSD)
call the PDP approach (see their just-released MIT Press volumes, Parallel
Distributed Processing:  Explorations in the Microstructure of
Cognition).

The idea behind PDP (and other connectionist approaches to explaining
intelligent behavior) is that input from hundreds/thousands/millions
of information sources jointly combines to specify a result.  A
rule-governed system is, according to this approach, best represented
not by explicit rules (e.g., a set of productions or rewrite rules) but
by a large network of units:  input units, internal units, and output
units.  Given any set of inputs, the whole system iteratively "relaxes"
to a stable configuration (e.g., the soap bubble relaxing to
a parabola, our visual system finding one stable interpretation of
a visual illusion).
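The "relaxation" idea described above can be made concrete with a toy
Hopfield-style network (an illustration only, not the Rumelhart & McClelland
models): units repeatedly adjust to satisfy weighted constraints until no
unit wants to change.

```python
# Minimal constraint-satisfaction sketch: binary units "relax" under
# symmetric weights until the network settles into a stable state.

def relax(weights, state, steps=10):
    """Repeatedly update +1/-1 units until no unit wants to change."""
    n = len(state)
    for _ in range(steps):
        changed = False
        for i in range(n):
            net_input = sum(weights[i][j] * state[j] for j in range(n))
            new = 1 if net_input > 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break                        # a stable interpretation was found
    return state

# Units 0 and 1 support each other; both conflict with unit 2 --
# two rival "interpretations" of the same input.
W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]
print(relax(W, [1, -1, 1]))             # prints [-1, -1, 1]
```

The inconsistent starting state settles into one coherent interpretation
(unit 2 wins, units 0 and 1 switch off), which is the whole point: the
"answer" is a stable configuration, not the output of an explicit rule.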

While many/most people accept the idea that constraint-satisfaction
networks may underlie phenomena like visual perception, they are more
reluctant to see their application to language processing or language
acquisition.  There are currently (in the Rumelhart and McClelland
work  -- and I'm sure you cognitive science buffs have already rushed
to your bookstore/library!) two convincing PDP models on language,
one on sentence processing (case role assignment) and the other on
children's acquisition of past-tense morphology.  While no one has yet
tried to use this approach to explain syntactic acquisition, I see this
as the next step.


For people interested in hard empirical, cross-linguistic data that
supports a connectionist, non-nativist approach to acquisition, I
recommend *Mechanisms of Language Acquisition*, Brian MacWhinney, Ed.,
in press.

I realize I rushed so fast over the explanation of what PDP is that
people who haven't heard about it before may be lost.   I'd like to see
a discussion on this -- perhaps other people can talk about the brand
of connectionism they're encountering at their school/research/job and
what they think its benefits and limitations are  -- in
explaining the psycholinguistic facts or just in general.

Cathy Harris    "Sweating it out on the reaction time floor -- what,
                when you could be in that ole armchair theo-- ? Never mind;
                it's only til 1990!"

------------------------------

Date: 21 Aug 86 11:28:53 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Notes on AAAI '86


                              Notes on AAAI

                                Barry Kort


                                 Abstract

       The Fifth Annual AAAI Conference on Artificial Intelligence
       was held August 11-15 at the Philadelphia Civic Center.

       These notes record the author's personal impressions of the
       state of AI, and the business prospects for AI technology.
       The views expressed are those of the author and do not
       necessarily reflect the perspective or intentions of other
       individuals or organizations.

                                  * * *


       The American Association for Artificial Intelligence held
       its Fifth Annual Conference during the week of August 11,
       1986, at the Philadelphia Civic Center.

       Approximately 5000 attendees were treated to the latest
       results of this fast-growing field.  An extensive program of
       tutorials enabled the naive beginner and technical
       professional alike to rise to a common baseline of
       understanding. Research and Science Sessions concentrated on
       the theoretical underpinnings, while the complementary
       Engineering Sessions focused on reduction of theory to
       practice.

       Dr. Herbert Schorr of IBM delivered the Keynote Address.
       His message was simple and straightforward:  AI is here
       today, it's real, and it works.  The exhibit floor was a sea
       of high-end workstations, running flashy applications
       ranging from CAT scan imagery to automated fault diagnosis,
       to automated reasoning, to 3-D scene animation, to
       iconographic model-based reasoning.  Symbolics, TI, Xerox,
       Digital, HP, Sun, and other vendors exhibited state of the
       art hardware, while Intellicorp, Teknowledge, Inference,
       Carnegie-Mellon Group, and other software houses offered
       knowledge engineering power tools that make short work of
       automated reasoning.

       Knowledge representation schemata include the ubiquitous tree,
       as well as animated iconographic models of dynamic systems.
       Inductive and deductive reasoning and goal-directed logic
       appear in the guise of forward and backward chaining
       algorithms which seek the desired chain of nodes linking
       premise to predicted conclusion or hypothesis to observed
       symptoms.  Such schemata are especially well adapted to
       diagnosis of ills, be it human ailment or machine
       malfunction.
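The forward-chaining style of reasoning just described fits in a few lines
(the diagnosis rules here are hypothetical toys, not any vendor's shell;
real tools add conflict resolution, certainty factors, and backward chaining):

```python
# Minimal forward-chaining sketch: repeatedly fire any rule whose
# premises are all known facts, until nothing new can be concluded.

DIAGNOSIS_RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "flu_likely"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing until quiescent
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "fatigue"}, DIAGNOSIS_RULES)))
# prints ['cough', 'fatigue', 'fever', 'flu_likely', 'flu_suspected']
```

Backward chaining runs the same rule base in reverse: start from a hypothesis
and search for a chain of rules grounding it in observed symptoms.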

       Natural Language understanding remains a hard problem, due
       to the inscrutable ambiguity of most human-generated
       utterances.  Nevertheless, silicon can diagram sentences as
       well as a precocious fifth grader.  In limited domain
       vocabularies, the semantic content of such diagrammatic
       representations can be reliably extracted.

       Robotics and vision remain challenging fields, but advances
       in parallel architectures may clear the way for notable
       progress in scene recognition.

       Qualitative reasoning, model-based reasoning, and reasoning
       by analogy still require substantial human guidance, perhaps
       because of the difficulty of implementing the interdomain
       pattern recognition which humans know as analogy, metaphor,
       and parable.

       Interesting philosophical questions abound when AI moves
       into the fields of automated advisors and agents.  Such
       systems require the introduction of Value Systems, which may
       or may not conflict with individual preferences for
       benevolent ethics or hard-nosed business pragmatics. One
       speaker chose the provocative title, "Can Machines Be
       Intelligent If They Don't Give a Damn?"  We may be on the
       threshold of Artificial Intelligence, but we have a long way
       to go before we arrive at Artificial Wisdom.  Nevertheless,
       some progress is being made in reducing to practice such
       esoteric concepts as Theories of Equity and Justice, leading
       to the possibility of unbiased Jurisprudence.

       AI goes hand in hand with Theories of Learning and
       Instruction, and the field appears to be paying dividends in
       the art and practice of knowledge exchange, following the
       strategy first suggested by Socrates some 2500 years ago.
       The dialogue format abounds, and mixed initiative dialogues
       seem to capture the essence of mutual teaching and
       mirroring.  Perhaps sanity can be turned into an art form
       and a science.

       Belief Revision and Truth Maintenance enable systems to
       unravel confusion caused by the injection of mutually
       inconsistent inputs.  Nobody's fool, these systems let the
       user know that there's a fib in there somewhere.

       Psychology of computers becomes an issue, and the Silicon
       Syndrome of Neuroses can be detected whenever the machines
       are not taught how to think straight.  Machines are already
       sapient.  Soon they will acquire sentience, and maybe even
       free will (nothing more than a random number generator
       coupled with a value system).  Perhaps by the end of the
       Millennium (just 14 years away), the planet will see its
       first Artificial Sentient Being.  Perhaps Von Neumann knew
       what he was talking about when he wrote his cryptic volume
       entitled, On the Theory of Self-Reproducing Automata.

       There were no Cybernauts in Philadelphia this year, but many
       of the piece parts were in evidence.  Perhaps it is just a
       matter of time until the Golem takes its first step.

       In the meantime, we have entered the era of the Competent
       System, somewhat short on world-class expertise, but able to
       hold its own in today's corporate culture.  It learns about
       as fast as its human counterpart, and is infinitely
       clonable.

       Once upon a time it was felt that machines should work and
       people should think.  Now that machines can think, perhaps
       people can take more time to enjoy the state of being called
       Life.

                                  * * *

       Lincroft, NJ
       August 17, 1986

------------------------------

End of AIList Digest
********************

From csnet_gateway Sat Sep 20 00:54:25 1986
Date: Sat, 20 Sep 86 00:54:11 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #186
Status: R


AIList Digest            Friday, 19 Sep 1986      Volume 4 : Issue 186

Today's Topics:
  Cognitive Science - Commentaries on the State of AI

----------------------------------------------------------------------

Date: 29 Aug 86 01:58:30 GMT
From: hsgj@tcgould.tn.cornell.edu  (Mr. Barbecue)
Subject: Re: Notes on AAAI '86

(not really a followup article, more of a commentary)

I find it very interesting that there is so much excitement generated over
parallel processing computer systems by the AI community.  Interesting in
that the problems of AI (the intractability of language, vision, and general
cognition, to name a few) are not anywhere near limited by computational
power but by our lack of understanding.  If somebody had managed to create a
truly intelligent system, I think we would have heard about it by now,
even if the program took a month to run.  The fact of the matter is that our
knowledge of such problems is minimal.  Attempts to solve them lead to
researchers banging their heads against a very hard wall, indeed.  So what
is happening?  The field that was once A.I. is very quickly headed back to
its origins in computer science and is producing "Expert Systems" by the
droves.  The problem isn't that they aren't useful, but rather that they
are being touted as A.I. itself, while true insights into actual human thinking
are still rare (if not non-existent).

Has everybody given up?  I doubt it.  However, it seems that economic reality
has set in.  People are forced to show practical systems with everyday
applications.  Financiers can't understand why we would be overjoyed if we
could develop a system that learns like a baby, and so all the money is being
siphoned away into robotics, Expert Systems, and even spelling checkers!
(No, I don't think that welding cars together requires a great deal of true
intelligence, though technically it may be great.)

So what is one to do?  Go into cog-psych?  At least psychologists are working
on the fundamental problems that AI started, but many seem to be grasping at
straws, trying to find a simple solution (e.g., family resemblance, primary
attribute analysis, etc.)

What seems to be lacking is a cogent combination of theories.  Some attempts
have been made, but these authors basically punt on the issue, stating things
like "none of the above theories adequately explains the observed phenomena;
perhaps the solution is a combination of current hypotheses".  Very good, now
let's do that research and see if this is true!

My opinion?  Well, some current work has dealt with computer nervous systems
(Science, sometime this summer).  This is similar in form to the hypercube
systems, but the theory seems different.  Really the work is towards computer
neurons:  distributed systems in which each element contributes a little to
the final result.  Signals are not binary, but graded.  They combine with other
signals from various sources and form an output.  Again, this could be done
with a linear machine that holds partial results.  But I'm not suggesting that
this alone is a solution; it's just interesting.  My real opinion is that
without "bringing baby up", so to speak, we won't get much accomplished.  The
ultimate system will have to be able to reach out, grasp (whether visually or
physically, or whatever) and sense the world around it in a rich manner.  It
will have to be malleable, but still have certain guidelines built in.  It
must truly learn, forming a myriad of connections with past experiences and
thoughts.  In sum, it will have to be a living animal (though made of sand...)

Yes, I do think that you need the full range of systems to create a truly
intelligent system.  Helen Keller still had touch.  She could feel vibrations,
and she could use this information to create a world that was probably
perceptually much different than ours.  But she had true intelligence.
(I realize that the semantics of all these words and phrases are highly
debated; you know what I'm talking about, so don't try to be difficult!)  :)

Well, that's enough for a day.

Ted Inoue.
Cornell

--
ARPA:  hsgj%vax2.ccs.cornell.edu@cu-arpa.cs.cornell.edu
UUCP:  ihnp4!cornell!batcomputer!hsgj   BITNET:  hsgj@cornella

------------------------------

Date: 1 Sep 86 10:25:25 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Re: Notes on AAAI '86

I appreciated Ted Inoue's commentary on the State of AI.  I especially
agree with his point that a cogent combination of theories is needed.
My own betting card favors the theories of Piaget on learning, coupled
with the modern animated-graphic mixed-initiative dialogues that merge
the Socratic-style dialectic with inexpensive PC's.  See for instance
the Mind Mirror by Electronic Arts.  It's a flashy example of the clever
integration of Cognitive Psychology, Mixed Initiative Dialogues, Color
Animated Graphics, and the Software/Mindware Exchange.  Such illustrations
of the imagery in the Mind's Eye can breathe new life into the relationship
between silicon systems and their carbon-based friends.

Barry Kort
hounx!kort

------------------------------

Date: 4 Sep 86 21:39:37 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU  (Michael Sellers)
Subject: transition from AI to Cognitive Science (was: Re: Notes on
         AAAI '86)


> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community.  Interesting in
> that the problems of AI (the intractability of: language, vision, and general
> cognition to name a few) are not anywhere near limited by computational
> power but by our lack of understanding. [...]
>                The field that was once A.I. is very quickly headed back to
> its origins in computer science and is producing "Expert Systems" by the
> droves.  The problem isn't that they aren't useful, but rather that they
> are being touted as the A.I., and true insights into actual human thinking
> are still rare (if not non-existent).

Inordinate amounts of hype have long been a problem in AI; the only difference
now is that there is actually a small something there (i.e. knowledge-based
systems), so the hype is rising to truly unbelievable heights.  I don't know
that AI is returning to its roots in computer science; probably there is just
more emphasis on the area(s) where something actually *works* right now.

> Has everybody given up?  I doubt it.  However, it seems that economic reality
> has set in.  People are forced to show practical systems with everyday appli-
> cations.

Good points.  You should check out the book "The AI Business" by ...rats, it
escapes me (possibly Winston or McCarthy?).  I think it was published in late
'84 or early '85, and makes the same kinds of points that you're making here,
talking about the hype, the history, and the current state of the art and the
business.

> So what is one to do?  Go into cog-psych?  At least psychologists are working
> on the fundamental problems that AI started, but many seem to be grasping at
> straws, trying to find a simple solution (i.e., family resemblance, primary
> attribute analysis, etc.)

The Grass is Always Greener.  I started out going into neurophysiology, then
switched to cog psych because the neuro research is still at a lower level than
I wanted, and then became disillusioned because all of the psych work being
done seemed to be either super low-level or infeasible to test empirically.
So, I started looking into computers, longing to get into the world of AI.
Luckily, I stopped before I got to the point you are at now, and found
something better (no, besides Amway :-)...

> What seems to be lacking is a cogent combination of theories.  Some attempts
> have been made, but these authors basically punt on the issue, stating things
> like "none of the above theories adequately explain the observed phenomena,
> perhaps the solution is a combination of current hypotheses".  Very good, now
> let's do that research and see if this is true!

And this is exactly what is happening in the new field of Cognitive Science.
While there is still no "cogent combination of theories", things are beginning
to coalesce. (Pylyshyn described the current state of the field as Physics
searching for its Newton.  Everyone agrees that the field needs a Newton to
bring it all together, and everyone thinks that he or she is probably that
person.  The problem is, no one else agrees with you, except maybe your own
grad students.)  Cog sci is still emerging as a separate field, even though
its beginnings can probably be pegged to the late '70s or early '80s.
It is taking material, paradigms, and techniques from AI, neurology, cog psych,
anthropology, linguistics, and several other fields, and forming a new field
dedicated to the study of cognition in general.  This does not mean that
cognition should be looked at in a vacuum (as is to some degree the case with
AI), but that it can and should be examined in both natural and artificial
contexts, allowing for the difference between them.  It can and should take
into account all types and levels of cognition, from the low-level neural
processing to the highly plastic levels of linguistic and social cognitive
interaction, researching and applying these areas in artificial settings
as it becomes feasible.

>                                               [...]  My real opinion is that
> without "bringing baby up," so to speak, we won't get much accomplished.  The
> ultimate system will have to be able to reach out, grasp (whether visually,
> physically, or whatever), and sense the world around it in a rich manner.  It
> will have to be malleable, but still have certain guidelines built in.  It
> must truly learn, forming a myriad of connections with past experiences and
> thoughts.  In sum, it will have to be a living animal (though made of sand...)

This is one possibility, though not the only one.  Certainly an artificially
cogitating system without many of the abilities you mention would be different
from us, in that its primary needs (food, shelter, sensory input) would not
be the same.  This does not make these things a requirement, however.  If we
wished to build an artificial cogitator that had roughly the same sort of
world view as we have, then we would probably have to give it some way of
directly interacting with its environment through sensors and
effectors of some sort.
  I suggest that you find and peruse the last 5 or 6 years of the journal
Cognitive Science, put out by the Cognitive Science Society.  Most of what
has been written there is still fairly up-to-date, as the
field is still reaching "critical mass" in terms of theoretical quantity
and quality.  (An article by Norman, "Twelve Issues for Cognitive Science,"
appeared in this journal in 1980 (I'm not sure which issue) and discusses
many of the things you are talking about here.)

Let's hear more on this subject!

> Ted Inoue.
> Cornell

--

                Mike Sellers
        UUCP: {...your spinal column here...}!tektronix!tekecs!mikes


           INNING:  1  2  3  4  5  6  7  8  9  TOTAL
        IDEALISTS   0  0  0  0  0  0  0  0  0    1
         REALISTS   1  1  0  4  3  1  2  0  2    0

------------------------------

Date: 6 Sep 86 19:09:31 GMT
From: craig@think.com  (Craig Stanfill)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
         on AAAI '86)

> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community.  Interesting in
> that the problems of AI (the intractability of: language, vision, and general
> cognition to name a few) are not anywhere near limited by computational
> power but by our lack of understanding. [...]

For the last year, I have been working on AI on the Connection Machine,
which is a massively parallel computer.  Depending on the application,
the CM is between 100 and 1000 times faster than a Symbolics 36xx.  I
have performed some experiments on models of reasoning from memory
(Memory Based Reasoning, Stanfill and Waltz, TMC Technical Report).
Some of these experiments required 5 hours on a 32,000 processor CM.  I,
for one, do not consider a 500-5000 hour experiment on a Symbolics a
practical way to work.

More substantially, having a massively parallel machine changes the way
you think about writing programs.  When certain operations become 1000
times faster, what you put into the inner loop of a program may change
drastically.

------------------------------

Date: 7 Sep 86 16:46:51 GMT
From: clyde!watmath!watnot!watdragon!rggoebel@CAIP.RUTGERS.EDU 
      (Randy Goebel LPAIG)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
         on AAAI '86)

Mike Sellers from Tektronix in Wilsonville, Oregon writes:

| Inordinate amounts of hype have long been a problem in AI; the only difference
| now is that there is actually a small something there (i.e. knowledge-based
| systems), so the hype is rising to truly unbelievable heights.  I don't know
| that AI is returning to its roots in computer science; probably there is just
| more emphasis on the area(s) where something actually *works* right now.

I would like to remind all who don't know, or have forgotten, that the notion
of a rational artifact as a digital computer does have its roots in
computing, but the more general notion of an intelligent artifact has concerned
scientists and philosophers for much longer than the lifetime of the digital
computer.  John Haugeland's book ``AI: the very idea'' would be good reading
for those who aren't aware that there is a pre-Dartmouth history of ``AI.''

Randy Goebel
U. of Waterloo

------------------------------

End of AIList Digest
********************

From csnet_gateway Sat Sep 20 00:54:41 1986
Date: Sat, 20 Sep 86 00:54:35 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #187
Status: R


AIList Digest            Friday, 19 Sep 1986      Volume 4 : Issue 187

Today's Topics:
  Queries - Natural Language DB Interface & NL Generation &
    Production Systems & Facial Recognition & Smalltalk & Symbolics CAD &
    Lisp Machine News & MACSYMA & San Diego Speakers Wanted

----------------------------------------------------------------------

Date: 16 Sep 86 20:05:31 GMT
From: mnetor!utzoo!utcs!bnr-vpa!bnr-di!yali@seismo.css.gov
Subject: natural language DB interface

Has anyone out there any experience with
the Swan* natural language database interface
put out by Natural Language Products of Berkeley?
This system was demo-ed at AAAI this August.
I am primarily interested in the system's
ability to talk to "different databases
associated with different DBMS's"
simultaneously (quoting an information sheet
put out by NLP).
How flexible is it and how easy is it
to adapt to new domains?

======================================================
Yawar Ali
{the world}!watmath!utcsri!utcs!bnr-vpa!bnr-di!yali
======================================================

* Swan is an unregistered trademark of NLP

------------------------------

Date: Thu, 18 Sep 86 16:34:47 edt
From: lb0q@andrew.cmu.edu (Leslie Burkholder)
Subject: natural language generation

Has work been done on the problem of generating relatively idiomatic English
from sentences written in a language for first-order predicate logic?
Any pointers would be appreciated.

Leslie Burkholder
lb0q@andrew.cmu.edu

------------------------------

Date: Thu, 18 Sep 1986  17:10 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: queries about expert systems

Maybe some AI guru out there can help with the following questions:

1. Many expert systems are implemented as production systems.
In what other forms are "expert systems" implemented?

[I use the term "expert system" to describe the codification of any
process that people use to reason, plan, or make decisions as a set of
computer rules, involving a detailed description of the precise
thought processes used.  If you have a better description, please
share it.]

2. A production system is in essence a set of rules that state that
"IF X occurs, THEN take action Y."  System designers must anticipate
the set of "X" that can occur.  What if something happens that is not
anticipated in the specified set of "X"?  I assert that the most
common result in such cases is that nothing happens.  Am I right,
wrong, or off the map?
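The second point can be sketched concretely.  Below is a minimal, hypothetical
production-system loop in Python (the rule and fact names are invented for
illustration, not taken from any real expert-system shell): an input that
matches no rule's IF-part simply falls through, and nothing happens.

```python
# Minimal sketch of a production system, for illustration only.
# Each rule is a (condition, action) pair tested against an input fact.

def run_rules(rules, fact):
    """Fire the action of the first rule whose condition matches fact."""
    for condition, action in rules:
        if condition(fact):
            return action(fact)
    return None  # no rule anticipated this X: nothing happens

# Invented example rules -- not from any real system.
rules = [
    (lambda f: f == "temperature_high", lambda f: "open_valve"),
    (lambda f: f == "pressure_low",     lambda f: "sound_alarm"),
]

print(run_rules(rules, "temperature_high"))   # -> open_valve
print(run_rules(rules, "reactor_leaking"))    # -> None (unanticipated)
```

Real shells add conflict resolution, chaining, and sometimes an explicit
default rule, but absent such a default the behavior above (silent inaction
on an unanticipated input) is exactly the failure mode asked about.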

Thanks.

Herb Lin

------------------------------

Date: 11 Sep 86 20:42:14 GMT
From: ihnp4!islenet!humu!uhmanoa!aloha1!ryan@ucbvax.Berkeley.EDU (ryan)
Subject: querying a data base using an inference engine


This is a sort of banner letting the rest of the world know that we at
the Artificial Intelligence Lab at the University of Hawaii are currently
looking at the problem of querying a database using AI techniques. We will be
using a natural language front end for querying the database.  We would
appreciate any information from anyone working on or interested in the same.

 my address is

    Paul Ryan
    ...{dual,vortex,ihnp4}!islenet!aloha1!ryan
    ...nosvax!humu!islenet!aloha1!ryan

------------------------------

Date: Thu, 18 Sep 86 18:55:43 edt
From: philabs!micomvax!peters@seismo.CSS.GOV
Subject: Computer Vision

We are starting a project related to automatic classification of facial
features from photographs. If anyone out there has any info/references
related to this area please let me hear from you.

email: !philabs!micomvax!peters
mail:  Peter Srulovicz
       Philips Information Systems
       600 Dr. Philips Blvd
       St. Laurent Quebec
       Canada H4M-2S9

------------------------------

Date: 16 Sep 86 01:22:57 GMT
From: whuxcc!lcuxlm!akgua!gatech!gitpyr!krubin@bellcore.com
Subject: Smalltalk as an AI research tool?


        I am currently working on an AI project where we are
using Smalltalk-80 as our implementation language. Are there
others who have used Smalltalk to do serious AI work? If so,
and you can talk about what you have done, please respond. I
would be interested in learning how well suited the language
is for serious AI work.
        We have plans to implement an Intelligent Operator
Assistant using an IBM PC-AT running a version of Digitalk
Incorporated's Smalltalk/V.  Any comments on this software
would also be helpful (especially speed information!).


                  Kenneth S. Rubin   (404) 894-4318
               Center for Man-Machine Systems Research
             School of Industrial and Systems Engineering
                   Georgia Institute of Technology
                        Post Office Box  35826
                        Atlanta, Georgia 30332
       Majoring with: School of Information and Computer Science
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!krubin

------------------------------

Date: 14 Sep 86 11:35:00 GMT
From: osiris!chandra@uxc.cso.uiuc.edu
Subject: Wanted: CAD program for Symbolics


        CAD software for the Symbolics Machine

Hi,
        I just got a Symbolics lisp machine. I am looking for any
Public Domain design/drafting program. Being an architect I'd
like to draw stuff on my lisp machine.

        Hints, pointers, references would be appreciated.

        Thanks,

navin chandra


ARPA:    dchandra@athena.mit.edu
BITNET:  ank%cunyvms1

------------------------------

Date: 18 Sep 86 03:29:53 GMT
From: hp-sdd!ncr-sd!milano!dave@hplabs.hp.com
Subject: Lisp Machine News?

Does anyone have or know of a zwei-based interface
to news?  (If it exists, 3 to 2 it's called ZNEWS.)

Dave Bridgeland -- MCC Software Technology
  ARPA:    dave@mcc.arpa
  UUCP:    ut-sally!im4u!milano!daveb

  "Things we can be proud of as Americans:
     * Greatest number of citizens who have actually boarded a UFO
     * Many newspapers feature "JUMBLE"
     * Hourly motel rates
     * Vast majority of Elvis movies made here
     * Didn't just give up right away during World War II
             like some countries we could mention
     * Goatees & Van Dykes thought to be worn only by weenies
     * Our well-behaved golf professionals
     * Fabulous babes coast to coast"

------------------------------

Date: 15 Sep 86 16:17:00 GMT
From: uiucuxa!lenzini@uxc.cso.uiuc.edu
Subject: Wanted: MACSYMA info


Hi,

I have a friend in the nuclear eng. department who is currently working on
a problem in - I can't remember right now but that's not the point - anyway,
this problem involves the analytic solution of a rather complex integral
(I believe it's called Chen's (sp?) integral).  A while back I heard something
about a group of programs called MACSYMA that were able to solve integrals that
were previously unsolvable.  I suggested that he may want to look into the
availability of MACSYMA.  I would appreciate any information about these
programs - what they can and can't do, how they are used, how to purchase
them (preferably with a university discount), etc.

Thanks in advance,

Andy Lenzini
University of Illinois.
...pur-ee!uiucdcs!uiucuxa!lenzini

------------------------------

Date: 18 Sep 86 13:58 PDT
From: sigart@LOGICON.ARPA
Subject: Speakers wanted


The San Diego Special Interest Group on Artificial Intelligence
(SDSIGART) is looking for speakers for its regular monthly meetings.
We are presently looking for individuals who would like to give a
presentation on any AI topic during the January to April 1987
time-frame.  We typically hold our meetings on the fourth Thursday of
the month, and provide for a single presentation during the meeting.
If you anticipate being in San Diego during that time and would like to
give a presentation please contact us via E-mail at
sigart@logicon.arpa.

We cannot provide transportation reimbursement for speakers from
outside the San Diego area, but we can provide some reimbursement of
hotel/meal expenses.

                                        Thank You,
                                        Bill D'Camp

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 06:58:05 1986
Date: Sun, 21 Sep 86 06:57:56 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #188
Status: RO


AIList Digest           Saturday, 20 Sep 1986     Volume 4 : Issue 188

Today's Topics:
  Education - AI Grad Schools,
  Philosophy - Discussion of AI and Associative Memory,
  AI Tools - Turbo Prolog

----------------------------------------------------------------------

Date: 12 Sep 86 20:39:56 GMT
From: ihnp4!gargoyle!sphinx!bri5@ucbvax.Berkeley.EDU  (Eric Brill)
Subject: AI Grad Schools

A few weeks ago, I posted a request for info on good graduate schools
for AI.  I got so many requests to forward the replies I got, that I
decided to just post a summary to the net.  So here it is:


Almost everybody seemed to believe that the top 4 schools are MIT,
CMU, Stanford and Yale (not necessarily in that order).
Other schools which got at least 2 votes for being in the top 10 were
Toronto, Illinois(Urbana), UMass(Amherst), Berkeley, UCLA, UCIrvine,
UTexas(Austin).
Other schools which got one vote for being in the top 10 were
URochester, UCSD, Syracuse and Duke.

------------------------------

Date: Tue, 09 Sep 86 12:00:00 GMT+2
From: H29%DHDURZ2.BITNET@WISCVM.WISC.EDU
Subject: AI-discussion


In the last AI-lists there has been a discussion about the possibilities
of intelligent machines.
I am going to add some arguments I missed in the discussion.
1. The first claim is that there are a lot of cognitive functions of
man which can be simulated by the computer.  But one problem is that,
to my knowledge, these different functions have not yet been integrated
in one machine or superprogram.
2. There is the phenomenon of intentionality and motivation in man that
finds no directly corresponding phenomenon in the computer.
3. Man's neuronal processing is more analogue than digital, in spite of
the fact that neurons can only have two states.
Man's organisation of memory is associative rather than categorical.

  [Neurons are not two-state devices!  Even if we ignore chemical and
  physiological memory correlates and the growth and decay of synapses,
  there are the analog or temporal effects of potential buildup and the
  fact that neurons often transmit information via firing rates rather
  than single pulses.  Neurons are nonlinear but hardly bistable. -- KIL]

Let me elaborate upon these points:
Point 1: Konrad Lorenz assumes a phenomenon he called "fulguration" in
systems.  In the end this means nothing more than: the whole is more
than the sum of its parts.  If you merge all the possible functions a
computer can perform to simulate human abilities, you will get higher
functions which transcend the sum of all the lower functions.
You may eventually get a function like consciousness or even
self-consciousness.  If you define the self as a man's knowledge of
himself (his qualities, abilities, and existence), I see no general
problem in feeding this knowledge to a computer.
Real "understanding" of natural language, however, needs not only
linguistic competence but also sensory processing and recognition abilities
(visual, acoustical).  Language normally refers to objects which we
first experience by sensory input and then name.  The constructivistic
theory of human language learning by Paul Lorenzen and O. Schwemmer
(Erlanger Schule) assumes a "demonstration act" (Zeigehandlung)
constituting a fundamental element of man (child) learning
language.  Without this empirical foundation of language you will never
leave the hermeneutic circle, which drove former philosophers into
despair.
Point 2:
One difference between man and computer is that man needs food while
computers need electricity, and furthermore the computer doesn't cry
when somebody is about to pull its plug.
Nevertheless such a machine could be made: a computer, a robot, that
attacks with a weapon anybody who tries to pull its plug.  But who has
an interest in constructing such a machine?  Living organisms made by
evolution are given the primary motivation of self-preservation.  This is the
natural basis of intentionality.  Only the implementation of intentionality,
motivation, goals, and needs can create a machine that deserves the name
"intelligent".  It is intelligent by the way it reaches "its" goals.
Implementation of "meaning" needs the ability of sensory perception and
recognition, linguistic competence and understanding, and having or
simulating intentions.  To know the meaning of an object means to
understand the function of this object for man in a means-end relation
within his living context.  It means realizing for which goals or needs
the "object" can be used.
Point 3:
Analogue information processing may or may not be totally simulable by
digital processing.  Man's associative organization of memory,
however, needs storage and retrieval mechanisms other than those now
available or used by computers.
I have heard that some scientists in the States are trying to simulate
associative memory organization, but I have no further information about
that.  (Perhaps somebody can give me information or references.
Thanks in advance!)

  [Geoffrey E. Hinton and James A. Anderson (eds.), Parallel Models
  of Associative Memory, Lawrence Erlbaum Associates, Inc., Hillsdale
  NJ.  Dr. Hinton is with the Applied Psychology Unit, Cambridge England.
  -- KIL]
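  [For a flavor of that line of work, here is a toy sketch (illustrative
  only, not taken from the volume above) of a Hopfield-style associative
  memory in Python: a pattern is stored as a matrix of pairwise weights,
  and a corrupted cue is completed by repeatedly applying a thresholded
  update. -- KIL]

```python
# Toy Hopfield-style associative memory, illustrative only.
# Patterns are lists of +1/-1; weights are summed outer products
# (a Hebbian rule); recall repeats a thresholded update.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                  # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=10):
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(wij * sj for wij, sj in zip(row, s)) >= 0 else -1
             for row in w]
    return s

stored = [1, 1, -1, -1, 1, -1]
memory = train([stored])
noisy  = [1, -1, -1, -1, 1, -1]             # one element flipped
print(recall(memory, noisy))                # -> [1, 1, -1, -1, 1, -1]
```

With several stored patterns the same weight matrix holds multiple
memories, and retrieval is by content rather than by address, which is
the sense in which such models are "associative."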

Scientists working on AI should have an attitude I call "critical
optimism".  This means being critical, seeing the problems, and not being
euphoric in assuming that all problems can be solved in the next ten years.
On the other hand, it means not assuming any problem to be unsolvable, but
being optimistic that the scientific community will solve problems step by
step, one after the other, however long it may take.

Finally let me - being a psychologist - state some provocative hypotheses:
The belief that man's cognitive or intelligent abilities, including
having intentions, will never be reached by a machine is founded in the
conscious or unconscious assumption of man's godlike or godmade
uniqueness, which is supported by the religious tradition of our
culture.  It takes a lot of self-reflection, courage, and consciousness
of one's own existential fears to overcome the need to be
unique.
I would claim that the conviction mentioned above, however philosophically
sophisticated its justification may be, is only the "RATIONALIZATION" (in
the psychoanalytic meaning of the word) of understandable but
irrational and normally unconscious existential fears and needs of
human beings.


       PETER PIRRON      MAIL ADDRESS: <H29@DHDURZ2.BITNET>

                         Psychologisches Institut
                         Hauptstrasse 49-53
                         D-6900 Heidelberg
                         West Germany

------------------------------

Date: Thu 18 Sep 86 20:04:31-CDT
From: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: What's wrong with Turbo Prolog

1. Is Borland's Turbo Prolog a superset of the Clocksin &
   Mellish (C & M) standard?

On the back cover of Turbo Prolog's manual is the description
"A complete Prolog incremental compiler supporting a large
superset of Clocksin & Mellish Edinburgh standard Prolog."
This statement is not true.  On page 127 the manual says
"Turbo Prolog . . . contains virtually all the features
described in Programming in Prolog by Clocksin and Mellish."
If you read "virtually" as "about 20% of" then this statement
is true.  Turbo Prolog does use Edinburgh syntax, that is,

        :- for "if" in rules,
        capitalized names for variables,
        lower case names for symbols,
        square brackets for delimiting lists, and
        | between the head and tail of a list.

Almost all the Clocksin & Mellish predicates have different
names, different arguments, or are missing entirely from Turbo
Prolog.  For example, "var" is "free," and "get0" is
"readchar."  Differences in predicate names and arguments are
tolerable, and can be handled by a simple conversion program
or by substitutions using an editor.  They could also be
handled by adding rules that define the C & M predicates in
terms of Turbo Prolog predicates, for example,

        var(X):-free(X).

These kinds of differences are acceptable in different
implementations of Prolog.  Even C & M say that their
definition should be considered a core set of features, that
each implementation may have different syntax.  However,
Borland has done much more than just rename a few predicates.

2. Is Borland's Turbo Prolog really Prolog?

NO. Turbo Prolog lacks features that are an essential part of
any Prolog implementation and requires declarations.  Borland
has redefined Prolog to suit themselves, and not for the
better.

A key feature of Lisp and Prolog is the ability to treat
programs and data identically.  In Prolog "clause," "call,"
and "=.." are the predicates that allow programs to be treated
as data, and these are missing entirely from Turbo Prolog.
One use of this feature is in providing "how" and "why"
explanations in an expert system.  A second use is writing a
Prolog interpreter in Prolog.  This is not just a
theoretically elegant feature; it has practical value.  For a
specific domain a specialized interpreter can apply domain
knowledge to speed up execution, or an intelligent
backtracking algorithm could be implemented.  In C & M Prolog
a Prolog interpreter takes four clauses.  Borland gives an
example "interpreter" on page 150 of the Turbo Prolog manual -
nine clauses and twenty-two declarations.  However, their
"interpreter" can't deal with any clause, it can only deal
with "clauses" in a very special form.  A clause such as
likes(ellen,tennis) would have to be represented as

  clause(atom(likes,[symbol(ellen),symbol(tennis)]),[])

in Borland's "interpreter."  I don't expect "clause" to
retrieve compiled clauses, but I do expect Prolog to include
it.  By dropping it Borland has introduced a distinction
between programs and data that eliminates a key feature of
Prolog.

Turbo Prolog absolutely requires data typing.  Optional typing
would be a good feature for Prolog - it can produce better
compiled code and help with documentation.  However, required
typing is not part of any other Prolog implementation that I
know of.  Typing makes life easier for the Turbo Prolog
compiler writer at the expense of the Turbo Prolog
programmers.  A little more effort by the few compiler writers
would have simplified the work of the thousands of potential
users.  There are good Prolog compilers in existence that do
not require typing, for example, the compiler for DEC-10
Prolog.  It may also be that Borland thought they were
improving Prolog by requiring typing, but again, why not make
it optional?

Besides introducing a distinction between programs and data,
Turbo Prolog weakens the ability to construct terms at run
time.  One of the great strengths of Prolog is its ability to
do symbolic computation, and Borland has seriously weakened
this ability.  Again this omission seems to be for the
convenience of the compiler writers.  There are no predicates
corresponding to the following C & M predicates, even under
other names: "arg," "functor," "name," "=..," "atom,"
"integer," and "atomic."  These predicates are used in
programs that parse, build, and rewrite structured terms, for
example, symbolic integration and differentiation programs, or
a program that converts logical expressions to conjunctive
normal form.  The predicate "op" is not included in Turbo
Prolog.  Full functional notation must be used.  You can write
predicates to pretty print terms, and the manual gives an
example of this, but it is work that shouldn't be necessary.
Dropping "op" removed one of Prolog's strongest features for
doing symbolic computation.

Turbo Prolog introduces another distinction between clauses
defined at compile time and facts asserted at run time.
Apparently only ground terms can be asserted, and rules cannot
be asserted.  This may be partly a result of having only a
compiler and no interpreter.  The predicates for any facts to
be asserted must be declared at compile time.  This is another
unnecessary distinction for the convenience of the compiler
writers.

One other annoyance is the lack of DCG rules, and the general
difficulty of writing front ends that translate DCG rules and
other "syntactic sugar" notations to Prolog rules.

3. Is Turbo Prolog suitable for real applications?

I think Turbo Prolog could run some real applications, but one
limitation is that a maximum of 500 clauses is allowed for
each predicate.  One real application program computes the
intervals of a directed graph representing a program flow
graph.  Each node represents a program statement, and each
arc represents a potential transfer of control from the head
node to the tail node.  There is a Prolog clause for each node
and a clause for each arc.  A program with 501 statements
would exceed Turbo Prolog's limit.  I assume Borland could
increase this limit, but as it stands, this is one real
application that Turbo Prolog would not run.

4. Is there anything good about Turbo Prolog?

YES. I like having predicates for windows, drawing, and sound.
It looks easy to write some nice user interfaces using Turbo
Prolog's built in predicates.  The manual is well done, with
lots of examples.  There is easy access to the facilities of
MS-DOS.  There is a good program development environment, with
windows for editing code, running the program, and tracing.
There are also features for allowing programming teams to
create applications - modules and separate name spaces.  At
$100 the price is right.  If this were Prolog, it would be a
great product.

-- Larry Van Sickle
   cs.vansickle@r20.utexas.edu    512-471-9589

------------------------------

Date: Thu 18 Sep 86 20:12:29-CDT
From: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Simple jobs Turbo Prolog can't do

Two simple things you CANNOT do in Turbo Prolog:

1. Compute lists containing elements of different basic types.

Turbo Prolog does not let you have goals such as

  append([a,b,c],[1,2,3],L).

Turbo Prolog requires that the types of every predicate be
declared, but the typing system does not allow you to declare
types that mix basic types. Also lists like:

  [1,a]
  [2,[3,4]]
  [5,a(6)]

cannot be created in Turbo Prolog. The syntax of types is:

a)  name = basictype

    where basictype is integer, char, real, string or symbol,

b)  name = type*

    where type is either a basic type or a user defined type,
    the asterisk indicates a list,

c)  name = f1(d11,...,d1n1);f2(d21,...,d2n2);...fm(dm1,...,dmnm)

    where fi are functors and dij are types, called
    "domains."  The functors and their domains are
    alternative structures allowed in the type being
    defined.

The important thing to notice is that you cannot define a type
that has basic types as alternatives.  You can only define
alternatives for types that contain functors.  So you cannot
define types

  mytype = integer;symbol

  mylisttype = mytype*

which is what you would need to append a list of integers to a
list of symbols.

What the Turbo Prolog manual recommends for this case is to
define

  mytype = s(symbol);i(integer)

  mylisttype = mytype*

and declare append as

  append(mylisttype,mylisttype,mylisttype)

which would allow us to state the goal

  append([s(a),s(b),s(c)],[i(1),i(2),i(3)],L).

This is clumsy, kludgy, and ugly.
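To see just how much bookkeeping the workaround forces on the programmer, the same tagged-wrapper trick can be mimicked in Python (constructor names `s` and `i` are taken from the manual's example; the rest is an invented sketch, not anything from Turbo Prolog itself):

```python
# Sketch of the s(symbol)/i(integer) wrapper trick forced by Turbo
# Prolog's type system; standard Prolog needs none of this.

def s(sym):
    """Wrap a symbol in its type tag."""
    return ("s", sym)

def i(num):
    """Wrap an integer in its type tag."""
    return ("i", num)

def unwrap(term):
    """Strip the type tag to recover the original value."""
    tag, value = term
    return value

# The goal append([s(a),s(b),s(c)],[i(1),i(2),i(3)],L), in list form:
mixed = [s("a"), s("b"), s("c")] + [i(1), i(2), i(3)]
print([unwrap(t) for t in mixed])   # ['a', 'b', 'c', 1, 2, 3]
```

Every value must be wrapped on the way in and unwrapped on the way out, which is exactly the clumsiness the posting complains about.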

2. Compute expressions that contain different basic types or
   mixtures of structures and basic types.

Simplifying arithmetic expressions that contain constants and
variables seems like it should be easy in a language designed
to do symbolic computation.  In C & M Prolog some rules for
simplifying multiplication might be

  simplify(0 * X,0).
  simplify(X * 0,0).
  simplify(1 * X,X).
  simplify(X * 1,X).

In C & M Prolog you can enter goals such as

  simplify(a - 1 * (b - c),X).

Now in Turbo Prolog, because of the limited typing, you cannot
have expressions that contain both symbols and integers.  (You
also cannot have infix expressions, but that is another
issue).  Instead, you would have to do something like this:

  exprtype = i(integer);s(symbol);times(exprtype,exprtype)

and declare simplify as:

  simplify(exprtype,exprtype)

and the clauses would be:

  simplify(times(i(0),X),i(0)).
  simplify(times(X,i(0)),i(0)).
  simplify(times(i(1),X),X).
  simplify(times(X,i(1)),X).

The goal would be:

  simplify(minus(s(a),times(i(1),minus(s(b),s(c)))),X).

This should speak for itself, but I'll spell it out:
REAL Prolog can do symbolic computation involving mixtures of
symbols, numeric constants, and expressions; the programs are
simple and elegant; input and output are easy.  In Turbo
Prolog you can't even create most of the expressions that real
Prolog can; the programs are long, opaque, and clumsy; you
have to write your own predicates to read and write
expressions in infix notation.
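To make the contrast concrete, here is a hypothetical Python sketch (not from the posting) of the tagged-term simplifier: each of the four multiplication rules in the Turbo Prolog version above has a direct counterpart, and the tags `i(n)` and `s(x)` must be threaded through everything.

```python
# Sketch of the simplify rules for multiplication over tagged terms:
# ("i", n) wraps integers, ("s", x) wraps symbols,
# ("times", a, b) is a product node.

def simplify(expr):
    """Apply the four multiplication rules from the text, bottom-up."""
    if isinstance(expr, tuple) and expr[0] == "times":
        a, b = simplify(expr[1]), simplify(expr[2])
        if a == ("i", 0) or b == ("i", 0):
            return ("i", 0)      # 0 * X -> 0  and  X * 0 -> 0
        if a == ("i", 1):
            return b             # 1 * X -> X
        if b == ("i", 1):
            return a             # X * 1 -> X
        return ("times", a, b)
    return expr                  # atoms simplify to themselves

# times(i(1), times(s(b), i(0)))  ->  i(0)
print(simplify(("times", ("i", 1), ("times", ("s", "b"), ("i", 0)))))
```

Only the multiplication rules shown in the posting are handled here; a full simplifier would need cases for `minus` and the other operators as well.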

It is a shame that this product comes from a company with
a reputation for good software.  If it came
from an unknown company people would be a lot more cautious
about buying it.  Since it's from Borland, a lot of people
will assume it's good.  They are going to be disappointed.

-- Larry Van Sickle
   cs.vansickle@r20.utexas.edu    512-471-9589

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 06:57:50 1986
Date: Sun, 21 Sep 86 06:57:42 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #189
Status: RO


AIList Digest           Saturday, 20 Sep 1986     Volume 4 : Issue 189

Today's Topics:
  AI Tools - Symbolics Lisp Release 7,
  Games - Connect Four & Computer Chess News,
  Applications - Music-Research Digest,
  Contest - New Deadline for HP Contest

----------------------------------------------------------------------

Date: 2 Sep 86 20:03:34 GMT
From: mcvax!euroies!ceri@seismo.css.gov  (Ceri John Fisher)
Subject: Symbolics Lisp Release 7 (Genera)

Information requested:
Does anybody have any concrete comments on Symbolics Release 7 Common Lisp,
ZetaLisp, and the new window system?  We have Release 6 and are rather
fearfully awaiting the next release, since we have started to hear rumours of
large resource requirements and relatively poor performance (in spite of
increased ease of use).  Can anyone confirm or deny this from actual
experience?
Mail me with your comments and I will summarize to the net if there's
enough interest.
Thank you for your attention.

Ceri Fisher, Plessey (UK) Ltd, Christchurch, England.
ceri@euroies.UUCP or ..<your route to europe>!mcvax!ukc!euroies!ceri
<disclaimer, quip> -- currently under revision

------------------------------

Date: 6 Sep 86 19:02:10 GMT
From: well!jjacobs@hplabs.hp.com  (Jeffrey Jacobs)
Subject: Re: Symbolics Lisp Release 7 (Genera)


You want Common Lisp, you gotta pay the price <GRIN>!  I've heard the same
rumors...

------------------------------

Date: 15 Aug 86 16:11:25 GMT
From: mcvax!botter!klipper!victor@seismo.css.gov  (L. Victor Allis)
Subject: Information wanted.

I'm looking for any information I can get on a game which is a
more complex kind of tic-tac-toe. In the Netherlands this game
is called 'vier op een rij', in Germany 'vier gewinnt'.

  [Here it's marketed by Milton Bradley as Connect Four.  -- KIL]


Description of the game:

   'Vier op een rij' is played on a vertical 6 x 7 grid. Two players,
   white and black, the former having 21 white stones and the latter
   21 black ones, play by alternately dropping one of their stones
   into one of the 7 columns. The stone falls down as far as possible.
   The goal of the game is to get four of your stones on four
   consecutive horizontal, vertical or diagonal positions (as in
   tic-tac-toe). The first player to achieve this wins. The game is
   a draw if the grid is full and neither player has done so.
   White always has the first 'move'.
   Passing is not allowed.

   Possible situation in a game:

   ---------------
   | | | | | | | |  White (x) will lose this game since in this
   ---------------  situation he has to play the second column to
   | | | |o| | | |  prevent black (o) from winning horizontally,
   ---------------  but this will give black the possibility to
   | | | |x| | | |  win diagonally by playing the second column again.
   ---------------
   | | |o|o|o| | |
   ---------------
   | |x|x|o|x| | |
   ---------------
   |o|x|x|x|o| | |
   ---------------
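The rules above lend themselves to a compact board representation. As a from-scratch illustration (not any of the programs being asked about), a minimal Python sketch that drops stones and checks for four in a row might look like this:

```python
# Minimal Connect Four ("vier op een rij") board: 6 rows x 7 columns.
# Stones dropped into a column fall to the lowest empty cell.

ROWS, COLS = 6, 7

def new_board():
    return [[" "] * COLS for _ in range(ROWS)]   # row 0 = bottom

def drop(board, col, stone):
    """Place stone in col at the lowest empty row; return that row."""
    for row in range(ROWS):
        if board[row][col] == " ":
            board[row][col] = stone
            return row
    raise ValueError("column full")

def wins(board, stone):
    """True if stone occupies four consecutive cells in any direction."""
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for r in range(ROWS):
            for c in range(COLS):
                if all(0 <= r + i * dr < ROWS and 0 <= c + i * dc < COLS
                       and board[r + i * dr][c + i * dc] == stone
                       for i in range(4)):
                    return True
    return False

b = new_board()
for col in (0, 1, 2, 3):          # white fills four bottom cells
    drop(b, col, "x")
print(wins(b, "x"))               # True: a horizontal four
```

A game-playing program would add move search on top of this; the board and win test are the only game-specific parts.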

I would like to know if there is someone who wrote a program for
this game and any results which were obtained by this program, like:

1) Result of the game after perfect play of both sides.
2) Best opening moves for both sides.

Thanks !

Victor Allis.                             victor@klipper.UUCP
Free University of Amsterdam.
The Netherlands.

------------------------------

Date: 18 Aug 86 23:27:15 GMT
From: ihnp4!cuae2!ltuxa!ttrdc!levy@ucbvax.Berkeley.EDU  (Daniel R. Levy)
Subject: Re: Information wanted.

In article <585@klipper.UUCP>, victor@klipper.UUCP (L. Victor Allis) writes:
>I'm looking for any information I can get on a game which is a
>more complex kind of tic-tac-toe. In the Netherlands this game
>is called 'vier op een rij', in Germany 'vier gewinnt'.

On this vanilla System V R2 3B20 the game is available as /usr/games/connect4
(sorry, no source code came with it on this UNIX-source-licensed system
and even if it did it might be proprietary [ :-) ] but I hope this pointer
is better than nothing).

Please excuse me for posting rather than mailing.  My route to overseas sites
seems tenuous at best.

 -------------------------------    Disclaimer:  The views contained herein are
|       dan levy | yvel nad      |  my own and are not at all those of my em-
|         an engihacker @        |  ployer or the administrator of any computer
| at&t computer systems division |  upon which I may hack.
|        skokie, illinois        |
 --------------------------------   Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
           go for it!                   allegra,ulysses,vax135}!ttrdc!levy

------------------------------

Date: 4 Sep 86 21:42:59 GMT
From: ubc-vision!alberta!tony@uw-beaver.arpa  (Tony Marsland)
Subject: Computer Chess News

The June 1986 issue of the ICCA Journal is now being distributed.
The issue contains the following articles:
    "Intuition in Chess" by A.D. de Groot
    "Selective Search without Tears" by D. Beal
    "When will Brute-force Programs beat Kasparov?" by D. Levy
Also there is a complete report on the 5th World Computer Chess Championship
by Helmut Horacek and Ken Thompson, including all the games.

There are many other short articles, reviews and news items.
Subscriptions available from:
Jonathan Schaeffer, Computing Science Dept., Univ. of Alberta,
                    Edmonton T6G 2H1, Canada.
Cost: $15 for all four 1985 issues
      $20 per year beginning 1987, $US money order or check/cheque.
email: jonathan@alberta.uucp for more information.

------------------------------

Date: Sat, 30 Aug 86 11:00:56 GMT
From: Stephen Page
      <music-research-request%sevax.prg.oxford.ac.uk@Cs.Ucl.AC.UK>
Subject: New list: Music-Research Digest

                         COMPUTERS AND MUSIC RESEARCH

                     An electronic mail discussion group

The Music-Research electronic mail redistribution list was established after a
suggestion made at a meeting in Oxford in July 1986, to provide an effective
and fast means of bringing together musicologists, music analysts, computer
scientists, and others working on applications of computers in music research.

Initially, the list was established for people whose chief interests concern
computers and their applications to

        - music representation systems
        - information retrieval systems for musical scores
        - music printing
        - music analysis
        - musicology and ethnomusicology
        - tertiary music education
        - databases of musical information

The following areas are not the principal concern of this list, although
overlapping subjects may well be interesting:

        - primary and secondary education
        - sound generation techniques
        - composition

There are two addresses being used for this list:
   -  music-research-request@uk.ac.oxford.prg
          for requests to be added to or deleted from the list, and other
          administrivia for the moderator.

   -  music-research@uk.ac.oxford.prg
          for contributions to the list.

The above addresses are given in UK (NRS) form. For overseas users, the
INTERNET domain-style name for the moderator is
      music-research-request@prg.oxford.ac.uk

If your mailer does not support domain-style addressing, get it fixed. For the
moment, explicitly send via the London gateway, using
      music-research-request%prg.oxford.ac.uk@cs.ucl.ac.uk
   or music-research-request%prg.oxford.ac.uk@ucl-cs.arpa
UUCP users who do not have domain-style addressing may send via Kent:
      ...!ukc!ox-prg!music-research-request

------------------------------

Date: 8 Sep 86 19:46:47 GMT
From: hpcea!hpfcdc!hpfclp!hpai@hplabs.hp.com  (AI)
Subject: New deadline for HP contest

        [Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]

Hewlett-Packard has extended the submittal deadline for its AI programming
contest.  Software and entry forms must be sent on or before February 1, 1987.

In addition, originality has been added as a judging criterion.  That is,
newly written software will be weighted more heavily than ported software.

Revised rules and an entry form follow.


                           Hewlett-Packard

                        AI Programming Contest

To celebrate the release of its AI workstation, Hewlett-Packard is
sponsoring a programming contest.  Submit your public domain software
by February 1, 1987 to be considered for the following prizes:

   First prize:   One HP72445A computer   (Vectra)
   Second prize:  One HP45711B computer   (Portable Plus)
   Third prize:   One HP16C calculator    (Computer Scientist)

                        Complete rules follow.

1.  All entries must be programs of interest to the symbolic computing
or artificial intelligence communities.  They must be executable on
HP9000 Series 300 computers running the HP-UX operating system.  This
includes programs written in the Common LISP, C, Pascal, FORTRAN, or
shell script languages, or in any of our third party AI software.

2.  All entries must include source code, machine-readable
documentation, a test suite, and any special instructions necessary to
run the software.  Entries may be submitted by electronic mail or
shipped on HP formatted 1/4" Certified Data Cartridge tapes.

3.  All entries must be in the public domain and must be accompanied
by an entry form signed by the contributor(s).  Entries must be sent
on or before February 1, 1987.

4.  Only residents of the U.S. may enter.  HP employees and their
dependents are ineligible to receive prizes, but are welcome to submit
software.  In the case of team entries, each member of the team must
be eligible.  No duplicate prizes will be awarded.  Disposition of the
prize is solely the responsibility of the winning team.

5.  Entries will be judged on the basis of originality, relevance to our
user community, complexity, completeness, and ease of use.  The date of
receipt will be used as a tie-breaker.  Decision of the judges will be
final.

6.  HP cannot return tape cartridges.

7.  Selected entries will be distributed by HP on an unsupported
software tape.  This tape will be available from HP for a distribution
fee.  The contributor(s) of each entry which is selected for this tape
will receive a complimentary copy.



To enter:

Print and complete the following entry form and mail it to:

    AI Programming Contest  M.S. 99
    Hewlett-Packard
    3404 E. Harmony Road
    Fort Collins, CO  80525

Send your software on HP formatted 1/4" tape to the same address, or
send it via electronic mail to:
 hplabs!hpfcla!aicontest  or   ihnp4!hpfcla!aicontest

  [Form deleted: write to the author or check the Prolog Digest.  I generally
  omit entry forms and conference reservation coupons to save bandwidth,
  reduce storage space, and avoid annoying those with slow terminals
  or expensive communication links. -- KIL]

------------------------------

End of AIList Digest
********************

From csnet_gateway Sat Sep 20 18:52:44 1986
Date: Sat, 20 Sep 86 18:52:34 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #190
Status: RO


AIList Digest           Saturday, 20 Sep 1986     Volume 4 : Issue 190

Today's Topics:
  AI Tools - Xerox Dandelion vs. Symbolics

----------------------------------------------------------------------

Date: 4 Sep 86 14:27:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Xerox Dandelion vs. Symbolics?


Why do people choose Symbolics/ZetaLisp/CommonLisp over
Xerox Dandelion/Interlisp?

I have been "brought up" on Interlisp and had virtually no exposure to
Maclisp derivatives, but more to the point, I've been brought up on the
Xerox Dandelion lisp machine and never used a Symbolics.  Every chance I
get, I try to find out what a Symbolics/Zetalisp machine has that the
Dandelion doesn't.  So far I have found only the following:

1)  More powerful machine (but less power per dollar).

2)  The standard of Commonlisp (only the past couple years).

3)  People are ignorant of what the Dandelion has to offer.

4)  Edit/debug cycle (and editor) very similar to old standard systems
    such as Unix/C/Emacs or TOPS/Pascal/Emacs, and therefore easier
    for beginners with previous experience.

I have found a large number of what seem to be advantages of the Xerox
Dandelion Interlisp system over the Symbolics.  I won't post anything
now because this already is too much like an ad for Xerox, but you might
get me to post some separately.

I am not personally affiliated with Xerox (although other parts of my
company are).  I am posting this because I am genuinely curious to find
out what I am missing, if anything.

By the way, the Interlisp system on the Dandelion is about 5 megabytes
(it varies depending on how much extra stuff you load in - I've never
seen the system get as large as 6 Mb).  I hear that Zetalisp is 24 Mb.
Is that true?  What is in it that takes so much space?

Steven J. Clark, Siemens Research and Technology Laboratory etc.
{ihnp4!princeton | topaz}!siemens!steve

something like this ought to work from ARPANET:  steve@siemens@spice.cs.cmu
(i.e. some machines at CMU know siemens).

------------------------------

Date: 5 Sep 86 16:38:57 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU  (Michael Sellers)
Subject: Re: Xerox Dandelion vs. Symbolics?  [vs. Tek 4400 series]


> Why do people choose Symbolics/ZetaLisp/CommonLisp over
> Xerox Dandelion/Interlisp?

Maybe I'm getting in over my head (and this is not unbiased), but what
about Tek's 4400 series (I think they have CommonLisp & Franz Lisp, but
I could be wrong)?  I was under the impression that they offered much
more bang for the buck than did the other major AI workstation folks.
Have you seen these and decided they are not what you want, or are you
unaware of their capabilities/cost?

> ...Dandelion Interlisp system over the Symbolics.  I won't post anything
> now because this already is too much like an ad for Xerox, but you might
> get me to post some separately.

Maybe, if we're going to have testimonials, we could nudge someone from
Tek's 4400 group (I know some of them are on the net) into giving us a
rundown on their capabilities.

> I am not personally affiliated with Xerox (although other parts of my
> company are).  I am posting this because I am genuinely curious to find
> out what I am missing, if anything.

I am personally affiliated with Tek (in a paycheck sort of relationship),
though not with the group that makes the 4400 series of AI machines.  I did
have one on my desk for a while, though (sigh), and was impressed.  I think
you're missing a fair amount :-).

> Steven J. Clark, Siemens Research and Technology Laboratory etc.

                Mike Sellers
        UUCP: {...your spinal column here...}!tektronix!tekecs!mikes


           INNING:  1  2  3  4  5  6  7  8  9  TOTAL
        IDEALISTS   0  0  0  0  0  0  0  0  0    1
         REALISTS   1  1  0  4  3  1  2  0  2    0

------------------------------

Date: 5 Sep 86 17:27:54 GMT
From: gatech!royt@seismo.css.gov  (Roy M Turner)
Subject: Re: Xerox Dandelion vs. Symbolics?

In article <25800003@siemens.UUCP> steve@siemens.UUCP writes:
>
>Every chance I
>get, I try to find out what a Symbolics/Zetalisp machine has that the
>Dandelion doesn't.  So far I have found only the following:
>...
>Steven J. Clark, Siemens Research and Technology Laboratory etc.
>{ihnp4!princeton | topaz}!siemens!steve
>

As a user of Symbolics Lisp machines, I will try to answer some of Steve's
comments.  We have had Symbolics machines here since before I started on my
degree two years ago; we recently were given thirteen Dandelions and two
DandyTigers by Xerox.  We use the Symbolics as our research machines, and the
Xerox machines for teaching AI.

The Symbolics are more powerful, as Steve says, and quite possibly he is right
about the power per dollar being less for them than for Xerox; since the Xerox
machines were free to us, certainly he's right in our case! :-)  However, I
find the Dandelions abysmally slow for even small Lisp programs, on the order
of the ones we use in teaching (GPS (baby version), micro versions of SAM,
ELI, etc.).  To contemplate using them for the very large programs that we
develop as our research would be absurd--in my opinion, of course.

The "standard" of CommonLisp will (so Xerox tells us) be available for the
Dandelions soon...'course, they've been saying that for some time now :-).  So
the two machines may potentially be equal on that basis.  ZetaLisp is quite
close to CommonLisp (since it was one of the dialects Common Lisp is based
on), and also close to other major dialects of lisp--Maclisp, etc.--enough so
that I've never had any trouble switching between it and other lisps...with
one exception--you guessed it, Interlisp-D.  I realize that whatever you are
used to colors your thinking, but Lord, that lisp seems weird to me!  I mean,
comments that return values?? Gimme a break!

"People are ignorant of what the Dandelion has to offer."  I agree.  I'm one
of the people.  It has nice windows, much less complicated than Symbolics.
MasterScope is nice, too.  So is the structure editor, but that is not too
much of a problem to write on any other lisp machine, and is somewhat
confusing to learn (at least, that's the attitude I perceive in the students).
What the Dandelions *lack*, however, is any decent file manipulation
facilities (perhaps Common Lisp will fix this), a nice way of handling
processes, a communications package that works (IP-TCP, at least the copy we
received, will trash the hard disk when our UNIX machines write to the
DandyTigers...the only thing that works even marginally well is when we send
files from the Symbolics!  Also, the translation portion of the communication
package leaves extraneous line-feeds, etc., lying about in the received file),
and A DECENT EDITOR! Which brings us to the next point made by Steve:

>4)  Edit/debug cycle (and editor) very similar to old standard systems
>    such as Unix/C/Emacs or TOPS/Pascal/Emacs, and therefore easier
>    for beginners with previous experience.

This is true.  However, it is also easier for experts and semi-experts (like
me) who may or may not have had prior experience with EMACS.  The Dandelions
offer a structure editor (and Tedit for text, but that doesn't count) and
that's it...if you want to edit something, you do it function by function.
Typically, what I do and what other people do on the Xerox machines is enter a
function in the lisp window, which makes it very difficult to keep track of
what you are doing in the function, and makes it mandatory that you enter
one function at a time.  Also, the function is immediately evaluated (the
defineq is, that is) and becomes part of your environment.  Heaven help you if
you didn't really mean to do it!  At least with ZMACS you can look over a file
before evaluating it.  Another gripe.  Many of our programs used property
lists, laboriously entered via the lisp interactor.  We do a makefile, and
voila--next time we load the file, the properties aren't there!  This has yet
to happen when something is put in an edit buffer and saved to disk on the
Symbolics.  Perhaps there is a way of editing on the Xerox machines that lends
itself to editing files (and multiple files at once), so that large programs
can be entered, edited, and documented (Interlisp-D comments are rather bad
for actually documenting code) easily...if so, I haven't found it.

Another point in Symbolics favor: reliability.  Granted, it sometimes isn't
that great for Symbolics, either, but we have had numerous, *numerous*
software and hardware failures on the Dandelions.  It's so bad that we have to
make sure the students save their functions to disk often, and have even had
to teach them how to copy sysouts and handle dead machines, since the machines
lock up from time to time with no apparent cause.  And the students must be
cautioned not to save their stuff only to one place, but to save it to the
file server, a floppy, and anywhere else they can, since floppies are trashed
quite often.  Dribble to the hard disk, forget to turn dribble off, there goes
the hard disk... Type (logout t) on the Dandelions to cause it not to save
your world, and there goes the Dandelion (it works on the DandyTigers).

About worlds and sysouts.  The Symbolics has a 24-30 meg world, something like
that.  This is *not* just lisp--it is your virtual memory, just as it is in a
Xerox Sysout.  The difference in size reflects the amount of space you have at
your disposal when creating conses, not the relative sizes of system software
(though I imagine ZetaLisp is larger than Interlisp-D).  You do not
necessarily save a world each time you logout from a Symbolics; you do on a
Dandelion...thus the next user who reboots a Symbolics gets a clean lisp,
whereas the next user of a Dandelion gets what was there before unless he
first copies another sysout and boots off of it.  It is, however, much harder
to save a world on the Symbolics than on the Xerox machines.

Well, I suppose I have sounded like a salesman for Symbolics.  I do not mean
to imply that Symbolics machines are without faults, nor do I mean to say that
Xerox machines are without merit!  We are quite grateful for the gift of the
Xerox machines; they are useful for teaching.  I just tried to present the
opinions of one Symbolics-jaded lisp machine user.

Back to the Symbolics machine now...I suppose that the DandyTiger beside it
will bite me! :-)

Roy

------------------------------

Date: 6 Sep 86 22:36:43 GMT
From: jade!violet.berkeley.edu!mkent@ucbvax.Berkeley.EDU
Subject: Re: Xerox Dandelion vs. Symbolics?


   As a long-term user of Interlisp-D, I'd be very interested in hearing an
*informed* comparison of it with ZetaLisp.  However, I'm not particularly
interested in hearing what an experienced Zetalisp user with a couple of
hours of Interlisp experience has to say on the topic, other than in
regard to issues of transfer and learnability.  I spent about 4 days using
the Symbolics, and my initial reaction was that the user interface was out
of the stone age.  But I realize this has more to do with *my* background
than with Zetalisp itself.
   Is there anyone out there with *non-trivial* experience with *both*
environments who can shed some light on the subject?

                        Marty Kent

"You must perfect the Napoleon before they finish Beef Wellington!  The
future of Europe hangs in the balance..."

------------------------------

Date: 9 Sep 86 06:14:00 GMT
From: uiucdcsp!hogge@a.cs.uiuc.edu
Subject: Re: Xerox Dandelion vs. Symbolics?


>...I spent about 4 days using
>the Symbolics, and my initial reaction was that the user interface was out
>of the stone age.  But I realize this has more to do with *my* background
>than with Zetalisp itself.

Four days *might* be enough time to familiarize yourself with the help
mechanisms, if that's specifically what you were concentrating on doing.
Once you learn the help mechanisms (which aren't bundled all that nicely and
are rarely visible on the screen), your opinion of the user interface will
grow monotonically with use.  If you are interested in having more visible
help mechanisms for first-time users, check out what the TI Explorer adds to
the traditional Zetalisp environment.  LMI and Sperry also provide their own
versions of the environment.

--John

------------------------------

Date: 10 Sep 86 10:35:40 GMT
From: mob@MEDIA-LAB.MIT.EDU  (Mario O. Bourgoin)
Subject: Re: Xerox Dandelion vs. Symbolics?

In article <3500016@uiucdcsp>, hogge@uiucdcsp.CS.UIUC.EDU writes:
> >...I spent about 4 days using
> >the Symbolics, and my initial reaction was that the user interface was out
> >of the stone age.....
>
> Four days *might* be enough time to familiarize yourself with the help
> mechanisms, if that's specifically what you were concentrating on doing.

Four days to learn the help mechanisms?  Come on, an acceptable user
interface should give you control of help within minutes _not days_.
Seriously folks, it took me less than 10 seconds to learn about
ZMACS's apropos on the old CADRs, and before the end of the day I knew
about a lot more.  Have you ever used the "help" key?  The Symbolics's
software isn't much different from the CADR's.  I'll grant that the
lispm's presentation of information isn't that obvious or elegant, but
it isn't stone age and doesn't require 4 days to get a handle on.

If you're arguing internals, I haven't worked with the Dandelion so I
can't provide an opinion on it.  The CADR's user interface software was
certainly featureful and appeared to my eyes to come from a different
school than what I later saw of Xerox's software.  It is useful and
manipulable but didn't look intended to be programmed by anyone just
off the street.  If you want to learn the internals of the user
interface, _then_ I'll grant you four days (and more).

--Mario O. Bourgoin

------------------------------

Date: 10 Sep 86 15:23:29 GMT
From: milano!Dave@im4u.utexas.edu
Subject: Re:  36xx vs. Xerox


A few to add to pro-36xx list:

    5.  Reliable hardware

    6.  Reliable software

    7.  Good service

A year ago, I was on a project which used
Dandeanimals.  As a group, they were up about 60% of the time, and
there were days when all 5 were down.  The extra screw was that
the first level of repair was a photocopier repairman.  It always
took several days before we got people who knew something about the
machines.

Dave Bridgeland -- MCC Software Technology  (Standard Disclaimer)
  ARPA:    dave@mcc.arpa
  UUCP:    ut-sally!im4u!milano!daveb

  "Things we can be proud of as Americans:
     * Greatest number of citizens who have actually boarded a UFO
     * Many newspapers feature "JUMBLE"
     * Hourly motel rates
     * Vast majority of Elvis movies made here
     * Didn't just give up right away during World War II
             like some countries we could mention
     * Goatees & Van Dykes thought to be worn only by weenies
     * Our well-behaved golf professionals
     * Fabulous babes coast to coast"

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 06:58:19 1986
Date: Sun, 21 Sep 86 06:58:09 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #191
Status: RO


AIList Digest           Saturday, 20 Sep 1986     Volume 4 : Issue 191

Today's Topics:
  AI Tools - Xerox Dandelion vs. Symbolics

----------------------------------------------------------------------

Date: 8 Sep 86 17:35:52 GMT
From: hpcea!hpfcdc!hpcnoe!jad@hplabs.hp.com  (John Dilley)
Subject: Re: Xerox Dandelion vs. Symbolics?  [vs. Tek 4400 series]

> Why do people choose Symbolics/ZetaLisp/CommonLisp over
> Xerox Dandelion/Interlisp?
> ...
> 3)  People are ignorant of what the Dandelion has to offer.

        I have a file of quotes, one of which has to do with this
        problem Xerox seems to have.  I've heard great things about
        Dandelion/Interlisp, and their Smalltalk environments, but have
        never seen one of these machines in "real life" (whatever that
        is).  Anyway, the quote I was referring to is:

"It doesn't matter how great the computer is if nobody buys it. Xerox
proved that."
                                --      Chris Espinosa

        And while we're at it ... remember Apple?

"One of the things we really learned with Lisa and from looking at what
 Xerox has done at PARC was that we could construct elegant, simple systems
 based on just a bit map..."
                                --      Steve Jobs

        Seems like Xerox needed more advertising or something.  It's a
        shame to see such nice machines go unnoticed by the general
        public, especially considering what choices we're often left with.

                              --      jad      --
                                 John A Dilley

Phone:                           (303)229-2787
Email:                     {ihnp4,hplabs} !hpfcla!jad
(ARPA):                      hpcnoe!jad@hplabs.ARPA
Disclaimer:     My employer has no clue that I'm going to send this.

------------------------------

Date: 11 Sep 86 17:58:23 GMT
From: gatech!royt@seismo.css.gov  (Roy M Turner)
Subject: Re: Xerox Dandelion vs. Symbolics?

In response to a prior posting by me, Marty (mkent@violet.berkeley.edu) writes:

>
>   As a long-term user of Interlisp-D, I'd be very interested in hearing an
>*informed* comparison of it with ZetaLisp.  However, I'm not particularly
>interested in hearing what an experienced Zetalisp user with a couple of
>hours of Interlisp experience has to say on the topic...
>  ...

Who, me? :-)
If I was unclear in my posting, I apologize.  I have had a bit more than two
hours of experience w/ Dandelions.  I used them in a class I was taking, and
also was partly responsible for helping new users and for maintaining some
of the software on them.  Altogether about 4 months of fairly constant use.

Another posting said we were using outdated software; that is undoubtedly
correct, as we just got Coda; we were using Intermezzo.  Some problems
are probably fixed.  However, we have not received the new ip-tcp from
Xerox...but, what do you expect with free machines? :-)

Roy

Above opinions my own...'course, they *should* be everyone's! :-)

Roy Turner
School of Information and Computer Science
Georgia Institute of Technology, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!royt

------------------------------

Date: 12 Sep 86 14:58:07 GMT
From: wdmcu%cunyvm.bitnet@ucbvax.Berkeley.EDU
Subject: Re: Xerox Dandelion vs. Symbolics?

In article <3500016@uiucdcsp>, hogge@uiucdcsp.CS.UIUC.EDU says:

>Once you learn the help mechanisms (which aren't bundled all that nicely and
>are rarely visible on the screen), your opinion of the user interface will
>grow monotonically with use.  If you are interested in having more visible
      ^^^^^^^^^^^^^
Could you please define this word in this context?
Thanks.
(This is a serious question)
/*--------------------------------------------------------------------*/
/* Bill Michtom - work: (212) 903-3685 home: (718) 788-5946           */
/*                                                                    */
/*      WDMCU@CUNYVM (Bitnet)        Timelessness is transient        */
/*      BILL@BITNIC  (Bitnet)                                         */
/*                                                                    */
/*        Never blame on malice that which can be adequately          */
/*                 explained by stupidity.                            */
/*    A conclusion is the place where you got tired of thinking.      */
/*--------------------------------------------------------------------*/

------------------------------

Date: 12 Sep 86 07:31:00 GMT
From: uiucdcsp!hogge@a.cs.uiuc.edu
Subject: Re: Xerox Dandelion vs. Symbolics?


>> Four days *might* be enough time to familiarize yourself with the help
>> mechanisms, if that's specifically what you were concentrating on doing.
>
>Four days to learn the help mechanisms?   Come on, an  acceptable user
>interface should give you control  of help within  minutes _not days_.
>Seriously  folks, it took  me  less than 10   seconds to  learn  about
>ZMACS's apropos on the old CADRs and before the end of the day, I knew
>about a lot more.  Have you ever used the "help" key?
>software isn't much  different from the CADR's.  I'll  grant that  the
>lispm's presentation of information isn't that obvious or elegant  but
>it isn't stone age and doesn't require 4 days to get a handle on.

There's more subtle help available on the machine than just the help key,
and my experience is that it takes a long time for one to learn the
mechanisms that are there.  The HELP key *is* the main source of help, but not
the only source.  Examples include: 1. use of Zmacs meta-point to find
examples of how to do things (such as hack windows) from the system source,
2. use of c-/ in the Zmacs minibuffer for listing command completions (and
what a drag if you don't know about this command)  3. the importance of
reading who-line documentation  4. use of the Apropos function to hunt down
useful functions, as well as WHO-CALLS  5. use of the various Lisp Machine
manufacturer's custom help mechanisms, such as the Symbolics flavor examiner
and documentation examiner, or TI's Lisp-completion input editor commands and
Suggestions Menus.

The Lisp Machine is a big system, and there's lots of good help available.
But it isn't trivial learning how to get it nor when to seek it.

--John

------------------------------

Date: 12 Sep 86 14:42:58 GMT
From: ihnp4!wucs!sbc@ucbvax.Berkeley.EDU  (Steve Cousins)
Subject: Re: Xerox Dandelion vs. Symbolics?

In article <322@mit-amt.MIT.EDU> mob@mit-amt.UUCP writes:
>... It  is useful and
>manipulable but didn't look  intended to be  programmed by anyone just
>off the street.   If you want  to  learn the  internals  of  the  user
>interface, _then_ i'll grant you four days (and more).
>
>--Mario O. Bourgoin

I think you could argue that *no* machine (AI or otherwise) can be programmed
by anyone just off the street :-).  I haven't used the Symbolics, but my
view of the Dandelion has changed drastically since taking a course on it
by Xerox.  The interface is very powerful and well-integrated, but the
"infant mortality curve" (the time to get good enough not to crash the
machines) is somewhat high.  [Disclaimer:  These machines are supposed to
be much better when networked than stand-alone.  My change in attitude
occurred just as we got ours on the network, and I'm not sure how much
to attribute to the class, and how much to attribute to the network].

I like the Dandelion now, but the first 4 days did not give me a good
impression of the machine.  There is a lot to say about learning a new machine
from a guru...

Steve Cousins                   ...ihnp4!wucs!sbc or sbc@wucs.UUCP
Washington University

------------------------------

Date: 15 Sep 86 12:58:18 GMT
From: clyde!watmath!watnot!watmum!rgatkinson@caip.rutgers.edu 
      (Robert Atkinson)
Subject: Re: Xerox Dandelion vs. Symbolics?  [vs. Tek 4400 series]

In article <580001@hpcnoe.UUCP> jad@hpcnoe.UUCP (John Dilley) writes:
>> Why do people choose Symbolics/ZetaLisp/CommonLisp over
>> Xerox Dandelion/Interlisp?
>> ...
>> 3)  People are ignorant of what the Dandelion has to offer.
>
>       I have a file of quotes, one of which has to do with this
>       problem Xerox seems to have.  I've heard great things about
>       Dandelion/Interlisp, and their Smalltalk environments, but have
>       never seen one of these machines in "real life" (whatever that
...

        Smalltalk is now (finally!) available from Xerox.  An organization
        known as Parc Place Systems is now licensing both the virtual
        image and virtual machine implementations for Suns and other
        workstations.  For further info contact:
                Duane Bay
                Parc Place Systems
                3333 Coyote Hill Rd
                Palo Alto, CA 94304

        -bob

------------------------------

Date: 17 Sep 86 16:49:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Dandelion vs Symbolics


I have received enough misinformation about Dandelions and Symbolics
machines (by net and mail) that I feel forced to reply.  This is not,
however, the last word I have to say.  I like to keep the net in suspense,
acting like I'm saving the BIG REVELATION for later.

Key: S = Symbolics, X = Xerox Dandelions
     - point against X, for S
     + or = point for X, against S
     * misinformation against X, fact in favor (my opinion of course)
     ? point not classifiable in previous categories

A writer who prefers to remain anonymous pointed out:
- If your system is bigger than 32 Mb, it can't be done on a Xerox machine.
- It takes a great deal of effort to get good numerical performance on X.
- X. editor is slow on big functions & lists.  My opinion is that it is
  bad programming style to have such large functions.  However, sometimes
  the application dictates and so this is a point.
* "Garbage collection is much more sophisticated on Symbolics" To my
  knowledge this is absolutely false.  S. talks about their garbage
  collection more, but X's is better.  Discuss this more later if you want.
* Preference for Zmacs with magic key sequences to load or compile portions
  of a file, over Dandelion.  People who never learn how to use the X system
  right have this opinion.  more later.
* "Symbolics system extra tools better integrated"  Again, to my knowledge
  this is false.  I know people who say no two tools of S. work together
  without modification.  I have had virtually no trouble with diverse X.
  tools working together.
? "S. has more tools and functions available e.g. matrix pkg."  On the other
  hand I have heard S. described as a "kitchen sink" system full of many
  slightly different versions of the same thing.

There is a general belief that the reason the X system is around 5 - 6 Mb
vs. S. around 24 is that S. includes more tools & packages.
+ When you load most of the biggest tools & packages into the X
  system, you are still only around 6 - 7 Mb!
+ If your network is set up reasonably, then it is trivial to load whatever
  packages you want.  It is very nice NOT to have junk cluttering up your
  system that you don't want.
? "The difference in size reflects how much space you have for CONSes, etc."
  Huh?  I have 20Mb available, yet I find myself actually using less than
  7Mb.  My world is 7Mb.  If I CONS a list 3 Mb long, my world will be 10Mb.

Royt@gatech had some "interesting" observations:
+ Performance per dollar:  you can get at least 5 X machines for the cost
  of a single S machine.  AT LEAST.  Both types prefer to be on networks
  with fileservers etc., which adds overhead to both.
? X abysmally slow for baby GPS etc.  My guess is that whoever ported/wrote
  the software didn't know how to get performance out of the X machines.
  It's not too hard, but it's not always obvious either.
= Xerox is getting on the Commonlisp bandwagon only a little late.  But how
  "common" is Commonlisp when window packages are machine dependent?
= For every quirk you find in Interlisp (".. Lord, that lisp seems weird to
  me!  I mean, comments that return values??"), I can find one in Commonlisp.
  (Atoms whose print names are 1.2 for example.)
+ X has nice windows, less complicated than S.  No one I know has ever crashed
  an X machine by messing with the windows.  The opposite holds for S. machines.
+ structure editor on X machine, none on S.
* "Dandelions *lack* decent file manipulation..."  Wrong, comment later.
? he has bad experience with the old IP/TCP package.  Me too, but the new
  one works great.  (The X NS protocols actually are quite good but the rest
  of the world doesn't speak them :-().
? "..Typically, what I do and what other people do .. is enter a function in
  the lisp window, which makes it very difficult ..."  Didn't you realise
  you must be doing something wrong?  That's not how you enter functions!
  You give other examples of how you and your cohorts don't know how to
  use the Xerox system right.  You're too stuck on the old C & Fortran
  kinds of editing and saving stuff.
* He goes on about reliability of X being the pits.  Every person I have
  known who learned to use the X machine caused it to crash in various
  ways, but by the time (s)he had enough experience to be able to explain
  what he did to someone else, the machine no longer crashed.  I guess
  the X machines have a "novice detector".  My understanding is that
  S has its problems too.

One guy had bad experience with KEE, which was developed on X.  I do not
think his experience is representative.  What he did say was that it kept
popping up windows he didn't want; X systems make much more use of
sophisticated window and graphic oriented tools and interfaces than S,
but it doesn't often pop up useless windows in general.

Dave@milano thinks S offers reliable hardware, reliable software, and
good service that X doesn't.  WRONG!  At his site, they were obviously
doing something systematically wrong with their machines, and they didn't
get a good repairman.  I can give you horror stories about Symbolics, too,
but I have some pretty reliable points:
+ At a site I know they have around 20 S.  They have sophisticated users
  and they do their own board swapping.  Still they have 10% downtime.
+ At my site we have very roughly 20 machine-years with X.  Total downtime
  less than 2 machine weeks.
+ S. has such hardware problems that a) they have a "lemon" program where
  you can return your machine for a new one, b) their service contracts
  are OUTRAGEOUSLY EXPENSIVE!

These lisp machines are very complex systems.  If you don't have someone
teach you, who already knows the right ways to use the machine, then it
will take you more than 4 months to learn how to use it to the best
advantage.  Hell, I've been using a Dandelion almost constantly for close
to three years and there are still subsystems that I only know superficially,
and which I know I could make better use of!  If the same isn't true of
Symbolics it can only be because the environment is far less rich.  It is
not difficult to learn these subsystems; the problem is there's just SO
MUCH to learn.  Interlisp documentation was just re-done and it's 4.5 inches
thick!  (Used to be only 2.25)

Finally, I will expound a little on why Xerox is better than Symbolics.
The Xerox file system and edit/debug cycle is far superior to an old-
fashioned standard system like Symbolics which has a character-oriented
editor like Zmacs.  The hard part for many people in learning the Xerox file
system is that they first have to forget what they know about editors and
files.  A lot of people are religious about their editors, so this step
can be nearly impossible.  Secondly, the documentation until the new version
was suitable primarily for people who already knew what was going on.  That
hurt a lot.  (It took me maybe 1.5 years before I really got control of the
file package, but I was trying to learn Lisp in the first place, and
everything else at the same time.)  Now it's much much faster to learn.

The old notion of files and editors is like assembly language.  Zmacs with
magic key sequences to compile regions etc. is like a modern, good assembler
with powerful macros and data structures and so forth.  Xerox's file system
is like Fortran/Pascal/C.  Ask the modern assembly programmer what he sees
in Fortran etc. and he'll say "nothing".  It'll be hard for him to learn.
He's used to the finer grain of control over the machine that assembly gives
him and he doesn't understand how to take advantage of the higher level
features of the Fortran etc. language.  Before you flame at me too much,
remember I am analogizing to a modern, powerful assembler not the trash
you used 5 years ago on your TRS-80.  The Xerox file package treats a file
as a database of function definitions, variable values, etc., and gives you
plenty of power to deal with them as databases.  This note is long enough
and I don't know what else to say so I'll drop this topic somewhat unfinished
(but I will NOT give lessons on how to use the Xerox file package).

A final final note:  the guy down the hall from me has used S. for some
years and now has to learn X.  He isn't complaining too much.  I hope he'll
post his own remarks soon, but I've got to relate one story.  I wanted to
show him something, and of course when I went to run it it didn't work
right.  As I spent a minute or two eradicating the bug, he was impressed
by the use of display-oriented tools on the Dandelion.  He said, "Symbolics
can't even come close."

Steven J. Clark, Siemens RTL
{ihnp4!princeton or topaz}!siemens!steve

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 18:42:51 1986
Date: Sun, 21 Sep 86 18:42:43 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #192
Status: RO


AIList Digest            Sunday, 21 Sep 1986      Volume 4 : Issue 192

Today's Topics:
  Conferences - AI and Law &
    Logic in Computer Science &
    SIGIR R&D in Information Retrieval &
    Logical Solutions to the Frame Problem &
    CSCW '86 Program

----------------------------------------------------------------------

Date: 13 Aug 86 20:36:33 EDT
From: MCCARTY@RED.RUTGERS.EDU
Subject: Conference - AI and Law


                               CALL FOR PAPERS:

                       First International Conference on
                        ARTIFICIAL INTELLIGENCE AND LAW

                                May 27-29, 1987
                            Northeastern University
                          Boston, Massachusetts, USA

In recent years there has been an increased interest in the applications of
artificial intelligence to law.  Some of this interest is due to the potential
practical applications:  A number of researchers are developing legal expert
systems, intended as an aid to lawyers and judges; other researchers are
developing conceptual legal retrieval systems, intended as a complement to the
existing full-text legal retrieval systems.  But the problems in this field are
very difficult.  The natural language of the law is exceedingly complex, and it
is grounded in the fundamental patterns of human common sense reasoning.  Thus,
many researchers have also adopted the law as an ideal problem domain in which
to tackle some of the basic theoretical issues in AI:  the representation of
common sense concepts; the process of reasoning with concrete examples; the
construction and use of analogies; etc.  There is reason to believe that a
thorough interdisciplinary approach to these problems will have significance
for both fields, with both practical and theoretical benefits.

The purpose of this First International Conference on Artificial Intelligence
and Law is to stimulate further collaboration between AI researchers and
lawyers, and to provide a forum for the latest research results in the field.
The conference is sponsored by the Center for Law and Computer Science at
Northeastern University.  The General Chair is: Carole D. Hafner, College of
Computer Science, Northeastern University, 360 Huntington Avenue, Boston MA
02115, USA; (617) 437-5116 or (617) 437-2462; hafner.northeastern@csnet-relay.

Authors are invited to contribute papers on the following topics:

   - Legal Expert Systems
   - Conceptual Legal Retrieval Systems
   - Automatic Processing of Natural Legal Texts
   - Computational Models of Legal Reasoning

In addition, papers on the relevant theoretical issues in AI are also invited,
if the relationship to the law can be clearly demonstrated.  It is important
that authors identify the original contributions presented in their papers, and
that they include a comparison with previous work.  Each submission will be
reviewed by at least three members of the Program Committee (listed below), and
judged as to its originality, quality, and significance.

Authors should submit six (6) copies of an Extended Abstract (6 to 8 pages) by
January 15, 1987, to the Program Chair:  L. Thorne McCarty, Department of
Computer Science, Rutgers University, New Brunswick NJ 08903, USA; (201)
932-2657; mccarty@rutgers.arpa.  Notification of acceptance or rejection will
be sent out by March 1, 1987.  Final camera-ready copy of the complete paper
(up to 15 pages) will be due by April 15, 1987.

Conference Chair:        Carole D. Hafner         Northeastern University

Program Chair:           L. Thorne McCarty        Rutgers University

Program Committee:       Donald H. Berman         Northeastern University
                         Michael G. Dyer          UCLA
                         Edwina L. Rissland       University of Massachusetts
                         Marek J. Sergot          Imperial College, London
                         Donald A. Waterman       The RAND Corporation

------------------------------

Date: Tue, 9 Sep 86 09:26:57 PDT
From: Moshe Vardi <vardi@navajo.stanford.edu>
Subject: Conference - Logic in Computer Science


                            CALL FOR PAPERS

                       SECOND ANNUAL SYMPOSIUM ON
                       LOGIC IN COMPUTER SCIENCE

                           22 - 25 June 1987
                Cornell University, Ithaca, New York, USA

THE SYMPOSIUM will cover a wide range of theoretical and practical
issues in Computer Science that relate to logic in a broad sense,
including algebraic and topological approaches.

Suggested (but not exclusive) topics of interest include: abstract
data types, computer theorem proving, verification, concurrency, type
theory and constructive mathematics, data base theory, foundations of
logic programming, program logics and semantics, knowledge and belief,
software specifications, logic-based programming languages, logic in
complexity theory.

                          Organizing Committee

      K. Barwise               E. Engeler                 A. Meyer
      W. Bledsoe               J. Goguen                  R. Parikh
      A. Chandra (chair)       D. Kozen                   G. Plotkin
      E. Dijkstra              Z. Manna                   D. Scott

                           Program Committee

    S. Brookes      D. Gries (chair)    J.-P. Jouannaud     A. Nerode
    L. Cardelli     J. Goguen           R. Ladner           G. Plotkin
    R. Constable    Y. Gurevich         V. Lifschitz        A. Pnueli
    M. Fitting      D. Harel            G. Longo            P. Scott

PAPER SUBMISSION.  Authors should send 16 copies of a detailed abstract
(not a full paper) by 9 DECEMBER 1986 to the program chairman:

          David Gries -- LICS              (607) 255-9207
          Department of Computer Science   gries@gvax.cs.cornell.edu
          Cornell University
          Ithaca, New York 14853

Abstracts must be clearly written and provide sufficient detail to allow the
program committee to assess the merits of the paper.  References and
comparisons with related work should be included where appropriate.  Abstracts
must be no more than 2500 words.  Late abstracts or abstracts departing
significantly from these guidelines run a high risk of not being considered.
If a copier is not available to the author,  a single copy of the abstract
will be accepted.

Authors will be notified of acceptance or rejection by 30 JANUARY 1987.
Accepted papers, typed on special forms for inclusion in the symposium
proceedings, will be due 30 MARCH 1987.

The symposium is sponsored by the IEEE Computer Society, Technical
Committee on Mathematical Foundations of Computing and Cornell
University, in cooperation with ACM SIGACT, ASL, and EATCS.

     GENERAL CHAIRMAN                      LOCAL ARRANGEMENTS
     Ashok K. Chandra                      Dexter C. Kozen
     IBM Thomas J. Watson Research Center  Department of Computer Science
     P.O. Box 218                          Cornell University
     Yorktown Heights, New York 10598      Ithaca, New York 14853
     (914) 945-1752                        (607) 255-9209
     ashok@ibm.com                         kozen@gvax.cs.cornell.edu

------------------------------

Date: Tue, 12 Aug 86 16:16:01 cdt
From: Don <kraft%lsu.csnet@CSNET-RELAY.ARPA>
Subject: Conference - SIGIR Conf. on R&D in Information Retrieval

         Association for Computing Machinery (ACM)
  Special Interest Group on Information Retrieval (SIGIR)
1987 International Conference on Research and Development
            in Information Retrieval

June 3-5, 1987   Monteleone Hotel (in the French Quarter)
                 New Orleans, Louisiana, USA

                      CALL FOR PAPERS
Papers are invited on theory, methodology, and applications
of information retrieval.  Emerging areas related to information
retrieval, such as office automation, computer hardware technology,
and artificial intelligence and natural language processing are welcome.

Topics include, but are not limited to:
retrieval system modeling           user interfaces
retrieval in office environments    mathematical models
system development and evaluation   natural language processing
knowledge representation            linguistic models
hardware development                complexity problems
multimedia retrieval                storage and search techniques
cognitive and semantic models       retrieval system performance
information retrieval and database management

Submitted papers can be either full length papers of approximately
twenty to twenty-five pages or extended abstracts of no more than ten
pages.  All papers should contain the authors' contributions in
comparison to existing solutions to the same or to similar problems.

Important Dates
        Submission Deadline       December 15, 1986
        Acceptance Notification   February 15, 1987
        Final Copy Due            March 20, 1987
        Conference                June 3-5, 1987

Four copies of each paper should be submitted.  Papers submitted
from North America can be sent to Clement T. Yu; submissions from
outside North America should be sent to C. J. "Keith" van Rijsbergen.

Conference Chairman              Program Co-Chairmen
Donald H. Kraft            Clement T. Yu            C. J. "Keith" van Rijsbergen
Department of              Department of            Department of
  Computer Science           Electrical Engineering   Computer Science
Louisiana State University   and Computer Science   University of Glasgow
Baton Rouge, LA  70803     University of Illinois,  Lilybank Gardens
                              Chicago               Glasgow   G12 8QQ
                           Chicago, IL  60680       SCOTLAND
(504) 388-1495             (312) 996-2318           (041) 339-8855

For details, contact the Conference Chairman at kraft%lsu@csnet-relay or
Michael Stinson, the Arrangements Chairman at stinson%lsu@csnet-relay.

Don Kraft
kraft%lsu@csnet-relay

------------------------------

Date: Fri, 19 Sep 86 16:04:04 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Conference - Logical Solutions to the Frame Problem

                    CALL FOR PAPERS
   WORKSHOP ON LOGICAL SOLUTIONS TO THE FRAME PROBLEM

   The American Association for Artificial Intelligence (AAAI) is
sponsoring this workshop in Lawrence, Kansas, from March 23 to March 25, 1987.
   The frame problem is one of the most fundamental problems in
Artificial Intelligence and essentially is the problem of describing in
a computationally reasonable manner what properties persist and what
properties change as actions are performed.  The intrinsic problem lies in
the fact that we cannot expect to be able to exhaustively list for every
possible action (or combination of concurrent actions) and for every
possible state of the world how that action (or concurrent actions) change
the truth or falsity of each individual fact.  We can only list the obvious
results of the action and hope that our basic inferential system will be
able to deduce the truth or falsity of the other less obvious facts.
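To make the persistence idea concrete, here is a minimal illustrative sketch (mine, not drawn from the workshop or from any of the logics under discussion): in a STRIPS-style treatment, an action lists only its explicit effects, and every fact it does not mention carries over by default.

```python
# Illustrative sketch only: the "default persistence" idea behind many
# proposed treatments of the frame problem.  An action states just its
# explicit effects; all unmentioned facts carry over unchanged, so no
# frame axioms ("toggling the light does not move the book") are needed.

def apply_action(state, action):
    """state: a set of ground facts; action: {'add': set, 'delete': set}."""
    return (state - action["delete"]) | action["add"]

state = {"on(book, table)", "light_off"}
toggle = {"add": {"light_on"}, "delete": {"light_off"}}
result = apply_action(state, toggle)
# "on(book, table)" persists without any axiom saying the toggle leaves it alone
print(sorted(result))
```

The logics named above aim to license exactly this kind of default inference declaratively, where the simple set-difference trick no longer suffices (e.g. under incomplete state descriptions or concurrent actions).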
   In recent years there have been a number of approaches to constructing
new kinds of logical systems such as non-monotonic logics, default logics,
circumscription logics, modal reflexive logics, and persistence logics which
hopefully can be applied to solving the frame problem by allowing the missing
facts to be deduced.  This workshop will attempt to bring together the
proponents of these various approaches.
   Papers on logics applicable to the problem of reasoning about such
unintended consequences of actions are invited for consideration.  Two
copies of either an extended abstract or a full length paper should be
sent to the workshop chairman before November 20, 1986.  Acceptance notices
will be mailed by December 1, 1986, along with instructions for preparing the
final versions of accepted papers.  The final versions are due January 12, 1987.
  In order to encourage vigorous interaction and exchange of ideas
the workshop will be kept small -- about 25 participants.  There will
be individual presentations and ample time for technical discussions.
An attempt will be made to define the current state of the art and future
research needs.
        Partial travel support (from  AAAI) for participants is available.

Workshop Chairman:
      Dr. Frank M. Brown
      Dept Computer Science
      110 Strong Hall
      The University of Kansas
      Lawrence, Kansas
      (913) 864-4482

Please send any net inquiries to: veach@ukans.csnet

------------------------------

Date: Tue 2 Sep 86 15:20:55-EDT
From: Irene Greif <GREIF@XX.LCS.MIT.EDU>
Subject: Conference - CSCW '86 Program


Following is the program for CSCW '86: the Conference on
Computer-Supported Cooperative Work.  Registration material can
be obtained from Barbara Smith at MCC (basmith@mcc).

  [Contact the author for the full program.  -- KIL]

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 18:43:06 1986
Date: Sun, 21 Sep 86 18:42:56 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #193
Status: RO


AIList Digest            Sunday, 21 Sep 1986      Volume 4 : Issue 193

Today's Topics:
  Seminars - Backtrack Search for Constraint Satisfaction (SRI) &
    Minisupercomputers and AI Machines (CMU) &
    Equal Opportunity Interactive Systems (SU),
  Seminars (Past) - AI in Communication Networks (Rutgers) &
    Goal Integration in Heuristic Algorithm Design (Rutgers) &
    Long-Term Planning Systems (TI) &
    Learning by Understanding Analogies (SRI) &
    Belief Revision (SRI) &
    Factorization in Experiment Generation (SRI)

----------------------------------------------------------------------

Date: Wed 17 Sep 86 13:39:03-PDT
From: Amy Lansky <LANSKY@SRI-VENICE.ARPA>
Subject: Seminar - Backtrack Search for Constraint Satisfaction (SRI)

                  IMPROVING BACKTRACK SEARCH ALGORITHMS
                  FOR CONSTRAINT-SATISFACTION PROBLEMS

                  Rina Dechter (DECHTER@CS.UCLA.EDU)
    Cognitive System Laboratory, Computer Science Department, U.C.L.A.
                                and
         Artificial Intelligence Center, Hughes Aircraft Company

                    11:00 AM, TUESDAY, September 23
               SRI International, Building E, Room EJ228

The subject of improving search efficiency has been on the agenda of
researchers in the area of Constraint-Satisfaction Problems (CSPs)
for quite some time.  A recent increase of interest in this subject,
concentrating on backtrack search, can be attributed to its use as the
control strategy in PROLOG, and in Truth-Maintenance-Systems (TMS).
The terms ``intelligent backtracking'', ``selective backtracking'',
and ``dependency-directed backtracking'' describe various efforts for
producing improved dialects of backtrack search in these systems.  In
this talk I will review the common features of these attempts and will
present two schemes for enhancing backtrack search in solving CSPs.

The first scheme, a version of "look-back", guides the decision of
what to do in dead-end situations.  Specifically, we concentrate on
the idea of constraint recording, namely, analyzing and storing the
reasons for the dead-ends, and using them to guide future decisions,
so that the same conflicts will not arise again.  We view constraint
recording as a process of learning, and examine several possible
learning schemes studying the tradeoffs between the amount of learning
and the improvement in search efficiency.

The second improvement scheme exploits the fact that CSPs whose
constraint graph is a tree can be solved easily, i.e., in linear time.
This leads to the following observation: If, in the course of a
backtrack search, the subgraph resulting from removing all nodes
corresponding to the instantiated variables is a tree, then the rest
of the search can be completed in linear time.  Consequently, the aim
of ordering the variables should be to instantiate as quickly as
possible a set of variables that cut all cycles in the
constraint-graph (cycle-cutset).  This use of cycle-cutsets can be
incorporated in any given "intelligent" backtrack and is guaranteed to
improve it (subject to minor qualifications).
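By way of illustration (my sketch, not Dechter's implementation), the tree-CSP fact underlying the second scheme can be shown directly: when the constraint graph is a tree, one leaf-to-root pruning pass followed by a root-to-leaf assignment pass solves the problem in time linear in the number of constraints.

```python
# Illustrative sketch: solving a tree-structured binary CSP in linear time.
# Variables form a rooted tree; each edge carries a constraint predicate.

def solve_tree_csp(domains, tree, constraints, root):
    """domains: {var: list of values}; tree: {parent: [children]};
    constraints: {(parent, child): predicate(parent_val, child_val)}."""
    domains = {v: list(d) for v, d in domains.items()}  # local copies

    def order(v):                      # parents before children (preorder)
        yield v
        for c in tree.get(v, []):
            yield from order(c)

    nodes = list(order(root))
    # Leaf-to-root pass: drop parent values with no consistent child value.
    for v in reversed(nodes):
        for c in tree.get(v, []):
            ok = constraints[(v, c)]
            domains[v] = [pv for pv in domains[v]
                          if any(ok(pv, cv) for cv in domains[c])]
            if not domains[v]:
                return None            # no solution exists
    # Root-to-leaf pass: any surviving value extends to a full solution.
    sol = {root: domains[root][0]}
    for v in nodes:
        for c in tree.get(v, []):
            ok = constraints[(v, c)]
            sol[c] = next(cv for cv in domains[c] if ok(sol[v], cv))
    return sol

# Example: 3-coloring a small tree (adjacent variables must differ).
tree = {"a": ["b", "c"], "b": ["d"]}
doms = {v: [1, 2, 3] for v in "abcd"}
cons = {e: (lambda p, c: p != c) for e in [("a","b"), ("a","c"), ("b","d")]}
print(solve_tree_csp(doms, tree, cons, "a"))  # prints a valid coloring
```

In the cycle-cutset scheme, a routine like this could take over as soon as the instantiated variables cut every cycle, leaving a tree to finish.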

The performance of these two schemes is evaluated both theoretically
and experimentally using randomly generated problems as well as
several "classical" problems described in the literature.


VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

ALSO: NOTE DAY CHANGE!!! (Tuesday -- this week only)

------------------------------

Date: 17 Sep 86 14:53:24 EDT
From: Barbara.Grandillo@n.sp.cs.cmu.edu
Subject: Seminar - Minisupercomputers and AI Machines (CMU)


                      Special Computer Science Seminar

Speaker:    Professor Kai Hwang
            University of Southern California

Title:      Design Issues of Minisupercomputers and AI Machines

Date:       Monday, September 22, 1986
Time:       12:00 noon
Place:      Wean Hall 4605

In this seminar, Dr. Hwang will address the fundamental issues in
designing efficient multiprocessor/multicomputer minisupercomputers or
AI machines.  The talk covers system architecture choices,
interprocessor communication mechanisms, resource allocation methods,
I/O and OS functions, mapping of parallel algorithms, and the creation
of a parallel programming environment for these machines.

These design issues and their possible solutions are drawn from the
following commercial or exploratory systems: Alliant FX/8, FPS
T-Series and M64 Series, Flex/32, Encore Multimax, Elxsi 6400,
Sequent 8000, Connection Machine, BBN Butterfly, FAIM-1, Alice, Dado,
Soar, and Rediflow.

Dr. Hwang will also assess the technological basis and future trends in
low-cost supercomputing and AI processing.

------------------------------

Date: 19 Sep 86  0845 PDT
From: Rosemary Napier <RFN@SAIL.STANFORD.EDU>
Subject: Seminar - Equal Opportunity Interactive Systems (SU)

Computer Science Colloquium

Tuesday, October 7, 1986, 4:15PM, Terman Auditorium

"Equal Opportunity Interactive Systems and Innovative Design"

Harold Thimbleby
Dept. of Computer Science
University of York
Heslington, York
United Kingdom YO1 5DD

Most interactive systems distinguish between the input and output
of information. Equal opportunity is a design heuristic that
discards these distinctions; it was inspired by polymodality
in logic programming and a well-known problem solving heuristic.
The seminar makes the case for equal opportunity, and shows how
several user engineering principles, techniques and systems can
be reappraised under equal opportunity.

By way of illustration, equal opportunity is used to guide the
design of a calculator and spreadsheet. The resulting systems
have declarative user interfaces and are arguably easier to
use despite complex operational models.

About the speaker: Harold Thimbleby did his doctoral research in
user interface design. He joined the Computer Science department
at York in 1982 and is currently on sabbatical at the Knowledge
Sciences Institute, Calgary. He is currently writing a book on
the application of formal methods as heuristics for user interface
design.

------------------------------

Date: 8 Sep 86 23:50:47 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - AI in Communication Networks (Rutgers)

The first speaker of this year's Machine Learning Seminar series at Rutgers
will be Andrew Jennings of Telecom Australia, speaking on "AI in
Communication Networks".  Dr.  Andrews will speak in Hill-423 at 11 AM on
THURSDAY, September 18th (NB: this is NOT the standard day for the ML
series).  The abstract follows:


                              Andrew Jennings
        (Arpanet address: munnari!trlamct.oz!andrew@seismo.CSS.GOV)
                             Telecom Australia
                        AI in Communication Networks


        Expert systems are gaining wide application in communication
systems, especially in the areas of maintenance, design and planning.
Where there are large bodies of existing expertise, expert systems are
a useful programming technology for capturing and making use of that
expertise.  However, will AI techniques be limited to retrospective
capture of expertise, or can they be of use for next-generation
communication systems?  This talk will present several projects that
aim to make use of AI techniques in next-generation communication
networks.  An important aspect of these systems is their ability to
learn from experience.

        This talk will discuss some of the difficulties in developing
learning in practical problem domains, and the value of addressing
these difficulties now.  In particular, the problem of learning in
intractable problem domains is of great importance, and some ongoing
work on this problem will be presented.  The projects discussed
include a system for capacity assignment in networks, a project to
develop AI systems for routing in fast packet networks, and a system
for VLSI design from a high-level specification.

------------------------------

Date: 9 Sep 86 12:43:20 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Goal Integration in Heuristic Algorithm Design
         (Rutgers)

Next week, on Tuesday, September 16th in Hill 423 at 11 AM, Jack
Mostow will give a talk based on his work with Kerstin Voigt, entitled
"A Case Study of Goal Integration in Heuristic Algorithm Design".

This is a joint ML/III seminar, and is a dry run for a talk being given at the
Knowledge Compilation Workshop.  There's no paper for the talk, but Jack
recommends his AAAI86 article with Bill Swartout as good background reading.
The abstract follows:



                                Jack Mostow
                             Rutgers University
                 (Arpanet address: MOSTOW@RED.RUTGERS.EDU)

      A Case Study of Goal Integration in Heuristic Algorithm Design:
   A Transformational Rederivation of MYCIN's Therapy Selection Algorithm


An important but little-studied aspect of compiling knowledge into
efficient procedures has to do with integrating multiple, sometimes
conflicting goals expressed as part of that knowledge.  We are
developing an artificial intelligence model of heuristic algorithm
design that makes explicit the interactions among multiple goals.  The
model will represent intermediate states and goals in the design
process, transformations that get from one state to the next, and
control mechanisms that govern the selection of which transformation
to apply next.  It will explicitly model the multiple goals that
motivate and are affected by each design choice.

We are currently testing and refining the model by using it to explain
the design of the algorithm used for therapy selection in the medical
expert system MYCIN.  Previously we analyzed how this algorithm
derives from the informal specification "Find the set of drugs that
best satisfies the medical goals of maximizing effectiveness,
minimizing number of drugs, giving priority to treating likelier
organisms, [etcetera]."  The reformulation and integration of these
goals is discussed in Mostow & Swartout's AAAI86 paper.  Doctoral
student Kerstin Voigt is implementing a complete derivation that will
address additional goals important in the design of the algorithm,
such as efficient use of time, space, and experts.

------------------------------

Date: Mon 18 Aug 86 16:28:02-CDT
From: Rajini <Rajini%ti-csl.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Long-Term Planning Systems (TI)


Dr. Jim Hendler, Assistant Professor at Univ of Maryland, is giving a
special seminar at 10:00 am on August 28th. Abstract of his talk follows.
It will be held in Conference room #2, Computer Science Center, Texas
Instruments, Dallas.

--Rajini

  rajini@ti-csl
  (214) 995-0779



                Long-term planning systems

                        James Hendler
                     Computer Science Dept.
                     University of Maryland
                     College Park, Md. 20903


Most present day planning systems work in domains where a single goal is
planned for a single user.  Further, the only object changing the world is
the planner itself.  The few systems that go beyond this, for example Vere's
DEVISER system, tend to work in domains where the world, although changing,
behaves according to a set of well-defined rules.  In this talk we describe
on-going research directed at extending planning systems to function in the
dynamic environments necessary for such tasks as job-shop scheduling,
process control, and autonomous vehicle missions.

The talk starts by describing the inadequacies of present-day systems
for such tasks.  We focus on two: the necessity of a static domain and
the inability to handle large numbers of interacting goals, and show
some of the extensions needed to address them.  We describe an
extension to marker-passing, a parallel, spreading-activation
technique, which can be used to handle the goal-interaction problems,
and we discuss representational issues necessary for handling dynamic
worlds.  We end by describing work on a system being implemented to
deal with these problems.
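Marker-passing in the spreading-activation sense can be illustrated
with a toy fragment.  This is a generic sketch, not the system
described in the talk; the graph and parameters are invented.  Markers
fan out from each goal node with decaying strength, and nodes where
markers from different goals collide flag potential goal interactions.

```python
def spread(graph, sources, decay=0.5, threshold=0.1):
    """Toy spreading activation over {node: [neighbours]}.

    Markers start at each source with strength 1.0 and propagate with
    geometric decay; propagation stops below `threshold`.  Returns the
    nodes reached by markers from more than one source.
    """
    activation = {}
    for src in sources:
        frontier = {src: 1.0}
        reached = {}
        while frontier:
            nxt = {}
            for node, strength in frontier.items():
                if strength < threshold or node in reached:
                    continue
                reached[node] = strength
                for nb in graph.get(node, ()):
                    nxt[nb] = max(nxt.get(nb, 0.0), strength * decay)
            frontier = nxt
        for node in reached:
            activation.setdefault(node, []).append(src)
    return {n for n, srcs in activation.items() if len(srcs) > 1}

# Demo (invented): a chain a-b-c-d-e; markers from "a" and "e" meet at "c".
chain = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
         "d": ["c", "e"], "e": ["d"]}
interactions = spread(chain, ["a", "e"], threshold=0.2)
```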

------------------------------

Date: Tue 19 Aug 86 19:55:33-PDT
From: Margaret Olender <OLENDER@SRI-WARBUCKS.ARPA>
Subject: Seminar - Learning by Understanding Analogies (SRI)


Russell Greiner, Toronto, will be guest speaker at the RR Group's
PlanLunch (August 20, EJ228, 11:00am).


                 LEARNING BY UNDERSTANDING ANALOGIES

This research describes a method for learning by analogy---i.e., for
proposing new conjectures about a target analogue based on facts known
about a source analogue.  After formally defining this process, we
present heuristics which efficiently guide it to the conjectures which
can help solve a given problem.  These rules are based on the view
that a useful analogy is one which provides the information needed to
solve the problem, and no more.  Experimental data confirms the
effectiveness of this approach.

------------------------------

Date: Wed 20 Aug 86 16:02:46-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Belief Revision (SRI)


                IS BELIEF REVISION HARDER THAN YOU THOUGHT?

                            Marianne Winslett (WINSLETT@SCORE)
               Stanford University, Computer Science Department

                        11:00 AM, MONDAY, Aug. 25
               SRI International, Building E, Room EJ228

Suppose one wishes to construct, use, and maintain a database of
knowledge about the real world, even though the facts about that world
are only partially known.  In the AI domain, this problem arises when
an agent has a base set of extensional beliefs that reflect partial
knowledge about the world, and then tries to incorporate new, possibly
contradictory extensional knowledge into the old set of beliefs.  We
choose to represent such an extensional knowledge base as a logical
theory, and view the models of the theory as possible states of the
world that are consistent with the agent's extensional beliefs.

How can new information be incorporated into the extensional knowledge
base?  For example, given the new information that "b or c is true,"
how can we get rid of all outdated information about b and c, add the
new information, and yet in the process not disturb any other
extensional information in the extensional knowledge base?  The burden
may be placed on the user or other omniscient authority to determine
exactly which changes in the theory will bring about the desired set
of models.  But what's really needed is a way to specify the update
intensionally, by stating some well-formed formula that the state of
the world is now known to satisfy and letting internal knowledge base
mechanisms automatically figure out how to accomplish that update.  In
this talk we present semantics and algorithms for an operation to add
new information to extensional knowledge bases, and demonstrate that
this action of extensional belief revision is separate from, and
in practice must occur prior to, the traditional belief revision
processes associated with truth maintenance systems.
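The "b or c" example lends itself to a small model-based sketch.  The
following is an illustrative possible-models update in the spirit of
the abstract (it is not claimed to be the speaker's algorithm): each
model of the knowledge base is replaced by the models of the new
formula that differ from it on the fewest atoms, so unrelated
extensional information survives the update.

```python
from itertools import product

def models(formula, atoms):
    """All models of `formula` over `atoms`, as frozensets of true atoms."""
    result = set()
    for vals in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, vals))
        if formula(assignment):
            result.add(frozenset(a for a in atoms if assignment[a]))
    return result

def update(kb_models, new_models):
    """Replace each KB model by the closest models of the new formula,
    measured by the number of atoms whose truth value must change."""
    result = set()
    for m in kb_models:
        best = min(len(m ^ n) for n in new_models)   # fewest atoms flipped
        result |= {n for n in new_models if len(m ^ n) == best}
    return result

# Demo: KB says "a and not b and not c"; learn "b or c".
atoms = ["a", "b", "c"]
kb = models(lambda v: v["a"] and not v["b"] and not v["c"], atoms)
new = models(lambda v: v["b"] or v["c"], atoms)
updated = update(kb, new)
```

After the update the resulting models make b or c true while leaving
the unrelated fact a undisturbed, as the abstract requires.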

------------------------------

Date: Wed 3 Sep 86 14:51:36-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Factorization in Experiment Generation (SRI)


                 FACTORIZATION IN EXPERIMENT GENERATION

                           Devika Subramanian
               Stanford University, Computer Science Department

                    11:00 AM, MONDAY, September 8
               SRI International, Building E, Room EJ228


Experiment Generation is an important part of incremental concept
learning. One basic function of experimentation is to gather data
to refine an existing space of hypotheses. In this talk, we examine
the class of experiments that accomplish this, called discrimination
experiments, and propose factoring as a technique for generating
them efficiently.

------------------------------

End of AIList Digest
********************

From csnet_gateway Sun Sep 21 06:59:00 1986
Date: Sun, 21 Sep 86 06:58:55 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #194
Status: RO


AIList Digest            Sunday, 21 Sep 1986      Volume 4 : Issue 194

Today's Topics:
  Seminars (Past) - Rule Induction in Computer Chess (ACM LA Chapter) &
    Mechanization of Geometry (SU) &
    Automatic Algorithm Designer (CMU) &
    Representations and Checkerboards (CMU) &
    Deriving Problem Reduction Operators (Rutgers) &
    Evolution of Automata (SRI) &
    Active Reduction of Uncertainty (UPenn) &
    Rational Conservatism and the Will to Believe (CMU) &
    BiggerTalk: An Object-Oriented Extension to Prolog (UTexas)

----------------------------------------------------------------------

Date: 21 Aug 86 12:01:50 PDT (Thu)
From: ledoux@aerospace.ARPA
Subject: Seminar - Rule Induction in Computer Chess (ACM LA Chapter)


                 ACM LOS ANGELES CHAPTER DINNER MEETING

                      WEDNESDAY, 3 SEPTEMBER 1986


                    STRUCTURED EXPERT RULE INDUCTION


                    Expert Systems and Computer Chess

                         Speaker: Dr. Alen Shapiro


One of the major problems with expert systems is "the knowledge
engineering bottleneck."  This occurs when development is delayed
because specifications are unavailable and either the expert system
developers need time to learn the problem, or else the domain experts
who already know the problem need time to learn how to use the often
opaque expert system development languages.  A promising approach to
overcoming the bottleneck is to build tools that automatically extract
knowledge from the domain experts.  This talk presents an overview of
inductive knowledge acquisition and the results of experiments in
inductive rule generation in the domain of chess endgames.  The system
that will be described was able to generate humanly-understandable
rules and to play correct chess endgames.  This research has
significant implications for the design of expert system languages and
rule induction programs.  The talk is also an interesting look into
the world of computer chess.
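As a flavor of what such rule-induction systems build on, here is a
minimal ID3-style attribute-selection step (a generic sketch, not
Dr. Shapiro's system; the chess attributes below are invented): choose
the attribute whose values leave the class labels least mixed.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in counts.values())

def best_split(examples, attributes):
    """Pick the attribute with the lowest weighted entropy of the
    class labels after partitioning on its values (ID3-style)."""
    def remainder(attr):
        groups = {}
        for ex in examples:
            groups.setdefault(ex[attr], []).append(ex["class"])
        return sum(len(g) / len(examples) * entropy(g)
                   for g in groups.values())
    return min(attributes, key=remainder)

# Demo (invented endgame-flavoured data): one attribute perfectly
# separates the classes, the other carries no information.
examples = [
    {"kings_adjacent": True,  "wtm": True,  "class": "draw"},
    {"kings_adjacent": True,  "wtm": False, "class": "draw"},
    {"kings_adjacent": False, "wtm": True,  "class": "win"},
    {"kings_adjacent": False, "wtm": False, "class": "win"},
]
chosen = best_split(examples, ["kings_adjacent", "wtm"])
```

Applied recursively, this selection step yields decision rules a human
can read off attribute by attribute.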

Dr. Shapiro, a Fellow of the Turing Institute since its inception in
1983, received his Ph.D. in Machine Intelligence from the University
of Edinburgh in 1983.  From 1979 to 1986 he was associated with
Intelligent Terminals, Ltd., and was a member of the Rulemaster and
Expert-Ease design teams.  He has served as Visiting Professor at the
University of Illinois on two occasions.  His publications include
articles on pattern recognition, automatic induction of chess
classification rules, and (with David Michie) "A Self-Commenting
Facility for Inductively Synthesized Endgame Expertise."

In 1986 Dr. Shapiro joined the New Technology Department at Citicorp-TTI
in Santa Monica as a Computer Scientist concerned with  the  development
of inductive knowledge engineering tools for the banking industry.


                                 PLACE

                              Amfac Hotel
                           8601 Lincoln Blvd.
                     corner of Lincoln & Manchester
                        Westchester, California
                              8:00 p.m.

------------------------------

Date: Mon, 18 Aug 86 11:25:38 PDT
From: coraki!pratt@Sun.COM (Vaughan Pratt)
Subject: Seminar - Mechanization of Geometry (SU)


SPEAKER         Professor Wu Wen-tsun
TITLE           Mechanization of Geometry
DATE            Thursday, August 21
TIME            2:00 pm
PLACE           Margaret Jacks Hall, room 352

A mechanical method of geometry, based on Ritt's characteristic set
theory, will be described.  It has a variety of applications, notably
mechanical geometry theorem proving.  The method has been implemented
on computers by several researchers and turns out to be efficient for
many applications.

BACKGROUND
Professor Wu received his doctorate in France in the 1950's, and was a
member of the Bourbaki group.  In the first National Science and
Technology Awards in China in 1956, Professor Wu was one of three
people awarded a first prize for their contributions to science and
technology.  He is currently the president of the Chinese Mathematical
Society.

In 1977, Wu extended classical algebraic geometry work of Ritt to an
algorithm for proving theorems of elementary geometry.  The method has
recently become well-known in the Automated Theorem Proving community;
at the University of Texas it has been applied to the machine proof
of more than 300 theorems of Euclidean and non-Euclidean geometry.

------------------------------

Date: 5 September 1986 1527-EDT
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Automatic Algorithm Designer (CMU)

Speaker:        David Steier
Date:           Friday, Sept. 12
Place:          5409 Wean Hall
Time:           3:30 p.m.
Title:          Integrating multiple sources of knowledge in an
                        automatic algorithm designer


One of the reasons that designing algorithms is so difficult is the
large amount of knowledge needed to guide the design process.  In this
proposal, I identify nine sources of such knowledge within four
general areas:  general problem-solving, algorithm design and
implementation techniques, knowledge of the application domain,
and methods for learning from experience.  To understand how
knowledge from these sources can be represented and integrated, I
propose to build a system that automatically designs algorithms.
An implementation of the system, Designer-Soar, uses several
of the knowledge sources described in the proposal to design several
very simple algorithms.  The goal of the thesis is to extend
Designer-Soar to design moderately complex algorithms in a domain
such as graph theory or computational geometry.

------------------------------

Date: 10 September 1986 1019-EDT
From: Elaine Atkinson@A.CS.CMU.EDU
Subject: Seminar - Representations and Checkerboards (CMU)

SPEAKER:  Craig Kaplan, CMU, Psychology Department
  TITLE:  "Representations and Checkerboards"
   DATE:  Thursday, September 11
   TIME:  4:00 p.m.
  PLACE:  Adamson Wing, BH

        Given the right representation, tricky "insight" problems
often become trivial to solve.  How do people arrive at the right
representations?  What factors affect people's ability to shift
representations, and how can understanding these factors help us
understand why insight problems are so difficult?

         Evidence from studies using the Mutilated Checkerboard
Problem points to Heuristic Search as a powerful way of addressing
these questions.  Specifically, it suggests that the quality of
the match between people's readily available search heuristics
and problem characteristics is a major determinant of problem
difficulty for some problems.

------------------------------

Date: 11 Sep 86 20:01:20 EDT
From: RIDDLE@RED.RUTGERS.EDU
Subject: Seminar - Deriving Problem Reduction Operators (Rutgers)

I am giving a practice version of a talk I will be giving in a few weeks.
It is at 1 pm in 423 on Monday the 15th.
Everyone is invited and all comments are welcome.
The abstract follows.

This research deals with automatically shifting from one problem
representation to another representation which is more efficient, with
respect to a given problem solving method, for this problem class. I
attempt to discover general purpose primitive representation shifts
and techniques for automating them.  To achieve this goal, I am
defining and automating the primitive representation shifts explored
by Amarel in the Missionaries & Cannibals problem @cite(amarel1).
The techniques for shifting representations which I have already
defined are: compiling constraints, removing irrelevant information,
removing redundant information, deriving macro-operators, deriving
problem reduction operators, and deriving macro-objects.  In this
paper, I will concentrate on the technique for deriving problem
reduction operators (i.e., critical reduction) and a method for
automating this technique (i.e., invariant reduction).  A set of
sufficient conditions for the applicability of this technique over a
problem class is discussed; the proofs appear in a forthcoming
Rutgers technical report.

------------------------------

Date: Wed 10 Sep 86 15:00:22-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Evolution of Automata (SRI)


                THE EVOLUTION OF COMPUTATIONAL CAPABILITIES
                    IN POPULATIONS OF COMPETING AUTOMATA

                    Aviv Bergman (BERGMAN@SRI-AI)
                        SRI International
                               and
                         Michel Kerszberg
                   IFF der KFA Julich, W.-Germany

                    10:30 AM, MONDAY, September 15
               SRI International, Building E, Room EJ228


The diversity of the living world has been shaped, it is believed, by
Darwinian selection acting on random mutations. In the present work,
we study the emergence of nontrivial computational capabilities in
automata competing against each other in an environment where
possession of such capabilities is an advantage. The automata are
simple cellular computers with a certain number of parameters -
characterizing the "Statistical Distribution" of the connections -
initially set at random.  Each generation of machines is subjected to
a test requiring some computational task to be performed, e.g.,
recognizing whether two patterns presented are or are not translated
versions of each other.  "Adaptive Selection" is used during the task
in order to "Eliminate" redundant connections.  According to its
grade, each machine either dies or "reproduces", i.e., it creates an
additional machine with parameters nearly identical to its own.  The
population, it turns out, quickly learns to perform certain tests.
When the successful automata are "autopsied", it appears that they do
not all complete the task in the same way; certain groups of cells are
more active than others, and certain connections have grown or decayed
preferentially, but these features may vary from individual to
individual.  We try to draw some general conclusions regarding the
design of artificial intelligence systems and the understanding of
biological computation.  We also contrast this approach with the usual
Monte Carlo procedure.
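The die-or-reproduce dynamics described can be caricatured in a few
lines.  This toy loop (invented task and parameters, not the authors'
cellular automata) evolves a bit vector toward a target pattern: the
low scorers die and the survivors reproduce with small mutations.

```python
import random

def evolve(target, pop_size=30, generations=200, seed=0):
    """Toy selection loop: each 'machine' is a bit vector; fitness is
    the number of positions matching `target`; the lower-scoring half
    dies each generation and survivors reproduce with 5% bit-flips."""
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    score = lambda m: sum(a == b for a, b in zip(m, target))
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]          # the rest "die"
        children = [[b ^ (rng.random() < 0.05) for b in parent]
                    for parent in survivors]      # "reproduce" with mutation
        pop = survivors + children
        if score(pop[0]) == n:                    # perfect machine found
            break
    return max(pop, key=score)

# Demo (invented): evolve an 8-bit pattern.
target = [1, 0, 1, 1, 0, 1, 0, 0]
best = evolve(target)
```

Because the best individual always survives, fitness never decreases;
the population reliably climbs toward the target pattern.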

------------------------------

Date: Wed, 13 Aug 86 08:51 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Active Reduction of Uncertainty (UPenn)


         Active Reduction of Uncertainty in Multi-sensor Systems

                        Ph.D. Thesis Proposal

                            Greg Hager
                        (greg@upenn-grasp)
        General Robots and Active Sensory Perception Laboratory
                      University of Pennsylvania
             Department of Computer and Information Sciences
                       Philadelphia, PA 19104


                        10:00 AM, August 15, 1986
                            Room 554 Moore


If robots are to perform tasks in unconstrained environments, they will have
to rely on sensor information to make decisions.  In general, sensor
information has some uncertainty associated with it.  The uncertainty can be
conceptually divided into two types: statistical uncertainty due to signal
noise, and incompleteness of information due to limitations of sensor scope.
Inevitably, the information needed for proper action will be uncertain.  In
these cases, the robot will need to take action explicitly devoted to
reducing uncertainty.

The problem of reducing uncertainty can be studied within the theoretical
framework of team decision theory.  Team decision theory considers a number
of decision makers observing the world via information structures, and
taking action dictated by decision rules.  Decision rules are evaluated
relative to team and individual utility considerations.  In this vocabulary,
sensors are considered as controllable information structures whose behavior
is determined by individual and group utilities.  For the problem of
reducing uncertainty, these utilities are based on the information expected
as the result of taking action.

In general, a robot does not only consider direct sensor observations, but
also evaluates and combines that data over time relative to some model of
the observed environment.  In this proposal, information aggregation is
modeled via belief systems as studied in philosophy.  Reducing uncertainty
corresponds to driving the belief system into one of a set of information
states.  Within this context, the issues that will be addressed are the
specification of utilities in terms of belief states, the organization of a
sensor system, and the evaluation of decision rules.  These questions will
first be studied through theory and simulation, and finally applied to an
existing multi-sensor system.

Advisor: Dr. Max Mintz

Committee:  Dr. Ruzena Bajcsy (Chairperson)
            Dr. Zolton Domotor (Philosophy Dept.)
            Dr. Richard Paul
            Dr. Stanley Rosenschein (SRI International and CSLI)

------------------------------

Date: 10 Sep 1986 0848-EDT
From: Lydia Defilippo <DEFILIPPO@C.CS.CMU.EDU>
Subject: Seminar - Rational Conservatism and the Will to Believe (CMU)


                                CMU
                       PHILOSOPHY COLLOQUIUM

                             JON DOYLE

            RATIONAL CONSERVATISM AND THE WILL TO BELIEVE


   DATE:  MONDAY SEPTEMBER 15

   TIME:  4:00 P.M.

  PLACE:  PORTER HALL, RM 223d


*   Much of the reasoning automated in artificial intelligence is either
mindless deductive inference or is intentionally non-deductive. The common
explanations of these techniques, when given, are not very satisfactory, for
the real explanations involve the notion of bounded rationality, while over
time the notion of rationality has been largely dropped from the vocabulary of
artificial intelligence. We present the notion of rational self-government, in
which the agent rationally guides its own limited reasoning to whatever degree
is possible, via the examples of rational conservatism and rationally adopted
assumptions.  These ideas offer improvements on the practice of mindless
deductive inference and explanations of some of the usual non-deductive
inferences.

------------------------------

Date: Mon 15 Sep 86 10:35:02-CDT
From: ICS.BROWNE@R20.UTEXAS.EDU
Subject: Seminar - BiggerTalk: An Object-Oriented Extension to Prolog (UTexas)


                     Object-Oriented Programming Meeting

                            Friday, September 19
                               2:00-3:00 p.m.
                                Taylor 3.128

                                 BiggerTalk:
                    An Object-Oriented Extension to Prolog

                          Speaker:  Eric Gullichsen
                       MCC Software Technology Program





BiggerTalk is a system of Prolog routines which provide a capability for
object-oriented programming in Prolog.  When compiled into a standard
Prolog environment, the BiggerTalk system permits programming in the
object-oriented style of message passing between objects, themselves
defined as components of a poset (the 'inheritance structure')
created through other BiggerTalk commands.  Multiple inheritance of
methods and instance variables is provided dynamically.  The full functional
capability of Prolog is retained, and Prolog predicates can be invoked
from within BiggerTalk methods.

A provision exists for storage of BiggerTalk objects in the MCC-STP
Object Server, a shared permanent object repository.  The common external
form for objects in the Server permits (restricted) sharing of objects
between BiggerTalk and Zetalisp Flavors, the two languages currently
supported by the Server.  Concurrent access to permanent objects is
mediated by the server.

This talk will discuss a number of theoretical and pragmatic issues of
concern to BiggerTalk and its interface to the Object Server.  Some
acquaintance with the concepts of logic programming and object-oriented
programming will be assumed.
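BiggerTalk itself is a set of Prolog routines, but the dispatch idea
the abstract describes -- look a message selector up along the
inheritance poset, nearest definition first -- can be sketched in
Python.  The class and its API below are invented for illustration,
not BiggerTalk's actual interface.

```python
class ObjectSpace:
    """Toy message dispatch over an inheritance poset: each object
    names its parents; a selector is looked up breadth-first, so the
    nearest definition wins (a crude stand-in for multiple inheritance
    of methods)."""

    def __init__(self):
        self.parents = {}   # object -> list of parent objects
        self.methods = {}   # (object, selector) -> callable

    def define(self, obj, parents=()):
        self.parents[obj] = list(parents)

    def method(self, obj, selector, fn):
        self.methods[(obj, selector)] = fn

    def send(self, obj, selector, *args):
        queue, seen = [obj], set()
        while queue:                      # breadth-first up the poset
            current = queue.pop(0)
            if current in seen:
                continue
            seen.add(current)
            fn = self.methods.get((current, selector))
            if fn is not None:
                return fn(obj, *args)     # receiver stays the original
            queue.extend(self.parents.get(current, ()))
        raise AttributeError(obj + " does not understand " + selector)

# Demo: penguin inherits speak from bird, which overrides animal.
space = ObjectSpace()
space.define("animal")
space.define("bird", ["animal"])
space.define("penguin", ["bird"])
space.method("animal", "speak", lambda self: self + " makes a sound")
space.method("bird", "speak", lambda self: self + " chirps")
```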

------------------------------

End of AIList Digest
********************

From csnet_gateway Thu Sep 25 06:41:54 1986
Date: Thu, 25 Sep 86 06:41:49 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #195
Status: R


AIList Digest           Thursday, 25 Sep 1986     Volume 4 : Issue 195

Today's Topics:
  Queries - Public-Domain Ops5 & XLisp & Lsmalltalk &
    Kyoto Common Lisp & LISP-to-FORTRAN Conversion &
    Cognitive Science Schools,
  AI Tools - OPS5 & OPSx & UNIX Tools,
  Expert Systems - Literature Resources & Implementation Styles

----------------------------------------------------------------------

Date: 20 Sep 86 15:32:14 GMT
From: ritcv!eer@ROCHESTER.ARPA  (Ed Reed)
Subject: Public domain Ops5 in any language

I'm looking for one of the versions of OPS5 in lisp (or ?)
that's in the public domain. I've heard that there are pd versions
running around, but haven't found any here, yet.

If in lisp (as I expect) I can use FranzLisp, DecCommonLisp, and
xlisp, and maybe InterLisp on a Xerox Dlion.

Thanks for the help..

Ed Reed
Rochester Inst. Technology,
Rochester, NY

....seismo!rochester!ritcv
Delphi: eertest
GEnie: SQA.INC

------------------------------

Date: 19 Sep 1986 21:30-EDT
From: cross@wpafb-afita
Subject: xlisp query

Would appreciate a pointer to where I could download the source code for
xlisp 1.6 and any demonstratable programs written in xlisp.  I'm aware
of the stuff published in AI Expert and have downloaded it, but cannot
find the source code.  Thanks in advance.

Steve Cross

------------------------------

Date: 24 Sep 86 03:50:21 GMT
From: booter@lll-crg.arpa  (Elaine Richards)
Subject: Lsmalltalk and XLisp


I spaced out on my friend's login name. He is at Cal State University
Hayward, which has no news feed. He is a fanatic for smalltalk and
LISP and I hope you folks out there can assist. Please no flamage, this
guy is not a regular netter and he really would love some contacts.
Here is what he asked me to post.
        *****************************************************
        *  e-mail responses to                              *
        *  {seismo,ihnp4,pyramid}!lll-crg!csuh!jeff         *
        *  -or-                                             *
        *  hplabs!qantel!csuh!jeff                          *
        *****************************************************

#1
            To all people, places, and things who possess some
knowledge about Lsmalltalk:

            I am just getting into Lsmalltalk and I am interested
in communicating with others who have some experience with it. I
am using Smalltalk 'blue' as my map of the Lsmalltalk system; can
anyone suggest a way around class-variables and methods ( is the
class Smalltalk the only way?). Is there anyone who has done some
interesting applications they would like to share?

                     jeff


#2
              The young and struggling C.S. department of the
Calif. State University of Hayward would like to get Xlisp.
If somebody out there knows where we can get it, could you please
pass that information along?

                        jeff

------------------------------

Date: 23 Sep 86 01:00:29 GMT
From: zeus!stiber@locus.ucla.edu  (Michael D Stiber)
Subject: Kyoto Common Lisp

Does anyone have experience using this Lisp, or have any information
about it?  I am specifically interested in comments on Ibuki Lisp, an
implementation of Kyoto Common Lisp that runs on the IBM RT.

                     Michael Stiber
                     ARPANET: stiber@ucla-locus.arpa
                     USENET:  ...{ucbvax,ihnp4}!ucla-cs!stiber

               Born too late to be a yuppy -- and proud of it!

------------------------------

Date: Wed, 24 Sep 86 08:32:13 edt
From: jlynch@nswc-wo.ARPA
Subject: LISP Conversion


           I am gathering information concerning the conversion or
        translation of programs written in LISP to FORTRAN.  I would
        appreciate comments from anyone who has tried to do this, and on
        the likelihood of success.  I am interested in both manual methods
        and conversion routines or programs.  I will summarize replies
        for the AIList.       Thanks, Jim Lynch (jlynch@nswc-wo.arpa)

------------------------------

Date: Mon, 22 Sep 86 11:03:20 -0500
From: schwamb@mitre.ARPA
Subject: Cognitive Science Schools

Well, now that some folks have commented on the best AI schools in
the country, could we also hear about the best Cognitive Science
programs?  Cog Sci has been providing a lot of fuel for thought to
the AI community and I'd like to know where one might specialize
in this.

Thanks, Karl   (schwamb@mitre)

------------------------------

Date: 18 Sep 86 13:38:33 GMT
From: gilbh%cunyvm.bitnet@ucbvax.Berkeley.EDU
Subject: Re: AI Grad Schools


One might consider CUNY (City University of New York) too.

------------------------------

Date: Sat, 20 Sep 86 07:39:34 MDT
From: halff@utah-cs.arpa (Henry M. Halff)
Subject: Re: Any OPS5 in PC ?

In article <8609181610.AA08808@ucbvax.Berkeley.EDU>,
    EDMUNDSY%northeastern.edu@RELAY.CS.NET writes:
> Does anyone know whether there is any OPS5 software package available for PCs?
> I would like to know where I can find it. Thanks!!!

Contact
        Computer*Thought
        1721 West Plano Parkway
        Suite 125
        Plano, TX 75075
        214/424-3511
        ctvax!mark.UUCP

Disclaimer:  I know people at Computer*Thought, but I don't know anything about
their OPS-5.  I don't know how well it works.  I don't even know if I would
know how to tell how well it works.

------------------------------

Date: Fri, 19 Sep 86 06:15:37 cdt
From: mlw@ncsc.ARPA (Williams)
Subject: OPSx for PCs


For parties seeking OPS5 on PCs...an implementation of OPS/83 is being
marketed by Production Systems Technologies, Inc.
            642 Gettysburg Street
            Pittsburgh, PA 15206
            (412)362-3117
I have no comparison information relating OPS5 to OPS83 other than the
fact that OPS83 is compiled and is supposed to provide better performance
in production on micros than is possible with OPS5.  I'd be glad to see
more information on the topic in this forum.

Usual disclaimers...

Mark L. Williams
(mlw @ncsc.arpa)

------------------------------

Date: 18 Sep 86 19:21:50 GMT
From: ssc-vax!bcsaic!pamp@uw-beaver.arpa  (wagener)
Subject: Re: Info on UNIX based AI Tools/applications (2nd req)

In article <1657@ptsfa.UUCP> jeg@ptsfa.UUCP (John Girard) writes:
>
>This is a second request for information on Artificial Intelligence
>tools and applications available in the unix environment.
>
>       Expert System Shells
>       Working AI applications (academic and commercial)


 I can recommend at least one good comprehensive listing of
tools, languages, and companies:

        The International Directory of Artificial Intelligence
                Companies, 2nd edition, 1986, Artificial
                Intelligence Software S.R.L., Via A. Mario, 12/A,
                45100 ROVIGO, Italy.  Della Jane Hallpike, ed.
                Ph. (0425) 27151

It mainly looks at the companies, but it does have descriptions
of their products.

Also look into D. A. Waterman's book, A Guide to Expert Systems;
        Addison-Wesley Pub. Co., 1985.

I also recommend you check out the expert systems magazines:

        1) Expert Systems - The International Journal of
                Knowledge Engineering; Learned Information Ltd.
                (This is an English publication.  Its US office
                address is:
                        Learned Information Co.
                        143 Old Marlton Pike
                        Medford, NJ 08055
                        Ph. (609) 654-6266)
                Subscription Price: $79

        2) Expert Systems User; Expert Systems User Ltd.
                                Cromwell House,
                                20 Bride Lane
                                London EC4 8DX
                                PH.01-353 7400
                                Telex: 23862
                Subscription Price: $210

        3) IEEE Expert - Intelligent Systems and their Applications
                         IEEE Computer Society
                         IEEE Headquarters
                         345 East 47th Street
                         New York,NY 10017

                         IEEE Computer Society West Coast Office
                         10662 Los Vaqueros Circle
                         Los Alamitos, CA 90720
                Subscription Price (IEEE Members): $12/yr

        4) AI Expert
                AI Expert
                P.O.Box 10952
                Palo Alto, CA 94303-0968
                Subscription Price: $39/yr $69/2yr $99/3yr

There are some good product-description sections and articles
in these (especially the British ones, which are the older
publications).  There are quite a number of systems out there.
Good luck.
Pam Pincha-Wagener

------------------------------

Date: 20 Sep 86 15:44:00 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: queries about expert systems


   Date: Thu, 18 Sep 1986  17:10 EDT
   From: LIN@XX.LCS.MIT.EDU

   1. Production systems are the implementation of many expert systems.
   In what other forms are "expert systems" implemented?

   [I use the term "expert system" to describe the codification of any
   process that people use to reason, plan, or make decisions as a set of
   computer rules, involving a detailed description of the precise
   thought processes used.  If you have a better description, please
   share it.]

``Expert System'' denotes a level of performance, not a technology.
The particularly important aspirations are generality and robustness.
Every program strives for some degree of generality and robustness, of
course, but calling a program an expert system means it's supposed to
be able to do the right thing even in situations that haven't been
explicitly anticipated, where ``the right thing'' might just be to
gracefully say ``I dunno'' when, indeed, the program doesn't have the
knowledge needed to solve the problem posed.

Production systems, or, more accurately, programs that work by running
a simple interpreter over a body of knowledge represented as IF-THEN
rules, ease the construction of simple expert systems because it's
possible to encode the knowledge without having to commit to a
particular order or context of using that knowledge.  The interpreter
determines what rule to apply next at runtime, and so long as you
don't include contradictory rules or assume a particular order of
application, such systems are easy to construct and work pretty well,
i.e. can be general (solve a wide variety of problem instances) and
robust (degrade gracefully by saying ``i dunno'' (no rules, or only
very general rules apply) in unusual situations, rather than trapping
out with an error).
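
The interpreter loop described above can be sketched in a few lines (a
hypothetical illustration in modern Python rather than Lisp; the rule
contents are invented, not from any real system):

```python
# Minimal forward-chaining production-system sketch (invented rules).
# A rule pairs a condition on working memory (a set of facts) with an
# action that proposes new facts.  The loop applies whatever matches,
# in no committed order, and stops when no rule adds anything.

def run(rules, facts):
    facts = set(facts)
    while True:
        fired = False
        for condition, action in rules:
            if condition(facts):
                new = action(facts) - facts
                if new:
                    facts |= new
                    fired = True
        if not fired:          # no rule contributed: "nothing happens"
            return facts

rules = [
    (lambda f: "fever" in f and "rash" in f, lambda f: {"suspect-measles"}),
    (lambda f: "fever" in f,                 lambda f: {"suspect-infection"}),
]
print(run(rules, {"fever", "rash"}))
```

Given a working memory that no rule matches, run() simply returns the
facts unchanged, which is the graceful ``I dunno'' behavior.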

That may not have seemed like an answer to question #1, so let me
return to it explicitly.  Production systems are not the only
technology for building expert systems, but pattern-directed
invocation is a theme common to all expert systems, whatever
technology is used.  Let me explain.  Another popular technology for
expert systems (in the medical domain, especially) might be called
Frames and Demons.  Facts are organized in a specialization hierarchy,
and attached to each fact may be a bunch of procedures (demons) that
are run when the fact is asserted, or denied, when the program needs
to figure out whether the fact is true or not, etc.  Running a demon
may trigger other demons, or add new facts, or new demons, and so the
system grinds away.  The underlying principle is the same as in
production systems: there is a large body of domain specific
knowledge, plus a simple interpreter that makes no initial commitment
to the order or context in which the facts are going to be used.  The
name of the game is pattern-directed invocation: the next action to
take is selected from among the ``rules'' or ``methods'' or ``demons''
that are relevant to the current situation.  This characteristic is
not unique to expert systems, but (I think) every program that has
ever been called an expert system has this characteristic, and
moreover it has been central to that program's behavior.
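
The frames-and-demons scheme can be caricatured the same way (again a
hypothetical Python sketch; the fact and demon names are invented):

```python
# Frames-and-demons caricature (invented facts and demons).  Asserting a
# fact runs the if-added demons attached to it; a demon may assert
# further facts, so the system "grinds away" with no order of use
# committed to in advance -- pattern-directed invocation again.

demons = {}     # fact -> list of procedures to run when fact is asserted
facts = set()

def on_assert(fact, demon):
    demons.setdefault(fact, []).append(demon)

def assert_fact(fact):
    if fact in facts:
        return              # already known; don't re-trigger demons
    facts.add(fact)
    for demon in demons.get(fact, []):
        demon()             # a demon may call assert_fact recursively

on_assert("chest-pain", lambda: assert_fact("consider-ecg"))
on_assert("consider-ecg", lambda: assert_fact("order-ecg"))
assert_fact("chest-pain")
print(sorted(facts))
```

One assertion triggers a chain of demons, each selected only by the fact
it is attached to.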

   2. A production system is in essence a set of rules that state that
   "IF X occurs, THEN take action Y."  System designers must anticipate
   the set of "X" that can occur.  What if something happens that is not
   anticipated in the specified set of "X"?  I assert that the most
   common result in such cases is that nothing happens.  Am I right,
   wrong, or off the map?

In most implementations of production systems, if the current
situation is such that no rules match it, nothing happens (maybe the
program prints out the atom 'DONE :-).  If the system is working in a
goal-directed fashion (e.g. it's trying to find out under what
circumstances it can take action Y (action Y might be "conclude that Z
has occurred")) and there aren't any rules that tell it anything about
Y, again, nothing happens: it can't conclude Z.  In practice, there
are always very general rules that apply when nothing else does.
Being general, they're probably not very helpful: "IF () THEN SAY
Take-Two-Aspirin-And-Call-Me-In-The-Morning."  The same applies to any
brand of pattern-directed invocation.

However, it's getting on the hairy edge of matters to say "System
designers must anticipate the set of X that can occur."  The reason is
that productions (methods, demons) are supposed to be modular;
independent of other productions; typically written to trigger on only
a handful of the possibly thousands of features of the current
situation.  So in fact I don't need to anticipate all the situations
that occur, but rather ``just'' figure out all the relevant features
of the space of situations, and then write rules that deal with
certain combinations of those features.  It's like a grammar: I don't
have to anticipate every valid sentence, except in the sense that I
need to figure out what all the word categories are and what local
combinations of words are legal.

Now, to hone your observation a bit, I suggest focusing on the notion
of ``figuring out all the relevant features of the space of
situations.''  That's what's difficult.  Experts (including
carbon-based ones) make mistakes when they ignore (or are unaware of)
features of the situation that modify or overrule the conclusions made
from other features.  The fundamental problem in building an expert
system that deals with the real world is not entirely in cramming
enough of the right rules into it (although that's hard), it's
encoding all the exceptions, or, more to the point, remembering to
include in the program's model of the world all the features that
might be relevant to producing exceptions.

End of overly long flame.

        Walter Hamscher

P.S. I am not an AI guru, rather, a mere neophyte disciple of the bona
fide gurus on my thesis committee.

------------------------------

Date: Tue Sep 23 11:33:13 GMT+1:00 1986
From: mcvax!lasso!ralph@seismo.CSS.GOV  (Ralph P. Sobek)
Subject: Re: queries about expert systems (Vol 4, no. 187)

Herb,

  >1. Production systems are the implementation of many expert systems.
  >In what other forms are "expert systems" implemented?

        I recommend the book "A Guide to Expert Systems," by Donald
Waterman.  It describes many expert systems, which more or less fit
your definition, and what they are implemented in.  Production
Systems (PSs) can basically be divided into forward-chaining (R1/XCON) and
backward-chaining (EMYCIN); mixed systems which do both exist.  Other
representations include frame-based (SRL), semantic nets (KAS), object-
oriented, and logic-based.  The representation used often depends on what
is available in the underlying Expert System Tool.  Tools now exist which
provide an integrated package of representation structures for the expert
system builder to use, e.g., KEE and LOOPS.  Expert systems are also written
in standard procedural languages such as Lisp, C, Pascal, and even Fortran.

  >2. A production system is in essence a set of rules that state that
  >"IF X occurs, THEN take action Y."  System designers must anticipate
  >the set of "X" that can occur.  What if something happens that is not
  >anticipated in the specified set of "X"?  I assert that the most
  >common result in such cases is that nothing happens.

        In both forward-chaining and backward-chaining PSs, nothing happens.
If the PS itself produces "X", we can at least verify whether "X" is ever
used.  In the general case, if "X" comes from some arbitrary source, there
is no guarantee that the PS (or any other system) will even see the datum.
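
The forward/backward distinction can be made concrete with a toy
backward chainer (a hypothetical Python sketch, not R1 or EMYCIN; note
it does no loop-checking, so cyclic rule sets would recurse forever):

```python
# Toy backward chainer (invented rule set).  A rule is (conclusion,
# premises).  prove() succeeds if the goal is a known fact or some rule
# concluding it has provable premises; with no applicable rule it simply
# fails -- nothing happens, the goal can't be concluded.

def prove(goal, rules, facts):
    if goal in facts:
        return True
    premise_sets = [ps for conclusion, ps in rules if conclusion == goal]
    return any(all(prove(p, rules, facts) for p in ps) for ps in premise_sets)

rules = [("Z", ["X", "Y"]),   # IF X and Y THEN conclude Z
         ("Y", ["X"])]        # IF X THEN conclude Y
print(prove("Z", rules, {"X"}))   # True: Y follows from X, then Z
print(prove("W", rules, {"X"}))   # False: no rule concludes W
```

A forward chainer would instead start from the facts and apply rules
until nothing new is produced; the knowledge base is the same either way.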

        Ralph P. Sobek

UUCP:   mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA:   sobek@ucbernie.Berkeley.EDU     (automatic forwarding)
BITNET: SOBEK@FRMOP11

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:38:17 1986
Date: Tue, 30 Sep 86 20:38:06 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #196
Status: R


AIList Digest           Thursday, 25 Sep 1986     Volume 4 : Issue 196

Today's Topics:
  Linguistics - NL Generation,
  Logic - TMS, DDB and Infinite Loops,
  AI Tools - Turbo Prolog & Xerox vs Symbolics,
  Philosophy - Associations & Intelligent Machines

----------------------------------------------------------------------

Date: Mon, 22 Sep 86 10:31:23 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Reply-to: rapaport@sunybcs.UUCP (William J. Rapaport)
Subject: followup on NL generation

In article <MS.lb0q.0.hatfield.284.1@andrew.cmu.edu>
    lb0q@ANDREW.CMU.EDU (Leslie Burkholder) writes:
>Has work been done on the problem of generating relatively idiomatic English
>from sentences written in a language for first-order predicate logic?
>Any pointers would be appreciated.
>
>Leslie Burkholder
>lb0q@andrew.cmu.edu


We do some work on NL generation from SNePS, which can easily be translated
into pred. logic.  See:

Shapiro, Stuart C. (1982), ``Generalized Augmented Transition Network
Grammars For Generation From Semantic Networks,'' American Journal of
Computational Linguistics 8:  12-25.


                                William J. Rapaport
                                Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3180

uucp:   ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet:  rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet

------------------------------

Date: 20 Sep 86 15:41:26 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: TMS, DDB and infinite loops


   Date: Mon, 08 Sep 86 16:48:15 -0800
   From: Don Rose <drose@CIP.UCI.EDU>

   Does anyone know whether the standard algorithms for belief revision
   (e.g. dependency-directed backtracking in TMS-like systems) are
   guaranteed to halt? That is, is it possible for certain belief networks
   to be arranged such that no set of mutually consistent beliefs can be found
   (without outside influence)?

I think these are two different questions.  The answer to the
second question depends less on the algorithm than on whether
the underlying logic is two-valued or three-valued.  The answer
to the first question is that halting is only a problem when the
logic is two-valued and the space of beliefs isn't fixed during
belief revision [Satisfiability in the propositional calculus
is decidable (though NP-complete)].  Doyle's TMS goes into
infinite loops.  McAllester's won't.  deKleer's ATMS won't loop
either, but that's because it finds all the consistent
labelings, and there just might not be any.  Etc, etc; depends
on what you consider ``standard.''
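
The aside that propositional satisfiability is decidable (though
NP-complete) can be illustrated by brute force (a hypothetical modern
Python sketch, not TMS code): with n variables there are only 2**n
assignments to try, so the search always terminates.

```python
from itertools import product

# Brute-force satisfiability for CNF clauses.  A clause is a list of
# (variable, polarity) literals; (v, True) means v, (v, False) means not-v.
# Exhaustive enumeration is exponential but always halts -- decidability.

def satisfiable(variables, clauses):
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(any(env[v] == pol for v, pol in clause) for clause in clauses):
            return True
    return False

# (A or B) and (not A) and (not B) is contradictory; drop a clause and
# a consistent assignment exists.
print(satisfiable("AB", [[("A", True), ("B", True)], [("A", False)], [("B", False)]]))
print(satisfiable("AB", [[("A", True), ("B", True)], [("A", False)]]))
```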

        Walter Hamscher

------------------------------

Date: Sat, 20 Sep 86 15:02 PDT
From: dekleer.pa@Xerox.COM
Subject: TMS, DDB and infinite loops question.

    Does anyone know whether the standard algorithms for belief revision
    (e.g. dependency-directed backtracking in TMS-like systems) are
    guaranteed to halt?

It depends on what you consider the standard algorithms and what you
consider a guarantee.  Typically a Doyle-style (NMTMS) comes in two
versions, (1) guaranteed to halt, and, (2) guaranteed to halt if there
are no "odd loops".  Version (2) is always more efficient and is
commonly used.  The McAllester-style (LTMS) or my style (ATMS) always
halt.  I don't know if anyone has actually proved these results.

    That is, is it possible for certain belief networks
    to be arranged such that no set of mutually consistent beliefs
    can be found (without outside influence)?

Sure, it's called a contradiction.  However, the issue of what to do
about odd loops remains somewhat unresolved.  By an odd loop I mean a node
which depends on its own disbelief an odd number of times, the most
trivial example being to give A a non-monotonic justification with an empty
inlist and an outlist of (A).
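
That trivial odd loop can be seen to have no stable labeling with a
two-line simulation (a hypothetical Python sketch; real TMS label
propagation is considerably more involved):

```python
# The trivial odd loop: node A has one non-monotonic justification with
# an empty inlist and an outlist of (A), so A should be IN exactly when
# A is OUT.  Repeatedly applying the justification never settles.

def relabel(a_is_in):
    # The justification holds when every inlist node is IN (vacuously
    # true) and every outlist node is OUT -- i.e. exactly when A is OUT.
    return not a_is_in

labels = [False]            # start with A labeled OUT
for _ in range(6):
    labels.append(relabel(labels[-1]))
print(labels)               # oscillates IN/OUT: no fixed point exists
```

Since no assignment is a fixed point of the relabeling, a naive
Doyle-style relabeler can loop forever on such a network.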

------------------------------

Date: Tue 23 Sep 86 14:39:47-CDT
From: Larry Van Sickle <cs.vansickle@r20.utexas.edu>
Reply-to: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Money back on Turbo Prolog

Borland will refund the purchase price of Turbo Prolog
for sixty days after purchase.  The person I talked to
at Borland was courteous, did not argue, just said to
send the receipt and software.

Larry Van Sickle
U of Texas at Austin
cs.vansickle@r20.utexas.edu  512-471-9589

------------------------------

Date: Tue 23 Sep 86 13:54:29-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Turbo Prolog

For another review of Turbo Prolog see the premier issue of AI Expert.
Darryl Rubin discusses several weaknesses relative to Clocksin-and-Mellish
prologs, but is enthusiastic about the package for users who have no
experience with (i.e., preconceptions from) other prologs.  The Turbo
version is very fast, quite compact, well documented, comes with a
lengthy library of example programs, and interfaces to a sophisticated
window system and other special tools.  It could be an excellent system
for database retrieval and other straightforward tasks.  His chief
reservation was about the "subroutine call" syntax that requires
all legal arities and argument types to be predeclared and does not
permit use of comma as a reduction operator.

                                        -- Ken Laws

------------------------------

Date: 19 Sep 86 14:27:15 GMT
From: sdcrdcf!darrelj@hplabs.hp.com  (Darrel VanBuer)
Subject: Re: Dandelion vs Symbolics

A slight echo on the Interlisp file package (partly a response to an earlier
note on problems using MAKEFILE and losing a bunch of user-entered
properties).
Rule 1.  Users never call MAKEFILE (in 9 years of Interlisp hacking, I've
probably called it half a dozen times).

So how do you make files?  I mainly use two functions:
CLEANUP()     or CLEANUP(file1 file2 ...)   The former does all files
        containing modifications, the latter only the named files.  The first
        thing CLEANUP does is call UPDATEFILES, which is also called by:
FILES?()        Reports the files which need action to have up-to-date
        source, compiled code, and hardcopies; also calls UPDATEFILES, which will
        engage you in a dialog asking you the location of every "new" object.

Most of the ways to define or modify objects are "noticed" by the file
package (e.g. the structure editor [DF, EF, DV ...], SETQ, PUTPROP, etc which
YOU type at top level).  When an object is noticed as modified, either the
file(s) containing it are marked as needing a remake, or it gets noted as
something to ask you about later.  You can write functions which play the
game by calling MARKASCHANGED as appropriate.

Two global variables interact with details of the process:
RECOMPILEDEFAULT  usually EXPRS or CHANGES.  I prefer the former, but CHANGES
        has been the default in Interlisp-D because EXPRS didn't work before
        Intermezzo.
CLEANUPOPTIONS  My setting is usually (RC STF LIST) which means as part of
cleanup, recompile, with compiler flags STF (F means forget source from in
core, filepkg will automagically retrieve it if you edit, etc), and make a
new hardcopy LISTing.

For real fun with filepkg and integration with other tools, try
MASTERSCOPE(ANALYZE ALL ON file1)
MASTERSCOPE(EDIT WHERE ANY CALLS FOO)
CLEANUP()

--
Darrel J. Van Buer, PhD
System Development Corp.
2525 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                            !sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: Sat, 20 Sep 86 10:23:18 PDT
From: larus@kim.Berkeley.EDU (James Larus)
Subject: Symbolics v. Xerox

OK, here are my comments on the Great Symbolics-Xerox debate.  [As
background, I was an experienced Lisp programmer and emacs user before
trying a Symbolics.]  I think that the user interface on the Symbolics
is one of the poorest pieces of software that I have ever had the
misfortune of using.  Despite having a bit-mapped display, Symbolics
forces you to use a one-window on the screen at a time paradigm.  Not
only are the default windows too large, but some of them (e.g. the
document examiner) take over the whole screen (didn't anyone at
Symbolics think that someone might want to make use of the
documentation without taking notes on paper?).  Resizing the windows
(a painful process involving a half-dozen mouse-clicks) results in
unreadable messages and lost information since the windows don't
scroll (to be fixed in Genera 7).  I cannot understand how this
interface was designed (was it?) or why people swear by it (instead of
at it).

The rest of the system is better.  Their Common Lisp is pretty solid
and avoids some subtle bugs in other implementations.  Their debugger
is pretty weak.  I can't understand why a debugger that shows the
machine's bytecodes (which aren't even documented for the 3600
series!) is considered acceptable in a Lisp environment.  Even C has
symbolic debuggers these days!  Their machine coexists pretty well
with other types of systems on an internet.  Their local filesystem is
impressively slow.

The documentation is pretty bad, but is getting better.  It reminds me
of the earlier days of Unix, where most of the important stuff wasn't
written down.  If you had an office next to a Unix guru, you probably
thought Unix was great.  If you just got a tape from Bell, then you
probably thought Unix sucked.  There appears to be a large amount of
information about the Symbolics that is not written down and is common
knowledge at places like MIT that successfully use the machines.
(Perhaps Symbolics should ship an MIT graduate with their machines.)
We have had a lot of difficulty setting up our machines.  Symbolics
has not been very helpful at all.

/Jim

------------------------------

Date: Tue Sep 23 12:31:35 GMT+1:00 1986
From: mcvax!lasso!ralph@seismo.CSS.GOV  (Ralph P. Sobek)
Subject: Re: Xerox 11xx vs. Symbolics 36xx vs. ...

I enjoyed all the discussion on the pluses and minuses of these and other
lisp machines.  I, myself, am an Interlisp user.  Those who know a
particular system well will prefer it over another.  All these lisp systems
are quite complex and require a long time, a year or so, before one achieves
proficiency.  And as any language, human or otherwise, one's perception of
one's environment depends upon the tools/semantics that the language provides.

I have always found Interlisp much more homogeneous than Zetalisp.  The
packages are structured so as to interface easily.  I find the written
documentation  also much more structured, and smaller, than the number of
volumes that come with a Symbolics.  Maybe, Symbolics users only use the
online documentation and thus avoid the pain of trying to find something
in the written documentation.  The last time I tried to find something in
the Symbolics manuals -- I gave up, frustrated! :-)

The future generation of lisp machines, after Common Lisp, will be
interesting.

        Ralph P. Sobek

UUCP:   mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA:   sobek@ucbernie.Berkeley.EDU     (automatic forwarding)
BITNET: SOBEK@FRMOP11

------------------------------

Date: 22 Sep 86 12:28:00 MST
From: fritts@afotec
Reply-to: <fritts@afotec>
Subject: Associations -- Comment on AIList Digest V4 #186

The remark has been made on AIList, I think, and elsewhere that computers
do not "think" at all like people do.  Problems are formally stated and
stepped through sequentially to reach a solution.  Humans find this very
difficult to do.  Instead, we seem to think in a series of observations
and associations.  Our observations are provided by our senses, but how
these are associated with stored memory of other observations is seemingly
the key to how humans "think".  I think that this process of sensory
observation and association runs more or less continuously and we are not
conciously aware of much of it.  What I'd like to know is how the decision
is made to associate one observation with another; what rules of association
are made and are they highly individualized or is there a more general
pattern.  How is it that we acquire large bodies of apparently diverse
observations under simple labels and then make complex decisions using
these simple labels rather than stepping laboriously through a logical
sequence to achieve the same end?  There must be some logic to our
associative process  or we could not be discussing this subject at all.

Steve Fritts
FRITTS@AFOTEC

------------------------------

Date: 22 Sep 86 09:01:50 PDT (Monday)
From: "charles_kalish.EdServices"@Xerox.COM
Subject: Re: intelligent machines

In his message, Peter Pirron sets out what he believes to be necessary
attributes of a machine that would deserve to be called intelligent.
From my experience, I think that his intuitions about what it would take
for a machine to be intelligent are, by and large, pretty widely
shared and, as far as I'm concerned, pretty accurate.  Where we differ,
though, is in how these intuitions apply to designing and demonstrating
machine intelligence.

Pirron writes: "There is the phenomenon of intentionality and motivation
in man that finds no direct correspondent phenomenon in the computer." I
think it's true that we wouldn't call anything intelligent we didn't believe
had intentions (after all intelligent is an intentional ascription).
But I think that Dennett (see "Brainstorms") is right in that intentions
are something we ascribe to systems and not something that is built in
or a part of that system.  The problem then becomes justifying the use
of intentional descriptions for a machine; i.e. how can I justify my
claim that "the computer wants to take the opponent's queen" when the
skeptic responds that all that is happening is that the X procedure has
returned a value which causes the Y procedure to  move piece A to board
position Q?

I think the crucial issue in this question is how much (or whether) the
computer understands. The problem with systems now is that it is too
easy to say that the computer doesn't understand anything, it's just
manipulating markers. That is that any understanding is just
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (observers) not to the computer.  How then could
we ever get something to mean anything to a computer? Some people (I'm
thinking of Searle) would say you can't, computers can't have semantics
for the symbols they process. I  found this issue  in Pirron's message
where he says:
"Real "understanding" of natural language however needs not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustical).   Language normally refers to objects
which we first experience  by sensory input and then name it."   The
idea is that you want to ground the computer's use of symbols in some
non-symbolic experience.

Unfortunately, the solution proposed by Pirron:
"The constructivistic theory of human learning of language by Paul
Lorenzen und O. Schwemmer (Erlanger Schule) assumes a "demonstration
act" (Zeigehandlung)  constituting a fundamental element of man (child)
learning language.  Without this empirical fundament of language you
will never leave the hermeneutic circle, which drove former philosophers into
despair."  (Having not read these people, I presume they mean something
like pointing at a rabbit and saying "rabbit".)  This has been demonstrated by
Quine (see "Word and Object") to keep you well within the circle.  But
these arguments are about people, not computers, and we do feel (at
least) that the symbols we use and communicate with are rooted in
something non-symbolic.  I can see two directions from this.

One is looking for pre-symbolic, biological constraints;  Something like
Rosch's theory of basic levels of conceptualization.  Biologically
relevant, innate concepts, like mother, food, emotions, etc.  would
provide the grounding for complex concepts.  Unfortunately for a
computer, it doesn't have an evolutionary history which would generate
innate concepts-- everything it's got is symbolic.  We'd have to say
that no matter how good a computer got it wouldn't really understand.

The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings).  The
thing here is that, even though you could look into a person's brain
and, say, pick out the neural representation of a horse, to the person
with the open skull that's not a representation; it constitutes a horse,
it is a horse (from the point of view of the neural system).  And that's
what's different about people and computers. We credit people with a
point of view  and from that point of view, the symbols used in
processing are not symbolic at all, but real.  Why do people have a
point of view and not computers?  Computers can make reports of their
internal states probably better than we.  I think that Nagel has hit it
on the head (in "What Is It Like to Be a Bat?" -- I saw this article in "The
Mind's I") with his notion of "it is (or is not) like something to be
that thing."  So it is like something to be a person and presumably is
not like something to be a computer.  For a machine to be intelligent
and truly understand it must be like something to be that machine. Only
then can we credit that machine with a point of view and stop looking at
the symbols it uses as "mere" symbols.  Those symbols will have content
from the machine's point of view.  Now, how does it get to be like
something to be a machine? I don't know but I know it has a lot more to
do with the Turing test than with what kind of memory organization or search
algorithms the machine uses.

Sorry if this is incoherent, but it's not a paper so I'm not going to
proof it. I'd also like to comment on the claim that:
  "I would claim that the conviction mentioned above {that machines
can't equal humans}, however philosophical or sophisticated it may be
justified, is only the "RATIONALIZATION" ... of understandable but
irrational and normally unconscious existential fears and needs of human
beings" -- but this message is too long anyway.  Suffice it to say that
one can find a nasty Freudian interpretation of any point.

I'd appreciate hearing any comments on the above ramblings.

-Chuck

ARPA: chuck.edservices@Xerox.COM

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:38:39 1986
Date: Tue, 30 Sep 86 20:38:26 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #197
Status: R


AIList Digest           Thursday, 25 Sep 1986     Volume 4 : Issue 197

Today's Topics:
  AI Tools - University of Rochester HORNE System &
    Parallel Inference System at Maryland,
  Conferences - Upcoming Conference Programs (FJCC, COMPSAC, OIS,
      Info. and Software Sciences, Chautauqua)

----------------------------------------------------------------------

Date: Thu, 11 Sep 86 13:30 EDT
From: Brad Miller <miller@UR-ACORN.ARPA>
Subject: New University of Rochester HORNE system available

The University of Rochester HORNE reasoning system has just been re-released
in Common Lisp form, currently running on a Symbolics (though any Common Lisp
system should be able to run it with minor porting).

Features:
        Horn clause resolution prover (similar to PROLOG) with typed
unification and a specialized reasoner for equalities (e.g., A and B can be
asserted to be equal, and so will unify).  Equalities can be asserted between
any ground forms, including functions with ground terms.  A forward-chaining
proof mechanism, and an interface between this system and arbitrary
Common Lisp forms, are also provided.
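The flavor of asserted equalities can be pictured with a small union-find
sketch (an illustration of the general technique only, not HORNE's actual
interface; the names `assert_equal` and `unifiable` are invented here):

```python
# Illustrative only: a union-find over ground terms.  Once A and B are
# asserted equal, any later identity test between their equality classes
# succeeds -- which is what lets asserted-equal ground terms unify.
parent = {}

def find(term):
    """Return the canonical representative of term's equality class."""
    parent.setdefault(term, term)
    root = term
    while parent[root] != root:          # walk up to the class root
        root = parent[root]
    while parent[term] != root:          # path compression on the way back
        parent[term], term = root, parent[term]
    return root

def assert_equal(a, b):
    """Assert that ground terms a and b denote the same object."""
    parent[find(a)] = find(b)

def unifiable(a, b):
    """Ground terms unify iff they fall in the same equality class."""
    return find(a) == find(b)

assert_equal("A", "B")
assert_equal("B", "f(c)")                # equality between any ground forms
print(unifiable("A", "f(c)"))            # True: equality is transitive
```

The same idea scales to equalities over functions with ground terms by
canonicalizing each argument before comparing the outer terms.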

        As part of the same release we are providing REP, a frame-like
knowledge representation system built on top of the theorem prover, which uses
structured types to represent sets of objects.  A structured type may have
relations (or "roles") between its set of objects and other sets.  Arbitrary
instances of an object may be asserted to be equal to another instance, which
will utilize the underlying HORNE equality mechanisms.

HORNE is the product of several years of R&D in the Natural Language
Understanding and Knowledge Representation projects supervised by Prof. James
Allen at the University of Rochester, and forms the basis for much of our
current implementation work.

A tutorial introduction and manual, TR 126 "The HORNE reasoning system in
Common-Lisp" by Allen and Miller is available for $2.50 from the following
address:

Ms. Peg Meeker
Technical Reports Administrator
Department of Computer Science
617 Hylan Building
University of Rochester
River Campus
Rochester, NY 14627

In addition a DC300XL cartridge tape in Symbolics distribution format, or
Symbolics carry-tape format (also suitable for TI Explorers), or a 1/2"
1600bpi reel in 4.2BSD TAR format (other formats are not available) is
available from the above address for a charge of $100.00 which will include
one copy of the TR.  This charge is made to defray the cost of the tape,
postage, and handling. The software itself is in the public domain. Larger
contributions are, of course, welcome.  Please specify which format tape you
wish to receive. By default, we will send the Symbolics distribution format.

All checks should be made payable to "University of Rochester, Computer
Science Department". POs from other Universities are also acceptable. Refunds
for any reason are not available.

DISCLAIMER: The software is supplied "as-is" without any implied warranties of
merchantability or fitness for a particular purpose. We are not responsible
for any consequential damages as the result of using this software.  We are
happy to accept bug reports, but promise to fix nothing. Updates are not
included; future releases (if any) will probably be made available under a
similar arrangement to this one, but need not be. In other words, what you get
is what you get.

Brad Miller
Computer Science Department
University of Rochester
miller@rochester.arpa
miller@ur-acorn.arpa

------------------------------

Date: Thu, 11 Sep 86 17:08:33 EDT
From: Jack Minker <minker@mimsy.umd.edu>
Subject: Parallel Inference System at Maryland


        [Excerpted from the Prolog digest by Laws@SRI-STRIPE.]


            AI and Database Research Laboratory
                           at the
                   University of Maryland
                   Jack Minker - Director


     The AI and Database Research Laboratory at the  Univer-
sity  of  Maryland  is  pleased  to announce that a parallel
logic programming system (PRISM) is now operational  on  the
McMOB multiprocessor.  The system uses up to sixteen pro-
cessors to exploit medium grained parallelism in logic  pro-
grams.   The underlying ideas behind PRISM appeared in [Eis-
inger et al., 1982] and [Kasif et al., 1983].
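The announcement doesn't spell out PRISM's mechanism, but one common source
of medium-grained parallelism in logic programs is OR-parallelism: the
alternative clauses for a goal are tried by separate processors.  A toy
sketch of that idea (not PRISM's actual design; `solve` and the clause
list are invented here):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy OR-parallelism: each alternative clause for a goal is attempted by
# a separate worker; the goal succeeds if any alternative succeeds.
def solve(alternatives, workers=16):     # PRISM used up to 16 processors
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return any(pool.map(lambda clause: clause(), alternatives))

# Three hypothetical clause bodies for one goal; only the second succeeds.
goal_clauses = [lambda: False, lambda: True, lambda: False]
print(solve(goal_clauses))               # True
```

In a real system each "clause body" is itself a subproof, and the scheduler
must balance the cost of shipping a goal to another processor against the
work it saves -- hence "medium grained."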

[...]

     If you would like further information on PRISM,  please
contact  MINKER@MARYLAND  or MADHUR@MARYLAND.  We would also
be very interested in hearing from people who may have prob-
lems we could run on PRISM.

References:

1.   Eisinger, N., Kasif, S., and Minker,  J.,  "Logic  Pro-
     gramming:  A  Parallel Approach", in Proceedings of the
     First International Logic Programming Conference,  Mar-
     seilles, France, 1982.

2.   Kasif, S., Kohli, M., and Minker, J., "PRISM - A Paral-
     lel Inference System for Problem Solving", in IJCAI-83,
     Karlsruhe, Germany, 1983.

3.   Rieger, C., Bane, J., and Trigg, R.,  "ZMOB:  A  Highly
     Parallel  Multiprocessor",  University of Maryland, TR-
     911, May 1980.

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: ****** AI AT UPCOMING CONFERENCES ******

AI papers at November 2-6, 1986 FJCC, Dallas Texas

Professional Education Program Items
John D. McGregor, Building Expert Systems Workshop
Lois Boggess and Julia Hodges, Knowledge-Based Expert
Systems
Benjamin Wah, Architectures for AI Applications
Michael Lebowitz, Natural Language Processing
Michael Lebowitz, Machine Learning
Paul Bamberg, Speech Recognition: From Isolated Digits to Natural
Language Dictation
John Kender and Takeo Kanade, Computer Vision from an AI Perspective
Douglas DeGroot, Prolog and Knowledge Info Processing
Harland H. Black, AI Programming and Environments

Paper Sessions

AI-1, November 4, 1:30 PM to 3:30 PM
Panel Session on "Design Issues and Practice in AI Programming"

AI-2 Session 1, November 5, 10:00 am to noon, Computer Vision
Generic Surface Interpretation Inference Rules and Quasi-Invariants
  Thomas Binford, Stanford U.
An Overview of Computation of Structure and Motion From Images
  J. K. Aggarwal, University of Texas at Austin
Industrial World Vision
  Robert Haralick, Machine Vision International

AI-2 Session 2, November 5, 1:30 PM-3:30PM
Survey of Image Quality Measurements
  I. Abdou and N. Dusaussoy, University of Delaware
A Spatial Knowledge Structure for Image Information Systems Using
Symbolic Projects
  S. K. Chang, U. of Pittsburgh, E. Jungert, FFV Elektronic A. B.
Document Image Understanding
  S. N. Srihari, SUNY at Buffalo

AI-3 Session 1, November 5 3:45 PM- 5:15 PM, Robotics
Living in a Dynamic World
  Russell A. Andersson, AT&T Bell Labs
Error Modeling in Stereo Navigation
  L. Matthies and S. A. Shafer, Carnegie Mellon U
CMU Sidewalk Navigation System
  Y. Goto, et al., Carnegie Mellon U.

AI-3 Session 2, November 6 10AM - noon
Automatic Grasp Planning: An Operation Space Approach
  M. T. Mason and R. C. Brost, Carnegie Mellon U.
Planning Stable Grasps for Multi-fingered Hands
  V. Nguyen, MIT
Off-line Planning for On-line Object Localization
  T. Lozano-Perez and W. E. Grimson, MIT

AI-3 Session 3, November 6, 1:30 PM - 3:30 PM

AMLX: A Manufacturing Language/Extended
  L. Nackman, et al., IBM T. J. Watson Research Center
SATYR and the NYMPH: Software Design in a Multiprocessor
for Control Systems
  J. B. Chen, et al., Stanford University
The Meglos User Interface
  R. Gaglianello and H. Katseff, AT&T Bell Laboratories
A Robot Force and Motion Server
  R. Paul and H. Zhang, University of Pennsylvania

AI-4, Session 1, November 5, 1:30 PM - 3:30 PM, Rule-Based
  Systems
The AI-ADA Interface
  Dr. Jorge Diaz-Herrera, George Mason University
The AI-LISP Environment
  Dr. Harry Tennant, Texas Instruments
The AI PROLOG Environment: SIMPOS- Sequential Inference
Machine Programming
  Drs. H. Ishibashi, T. Chikayama, H. Sato, M. Sato and
  S. Uchida, ICOT Research Center
Software Engineering for Rule-Based Systems
  R. J. K. Jacob and J. N. Froscher, Naval Research
  Laboratory

Session 2: Knowledge Engineering panel, November 5,
  3:45 PM - 5:15 PM
  Dr. Richard Wexelblat, Philips Lab, Chair
  Dr. Paul Benjamin, Philips Laboratories
  Dr. Christina Jette, Schlumberger Well Services
  Dr. Steve Pollit, Digital Equipment

Session 3:, November 6, 1:30PM - 3:30 PM

"An Organizational Framework for Building Adaptive Artificial
Intelligence Systems"
  T. Blaxton and B. Kushner, BDM Corporation
"An Object/Task Modelling Approach"
  Q. Chen, Beijing Research Institute of Surveying and Mapping
"A Plant Intelligent Supervisory Control Expert System"
  M. Ali and E. Washington, University of Tennessee
"Knowledge-Based Layout Design System for Industrial Plants"
  K. Yoshida, et al., Hitachi Ltd.

Session 4: Prolog and Frame based Methods, November 6, 3:45 pm to 5:15 pm
"A Logic-Programming Approach to Frame-Based Language Design"
  H. H. Chen, I. P. Lin and C. P. Wu, National Taiwan University
"Interfacing Prolog to Pascal"
  K. Magel, North Dakota State University
"Knowledge-Based Optimization in Prolog Compiler"
  N. Tamura, Japan Science Institute, IBM Japan

Natural Language Processing, Session 1, Nov. 4 10AM - noon
"Communication with Expert Systems"
  Kathleen R. McKeown, Columbia University
"Language Analysis in Not-So-Limited Domains"
  Dr. Paul S. Jacobs, General Electric, R&D
"Providing Expert Systems with Integrated Natural Language and Graphical
Interfaces"
  Dr. Philip J. Hayes, Carnegie Group Inc.
"Pragmatic Processes in a Portable NL System"
  Dr. Paul Martin, SRI AI Center

Session 2: Nov 4, 1:30 - 3:30 PM
"Uses of Structured Knowledge Representation Systems in Natural Language
Processing"
  N. Sondheimer, University of Southern California
"Unifying Lexical, Syntactic and Semantic Text Processing"
  K. Eiselt, University of California at Irvine
"Robustness in Natural Language Interfaces"
  R. Cullingford, Georgia Tech
"Connectionist Approaches to Natural Language Processing"
  G. Cottrell, UC San Diego

Panel: Problems and Prospects of NLP, November 4, 3:45 PM - 5:15 PM
Chair: Dr. Philip J. Hayes
Gene Charniak, Brown University; Dave Waltz, Thinking Machines;
Robert Wilensky, UC Berkeley; Gary Hendrix, Symantec; Jerry Hobbs, SRI

"Parallel Processing for AI", Tuesday, November 4, 10 AM - 12 noon
"Parallel Processing of a Knowledge-Based Vision System"
  D. I. Moldovan and C. I. Wu, USC

"A Fault Tolerant, Bit-Parallel, Cellular Array Processor"
  S. Morton, ITT Advanced Technology Center
"Implementation of Parallel Prolog on Tree Machines"
  M. Imai, Toyohashi University of Technology
"Optimal Granularity of Parallel Evaluation of AND-Trees"
  G. J. Li and B. W. Wah, University of Illinois at Urbana

(some of the following sessions contain non-AI papers that are not listed)
Session 2: "New Directions in Optical Computing", November 4, 1:30 PM - 3:30 PM
  "Optical Symbolic Computing", Dr. John Neff, DARPA/DSO and B. Kushner, BDM Co.


VLSI Design and Test: Theory and Practice, Nov 4 10AM - 12 noon

A Knowledge-Based TDM Selection System
  M. E. Breuer and X. Zhu, USC

Expert Systems for Design and Test Thursday, November 6, 10AM - 12 noon
DEFT, A Design for Testability Expert System
  J. A. B. Fortes and M. A. Samad
Experiences in Prolog DFT Rule Checking
  G. Cabodi, P. Camurati and P. Prinetto, Politecnico di Torino

Object-Oriented Software, Tuesday, November 4 1:30pm - 3:30 pm
"Some Problems with Is-A: Why Properties are Objects"
  Prof. Stan Zdonik, Brown University

Computer Chess Techniques
"Phased State Space Search" T. A. Marsland, University of Alberta and
N. Srimani, Southern Illinois U.
"A Multiprocessor Chess Program", J. Schaeffer, University of Alberta
Panel Discussion
Tony Marsland, U. of Alberta; Hans Berliner, CMU; Ken Thompson, AT&T Bell Labs;
Prof. Monroe Newborn, McGill University; David Levy, Intelligent Software;
Prof. Robert Hyatt, U. of Southern Mississippi

Searching, Nov 6, 10AM - 12 noon
"Combining Symmetry and Searching",
  L. Finkelstein, et al. Northeastern University

Fifth Generation Computers I: Language Arch, Nov 5, 10AM - 12 noon
Knowledge-Based Expert System for Hardware Logic Design
  T. Mano, et al., Fujitsu
Research Activities on Natural Language Processing of the FGCS Project
  H. Miyhoshi, et al., ICOT
ARGOS/V: A System for Verification of Prolog Programs
  H. Fujita, et al., Mitsubishi Electric

Session 4: "Supercomputing Systems" November 6 10:00am - noon
The IX Supercomputer for Knowledge Based Systems
  T. Higuchi, et al. ETL

(There are positions available as volunteers, for which you get to attend
the conference and receive a copy of the proceedings in exchange for helping
out one day.  If interested, call 409-845-8981.  The program is oriented
towards graduate students and seniors.)
__________________________________________________________________________
Compsac 86, Conference October 7-10, 1986, Americana Congress Hotel, Chicago, Ill.

Tutorial: October 6, 1986, 9AM - 5PM
Doug DeGroot, Prolog and Knowledge Information Processing

October 8 11:00 AM - 12:30 PM
Modularized OPS-Based Expert Systems Using Unix Tools
  Pamela T. Surko, AT&T Bell Labs
Space Shuttle Main Engine Test Analysis: A Case Study for Inductive Knowledge-
Based Systems for Very Large Databases
  Djamshid Asgari, Rockwell International
  Kenneth L. Modesitt, California State University
A Knowledge-Based Software Maintenance Environment
  Steven S. Yau, Sying-Syang Liu, Northwestern University

October 8 2:00PM - 3:30 PM
An Evaluation of Two New Inference Control Methods
  Y. H. Chin, W. L. Peng, National Tsing Hua University, Taiwan
Learning Dominance Relations in Combinatorial Search Problems
  Chee-Fen Yu and Benjamin Wah, University of Illinois at Urbana-Champaign, USA
Fuzzy Reasoning Based on Lambda-LH Resolution
  Xu-Hua Liu, Carl K. Chang and Jing-Pha Tsai, University of Illinois at Chicago

4:00-5:30PM Panel Discussion on the Impact of Knowledge-Based Technology
Chair: Carl K. Chang, University of Illinois at Chicago
Panelists: Don McNamara, GE Corporate Research; Kiyoh Nakamura, Fujitsu (Japan);
Wider Yu, AT&T Bell Labs; R. C. T. Lee, National Tsing Hua University, Taiwan

Thursday, October 9, 1986, 10:30 - 12:00 PM
Special Purpose Computer Systems for Supporting AI Applications
Minireview by Benjamin Wah, University of Illinois at Urbana-Champaign

__________________________________________________________________________
ACM Conference on Office Information Systems, October 6-8 1986, Providence
Rhode Island

October 6, 1986 2:45 - 4:5PM
Adaptive Interface Design: A Symmetric Model and a
  Knowledge-Based Implementation
  Sherman W. Tyler, Siegfried Treu, University of Pittsburgh
Automating Review of Forms for International Trade Transactions: A Natural
Language Processing Approach
  V. Dhar, P. Ranganathan

October 8, 1986 9:10:15AM
Panel on "AI in the Office", Chair Gerald Barber

October 8, 1986, 10:30 AM - 12:00 noon, Organizational Analysis: Organizational Ecology
Modelling Due Process in the Workplace
  Elihu M. Gerson, Susan L. Star, Tremont Research Institute
An Empirical Study of the Integration of Computing into Routine Work
  Les Gasser, University of Southern California
Offices are Open Systems
  Carl Hewitt, MIT Artificial Intelligence Lab

October 8, 1986 1:00 - 2:30PM
Handling Shared Resources in a Temporal Data Base Management System
  Thomas L. Dean, Brown University
Language Constructs for Programming by Example
  Robert V. Rubin, Brown University
Providing Intelligent Assistance in Distributed Office Environments
  Sergei Nirenburg and Victor Lesser, Colgate University / University of Massachusetts

__________________________________________________________________________
Fourth Symposium on Empirical Foundations of Information and Software Sciences
October 22-24 Atlanta Georgia

October 22, 1:30-3:15 PM
Expert Systems for Knowledge Engineering: Modes of Development
  Glynn Harmon, University of Texas, Austin

October 23, 10:45 AM - 12:30 PM
Face to Machine Interaction in Natural Language: Empirical Results of Field
Studies with an English and German Interface
  Juergen Krause, Universitaet Regensburg, F. R. Germany

October 24, 9:00 - 10:30AM
Evaluating Natural Language Interfaces to Expert Systems
  Ralph M. Weischedel, BBN, Cambridge MA
Counting Leaves: An Evaluation of Ada, LISP and Prolog
  Jagdish C. Agrawal, Embry-Riddle Aeronautical University, Daytona Beach, FL
  Shan Manicam, Western Carolina University, Cullowhee, NC

__________________________________________________________________________
The Fourth Chautauqua, October 25-29, 1986, Coronado, California

Session 9, Knowledge-Based Systems, 10:30 - 12:30 PM

Knowledge-Based Systems Development of CNC Software
  Roy Tsui, Software R&D Engineer, Pneumo Precision, Inc., Allied Company
Towards Resident Expertise in Systems Design
  Dr. Medhat Karima, CAD/CAM Consultant, Ontario CAD/CAM Center
The Engineer as an Expert System Builder
  Dr. Richard Rosen, Vice President, Product Development, Silogic Inc.
An Overview of Knowledge-Based Systems for Design and Manufacturing
  Dr. Larry G. Richards, Director, Master's Program, University of Virginia

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:40:13 1986
Date: Tue, 30 Sep 86 20:40:03 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #198
Status: R


AIList Digest            Friday, 26 Sep 1986      Volume 4 : Issue 198

Today's Topics:
  Correction - Learned Information Address,
  Queries - Computers and Writing & Prospector Shell for IBM-PC &
    Learning via ES Rule Refinement & Character Recognition,
  AI Tools - OPS5 on the PC & Turbo Prolog &
    Xerox vs Symbolics Storage Reclamation,
  Review - Spang Robinson Summary, August 1986

----------------------------------------------------------------------

Date: Thu, 25 Sep 86 03:49:19 EDT
From: Marty Lyons  <MARTY%ORION.BITNET@WISCVM.WISC.EDU>
Subject: Address correction for ref. in Vol 4, Issue 195


  Just in case someone might have problems with USPS, Medfor
should read Medford below. (Actually, mail to them should get
there anyway, as long as you remember the zip, but just in case...)
> AIList Digest      Thursday, 25 Sep 1986     Volume 4 : Issue 195
>
>Date: 18 Sep 86 19:21:50 GMT
>From: ssc-vax!bcsaic!pamp@uw-beaver.arpa  (wagener)
>Subject: Re: Info on UNIX based AI Tools/applications (2nd req)
>        1) Expert Systems - The Ineternational Journal of
>                Knowledge Engineering;Learned Information Ltd.,
>                (This is an English Publication. It's US office
>                address is;
>                        Learned information Co.
>                        143 Old Marlton Pike
>                        Medfor,NJ 08055
 *** Typo...             ******  This should be Medford

------------------------------

Date: Thu, 25 Sep 86 09:59 EDT
From: Hirshfield@RADC-MULTICS.ARPA
Subject: Computers and Writing - A Solicitation


I am soliciting contributions for a volume entitled Computers and
Writing:  Theory and Research to be published as part of Ablex
Publishing's Writing Research Series.  As the title implies, the volume
will be devoted to research and theoretical investigations of the
interactions of computing and writing and will focus on long- range
prospects.  Potential contributors include Richard Mayer, Colette
Daiute, Cynthia Selfe and Jim Levin.

I would be pleased to hear of any papers or any ongoing studies that
relate to this exciting topic.  Please respond asap by net to Hirshfield
at RADC-multics, or write directly to Stuart Hirshfield, Department of
Mathematics and Computer Science, Hamilton College, Clinton, NY 13323.

------------------------------

Date: 25 Sep 1986 17:48 (Thursday)
From: munnari!nswitgould.oz!wray@seismo.CSS.GOV (Wray Buntine)
Subject: Prospector ESs for IBM-PC


OK, I've seen the recent list of IBM-PC Expert System Shells.
But which PROSPECTOR-type shells have the following:
        the ability to link in external routines
          (i.e., we have some C code that provides answers for some leaf nodes)?
I'd be grateful for any pointers re reliability and backup as well.

Wray Buntine

wray@nswitgould.oz.au@seismo
seismo!munnari!nswitgould.oz!wray

Computing Science
NSW Inst. of Tech.
PO Box 123, Broadway, 2007
Australia

------------------------------

Date: 26 Sep 1986 11:08-EDT
From: Hans.Tallis@ml.ri.cmu.edu
Subject: Learning via ES Rule Refinement?

I am working on learning by refining a given set of
expert system rules.  Ideally the learning cycle will involve no
humans in the loop.  I am already familiar with Politakis's SEEK work, but
pointers to other programs would be greatly appreciated.
--tallis@ml.ri.cmu.edu

------------------------------

Date: Thu, 25 Sep 86 11:10:16 edt
From: philabs!micomvax!peters@tezcatlipoca.CSS.GOV
Reply-to: micomva!peters@tezcatlipoca.CSS.GOV (peter srulovicz)
Subject: character recognition


We are starting a project that will involve a fair amount of character
recognition, both typed and handwritten. If anyone out there has information
about public domain software or software that can be purchased please let me
hear from you.

email: !philabs!micomvax!peters
mail:  Peter Srulovicz
       Philips Information Systems
       600 Dr. Philips Blvd
       St. Laurent Quebec
       Canada H4M-2S9

------------------------------

Date: 26 Sep 1986 11:13:45 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: OPS5 on the PC

There is an OPS5 called TOPSI available for the IBM PC from
        Dynamic Master Systems, Inc (404)565-0771

------------------------------

Date: Thu, 25 Sep 86 12:09:16 GMT
From: Gordon Joly <XTSY13%syse.surrey.ac.uk@Cs.Ucl.AC.UK>
Subject: Re: What's wrong with Turbo Prolog

Was Clocksin and Mellish handed down on tablets of stone?  And which PROLOG
can claim to fulfill all the theoretical goals, e.g., to be truly declarative?

Gordon Joly.
INET: joly%surrey.ac.uk@cs.ucl.ac.uk
EARN: joly%uk.ac.surrey@AC.UK

------------------------------

Date: 25 Sep 1986 14:45:40 EDT (Thu)
From: Dan Hoey <hoey@nrl-aic.ARPA>
Subject: Xerox vs Symbolics -- Reference counts vs Garbage collection

In AIList Digest V4 #191, Steven J. Clark responds to the statement
that ``Garbage collection is much more sophisticated on Symbolics''
with his belief that ``To my knowledge this is absolutely false.  S.
talks about their garbage collection more, but X's is better.''

Let me first deplore the abuse of language by which it is claimed that
Xerox has a garbage collector at all.  In the language of computer
science, Xerox reclaims storage using a ``reference counter''
technique, rather than a ``garbage collector.''  This terminology
appears in Knuth's 1973 *Art of Computer Programming* and originated in
papers published in 1960.  I remain undecided as to whether Xerox's
misuse of the term stems from an attempt at conciseness, ignorance of
standard terminology, or a conscious act of deceit.

The question remains of whether Interlisp-D or Zetalisp has the more
effective storage reclamation technique.  I suspect the answer depends
on the programmer.  If we are to believe Xerox, the reference counter
technique is fundamentally faster, and reclaims acceptable amounts of
storage.  However, it is apparent that reference counters will never
reclaim circular list structure.  As a frequent user of circular list
structure (doubly-linked lists, anyone?), I find the lack tantamount to
a failure to reclaim storage.  Apparently Xerox's programmers perform
their own careful deallocation of circular structures (opening the
cycles before dropping the references to the structures).  If I wanted
to do that, I would write my programs in C.
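Hoey's point is easy to demonstrate in any runtime that offers both
mechanisms.  CPython, for instance, uses reference counting plus a separate
tracing collector; counting alone never frees a cycle:

```python
import gc

class Node:
    """One cell of a doubly-linked list -- circular by construction."""
    def __init__(self):
        self.next = None
        self.prev = None

# Build a two-node doubly-linked structure: a <-> b.
a, b = Node(), Node()
a.next, b.prev = b, a
b.next, a.prev = a, b

gc.disable()          # now we rely on reference counting alone
del a, b              # drop our references; each node is still held by the
                      # other, so the counts never reach zero -- a leak
found = gc.collect()  # the tracing collector does reclaim the cycle
print(found > 0)      # True: only tracing found the unreachable nodes
gc.enable()
```

Xerox-style pure reference counting corresponds to the `gc.disable()` state:
unless the programmer opens each cycle by hand before dropping it, the
storage is never reclaimed.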

I have never understood why Xerox continues to neglect to write a
garbage collector.  It is not necessary to stop using reference counts,
but simply to have a garbage collector available for those putatively
rare occasions when they run out of memory.

Dan Hoey

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Summary, August 1986

Spang Robinson Report Summary, August 1986, Volume 2 No. 8

23 Artificial Intelligence Application Products are out and are being used by
customers.

Spang Robinson tracked down 92 specific applications in 56 different
companies, agencies or institutions that are being used by someone
other than the developers.  24 of these are in diagnostics, 22 in
manufacturing, 14 in computers, 6 in geology, 6 in chemistry, 5 in
military, 4 in agriculture, 4 in medicine and 7 in "other".

DEC has 20 expert systems in use with 50 under development.  IBM has
six in use and 64 in development.

TSA Associates estimates that there are 1000 applications fielded on microcomputers.

Dataquest claims that revenues from shell products will reach 44
million in 1986, up from 22 million in 1985.  The majority of this is
for product training as opposed to actual price for the product.  They
are estimating expert systems applications to reach ten million.

AIC has sold 500 copies of Intellect, a high-end natural language
package, and will receive 6 to 8 million dollars of revenue in 1986.
Symantec has sold 17,000 copies of Q&A, a [micro - LEFF] product
with embedded natural language.

There are 24 to 30 companies with viable commercial speech recognition
products with market growth between 20 and 30 percent.  The 1986
market will be 20 million up from 16 million.

There are 100 companies in machine vision.  1985 market is estimated
at 150 million dollars.  General Motors bought 50 million of these
products.

Also, there is a discussion of estimates of how many working expert
systems there are for each expert-shell product.

__________________________________________________________________________
Micro Trends

Teknowledge has 2500 run-time systems.  Level 5 has 50 completed applications
with 200 run-time systems sold.  One of these systems has 3000 rules spread
across nine knowledge bases.  Exsys has 200 applications with
2100 run-times.

__________________________________________________________________________
List of commercially available expert systems
Bravo: VLSI circuit design and layout (Applicon)
Equinox: sheet metal design (Applicon)
Mechanical Advantage 1000: MCAE with intelligent sketchpad (Cognition)
Manufacturing and Operations Management and Financial Advisor (Palladian)
Expert Manufacturing Planning Systems (Tipnis, Inc.)
PlanPower: financial planning system (Applied Expert System)
Planman and Database: financial planning and report writer (Sterling
    Wentworth Corp.)
Profit Tool: financial services sales aid (Prophecy Development Corp)
Stock Portfolio Analysis and Futures Price Indexing (Athena Group, NY)
Newspaper Layout System (Composition Systems)
CEREBRAL MANAGER: manages document release process (KODAK)
ICAD: production design system (ICAD, Inc.)
MORE: direct marketing advisor and evaluation of mailing lists
ULTRAMAX: a self-learning expert system to optimize operations (Ultramax Corp.)
TRANSFORM/IMS: applications generator in COBOL (Transform Logic, Inc.)
TIMM TUNER: tuning for DEC VAXs (General Research  Corporation)
HYPERCALC: an intelligent spreadsheet for LISP machines (Chaparral Dallas)
REFINE: knowledge-based software development environment (Reasoning
    Systems, Inc.)
XMP: Expert Project Manager (XSP Corporation)
LEXAN: diagnostics for injection-molded plastic parts (GE)

Internally developed expert systems

Computers and electronics
XCON, XSEL, XSITE: configure VAX orders, check them for accuracy, and plan site
    layout
CALLISTRO: assisting in managing resources for chip designers (DEC)
DAS-LOGIC: assists logic designers
COMPASS: analyzes maintenance records for telephone switching systems
    and suggests maintenance actions
???? - System for design of digital circuits (Hughes)
CSS: aids in planning relocation, reinstallation and rearrangement of
    IBM mainframes (IBM)
PINE: guides people writing reports on analysis of software problems (IBM)
QMF Advisor: used by customer advisors to help customers access IMS
    databases (IBM)
Capital Assets Movements: helps move capital assets quickly
OCEAN: checks orders for computer systems (NCR)

Diagnostic and/or preventive maintenance systems, internal use

AI-Spear: tape drives (DEC)
NTC:  Ethernet and DECNET networks (DEC)
PIES: circuit fabrication line (Fairchild)
Photolithography advisor: photolithography steps (Hewlett-Packard)
DIG Voltage Tester: digital voltage sources in testing lab (Lockheed)
BDS: baseband distribution system of communications hardware (Lockheed)
ACE: telephone lines (Southwest Bell)
DIAG8100: DP equipment (Travelers Insurance)
????:  soup cookers (Campbell Soups)
Engine Cooling Advisor: engine cooling system (DELCO Products)
???? - peripherals (Hewlett-Packard)
PDS: machine processes (Westinghouse)
DOC: hardware and software bug analysis for Prime 750 (Prime)
???: hardware (NCR)
TITAN: TI 990 Minicomputer (Radian/TI)
Radar Tracking: object tracking software for radar
      (Arthur D. Little/Defense Contractor)
????: circuit board (Hughes)
XMAN: aircraft engines (Systems Control Technology/Air Force Logistics Command)
????: circuit fault (Martin Marietta)
????: power system diagnosis (NASA)

Manufacturing or design, internal developed

????: brushes and springs for small electric motors (Delco)
ISA: schedules orders for manufacturing and delivery (DEC)
DISPATCHER: schedules dispatching of parts for robots (DEC)
ISI: schedules manufacturing steps in job shop (Westinghouse)
CELL DESIGNERS: reconfigures factories for group technologies (Arthur Andersen)
WELDSELECTOR: welding engineering (Colorado School of Mines and TI)
????: configures aircraft electrical system components (Westinghouse)
CASE: electrical connector assembly (BOEING)
FACTORY LAYOUT: ADL
TEST FLOW DESIGN: quality test and rework sequencing (ADL for defense
                  contractor)
PTRANS: planning computer systems (DEC/CMU)
PROCESS CONTROL: monitors alkylation plant (ADL)
TEST FOR STORAGE SUBSYSTEM HARDWARE: IBM
???: Capacity Planning for System 38 (IBM)
??? optimization of chemical plant for EXXON
???: manage and predict weather conditions TEXACO
???: manufacturing simulation BADGER CO.
???: expert system connected to robot HERMES (Oak Ridge National Lab)
???: nuclear fuel enhancement (Westinghouse)
???: dry dock loading (General Dynamics)

Medicine, internal development
????: serum protein analysis: Helena Labs
PUFF: pulmonary function test interpretation: Pacific Medical Center
ONCOCIN: cancer therapy manager: Stanford Oncology Clinic
CORY: diagnoses invasive cardiac tests: Cedars-Sinai Medical Center
TQMSTUNE: tunes triple quadrupole mass spectrometer
                (Lawrence Livermore National Labs)
DENDRAL: Molecular Design, Ltd.
Synchem: plans chemical synthesis tests: SUNY-Stonybrook
THEORISTS: polymer properties (3M)
???: organic chemical analysis (Hewlett-Packard)
APPL: real time control of chemical processes related to aircraft parts
      (Lockheed-Georgia)

Geology Internally Developed Systems

SECOFOR: drill bit sticking problems (Elf-Aquitaine)
GEOX: identifies earth minerals from remotely sensed hyperspectral image data
     (NASA)
MUDMAN: diagnoses drilling mud problems (NL Industries)
ONIX and DIPMETER ADVISOR: oil well logging data related systems (Schlumberger)
TOGA: analyzes power transformer conditions (Radian, for Hartford Steam
    Boiler Inspection and Insurance Co.)

Agriculture Internally Developed Systems

WHEAT COUNSELOR: disease control (ICI)
POMME: apple orchard management (Virginia Polytechnic Institute)
PLANT/cd and PLANT/ds: soybean diseases (University of Illinois)
GRAIN MARKETING ADVISOR: (Purdue University and TI)

Military

AALPS: cargo planning for aircraft (US Army)
RNTDS: design command and control programs for ships (Sperry)
SONAR DOME TESTING: analysis of trials of sonar systems (ADL for defense
    contractor)
NAVEX: assistant to shuttle operations (NASA)

IMAGE INTERPRETATION: analyzes aerial reconnaissance photos (ADL for defense
    contractor)

Other

INFORMART ADVISOR: Advises shoppers on computer purchases
TVX: Teaches VMS operating systems (DEC)
DECGUIDE: teaches rules for design checking (Lockheed)
SEMACS: monitors the Securities Industry Automation Corporation network (SIAC/Sperry)
Financial Statement Analyzer: Arthur Andersen

__________________________________________________________________________
Neuron Data plans to have NEXPERT running on the PC/AT and the MicroVAX.
The new system will have frames, object hierarchies, and the ability to
move data among concurrently running programs, which will allow
blackboarding.

__________________________________________________________________________
Paine Webber has downgraded Symbolics from "Buy" to "Attractive" due
to "marketplace confusion caused by Symbolics' imminent transition to
gate-array-based" hardware.

IntelliCorp got a "neutral" rating from Paine Webber because its software
runs "unacceptably slowly" and because "rapid expansion and redeployment
of talent may strain IntelliCorp's sales force's ability to produce."

__________________________________________________________________________
Symbolics prices

The 3620 will sell for $49,900 and the 3650 for $65,900.  Symbolics
has introduced a product that lets developers prevent users from accidentally
accessing underlying software utilities.

__________________________________________________________________________
Ibuki has announced Kyoto Common Lisp.  It takes 1.4MB with the kernel
in C.  It costs $700.00 and runs on the AT&T 3B2, Integrated Solutions,
Ultrix, Suns, and 4BSD.

__________________________________________________________________________
Integrated Inference Machines has announced the SM45000 symbolic
machine.  It is microcodable for various languages and costs from $39,000
to $44,000.  The company claims higher performance than a Symbolics.

__________________________________________________________________________
Reviews of Wendy B. Rauch-Hindin's two-volume Artificial Intelligence in
Business, Science and Industry; Artificial Intelligence Enters the Marketplace
by Larry Harris and Dwight Davis; and Who's Who in Artificial Intelligence.
The latter contains 399 individual biographies as well as other information.

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:40:28 1986
Date: Tue, 30 Sep 86 20:40:20 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #199
Status: R


AIList Digest            Friday, 26 Sep 1986      Volume 4 : Issue 199

Today's Topics:
  Review - Canadian Artificial Intelligence, June 1986,
  Philosophy - Intelligence, Consciousness, and Intensionality

----------------------------------------------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Canadian Artificial Intelligence, June 1986

Summary:

Report of Outgoing and Incoming Presidents

Interact R&D Starts AI division

Review of the 1986 Canadian AI Conference at Montreal, which had 375
people registered.  The best paper award went to James Delgrande of Simon
Fraser University.

Membership in the Canadian Society for Computational Studies of Intelligence
is now up to 800, from 250 two years ago.  (This was prior to including
people who became members upon paying non-member fees at the Canadian
AI conference.)

Proceedings of the 1986 conference cost $30.00.

Contents

Why Kids Should Learn to Program,
  Elliot Soloway, Yale University
Generative Structure in Enumerative Learning Systems
  Robert C. Holte, Brunel University,
  R. Michael Warton, York University
Detecting Analogous Learning
  Ken Wellsch, Marlene Jones of University of Waterloo
GUMS: A General User Modeling System
  Tim Finin, University of Pennsylvania
  Dave Drager, Arity Corporation
An Efficient Tableau-Based Theorem Prover
  Franz Oppacher, Ed Suen of Carleton University
Domain Circumscription Revisited
  David Etherington, University of British Columbia
  Robert Mercer, University of Western Ontario
A Propositional Logic for Natural Kinds
  James Delgrande, Simon Fraser University
Fagin and Halpern on Logical Omniscience: A Critique with an Alternative
  Robert F. Hadley Simon Fraser University
Representing Contextual Dependencies in Discourse
  Tomek Strzalkowski, Simon Fraser University
A Domain-Independent Natural Language Database Interface
  Yawar Ali, Raymond Aubin, Barry Hall, Bell Northern Research
Natural Language Report Synthesis: An Application to Marine Weather Forecasts
  R. Kittredge, A. Polguere of Universite de Montreal
  E. Goldberg Environment Canada
What's in an Answer: A Theoretical Perspective on Deductive Question Answering
  Lenhart Schubert, L. Watanabe of University of Alberta
A New Implementation for Generalized Phrase Structure Grammar
  Philip Harrison, Michael Maxwell, Boeing Artificial Intelligence Center
TRACK: Toward a Robust Natural Language Interface
  Sandra Carberry, University of Delaware
Representation of Negative and Incomplete Information in Prolog
  Kwok Hung Chan, University of Western Ontario
On the Logic of Representing Dependencies by Graphs,
  Judea Pearl of University of California
  Azaria Paz, Technion, Israel Institute of Technology
A proposal of Modal Logic Programming (Extended Abstract)
  Seiki Akama, Fujitsu ltd., Japan
Classical Equality and Prolog
  E. W. Elcock and P. Hoddinott of University of Western Ontario
Diagnosis of Non-Syntactic Programming Errors in the Scent Advisor
  Gordon McCalla, Richard B. Bunt, Janelle J. Harms of University of
  Saskatchewan
Using Relative Velocity Information to Constrain the Motion Correspondence
Problem
  Michael Dawson and Zenon Pylyshyn, University of Western Ontario
Device Representation Using Instantiation Rules and Structural Templates
  Mingruey R. Taie, Sargur N. Srihari, James Geller, Stuart C. Shapiro
  of State University of New York at Buffalo
Machine Translation Between Chinese and English
  Wanying Jin, University of Texas at Austin
Interword Constraints in Visual Word Recognition
  Jonathan J. Hull, State University of New York at Buffalo
Sensitivity to Corners in Flow Patterns
  Norah K. Link and Steve Zucker, McGill University
Stable Surface Estimation
  Peter T. Sander, Steve Zucker, McGill University
Measuring Motion in Dynamic Images: A Clustering Approach
  Amit Bandopadhay and R. Dutta, University of Rochester
Determining the 3-D Motion of a Rigid Surface Patch without Correspondence,
Under Perspective Projection
  John Aloimonos and Isidore Rigoutsos, University of Rochester
Active Navigation
  Amit Bandopadhay, Barun Chandra and Dana H. Ballard, University of Rochester
Combining Visual and Tactile Perception for Robotics
  J. C. Rodger and Roger A. Browse, Queen's University
Observation on the Role of Constraints in Problem Solving
  Mark Fox of Carnegie-Mellon University
Rule Interaction in Expert System Knowledge Bases
  Stan Raatz, University of Pennsylvania
  George Drastal, Rutgers University
Towards User-Specific Explanations from Expert Systems
  Peter van Beek and Robin Cohen, University of Waterloo
DIALECT: An Expert Assistant for Information Retrieval
  Jean-Claude Bassano, Universite de Paris-Sud
Subdivision of Knowledge for Igneous Rock Identification
  Brian W. Otis, MIT Lincoln Lab
  Eugene Freuder, University of New Hampshire
A Hybrid, Decidable, Logic-Based Knowledge Representation System
  Peter Patel-Schneider, Schlumberger Palo Alto Research
The Generalized-Concept Formalism: A Frames and Logic Based Representation
Model
  Mira Balaban, State University of New York at Albany
Knowledge Modules vs Knowledge-Bases: A Structure for Representing the
Granularity of Real-World Knowledge
  Diego Lo Giudice and Piero Scaruffi, Olivetti Artificial Intelligence Center,
  Italy
Reasoning in a Hierarchy of Deontic Defaults
  Frank M. Brown, University of Kansas
Belief Revision in SNePS
  Joao P. Martins, Instituto Superior Tecnico, Portugal
  Stuart C. Shapiro, State University of New York at Buffalo
GENIAL: Un Generateur d'Interface en Langue Naturelle
  Bertrand Pelletier et Jean Vaucher, Universite de Montreal
Towards a Domain-Independent Method of Comparing Search Algorithm Run-times
  H. W. Davis, R. B. Polack, D. J. Golden of Wright State University
Properties of Greedily Optimized Ordering Problems
  Rina Dechter, Avi Dechter, University of California, Los Angeles
Mechanisms in ISFI: A Technical Overview (Short Form)
  Gary A. Cleveland, The MITRE Corp.
Un Systeme Formel de Caracterisation de L'Evolution des Connaissances
  Eugene Chouraqui, Centre National de la Recherche Scientifique
Une Experience de l'Ingenierie de la Connaissance: CODIAPSY Developpe avec
HAMEX
  Michel Maury, A. M. Massote, Henri Betaille, J. C. Penochet et Michelle
  Negre of CRIME et GRIP, Montpellier, France

__________________________________________________________________________
Report on University of Waterloo Research on Logic Mediated Knowledge
Based Personal Information Systems

They received a 3-year $450,000 grant.  They will prototype Theorist, a
Prolog-based system, in which they will implement a diagnostic system with
a natural language interface for complex systems, and a system to diagnose
children's reading disabilities.  They will also develop a new Prolog in
which to write Theorist.

This group has already implemented DLOG, a "logic-based knowledge
representation system"; implemented two Prologs (one of which will be
distributed by the University of Waterloo's Computer Systems Group); designed
Theorist; implemented an expert system for diagnosing reading disabilities
(which will be redone in Theorist); designed a new architecture for Prolog;
and implemented Concurrent Prolog.

__________________________________________________________________________
Reviews of John Haugeland's "Artificial Intelligence: The Very Idea",
"The Connection Machine" by W. Daniel Hillis, and "Models of the Visual
Cortex" by David Rose and Vernon G. Dobson.

------------------------------

Date: 25 Sep 86 08:12:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Intelligence and Representation


This is in response to some points raised by Charles Kalish -
Allow a somewhat lengthy re-quotation to set the stage:

    I think that Dennett (see "Brainstorms") is right in that intentions
    are something we ascribe to systems and not something that is built in
    or a part of that system.  The problem then becomes justifying the use
    of intentional descriptions for a machine; i.e. how can I justify my
    claim that "the computer wants to take the opponent's queen" when the
    skeptic responds that all that is happening is that the X procedure
    has returned a value which causes the Y procedure to  move piece A to
    board position Q?...

    I think the crucial issue in this question is how much (or whether)
    the computer understands. The problem with systems now is that it is
    too easy to say that the computer doesn't understand anything, it's
    just manipulating markers. That is that any understanding is just
    conventional -- we pretend that variable A means the Red Queen, but it
    only means that to us (observers) not to the computer.  ...

    [Pirron's] idea is that you want to ground the computer's use of
    symbols in some non-symbolic experience....

    One is looking for pre-symbolic, biological constraints;  Something
    like Rosch's theory of basic levels of conceptualization.  ....

    The other point is that maybe we do have to stay within this symbolic
    "prison-house" after all; even the biological concepts are still
    represented, not actual (no food in the brain, just neuron firings).
    The thing here is that, even though you could look into a person's
    brain and, say, pick out the neural representation of a horse, to the
    person with the open skull that's not a representation, it constitutes
    a horse, it is a horse (from the point of view of the neural system).
    And that's what's different about people and computers. ...

These seem to me the right sorts of questions to be asking - here's a stab
at a partial answer.

We should start with a clear notion of "representation" - what does it mean
to say that the word "rock" represents a rock, or that a picture of a rock
represents a rock, or that a Lisp symbol represents a chess piece?

I think Dennett would agree that X represents Y only relative to some
contextual language (very broadly construed as any halfway-coherent
set of correspondence rules), hopefully with the presence of
an interpreter.  E.g., "rock" means rock in English to English-speakers;
opp-queen means opponent's queen in the mini-language set up by the
chess-playing program, as understood by the author.  To see the point a
bit more, consider the word "rock" neatly typed out on a piece of paper
in a universe in which the English language does not and never will exist.
Or consider a computer running a chess-playing program (maybe against
another machine, if you like) in a universe devoid of conscious entities.
I would contend that such entities do not represent anything.

So, roughly, representation is a 4-place relation:
R(representer,     represented,  language,           interpreter)
  "rock"           a rock        English             people
  picture of rock  a rock        visual similarity   people,
                                                     maybe some animals
  ...
and so on.

Now... what seems to me to be different about people and computers is that
in the case of computers, meaning is derivative and conventional, whereas
for people it seems intrinsic and natural.  (Huh?)  I.e., Searle's point is
well taken that even after we get the chess-playing program running, it
is still we who must be around to impute meaning to the opp-queen Lisp
symbol.  And furthermore, the symbol could just as easily have been
queen-of-opponent.  So for the four places of the representation relation
to get filled out, to ground the flying symbols, we still need people
to "watch" the two machines.  By contrast two humans can have a perfectly
valid game of chess all by themselves, even if they're Adam and Eve.

Now people certainly make use of conventional as well as natural
symbol systems (like English, frinstance).  But other representers in
our heads (like the perception of a horse, however encoded neurally)
seem to *intrinsically* represent.  I.e., for the representation
relation, if "my perception of a horse" is the representer, and the
horse out there in the field is the represented thing, the language
seems to be a "special", natural one, namely the-language-of-normal-
veridical-perception.  (BTW, it's *not* the case, as stated in
Charles's original posting, that the perception simply is the horse -
we are *not* different from computers with respect to
the-use-of-internal-things-to-represent-external-things.)
Further, it doesn't seem to make much sense at all to speak of an
"interpreter".  If *I* see a horse, it seems a bit schizophrenic to
think of another part of myself as having to interpret that
perception. In any event, note that this is self-interpretation.

So people seem to be autonomous interpreters in a way that computers
are not (at least not yet).  In Dennett's terminology, it seems that
I (and you) have the authority to adopt an intentional stance towards
various things (chess-playing machines, ailist readers, etc.),
*including* ourselves - certainly computers do not yet have this
"authority" to designate other things, much less themselves,
as intentional subjects.

Please treat the above as speculation, not as some kind of air-tight
argument (no danger of that anyway, right?)

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: Thu 25 Sep 86 10:24:01-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Emergent Consciousness

Recent philosophical discussions on consciousness and intentionality
have made me wonder about the analogy between Man and Bureaucracy.
Imagine a large corporation.  Without knowing the full internal chain of
command, an external observer could still deduce many of the following
characteristics.

  1) The corporation is composed of hundreds of nearly identical units
     (known as personnel), most of whom perform material-handling
     or information-handling tasks.  Although the tasks differ, the
     processing units are essentially interchangeable.

  2) The "intelligence" of this system is distributed -- proper functioning
     of the organization requires cooperative action by many rational agents.
     Many tasks can be carried out by small cliques of personnel without
     coming to the attention of the rest of the system.  Other tasks require
     the cooperation of all elements.

  3) Despite the similarity of the personnel, some are more "central" or
     important than others.  A reporter trying to discover what the
     organization is "doing" or "planning" would not be content to talk
     with a janitor or receptionist.  Even the internal personnel recognize
     this, and most would pass important queries or problems to more central
     personnel rather than presume to discuss or set policy themselves.

  4) The official corporate spokesman may be in contact with the most
     central elements, but is not himself central.  The spokesman is only
     an output channel for decisions that occur much deeper or perhaps in a
     distributed manner.  Many other personnel seem to function as inputs or
     effectors rather than decision makers.

  5) The chief executive officer (CEO) or perhaps the chairman of the board
     may regard the corporation as a personal extension.  This individual
     seems to be the most central, the "consciousness" of the organization.
     To paraphrase Louis XIV, "I am the state."


It seems, therefore, that the organization has not only a distributed
intelligence but a localized consciousness.  Certain processing elements
and their own thought processes control the overall behavior of the
bureaucracy in a special way, even though these elements (e.g., the CEO)
are physiologically indistinguishable from other personnel.  They are
regarded as the seat of corporate consciousness by outsiders, insiders,
and themselves.

Consciousness is thus related to organizational function and information
flow rather than to personal function and characteristics.  By analogy,
it is quite possible that the human brain contains a cluster of simple
neural "circuits" that constitute the seat of consciousness, even though
these circuits are indistinguishable in form and individual functioning
from all the other circuits in the brain.  This central core, because of
its monitoring and control of the whole organism, has the right to
consider itself the sole autonomous agent.  Other portions of the brain
would reject their own autonomy if they were equipped to even consider
the matter.

I thus regard consciousness as a natural emergent property of hierarchical
systems (and perhaps of other distributed systems).  There is no need to
postulate a mind/body dualism or a separate soul.  I can't explain how
this consciousness arises, nor am I comfortable with the paradox.  But I
know that it does arise in any hierarchical organization of cooperating
rational agents, and I suspect that it can also arise in similar organizations
of nonrational agents such as neural nets or computer circuitry.

                                        -- Ken Laws

------------------------------

Date: 25 Sep 1986 1626-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: semantic knowledge


howdy.

i think there was a discussion on searle that i missed a month or so
ago, so this may be rehash.  i disagree with the basic conjecture on
which searle rests his argument, namely that since computers represent
everything in terms of 1's and 0's they are by definition storing
knowledge syntactically and not semantically.  this seems wrong to me.
as a simple counterexample, consider any old integer stored within a
computer.  it may be stored as a string of bits, but the program
implicitly has the "semantic" knowledge that it is an integer.
similarly, the stored activation levels and connection strengths in a
connectionist model simulator (or better, in a true hardware
implementation) may be stored as a bunch of numerical values, but the
software (ie, the model, not the simulator) semantically "knows" what
each value is just as the brain knows the meaning of activation patterns
over neurons and synapses (or so goes the theory).
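
A minimal sketch of this point in modern terms (the byte value below is
chosen only for illustration): the very same four bytes are an integer
under one reading and a floating-point number under another, so whatever
"semantics" the value has lives in the program's implicit knowledge of
the type, not in the bits themselves.

```python
import struct

# Four bytes, written out as a big-endian 32-bit integer.
bits = struct.pack('>i', 1078530011)

# The same bit pattern read back under two different interpretations:
as_int = struct.unpack('>i', bits)[0]    # the integer 1078530011
as_float = struct.unpack('>f', bits)[0]  # an IEEE-754 float close to pi

# Nothing in `bits` distinguishes the two readings; only the program's
# choice of format code decides what the bytes "mean".
```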

i think the same can be said for data stored in a more conventional AI
program.  in response to a recent post, i don't think that there is a
fundamental difference between a human's knowledge of a horse and a
computer's manipulation of the symbol it uses to represent one.  the
only differences are the inherently associative nature of the brain and
the amount of knowledge stored in the brain.  i think that it is these
two things that give us a "feel" for what a horse is when we think of
one, while most computer systems would make a small fraction of the
associations and would have much less knowledge and experience to
associate with.  these are both computational differences, not
fundamental ones.

none of this is to say that we are close or getting close to a seriously
"intelligent" computer system.  i just don't think that there are
fundamental philosophical barriers in our way.

bruce krulwich

arpa: krulwich@c.cs.cmu.edu
bitnet: krulwich%c.cs.cmu.edu@cmccvma

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:42:46 1986
Date: Tue, 30 Sep 86 20:42:42 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #200
Status: R


AIList Digest            Monday, 29 Sep 1986      Volume 4 : Issue 200

Today's Topics:
  Seminars - Chemical Structure Generation (SU) &
    Fuzzy Relational Databases (SMU) &
    General Logic (MIT) &
    Generic Tasks in Knowledge-Based Reasoning (MIT),
  Conference - Workshop on Qualitative Physics

----------------------------------------------------------------------

Date: Mon 22 Sep 86 23:39:33-PDT
From: Olivier Lichtarge <LICHTARGE@SUMEX-AIM.ARPA>
Subject: Seminar - Chemical Structure Generation (SU)


I will be presenting my thesis defense in biophysics Thursday
September 25 in the chemistry Gazebo, starting at 2:15.

      Solution Structure Determination of Beta-endorphin by NMR
                                 and
     Validation of Protean: a Structure Generation Expert System


Solution structure determination of proteins by Nuclear Magnetic
Resonance involves two steps. First, the collection and interpretation
of data, from which the secondary structure of a protein is
characterized and a set of constraints on its tertiary structure
identified. Secondly, the generation of 3-dimensional models of the
protein which satisfy these constraints. This thesis presents work in
both areas: one- and two-dimensional NMR techniques are applied
to study the conformation of @g(b)-endorphin; and Protean, a new
structure generation expert system, is introduced and validated by
testing its performance on myoglobin.
  It will be shown that @g(b)-endorphin is a random coil in water.  In
a 60% methanol and 40% water mixed solvent the following changes take
place: an @g(a)-helix is induced between residues 14 and 27, and a
salt bridge forms between Lysine28 and Glutamate31; however, there
is still no strong evidence for the presence of tertiary structure.
  The validation of Protean establishes it as an unbiased and accurate
method of generating a representative sampling of all the possible
conformations which satisfy the experimental data. At the solid level,
the precision is good enough to clearly define the topology of the
protein. An analysis of Protean's performance using data sets of
dismal to ideal quality permits us to define the limits of the
precision with which a structure can be determined by this method.

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Fuzzy Relational Databases (SMU)

Design of Similarity-Based (Fuzzy) Relational Databases
Speaker: Bill P. Buckles, University of Texas, Arlington
Location: 315 SIC, Southern Methodist University
Time: 2:00PM


While the core of an expert system is its inference mechanism, a
common component is a database or other form of knowledge
representation.  The authors have developed a variation of the
relational database model in which data that is linguistic or
inherently uncertain may be represented.  The keystone concept of
this representation is the replacement of the relationship " is
equivalent to" with the relationship "is similar to".  Similarity is
defined in fuzzy set theory as an $n sup 2$ relationship over a
domain D, |D| = n such that

  i. s(x,x)=1, x member D
 ii. s(x,y)=s(y,x) x,y member D
iii. s(x,y) >= max[min(s(x,y),s(y,z))]; x, y, z member D
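
These three properties (reflexivity, symmetry, and max-min transitivity)
are easy to check mechanically.  The sketch below is not from the talk;
the domain size and similarity values are invented for illustration.

```python
def is_similarity(s):
    """True iff the square matrix s (values in [0,1]) is reflexive,
    symmetric, and max-min transitive."""
    n = len(s)
    # i. s(x,x) = 1 for every element of the domain
    if any(s[x][x] != 1 for x in range(n)):
        return False
    # ii. s(x,y) = s(y,x)
    if any(s[x][y] != s[y][x] for x in range(n) for y in range(n)):
        return False
    # iii. s(x,z) >= max over y of min(s(x,y), s(y,z))
    return all(s[x][z] >= max(min(s[x][y], s[y][z]) for y in range(n))
               for x in range(n) for z in range(n))

# Similarity over a 3-scalar domain, e.g. {low, medium, high}:
sim = [[1.0, 0.8, 0.2],
       [0.8, 1.0, 0.2],
       [0.2, 0.2, 1.0]]

# A matrix that violates symmetry, for contrast:
bad = [[1.0, 0.9],
       [0.1, 1.0]]
```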

Beginning with a universal relation, a method is given for developing
the domain sets, similarity relationships and base relations for a
similarity-based relational database.  The universal relation itself
enumerates all domains.  The domain sets may be numeric (in which case
no further design is needed) or scalar (in which case the selection of
a comprehensive scalar set is needed).  The similarity relationship
contains $n sup 2$ values, where n is the number of scalars in a domain
set.  A method is described for developing a set of consistent values
when initially given n-1 values. The base relations are derived using
fuzzy functional dependencies.  This step also requires the
identification of candidate keys.

------------------------------

Date: Fri 26 Sep 86 10:47:21-EDT
From: Lisa F. Melcher <LISA@XX.LCS.MIT.EDU>
Subject: Seminar - General Logic (MIT)


                       Date: Thursday, October 2, 1986
                      Time: 1:45 p.m......Refreshments
                         Time: 2:00 p.m......Lecture
                             Place:  NE43 - 512A


                              " GENERAL LOGIC "

                               Gordon Plotkin
                       Department of Computer Science
                      University of Edinburgh, Scotland


     A good many logics have been proposed for use in Computer Science.
Implementing them involves repeating a great deal of work.  We propose a
general account of logics as regards both their syntax and inference rules.
As an immediate target we envision a system to which one inputs a logic,
obtaining a simple proof-checker.  The ideas build on work in logic of
Paulson, Martin-Lof, and Schroeder-Heister, and in the typed lambda-calculus
of Huet and Coquand and of Meyer and Reinhold.  The slogan is: Judgements are
Types.  For example the judgement that a proposition is true is identified
with its type of proofs; general and hypothetical judgements are identified
with dependent product types.  This gives one account of Natural Deduction.
It would be interesting to extend the work to consider (two-sided) sequent
calculi for classical and modal logics.
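
The "Judgements are Types" slogan can be sketched in a modern proof
assistant (a reconstruction in Lean 4 notation, not taken from the
abstract): a judgement of truth is the type of its proofs, a
hypothetical judgement is a function type, and a general judgement is a
dependent product type.

```lean
-- "A is true" is identified with the type of proofs of A:
example (A : Prop) (p : A) : A := p

-- A hypothetical judgement (B under hypothesis A) is a function type:
example (A B : Prop) (f : A → B) (a : A) : B := f a

-- A general judgement (P x for every x) is a dependent product type:
example (α : Type) (P : α → Prop) (h : ∀ x, P x) (a : α) : P a := h a
```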


              Sponsored by TOC, Laboratory for Computer Science
                             Albert Meyer, Host

------------------------------

Date: Fri 26 Sep 86 14:47:36-EDT
From: Rosemary B. Hegg <ROSIE@XX.LCS.MIT.EDU>
Subject: Seminar - Generic Tasks in Knowledge-Based Reasoning (MIT)

                       Date: Wednesday, October 1, 1986
                       Time:  2.45 pm....Refreshments
                              3.00 pm....Lecture
                      Place:  NE43-512A

                  GENERIC TASKS IN KNOWLEDGE-BASED REASONING:
              CHARACTERIZING AND DESIGNING EXPERT SYSTEMS AT THE
                      ``RIGHT'' LEVEL OF ABSTRACTION

                          B. CHANDRASEKARAN
           Laboratory for Artificial Intelligence Research
           Department of Computer and Information Science
                      The Ohio State University
                        Columbus, Ohio 43210

   We outline the elements of a framework for expert system design that
we have been developing in our research group over the last several years.
This framework is based on the claim that complex knowledge-based reasoning
tasks can often be decomposed into a number of @i(generic tasks, each
with associated types of knowledge and families of control regimes). At
different stages in reasoning, the system will typically engage in
one of the tasks, depending upon the knowledge available and the state
of problem solving.  The advantages of this point of view are manifold:
(i) Since typically the generic tasks are at a much higher level of abstraction
than those associated with first generation expert system languages,
knowledge can be acquired and represented directly at the level appropriate to
the information processing task.  (ii) Since each of the generic tasks
has an appropriate control regime, problem solving behavior may be
more perspicuously encoded.  (iii) Because of a richer generic vocabulary
in terms of which knowledge and control are represented, explanation of
problem solving behavior is also more perspicuous.  We briefly
describe six generic tasks that we have found very useful in our
work on knowledge-based reasoning: classification, state abstraction,
knowledge-directed retrieval, object synthesis by plan selection and
refinement, hypothesis matching, and assembly of compound hypotheses
for abduction.

Host:  Prof. Peter Szolovits

------------------------------

Date: Fri, 26 Sep 86 12:41:26 CDT
From: forbus@p.cs.uiuc.edu (Kenneth Forbus)
Subject: Conference - Workshop on Qualitative Physics

Call for Participation

Workshop on Qualitative Physics
May 27-29, 1987
Urbana, Illinois

Sponsored by:
        the American Association for Artificial Intelligence
                and
        Qualitative Reasoning Group
        University of Illinois at Urbana-Champaign

Organizing Committee:
        Ken Forbus (University of Illinois)
        Johan de Kleer (Xerox PARC)
        Jeff Shrager (Xerox PARC)
        Dan Weld (MIT AI Lab)

Objectives:
Qualitative Physics, the subarea of artificial intelligence concerned with
formalizing reasoning about the physical world, has become an important and
rapidly expanding topic of research.  The goal of this workshop is to
provide an opportunity for researchers in the area to communicate results
and exchange ideas.  Relevant topics of discussion include:

        -- Foundational research in qualitative physics
        -- Implementation techniques
        -- Applications of qualitative physics
        -- Connections with other areas of AI
                 (e.g., machine learning, robotics)

Attendance:  Attendance at the workshop will be limited in order to maximize
interaction.  Consequently, attendance will be by invitation only.  If you
are interested in attending, please submit an extended abstract (no more
than six pages) describing the work you wish to present.  The extended
abstracts will be reviewed by the organizing committee.  No proceedings will
be published; however, a selected subset of attendees will be invited to
contribute papers to a special issue of the International Journal of
Artificial Intelligence in Engineering.  There will be financial assistance
for graduate students who are invited to attend.

Requirements:
The deadline for submitting extended abstracts is February 10th.  On-line
submissions are not allowed; hard copy only please.  Any submission over 6
pages or rendered unreadable due to poor printer quality or microscopic font
size will not be reviewed.  Since no proceedings will be produced, abstracts
describing papers submitted to AAAI-87 are acceptable.  Invitations will be
sent out on March 1st.  Please send 6 copies of your extended abstracts to:

        Kenneth D. Forbus
        Qualitative Reasoning Group
        University of Illinois
        1304 W. Springfield Avenue
        Urbana, Illinois, 61801

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:43:13 1986
Date: Tue, 30 Sep 86 20:43:03 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #201
Status: R


AIList Digest            Monday, 29 Sep 1986      Volume 4 : Issue 201

Today's Topics:
  Bibliography - Definitions & Recent Articles in Robotics and Vision

----------------------------------------------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Defs for ai.bib35, new keyword code for editorials

D MAG39 Computer Aided Design\
%V 18\
%N 3\
%D APR 1986
D MAG40 Automation and Remote Control\
%V 46\
%N 9 Part 2\
%D SEP 1985
D MAG41 IEEE Transactions on Industrial Electronics\
%V 33\
%N 2\
%D MAY 1986
D MAG42 Soviet Journal of Computer and Systems Sciences\
%V 23\
%N 6\
%D NOV-DEC 1985
D MAG43 Journal of Symbolic Computation\
%V 2\
%N 1\
%D MARCH 1986
D MAG44 Image and Vision Computing\
%V 3\
%N 4\
%D NOV 1985
D BOOK42 Second Conference on Software Development Tools, Techniques and Alternatives\
%I IEEE Computer Society Press\
%C Washington\
%D 1985
D BOOK43 Fundamentals of Computation Theory (Cottbus)\
%S Lecture Notes in Computer Science\
%V 199\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK44 Robot Sensors, Volume 1 (Vision)\
%I IFS Publications\
%C Bedford\
%D 1986
D BOOK45 Robot Sensors, Volume 2 (Tactile and Non-Vision)\
%I IFS Publications\
%C Bedford\
%D 1986
D MAG45 Journal of Logic Programming\
%V 2\
%D 1985\
%N 3
D BOOK46 Advances in Cryptology\
%S Lecture Notes in Computer Science\
%V 196\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK47 Mathematical Foundations of Software Development V 1\
%S Lecture Notes in Computer Science\
%V 185\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG46 Proceedings of the 44th Session of the International
Statistical Institute\
%V 1\
%D 1983
D BOOK48 Seminar Notes on Concurrency\
%S Lecture Notes in Computer Science\
%V 197\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG47 Proceedings of the Conference "Algebra and Logic"\
%D 1984\
%C Zagreb
D MAG48 Pattern Recognition\
%V 19\
%N 2\
%D 1986
D MAG49 IEEE Transactions on Geoscience and Remote Sensing\
%V 24\
%N 3\
%D MAY 1986
D MAG50 Information and Control\
%V 67\
%N 1-3\
%D OCT-DEC 1985
D MAG51 Kybernetes\
%V 15\
%N 2\
%D 1986
D MAG52 Data Processing\
%V 28\
%N 3\
%D APR 1986
D MAG53 J. Tsinghua Univ.\
%V 25\
%D 1985\
%N 2
D MAG54 Logique et. Anal (n. S.)\
%V 28\
%D 1985\
%N 110-111
D MAG55 Werkstattstechnik wt Zeitschrift fur Industrielle Fertigung\
%V 76\
%N 5\
%D MAY 1986
D MAG56 Robotica\
%V 4\
%D APR 1986
D MAG57 International Journal of Man Machine Studies\
%V 24\
%N 1\
%D JAN 1986
D MAG58 Computer Vision, Graphics and Image Processing\
%V 34\
%N 1\
%D APR 1986
D BOOK49 Flexible Manufacturing Systems: Methods and Studies\
%S Studies in Management Science and Systems\
%V 12\
%I North Holland Publishing Company\
%C Amsterdam\
%D 1986
D MAG59 International Journal for Robotics Research\
%V 5\
%N 1\
%D Spring 1986
D BOOK50 International Symposium on Logic Programming\
%D 1984
D MAG61 Proceedings of the 1986 Symposium on Symbolic and\
Algebraic Computation
%D JUL 21-23 1986

__________________________________________________________________________
A new keyword code for article types has been added, AT22, which is for
editorials.

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles in Robotics and Vision

%A Kunwoo Lee
%A Daniel A. Tortorelli
%T Computer-aided Design of Robotic Manipulators
%J MAG39
%P 139-146
%K AI07

%A Ho Bin
%T Inputting Constructive Solid Geometry Representations Directly from 2D
Orthographic Engineering Drawings
%J MAG39
%P 147-155
%K AA05

%A T. H. Richards
%A G. C. Onwubolu
%T Automatic Interpretation of Engineering Drawings for 3D Surface
Representation in CAD
%J MAG39
%P 156-160
%K AA05

%A  J. S.  Arora
%A G. Baenziger
%T Uses of Artificial Intelligence in Design Optimization
%J Computer Methods in Mechanics and Engineering
%V 54
%N 3
%D MAR 1986
%P 303-324
%K AA05

%A V. N. Burkov
%A V. V. Tayganov
%T Adaptive Functioning Mechanisms of Active Systems. I. Active
Identification and Progressive Mechanisms
%J MAG40
%P 1141-1146
%K AA20 AI09 AI04 AI08 AI13

%A A. A. Zlatopolskii
%T Image Segmentation along Discontinuous Boundaries
%J MAG40
%P 1160-1167
%K AI06

%A E. B. Yanovskaya
%T Axiomatic Characterization of the Maxmin and the Lexicographic Maxmin
Solution in Bargaining Schemes
%J MAG40
%P 1177-1185
%K AI02 AI03 AA11

%A Yu. V. Malyshenko
%T Estimating and Minimizing Diagnostic Information when Troubleshooting an
Analog Device
%J MAG40
%P 1192-1195
%K AA04 AA21

%A G. Hirzinger
%T Robot Systems Completely Based on Sensory Feedback
%J MAG41
%P 105-109
%K AI07 AI06

%A Y. Y. Hung
%A S. K. Cheng
%A N. K. Loh
%T A Computer Vision Technique for Surface Curvature Gaging with
Project Grating
%J MAG41
%P 158-161
%K AI07 AI06

%A Zvi Galil
%T Optimal Parallel Algorithms for String Matching
%J Information and Control
%V 67
%N 1-3
%D 1985
%P 144-157
%K O06

%A E. Tanic
%T Urban Planning and Artificial Intelligence - The Urbys System
%J Computers, Environment and Urban Systems
%V 10
%N 3-4
%D 1986
%P 135-146
%K AA11

%A B. M. Shtilman
%T A Formal Linguistic Model for Solving Discrete Optimization Problems. II.
The Language of Zones, Translations and the Boundary Problem
%J MAG42
%P 17-28
%K AI02 AA05

%A V. A. Abramov
%A A. I. Piskunov
%A Yu. T. Rubanik
%T A Modification to the Bellman-Zadeh Multistep Procedure for Decision Making
under Fuzzy  Conditions for Microelectronic Systems
%J MAG42
%P 143-151
%K AI13 O05

%A James L. Eilbert
%A Richard M. Salter
%T Modeling Neural Networks in Scheme
%J Simulation
%V 46
%D 1986
%N 5
%P 193
%K AI12 T01

%A E. A. Shingareva
%T Semiotic Basis of the Pragmatic Approach to Recognition of the Text Meaning
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II- Informatisionnye Protessy I
Sistemy
%N 3
%D 1986
%K AI02

%A T. Kim
%A K. Chwa
%T Parallel Algorithms for a Depth First Search and a Breadth First Search
%J International Journal of Computer Mathematics
%V 19
%N 1
%D 1986
%P 39-56
%K AI03 H03

%A Hsu-Pin Wang
%A Richard A. Wysk
%T An Expert System for Machining Data Selection
%J Computers and Industrial Engineering
%V 10
%N 2
%D 1986
%K AA26 AI01

%A L. R. Rabiner
%A F. K. Soong
%T Single-Frame Vowel Recognition Using Vector Quantization with Several
Distance Measures
%J AT&T Technical Journal
%V 64
%N 10
%D DEC 1985
%P 2319-2330
%K AI05

%A A. Pasztor
%T Non-Standard Algorithmic and Dynamic Logics
%J MAG43
%P 59-82

%A Alex P. Pentland
%T On Describing Complex Surface Shapes
%J MAG44
%P 153-162
%K AI06 AI16

%A B. F. Buxton
%A D. W. Murray
%T Optic Flow Segmentation as an Ill-posed and Maximum Likelihood Problem
%J MAG44
%P 163-169
%K AI06

%A M. C. Ibison
%A L. Zapalowski
%A C. G. Harris
%T Direct Surface Reconstruction from a Moving Sensor
%J MAG44
%P 170-176
%K AI06

%A S. A. Lloyd
%T Binary Stereo Algorithm Based on the Disparity-Gradient Limit and Using
Optimization Theory
%J MAG44
%P 177-182
%K AI06

%A Andrew Blake
%A Andrew Zimmerman
%A Greg Knowles
%T Surface Descriptions from Stereo and Shading
%J MAG44
%P 183-196
%K AI06

%A G. D. Sullivan
%A K. D. Baker
%A J. A. D. W. Anderson
%T Use of Multiple Difference-of-Gaussian Filters to Verify Geometric
Models
%J MAG44
%P 192-197
%K AI06

%A J. Hyde
%A J. A. Fullwood
%A D. R. Corrall
%T An Approach to Knowledge Driven Segmentation
%J MAG44
%P 198-205
%K AI06

%A J. Kittler
%A J. Illingworth
%T Relaxation Labelling Algorithm - A Review
%J MAG44
%P 206-216
%K AI06 AT08

%A R. T. Ritchings
%A A. C. F. Colchester
%A H. Q. Wang
%T Knowledge Based Analysis of Carotid Angiograms
%J MAG44
%P 217
%K AI06 AA01

%A W. L. Mcknight
%T Use of Grammar Templates for Software Engineering Environments
%J BOOK42
%P 56-66
%K AA08

%A M. T. Harandi
%A M. D. Lubars
%T A Knowledge Based Design Aid for Software Systems
%J BOOK42
%P 67-74
%K AA08

%A Y. Takefuji
%T AI Based General Purpose Cross Assembler
%J BOOK42
%P 75-85
%K AA08

%A R. N. Cronk
%A D. V. Zelinski
%T ES/AG System Generation Environment for Intelligent Application Software
%J BOOK42
%P 96-100
%K AA08

%A B. Friman
%T X - A Tool for Prototyping Through Examples
%J BOOK42
%P 141-148
%K AA08

%A D. Hammerslag
%A S. N. Kamin
%A R. H. Campbell
%T Tree-Oriented Interactive Processing with an Application to Theorem-Proving
%J BOOK42
%P 199-206
%K AA08 AI11


%A Gudmund Frandsen
%T Logic Programming and Substitutions
%B BOOK43
%P 146-158
%K AI10

%A H. J. Cho
%A C. K. Un
%T On Reducing Computational Complexity in Connected Digit Recognition by the
Frame Labeling Method
%J Proceedings of the IEEE
%V 74
%N 4
%D APR 1986
%P 614-615
%K AI06

%A Vijay Gehlot
%A Y. N. Srikant
%T An Interpreter for SLIPS - An Applicative Language Based on Lambda-Calculus
%J Computer Languages
%V 11
%N 1
%P 1-14
%D 1986

%A Sharon D. Stewart
%T Expert System Invades Military
%J Simulation
%V 46
%N 2
%D FEB 1986
%P 69
%K AI01 AA18

%A F. C. Hadipriono
%A H. S. Toh
%T Approximate Reasoning Models for Consequences on Structural Component Due to
Failure Events
%J Civil Engineering Pract Design Engineering
%V 5
%N 3
%D 1986
%P 155-170
%K AA05 AA21 O04

%A J. Tymowski
%T Industrial Robots
%J Mechanik
%V 58
%N 10
%D 1985
%P 493-496
%K AI07
%X (in Polish with English, Polish, Russian and German summaries)

%A Dieter Schutt
%T Expert Systems - Forerunners of a New Technology
%J Siemens Review
%V 55
%N 1
%D JAN- FEB 1986
%P 30
%K AI01

%A H. Kasamatu
%A S. Omatu
%T Edge-Preserving Restoration of Noisy Images
%J International Journal of Systems Sciences
%V 17
%N 6
%D JUN 1985
%P 833-842
%K AI06

%A A. Pugh
%T Robot Sensors - A Personal View
%B BOOK44
%P 3-14
%K AI07

%A L. J. Pinson
%T Robot Vision - An Evaluation of Imaging Sensors
%B BOOK44
%P 15-66
%K AI07 AI06

%A D. G. Whitehead
%A I. Mitchell
%A P. V. Mellor
%T A Low-Resolution Vision Sensor
%B BOOK44
%P 67-74
%K AI06

%A J. E. Orrock
%A J. H. Garfunkel
%A B. A. Owen
%T An Integrated Vision/Range Sensor
%B BOOK44
%P 75-84
%K AI06

%A S. Baird
%A M. Lurie
%T Precise Robotic Assembly Using Vision in the Hand
%B BOOK44
%P 85-94
%K AI06 AI07 AA26

%A C. Loughlin
%A J. Morris
%T Line, Edge and Contour Following with Eye-in-Hand Vision
%B BOOK44
%P 95-102
%K AI06 AI07

%A P. P. L. Regtien
%A R. F. Wolffenbuttel
%T A Novel Solid-State Colour Sensor Suitable for Robotic Applications
%B BOOK44
%P 103-114
%K AI06 AI07

%A A. Agrawal
%A M. Epstein
%T Robot Eye-in-Hand Using Fibre Optics
%B BOOK44
%P 115-126
%K AI06 AI07

%A P. A. Fehrenbach
%T Optical Alignment of Dual-in-Line Components for Assembly
%B BOOK44
%P 127-138
%K AI06 AI07 AA26 AA04

%A Da Fa Li
%T Semantically Positive Unit Resolution for Horn Sets
%J MAG53
%P 88-91
%K AI10
%X Chinese with English Summary

%A V. S. Neiman
%T Proof Search without Repeated Examination of Subgoals
%J Dokl. Akad. Nauk SSSR
%V 286
%D 1986
%N 5
%P 1065-1068
%K AI11
%X Russian

%A A. Colmerauer
%T About Natural Logic. Automated Reasoning in Nonclassical Logic
%J MAG54
%P 209-231
%K AI11

%A Ulf Grenander
%T Pictures as Complex Systems
%B Complexity, Language and Life: Mathematical Approaches
%S Biomathematics
%V 16
%I Springer-Verlag
%C Berlin-Heidelberg-New York
%D 1986
%P 62-87
%K AI06

%A G. E. Mints
%T Resolution Calculi for Nonclassical Logics
%J Semiotics and Information Science
%V 25
%P 120-135
%D 1985
%K AI11
%X Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekn. Inform., Moscow
(in Russian)

%A Charles G. Morgan
%T Autologic. Automated Reasoning in Nonclassical Logic
%J MAG54
%P 257-282
%K AI11

%A B. M. Shtilman
%T A Formal Linguistic Model for Solving Discrete Optimization Problems I.
Optimization tools. Language of Trajectories
%J Soviet J. Computer Systems Science
%V 23
%D 1985
%N 5
%P 53-64

%A David Lee
%T Optimal Algorithms for Image Understanding: Current Status and Future Plans
%J J. Complexity
%V 1
%D 1985
%N 1
%P 138-146
%K AI06

%A Douglas B. West
%A Prithviraj Banerjee
%T Partial Matching in Degree-Restricted Bipartite Graphs
%J Proceedings of the Sixteenth Southeastern International Conference on
Combinatorics, Graph Theory and Computing
%P 259-266
%D 1985
%K O06

%A Kyota Aoki
%A N. Mugibayashi
%T Cellular Automata and Coupled Chaos Developed in Lattice Chain of N
Equivalent Switching Elements
%J Phys. Lett. A
%V 114
%D 1986
%N 8-9
%P 425-429
%K AI12

%A R. J. R. Back
%T A Computational Interpretation of Truth Logic
%J Synthese
%V 66
%D 1986
%N 1
%P 15-34

%A Max Michel
%T Computation of Temporal Operators: Automated Reasoning in Nonclassical Logic
%J MAG54
%P 137-152
%K AI11

%A H. J. Warnecke
%A A. Altenhein
%T 2-1/2D Geometry Representation for Collision Avoidance of Industrial
Robots
%J MAG55
%P 269-272
%K AI07

%A W. Jacobi
%T Industrial Robots - Already Sufficiently Flexible for the User
%J MAG55
%P 273-277
%K AI07

%A H. J. Warnecke
%A G. Schiele
%T Measurement Methods for the Determination of Industrial Robot Characteristics
%J MAG55
%P 278-280
%K AI07

%A H. H. Raab
%T Assembly of Multipolar Plug Bonding Boxes in a Programmable Assembly
Cell
%J MAG55
%P 281-283
%K AA26

%A M. Schwiezer
%A E. M. Wolf
%T Strong Increase in Industrial Robot Installation
%J MAG55
%P 286
%K AT04 AI07

%A T. W. Stacey
%A A. E. Middleditch
%T The Geometry of Machining for Computer-aided Manufacture
%J MAG56
%P 83-92
%K AA26

%A S. S. Iyengar
%A C. L. Jorgensen
%A S. U. N. Rao
%A C. R. Weisbin
%T Robot Navigation Algorithms Using Learned Spatial Graphs
%J MAG56
%P 93-100
%K AI07

%A Guy Jumarie
%T On the Use of Time-Varying Inertia Links to Increase the Versatility of
Manipulators
%J MAG56
%P 101-106
%K AI07

%A Eugeny Krustev
%A Ljubomir Lilov
%T Kinematic Path Control of Robot Arms
%J MAG56
%P 107-116
%K AI07

%A Tony Owen
%T Robotics: The Strategic Issues
%J MAG56
%P 117
%K AI07

%A C. H. Cho
%T Expert Systems, Intelligent Devices, Plantwide Control and Self Tuning
Algorithms: An Update on the ISA/86 Technical Program
%J MAG56
%P 69
%K AA20 AI01

%A A. Hutchinson
%T A Data Structure and Algorithm for a Self-Augmenting Heuristic Program
%J The Computer Journal
%P 135-150
%V 29
%N 2
%D APR 1986
%K AI04

%A B. Kosko
%T Fuzzy Cognitive Maps
%J MAG57
%P 65-76
%K AI08 O04

%A C. L. Borgman
%T The Users Mental Model of an Information Retrieval System - An Experiment
on a Prototype Online Catalog
%J MAG57
%P 47-64
%K AI08 AA14

%A D. R. Peachey
%A G. I. McCalla
%T Using Planning Techniques in Intelligent Tutoring Systems
%J MAG57
%P 77
%K AA07 AI09

%A H. J. Bernstein
%T Determining the Shape of a Convex n-sided Polygon Using 2n+k
Tactile Probes
%J Information Processing Letters
%V 22
%N 5
%D APR 28, 1986
%P 255-260
%K AI07 O06


%A Fu-Nian Ku
%A Jian-Min Hu
%T A New Approach to the Restoration of an Image Blurred by a Linear
Uniform Motion
%J MAG58
%P 20-34
%K AI06

%A Charles F. Neveu
%A Charles R. Dyer
%A Roland T. Chin
%T Two-Dimensional Object Recognition Using Multiresolution Models
%J MAG58
%P 52-65
%K AI06

%A Keith E. Price
%T Hierarchical Matching Using Relaxation
%J MAG58
%P 66-75
%K AI06

%A Angela Y. Wu
%A S. K. Bhaskar
%A Azriel Rosenfeld
%T Computation of Geometric Properties from the Medial Axis Transform in
O(n log n) Time
%J MAG58
%P 76-92
%K AI06 O06

%A H. B. Bidasaria
%T A Method for Almost Exact Histogram Matching for Two Digitized Images
%J MAG58
%P 93-98
%K AI06 O06

%A Azriel Rosenfeld
%T "Expert" Vision Systems: Some Issues
%J MAG58
%P 99-101
%K AI06 AI01

%A John R. Kender
%T Vision Expert Systems Demand Challenging Expert Interactions
%J MAG58
%P 102-103
%K AI06 AI01

%A Makoto Nagao
%T Comment on the Position Paper \*QExpert Vision Systems\*U
%J MAG58
%P 104
%K AI06 AI01

%A Leonard Uhr
%T Workshop on Goal Directed \*QExpert\*U Vision Systems: My Positions
and Comments
%J MAG58
%P 105-108
%K AI06 AI01

%A William B. Thompson
%T Comments on "Expert" Vision Systems: Some Issues
%J MAG58
%P 109-110
%K AI06 AI01

%A V. A. Kovalevsky
%T Dialog on "Expert" Vision Systems: Comments
%J MAG58
%P 111-113
%K AI06 AI01

%A David Sher
%T Expert Systems for Vision Based on Bayes Rule
%J MAG58
%P 114-115
%K AI06 AI01 O04

%A S. Tanimoto
%T The Case for Appropriate Architecture
%J MAG58
%P 116
%K AI06 AI01

%A Azriel Rosenfeld
%T Rosenfeld's Concluding Remarks
%J MAG58
%P 117
%K AI06 AI01

%A Robert M. Haralick
%T "Robot Vision" by Berthold Horn
%J MAG58
%P 118
%K AI06 AI07  AT07

%A K. Shirai
%A K. Mano
%T A Clustering Experiment of the Spectra and the Spectral Changes of Speech
to Extract Phonemic Features
%J MAG58
%P 279-290
%K AI05

%A A. K. Chakravarty
%A A. Shutub
%T Integration of Assembly Robots in a Flexible Assembly System
%B BOOK49
%P 71-88
%K AI07 AA26

%A R. C. Morey
%T Optimizing Versatility in Robotic Assembly Line Design- An Application
%B BOOK49
%P 89-98
%K AI07 AA26

%A J. Grobeiny
%T The Simple Linguistic Approach to Optimization of a Plant Layout by Branch
and Bound
%B BOOK49
%P 141-150
%K AA26 AI02 AI03

%A Z. J. Czjikiewicz
%T Justification of the Robots Applications
%B BOOK49
%P 367-376
%K AI07

%A M. J. P. Shaw
%A A. B. Whinston
%T Applications of Artificial Intelligence to Planning and Scheduling in
Flexible Manufacturing
%B BOOK49
%P 223-242
%K AI07

%A S. Subramanymam
%A R. G. Askin
%T An Expert Systems Approach to Scheduling in Flexible Manufacturing Systems
%B BOOK49
%P 243-256
%K AI07

%A Michael K. Brown
%T The Extraction of Curved Surface Features with Generic Range Sensors
%J MAG59
%P 3-18
%K AI06

%A Michael Erdmann
%T Using Backprojections for Fine Motion Planning with Uncertainty
%J MAG59
%P 19-45
%K AI07 AI09 O04

%A Katsushi Ikeuchi
%A H. Keith Nishihara
%A Berthold K. P. Horn
%A Patrick Sobalvarro
%A Shigemi Nagata
%T Determining Grasp Configurations Using Photometric Stereo and the PRISM
Binocular Stereo System
%J MAG59
%P 46-65
%K AI06  AI07

%A Dragan Stokic
%A Miomir Vukobratovic
%A Dragan Hristic
%T Implementation of Force Feedback in Manipulation Robots
%J MAG59
%P 66-76
%K AI07

%A Oussama Khatib
%T Real-Time Obstacle Avoidance for Manipulators and Mobile Robots
%J MAG59
%P 90-98
%K AI07 AA19

%A R. Featherstone
%T A Geometric Investigation of Reach
%J MAG59
%P 99
%K AI07 AT07

%A Maria Virginia Aponte
%T Editing First Order Proofs: Programmed Rules vs. Derived Rules
%J BOOK50
%P 92-98
%K AI11

%A Hellfried Bottger
%T Automatic Theorem-Proving with Configurations
%J Elektron. Informationsverarb. Kybernet.
%V 21
%N 10-11
%P 523-546
%K AI11

%A D. R. Brough
%A M. H. van Emden
%T Dataflow, Flowcharts and \*QLUCID\*U style Programming in Logic
%J BOOK50
%P 252-258

%A Laurent Fribourg
%T Handling Function Definitions Through Innermost Superposition and Rewriting
%B BOOK30
%P 325-344

%A T. Gergely
%A M. Szots
%T Cuttable Formulas for Logic Programming
%J BOOK50
%P 299-310

%A N. N. Leonteva
%T Information Model of the Automatic Translation System
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II -
Informatsionnye Protsessy I Sistemy
%N 10
%D 1985
%P 22-28
%X in Russian

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Sep 30 20:43:35 1986
Date: Tue, 30 Sep 86 20:43:21 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #202
Status: R


AIList Digest            Monday, 29 Sep 1986      Volume 4 : Issue 202

Today's Topics:
  Bibliography - Recent Reports

----------------------------------------------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Reports

%A Ru-qian Lu
%T Expert Union: United Service of Distributed Expert Systems
%R 85-3
%I University of Minnesota-Duluth
%C Duluth, Minnesota
%D June, 1985
%K H03 AI01
%X A scheme for connecting expert systems in a network called an
"expert union" is described.  Consultation scheduling algorithms used to
select the appropriate expert(s) to solve problems are proposed, as
are strategies for resolving contradictions.

%A J. C. F. M. Neves
%A G. F. Luger
%A L. F. Amaral
%T Integrating a User's Knowledge into a Knowledge Base Using a Logic
Based Representation
%I University of New Mexico
%R CS85-2
%K AA08 AI10

%A J. C. F. M. Neves
%A G. F. Luger
%T An Automated Reasoning System for Presupposition Analysis
%I University of New Mexico
%R CS85-3
%K AI16

%A J. C. F. M. Neves
%A G. F. Luger
%A J. M. Carvalho
%T A Formalism for Views in a Logic Data Base
%I University of New Mexico
%R CS85-4
%K AA08

%A Franz Winkler
%T A Note on Improving the Complexity of the Knuth-Bendix Completion
Algorithm
%I University of Delaware
%R 85-04
%K AI14

%A Claudio Gutierrez
%T An Integrated Office Environment Under the AI Paradigm
%I University of Delaware
%R 86-03
%K AA06

%A Amir M. Razi
%T An Empirical Study of Robust Natural Language Processing
%I University of Delaware
%R 86-08
%K AI02

%A John T. Lund
%T Multiple Cause Identification in Diagnostic Problem Solving
%I University of Delaware
%R 86-11
%K AA05 AA21

%A D. Nau
%A T.C. Chang
%T Hierarchical Representation of Problem-Solving Knowledge in a Frame-Based
Process Planning System
%I Production Automation Project, University of Rochester
%R TM-50
%C Rochester, New York
%K AA26

%T INEXACT REASONING IN PROLOG-BASED EXPERT SYSTEMS
%A Koenraad G. Lecot
%R CSD-860053
%I University of California, Los Angeles
%K AI01 O04 T02
%$ 13.75
%X Expert systems are only worthy of their name if they can cope in a
consistent and natural way with the uncertainty and vagueness that are
inherent to real world expertise.  This thesis explores the current
methodologies, both in the light of their acceptability and of their
implementation in the logic programming language Prolog.  We treat in depth
the subjective Bayesian approach to inexact reasoning and describe a
meta-level implementation in Prolog.  This probabilistic method is compared
with an alternative theory of belief used in Mycin.  We describe an
implementation of Mycin's consultation phase.  We argue further that the
theory of fuzzy logic is more adequate for describing the uncertainty and
vagueness of real world situations.  Fuzzy logic is contrasted with
the probabilistic approaches and an implementation strategy is described.
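
As a rough illustration of the three calculi the abstract contrasts (my
sketch, not the thesis code, and in Python rather than Prolog), the core
combination rules are:

```python
# Illustrative combination rules for the three uncertainty calculi:
# subjective Bayesian odds update, MYCIN-style certainty factors, and
# fuzzy conjunction.  Numbers below are made-up examples.

def bayes_update(prior_odds, likelihood_ratio):
    """Subjective Bayesian update on odds: O' = lambda * O."""
    return prior_odds * likelihood_ratio

def mycin_combine(cf1, cf2):
    """MYCIN combination of two positive certainty factors for one hypothesis."""
    return cf1 + cf2 * (1.0 - cf1)

def fuzzy_and(a, b):
    """Fuzzy conjunction: the truth of (A and B) is the min of memberships."""
    return min(a, b)

print(bayes_update(1.0, 4.0))     # 4.0: odds go from 1:1 to 4:1
print(mycin_combine(0.6, 0.6))    # 0.84: belief accumulates
print(fuzzy_and(0.6, 0.6))        # 0.6: fuzzy AND does not accumulate
```

The behavioral difference on repeated evidence (accumulation versus min) is
one reason the thesis can argue the calculi suit different kinds of
vagueness.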

%T DISTRIBUTED DIAGNOSIS IN CAUSAL MODELS WITH CONTINUOUS VARIABLES
%A Judea Pearl
%R CSD-860051
%I University of California, Los Angeles
%$ 1.50
%K O04 H03 AA21
%X We consider causal models in which the variables form a linearly coupled
hierarchy, and are subject to Gaussian sources of noise.  We show that if
the number of circuits in the hierarchy is small, the impact of each new
piece of evidence can be viewed as a perturbation that propagates through a
network of processors (one per variable) by local communication.  This mode
of diagnosis admits flexible control strategies and facilitates the
generation of intuitively meaningful explanations.
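
A toy instance of the idea (my own setup, not Pearl's report): in a
one-link linear-Gaussian chain y = a*x + noise, observing y acts as a
perturbation that shifts the estimate of x by a gain times the surprise:

```python
# One local update step in a linear-Gaussian chain x -> y, where
# y = a*x + N(0, var_noise).  Parameters below are illustrative.

def posterior_mean_var(mu_x, var_x, a, var_noise, y_obs):
    """Gaussian posterior of x after observing y = a*x + noise."""
    # Kalman-style gain: how much of the surprise (y - a*mu_x) flows back.
    gain = a * var_x / (a * a * var_x + var_noise)
    mu_post = mu_x + gain * (y_obs - a * mu_x)
    var_post = (1.0 - gain * a) * var_x
    return mu_post, var_post

mu, var = posterior_mean_var(mu_x=0.0, var_x=1.0, a=2.0,
                             var_noise=1.0, y_obs=4.0)
print(mu, var)   # 1.6 0.2: mean moves toward y/a = 2.0, variance shrinks
```

In a hierarchy, each variable's processor would apply such an update and
pass the resulting perturbation to its neighbors, which is what makes the
diagnosis distributable.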

%T RELAXATION PROBLEM SOLVING
(with input to Chinese input problem)
%A Kam Pui Chow
%I University of California, Los Angeles
%R CSD-860058
%$ 12.00
%K AI02
%X Two fundamental problem solving techniques are introduced to help automate
the use of relaxation: multilevel frameworks and constraint generation.
They are closely related to iterative relaxation and subproblem relaxation.
.sp 1
In multilevel problem solving, the set of constraints is partitioned
vertically into different levels.  Lower level constraints generate
possible solutions while higher level constraints prune the solutions to
reduce the combinatorial explosion.  Subproblem relaxation at first relaxes
the high level constraints; the solution is then improved by strengthening
the relaxed constraints.
.sp 1
The constraint generation technique uses iterative relaxation to generate a
set of constraints from a given model.  This set of constraints together
with a constraint interpreter forms an expert system.  This is an
improvement over most existing expert systems, which require experts to
write down their expertise in rules.
.sp 1
These principles are illustrated by applying them to the Chinese input
problem, which is to transform a phonetic spelling, without word breaks, of
a Chinese sentence into the corresponding Chinese characters.  Three
fundamental issues are studied: segmentation, homophone analysis, and
dictionary organization.  The problem is partitioned into the following
levels: phonetic spelling, word, and grammar.  The corresponding
constraints are legal spellings, legal words, and legal syntactic
structures.  Constraints for syntactic structure are generated from a
Chinese grammar.
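
The multilevel generate-and-prune scheme can be sketched on a miniature
analogue of the segmentation level (illustrative only; the thesis treats
Chinese phonetic spellings, not English strings, and the lexicon here is
made up):

```python
# Multilevel sketch: a low level generates every segmentation of an
# unbroken string; a higher level prunes with a legal-word constraint.

from itertools import combinations

def segmentations(s):
    """Low level: every way to cut s into non-empty contiguous pieces."""
    n = len(s)
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            yield [s[i:j] for i, j in zip(bounds, bounds[1:])]

def prune(candidates, lexicon):
    """High level: keep only segmentations made entirely of legal words."""
    return [c for c in candidates if all(w in lexicon for w in c)]

lexicon = {"ma", "machine", "chine", "shi"}
print(prune(segmentations("machine"), lexicon))
# -> [['machine'], ['ma', 'chine']]
```

The same shape recurs one level up: the surviving word sequences would in
turn be pruned by the syntactic constraints generated from the grammar.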

%T RELAXATION PROCESSES:  THEORY, CASE STUDIES AND APPLICATIONS
%A Ching-Tsun Chou
%R CSD-860057
%$ 6.25
%I University of California, Los Angeles
%K O02 T02 AA08
%X Relaxation is a powerful problem-solving paradigm for coping with problems
specified using constraints.  In this thesis we present a study of the
nature of relaxation processes.  We begin by identifying certain typical
problems solvable by relaxation.  Motivated by these concrete examples, we
develop a formal theory of relaxation processes and design the General
Relaxation Semi-Algorithm for solving general Relaxation Problems.  To
strengthen the theory, we do case studies on two relaxation-solvable
problems: the Shortest-Path Problem and Prefix Inequalities.  The principal
results of these studies are polynomial-time algorithms for both problems.
The practical usefulness of relaxation is demonstrated by implementing a
program called TYPEINF, which employs relaxation techniques to
automatically infer types for Prolog programs.  Finally we indicate some
possible directions of future research.
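
The Shortest-Path Problem mentioned as a case study is the textbook example
of relaxation: Bellman-Ford repeatedly weakens distance estimates until no
edge constraint d[v] <= d[u] + w is violated.  A minimal sketch (the graph
and weights are my own illustration):

```python
# Shortest paths by relaxation: repeatedly fix violated edge constraints
# d[v] <= d[u] + w until a full pass over the edges changes nothing.

def bellman_ford(n, edges, source):
    """Single-source shortest paths; edges are (u, v, weight) triples."""
    INF = float("inf")
    d = [INF] * n
    d[source] = 0
    for _ in range(n - 1):            # n-1 passes suffice without negative cycles
        changed = False
        for u, v, w in edges:
            if d[u] + w < d[v]:       # constraint violated: relax it
                d[v] = d[u] + w
                changed = True
        if not changed:
            break
    return d

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))     # -> [0, 3, 1, 4]
```

The fixed point reached when no constraint is violated is exactly the
relaxation-theoretic solution the abstract's formal theory characterizes.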

%A J. R. Endsor
%A A. Dickinson
%A R. L. Blumenthal
%T Describe - An Explanation Facility for an Object Based System
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI01 O01

%A Kai-Fu Lee
%T Incremental Network Generation in Template-Based Word Recognition
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI05

%A J. Quinlan
%T A Comparative Analysis of Computer Architectures for Production
System Machines
%I Carnegie Mellon Computer Science Department
%D MAY 1985
%K AI01 H03 OPS5

%A M. Boggs
%A J. Carbonell
%A M. Kee
%A I. Monarch
%T Dypar-I: Tutorial and Reference Manual
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI01 AI02 Franz Lisp

%A Paola Giannini
%T Type Checking and Type Deduction Techniques for Polymorphic Programming
Languages
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K O02 lambda-calculus let construct

%A M. Dyer
%A M. Flowers
%A S. Muchnick
%T Lisp/85 User's Manual
%I University of Kansas, Computer Science Department
%R 77-4
%K T01

%A M. Flowers
%A M. Dyer
%A S. Muchnick
%T LISP/85 Implementation Report
%I University of Kansas, Computer Science Department
%R 78-1
%K T01

%A N. Jones
%A S. Muchnick
%T Flow Analysis and Optimization of LISP-like Structures
%I University of Kansas, Computer Science Department
%R 78-2
%K T01

%A U. Pleban
%T The Standard Semantics of a Subset of SCHEME, A Dialect of LISP
%I University of Kansas, Computer Science Department
%R 79-3
%K T01  O02

%A S. Muchnick
%A U. Pleban
%T A Semantic Comparison of LISP and SCHEME
%I University of Kansas, Computer Science Department
%R 80-3
%K T01 O02

%A M. Jones
%T The PEGO Acquisition System Implementation Report
%I University of Kansas, Computer Science Department
%R 80-4

%A Gary Borchardt
%A Z. Bavel
%T  CLIP, Computer Language for Idea Processing
%I University of Kansas, Computer Science Department
%R 81-4

%A Marek Holynski
%A Brian R. Gardner
%A Rafail Ostrovsky
%T Toward an Intelligent Computer Graphics System
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-003
%D JAN 1986
%K T01 AA16

%A Joyce Friedman
%A Carol Neidle
%T Phonological Analysis for French Dictation: Preliminaries to an Intelligent
Tutoring System
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-004
%D APR 1986
%K AI02 AA07

%A Pawel Urzyczyn
%T Logics of Programs with Boolean Memory
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-006
%D APR 1986
%K AI16

%A Chua-Huang
%A Christian Lengauer
%T The Derivation of Systolic Implementations of Programs
%R TR-86-10
%I Department of Computer Sciences, University of Texas at Austin
%D APR 1986
%K AA08 AA04 H03 H02

%A E. Allen Emerson
%A Chin-Laung Lei
%T Model Checking in the Propositional Mu-Calculus
%R TR-86-06
%I Department of Computer Sciences, University of Texas at Austin
%D FEB 1986
%K  O02 AA08


%A R. D. Lins
%T On the Efficiency of Categorical Combinators as a Rewriting System
%D NOV 1985
%R No 34
%I University of Kent at Canterbury, Computing Laboratory
%K AI11 AI14


%A R. D. Lins
%T A Graph Reduction Machine for Execution of Categorical Combinators
%D NOV 1985
%R No 36
%I University of Kent at Canterbury, Computing Laboratory

%A S. J. Thompson
%T Proving Properties of Functions Defined on Lawful Types
%D MAY 1986
%R No 37
%I University of Kent at Canterbury, Computing Laboratory
%K AA08 AI11


%A V. A. Saraswat
%T Problems with Concurrent Prolog
%D JAN 1986
%I Carnegie Mellon University, Department of Computer Science
%K T02 H03

%A K. Shikano
%T Text-Independent Speaker Recognition Experiments Using Codebooks in Vector
Quantization
%D JAN 1986
%I Carnegie Mellon University
%K AI05

%A S. Nakagawa
%T Speaker Independent Phoneme Recognition in Continuous Speech by
a Statistical Method and a Stochastic Dynamic Time Warping Method
%D JAN 1986
%I Carnegie Mellon University
%K AI05

%A F. Hau
%T Two Designs of Functional Units for VLSI Based Chess Machines
%D JAN 1986
%I Carnegie Mellon University
%K AA17 H03
%X Brute force chess automata searching 8 plies (4 full moves) or deeper have
been dominating the computer chess scene in recent years and have reached
master level performance.  One interesting question is whether 3 or 4 additional
plies coupled with an improved evaluation scheme will bring forth world
championship level performance.  Assuming an optimistic branching ratio of 5,
a speedup of at least one hundred fold over the best current chess automaton
would be necessary to reach the 11 or 12 plies per move range.
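
The abstract's arithmetic can be checked directly: with an effective
branching ratio b, each extra ply multiplies the search tree by b, so three
extra plies at b = 5 cost a factor of 125 (a small sketch using the
abstract's own numbers):

```python
# Check the search-depth arithmetic: each extra ply multiplies the
# brute-force tree by the branching ratio.

def extra_plies(branching, speedup):
    """How many whole additional plies a given speedup buys."""
    plies = 0
    while branching ** (plies + 1) <= speedup:
        plies += 1
    return plies

print(5 ** 3)               # 125: three extra plies at b=5 cost ~125x
print(extra_plies(5, 100))  # 2: a 100x machine falls just short of 3 plies
```

So "at least one hundred fold" is the right order of magnitude: 100x buys
a bit under three full plies at a branching ratio of 5, and 125x buys
exactly three.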

%A Y. Iwasaki
%A H. A. Simon
%T Theories of Causal Ordering: Reply to de Kleer and Brown
%D FEB 1986
%I Carnegie Mellon University
%K Causality in Device Behavior AA04

%A H. Saito
%A M. Tomita
%T On Automatic Composition of Stereotypic Documents in Foreign Languages
%D DEC 1985
%I Carnegie Mellon University
%K AI02


%A T. Imielinski
%T Query Processing in Deductive Databases with Incomplete Information
%R DCS-TR-177
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI10 Horn Clauses Skolem functions


%A T. Imielinski
%T Abstraction in Query Processing
%R DCS-TR-178
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI11

%A T. Imielinski
%T Results on Translating Defaults to Circumscription
%R DCS-TR-179
%I Rutgers University, Laboratory for Computer Science Research
%K AA09

%A T. Imielinski
%T Transforming Logical Rules by Relational Algebra
%R DCS-TR-180
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI10 Horn clauses

%A T. Imielinski
%T Automated Deduction in Databases with Incomplete Information
%R DCS-TR-181
%I Rutgers University, Laboratory for Computer Science Research
%K AA09

%A B. A. Nadel
%T Representation-Selection for Constraint Satisfaction Problems: A Case
Study Using n-queens
%R DCS-TR-182
%I Rutgers University, Laboratory for Computer Science Research
%K AI03 AA17

%A B. A. Nadel
%T Theory-Based Search Order Selection for Constraint Satisfaction
Problems
%R DCS-TR-183
%I Rutgers University, Laboratory for Computer Science Research
%K AI03

%A C. V. Srinivasan
%T Problems, Challenges and Opportunities in Naval Operational Planning
%R DCS-TR-187
%I Rutgers University, Laboratory for Computer Science Research
%K AI09 AA18

%A M. A. Bienkowski
%T An Example of Structured Explanation Generation
%I Princeton University Computer Science Department
%D NOV 1985
%K O01

%A Bruce G. Buchanan
%T Some Approaches to Knowledge Acquisition
%I Stanford University Computer Science Department
%R STAN-CS-85-1076
%D JUL 1985
%$ $5.00
%K AI16

%A John McCarthy
%T Applications of Circumscription to Formalizing Common Sense Knowledge
%I Stanford University Computer Science Department
%R STAN-CS-85-1077
%D SEP 1985
%$ $5.00
%K AI15

%A Stuart Russell, Esq.
%T The Compleat Guide to MRS
%I Stanford University Computer Science Department
%R STAN-CS-85-1080
%D JUN 1985
%$ $15.00
%K AI16

%A Jeffrey S. Rosenschein
%T Rational Interaction: Cooperation among Intelligent Agents
%I Stanford University Computer Science Department
%R STAN-CS-85-1081
%D OCT 1985
%$ $15.00
%K AI16

%A Allen Van Gelder
%T A Message Passing Framework for Logical Query Evaluation
%I Stanford University Computer Science Department
%R STAN-CS-85-1088
%D DEC 1985
%$ $5.00
%K AI10 Horn Clauses relational data bases H03 AA09 acyclic database schemas

%A Jeffrey D. Ullman
%A Allen Van Gelder
%T Parallel Complexity of Logical Query Programs
%I Stanford University Computer Science Department
%R STAN-CS-85-1089
%D DEC 1985
%$ $5.00
%K AI10 H03 AA09

%A Kaizhi Yue
%T Constructing and Analyzing Specifications of Real World Systems
%I Stanford University Computer Science Department
%R STAN-CS-86-1090
%D SEP 1985
%K AI01 AA08
%X available in microfilm only

%A Li-Min Fu
%T Learning Object-Level and Meta-Level Knowledge in Expert Systems
%I Stanford University Computer Science Department
%R STAN-CS-86-1091
%D NOV 1985
%$ $15.00
%K jaundice AI04 AI01 AA01 condenser

%A Devika Subramanian
%A Bruce G. Buchanan
%T A General Reading List for Artificial Intelligence
%I Stanford University Computer Science Department
%R STAN-CS-86-1093
%D DEC 1985
%$ $10.00
%K AT21
%X bibliography for students studying for AI qualifying exam at Stanford

%A Bruce G. Buchanan
%T Expert Systems: Working Systems and the Research Literature
%I Stanford University Computer Science Department
%R STAN-CS-86-1094
%D DEC 1985
%$ $10.00
%K AT21 AI01

%A Jiawei Han
%T Pattern-Based and Knowledge-Directed Query Compilation for Recursive Data
Bases
%I The University of Wisconsin-Madison Computer Sciences Department
%R TR 629
%D JAN 1986
%$ 5.70
%K AA09 AI01 AI09
%X Abstract:  Expert database systems (EDS's) comprise an interesting class of
computer systems which represent a confluence of research in artificial
intelligence, logic, and database management systems.  They involve
knowledge-directed processing of large volumes of shared information and
constitute a new generation of knowledge management systems.
Our research is on the deductive augmentation of relational database
systems, especially on the efficient realization of recursion.  We study
the compilation and processing of recursive rules in relational database
systems, investigating two related approaches:  pattern-based recursive rule
compilation and knowledge-directed recursive rule compilation and planning.
Pattern-based recursive rule compilation is a method of compiling and processing
recursive rules based on their recursion patterns.  We classify recursive rules
according to their processing complexity and develop three kinds of algorithms
for compiling and processing different classes of recursive rules: transitive
closure algorithms, SLSR wavefront algorithms, and stack-directed compilation
algorithms.  These algorithms, though distinct, are closely related.  The more
complex algorithms are generalizations of the simpler ones, and all apply the
heuristics of performing selection first and utilizing  previous processing
results (wavefronts) in reducing query processing costs.  The algorithms are
formally described and verified, and important aspects of their behavior are
analyzed and experimentally tested.
To further improve search efficiency, a knowledge-directed recursive rule
compilation and planning technique is introduced.  We analyze the issues raised
for the compilation of recursive rules and propose to deal with them by
incorporating functional definitions, domain-specific knowledge, query
constants, and a planning technique.  A prototype knowledge-directed relational
planner, RELPLAN, which maintains a high level user view and query interface,
has been designed and implemented, and experiments with the prototype are
reported and illustrated.
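The "wavefront" idea the abstract describes (reusing previous results so each round joins only newly derived tuples, with selection applied early) is the core of semi-naive transitive-closure evaluation.  A minimal illustrative sketch, not taken from the report itself:

```python
# Semi-naive ("wavefront") evaluation of transitive closure over edges.
# Each iteration joins only the tuples derived in the previous round,
# rather than re-deriving the whole closure from scratch.

def transitive_closure(edges):
    closure = set(edges)
    wavefront = set(edges)
    while wavefront:
        new = {(a, d) for (a, b) in wavefront
                      for (c, d) in edges if b == c} - closure
        closure |= new
        wavefront = new          # next round joins only the new tuples
    return closure

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Subtracting `closure` when forming `new` is what keeps the wavefront shrinking and guarantees termination.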

%A A. P. Anantharman
%A Sandip Dasgupta
%A Tarak S. Goradia
%A Prasanna Kaikini
%A Chun-Pui Ng
%A Murali Subbarao
%A G. A. Venkatesh
%A Sudhanshu Verma
%A Kumar A. Vora
%T Experience with Crystal, Charlotte and Lynx
%I The University of Wisconsin-Madison Computer Sciences Department
%R TR 630
%D FEB 1986
%K H03 T02 Waltz constraint-propagation
%X Abstract:  This paper describes the most recent implementations of
distributed algorithms at Wisconsin that use the Crystal multicomputer, the
Charlotte operating system, and the Lynx language.  This environment is an
experimental testbed for design of such algorithms.  Our report is meant to
show the range of applications that we have found reasonable in such an
environment and to give some of the flavor of the algorithms that have been
developed.  We do not claim that the algorithms are the best possible for
these problems, although they have been designed with some care.  In
several cases they are completely new or represent significant
modifications of existing algorithms.  We present distributed
implementations of B trees, systolic arrays, prolog tree search, the
travelling salesman problem, incremental spanning trees, nearest-neighbor
search in k-d trees, and the Waltz constraint-propagation algorithm.  Our
conclusion is that the environment, although only recently available, is
already a valuable resource and will continue to grow in importance in
developing new algorithms.

%A William J. Rapaport
%T SNePS Considered as a Fully Intensional Propositional
Semantic Network
%R TR 85-15
%I Univ. at Buffalo (SUNY), Dept. of Computer Science
%D October 1985
%K Semantic Network Processing System, syntax, semantics,
intensional knowledge representation system, cognitive
modeling, database management, pattern recognition, expert
systems, belief revision, computational linguistics,
AA01 AI09 AI16
%O 46 pages
%X Price: $1.00 North America, $1.50 Other

%A William J. Rapaport
%T Logic and Artificial Intelligence
%R TR 85-16
%I University at Buffalo (SUNY), Dept. of Computer Science
%D November 1985
%K logic, propositional logic, predicate logic, belief systems AA16
%O 44 pages
%X Price: $1.00 North America, $1.50 Other

%A William J. Rapaport
%T Review of "Ethical Issues in the Use of Computers"
%R TR 85-17
%I University at Buffalo, Dept. of Computer Science
%D November 1985
%K computer ethics O06
%O 6 pages
%X Price: $1.00 North America, $1.50 Other

%A Radmilo M. Bozinovic
%T Recognition of Off-line Cursive Handwriting:
a Case of Multi-level Machine Perception
%I Univ. at Buffalo (SUNY), Dept. of Computer Science
%D March 1985
%R TR 85-01
%K Cursive script recognition, artificial intelligence,
computer vision, language perception, language understanding
%O 150 pages
%X Price: $2.00 North America, $3.00 other

%A R. Hookway
%T Verification of Abstract Types Whose Representations Share Storage
%D April 1980
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-80-02
%K AA09
%$ $2.00

%A G. Ernst
%A J. K. Vavlakha
%A W. F. Ogden
%T Verification of Programs with Procedure-Type Parameters
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-80-11
%D 1980
%K AA09
%$ $2.00

%A G. Ernst
%A F. T. Bradshaw
%A R. J. Hookway
%T A Note on Specifications of Concurrent Processes
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-01
%D FEB 1981
%K AA09
%$ $2.00

%A J. Franco
%T The Probabilistic Analysis of the Pure Literal Heuristic in Theorem
Proving
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-04
%D 1981
%K AI03 AI11
%$ $2.00

%A E. J. Branagan
%T An Interactive Theorem Prover Verification
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-09
%D AUG 1981
%K AI11
%$ $2.00

%A G. W. Ernst
%T A Method for Verifying Concurrent Processes
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-82-01
%D FEB 1982
%K AA09
%$ $2.00

%A Chang-Sheng Yang
%T A Computer Intelligent System for Understanding Chinese Homonyms
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-83-10
%D AUG 1983
%K AI02
%$ $2.00

%A G. Ernst
%T Extensions to Methods for Learning Problem Solving Strategies
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-84-02
%D MAY 1984
%K AI04
%$ $2.00

%A R. J. Hookway
%T Analysis of Asynchronous Circuits Using Temporal Logic
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-84-07
%D JUL 1984
%K AA04
%$ $2.00

%A Leon Sterling
%T Explaining Explanations Clearly
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-85-03
%D MAY 1985
%K O01
%$ $2.00

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Oct  7 07:03:33 1986
Date: Tue, 7 Oct 86 07:03:27 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #203
Status: RO


AIList Digest             Monday, 6 Oct 1986      Volume 4 : Issue 203

Today's Topics:
  Query - Prolog Chemistry Systems & RuleMaster & AI Graduate Programs &
    Expert Systems and Deep Knowledge & Textbook for ES Applications &
    Communications Expert Systems & Generic Expert System &
    Integrated Inference Machines & Byte Prolog & Digitalk Smalltalk,
  AI Tools - Digitalk Smalltalk & Line Expert & XLISP & OPS5,
  Vision - Face Recognition,
  Logic Programming - TMS Loops

----------------------------------------------------------------------

Date: Sun, 28 Sep 86 10:46:15 -0200
From: Jacob Levy  <jaakov%wisdom.bitnet@WISCVM.WISC.EDU>
Subject: Chemistry systems & PROLOG

        Has anyone programmed or used a logic programming based system for
use in Chemistry? I am especially interested in organic synthesis planning
systems. Do you know of such systems written in other languages? Any help,
references and info will be greatly appreciated,

        Thanks
        Rusty Red (AKA Jacob Levy)

        BITNET:                         jaakov@wisdom
        ARPA:                           jaakov%wisdom.bitnet@wiscvm.ARPA
        CSNET:                          jaakov%wisdom.bitnet@csnet-relay
        UUCP:                           jaakov@wisdom.uucp

------------------------------

Date: Sat, 27 Sep 86 09:15:20 cdt
From: Esmail Bonakdarian <bonak%cs.uiowa.edu@CSNET-RELAY.ARPA>
Subject: RuleMaster

Anybody out there have any comments about RuleMaster? RuleMaster
(a product of the Radian Corporation) is a software tool for
supporting the development of expert systems. I would be grateful
for any information, comments from people who have used this package
(especially on a DOS machine) etc.

If there is enough interest I will collect and post all of the
responses back to AIList.

Thanks,
Esmail

------------------------------

Date: 29 Sep 86 00:29:16 GMT
From: gatech!gitpyr!krubin@seismo.css.gov  (Kenny Rubin)
Subject: Differences among Grad AI programs


        The following is a request for information about the
differences among the various universities that offer graduate
degrees in AI. I apologize in advance if this topic has received
prior discussion; I have been out of the country for a few months
and did not have access to the net.

        The goal of all this is to compile a current profile of
the graduate AI programs at the different universities. Thus, any
information about the different programs such as particular strengths
and weaknesses would be useful. Also, a comparison and/or conclusions
drawn between the various programs would be helpful.

        I am essentially interested in the areas of AI that each
university performs research in. For example research pertaining
to Knowledge Representation, Natural Language Processing, Expert
System Development, Learning, Robotics, etc...

        Basically anything that you think potential applicants to
the various universities would like to know, would be helpful. Feel
free to comment about the university(ies) that you know best:
   - MIT, CMU, Yale, Stanford, UC Berkeley, UCLA, etc...

        Please send all responses by E-mail to me to reduce net traffic.
If there is sufficient interest, I will post a compiled summary.

                  Kenneth S. Rubin   (404) 894-2348
               Center for Man-Machine Systems Research
             School of Industrial and Systems Engineering
                   Georgia Institute of Technology
                        Post Office Box  35826
                        Atlanta, Georgia 30332
       Majoring with: School of Information and Computer Science
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!krubin

------------------------------

Date: 29 Sep 86 18:43:08 GMT
From: mcvax!kvvax4!rolfs@seismo.css.gov  (Rolf Skatteboe)
Subject: Expert systems and deep knowledge

Hello:

For the time being, I'm working on my MSc thesis, whose main goal is to
investigate the combination of a knowledge-based diagnosis system and
the use of mathematical models of gas turbines.  I will use these models as
deep knowledge in order to improve the results of the diagnosis system.
The models can be used both for early fault warning and for sensor
verification and test.  The models can also be used to evaluate changes in
machine parameters caused by engine degradation.
So far I have found some articles about diagnostic reasoning based on
structure and behavior for digital electronic hardware.
While I'm trying to find the best system structure for a demonstration
system, I would like to get hold of information (article references, program
examples, and other people's experiences) both on using deep knowledge
in expert systems in general, and on the use of mathematical models in
particular.

I hope that someone can help me.

Grethe Tangen
Kongsberg KVATRO, NORWAY

------------------------------

Date: 3 Oct 1986 0904-EDT
From: Holger Sommer <SOMMER@C.CS.CMU.EDU>
Subject: Expert system Textbook For Applications

I was asked to develop a course for undergraduate seniors and beginning
graduate students in engineering: an introductory course on expert
system technology with a focus on applications.  I am looking for a
suitable introductory textbook at the beginner's level which could help
me get the students familiar with AI in general and expert systems
specifically.  Also, if anyone has experience teaching a course for
non-computer-science students in the AI area, I would appreciate your
comments.  Please send mail to Sommer@c.cs.cmu.edu

------------------------------

Date: Mon, 29 Sep 86 10:42:27 edt
From: Lisa Meyer <lem%galbp.uucp@CSNET-RELAY.ARPA>
Subject: Request for Info on Expert Systems Development


I am a senior Info & Computer Science major at Georgia Tech.  I will be
constructing an Expert System to diagnose communications setups & their
problems for my senior design project at the request of my cooperative
ed. employer.  I have only had an introductory course in AI, so a large
part of this project will be spent on researching information on expert
system development.

Any information on :  Constructing Expert Systems (esp. for diagnostics)
                      PC versions of Languages suitable for  building
                         Expert Systems
                      Public Domain Expert Systems, ES Shells, or de-
                         velopment tools
                      Or good books, articles, or references to the
                         subjects listed above

WOULD  BE GREATLY APPRECIATED.  As the goal of my project is to con-
   struct a working diagnostic expert system and not to learn every-
   thing there is to know about AI, pointers to good sources of
   information, copies of applicable source, and information from those
   who ARE knowledgeable in the field of AI and Expert System Con-
   struction would be EXTREMELY HELPFUL.

                  THANKS IN ADVANCE,
                                     Lisa Meyer  (404-329-8022)
                                                  Atlanta, GA


=====================================================================
Lisa Meyer
Harris / Lanier
Computer R&D (Cooperative Education Program)
Information & Computer Science Major
   Georgia Institute of Technology
Ga. Tech Box 30750, Atlanta Ga. 30332
{akgua,akgub,gatech}!galbp!lem
=====================================================================

------------------------------

Date: 30 Sep 86 13:34:16 GMT
From: lrl%psuvm.bitnet@ucbvax.Berkeley.EDU
Subject: Expert System Wanted

Does anyone know of a general purpose expert system available for VM/CMS?
I'm looking for one that would be used on a university campus by a variety
of researchers in different disciplines.  Each researcher would feed their
own rules into it.

Also, can anyone recommend readings, conferences, etc. for someone getting
started in this field?

Thanks.

Linda Littleton                        phone:  (814) 863-0422
214 Computer Building                  bitnet: LRL at PSUVM
Pennsylvania State University
University Park, PA  16802

------------------------------

Date: Tue, 30 Sep 86 0:19:04 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Address???

In a recent summary of the Spang-Robinson Report reference was made to the
company "Integrated Inference Machines".  Does anyone have an address for them?

------------------------------

Date: 2 Oct 86 18:21:29 GMT
From: john@unix.macc.wisc.edu  (John Jacobsen)
Subject: pd prolog

!!!!!!!!!!!!!!

Does anyone have the public domain prolog package discussed in this month's
BYTE magazine?


John E. Jacobsen
University of Wisconsin -- Madison Academic Computing Center

------------------------------

Date: 4 Oct 86 00:32:11 GMT
From: humu!uhmanoa!todd@bass.nosc.mil  (Todd Ogasawara)
Subject: Digitalk Smalltalk for the PC

If anyone out there has played with the version of Smalltalk for the
PC by Digitalk, I'd like to get your opinions.  I am especially
interested in the object-oriented version of Prolog that comes with
the package.  Thanks..todd

Todd Ogasawara, University of Hawaii
Dept. of Psychology & U. of Hawaii Computing Center

UUCP:     {ihnp4,dual,vortex}!islenet!
                                      \
                                        \__ uhmanoa!todd
                                        /
      {backbone}!sdcsvax!noscvax!humu!/
                       /
                clyde/

                                        [soon to change to uhccux!todd]

ARPA:  humu!uhmanoa!todd@noscvax

** I used to be: ogasawar@nosc.ARPA & ogasawar@noscvax.UUCP

------------------------------

Date: 4 Oct 86 23:56:43 GMT
From: spdcc!dyer@harvard.harvard.edu  (Steve Dyer)
Subject: Re: Digitalk Smalltalk for the PC

I have it and am very impressed.  Perhaps more convincing though, I have
a friend who's been intimately involved with Smalltalk development
from the very beginning who was also very impressed.  It's even more
remarkable because the Digitalk folks didn't license the Smalltalk-80
virtual machine from Xerox; they developed their system from the formal
and not-so-formal specifications of Smalltalk 80 available in the public
domain.  Apparently, they can call their system "Smalltalk V" because
"Smalltalk" isn't a trademark of Xerox; only "Smalltalk-80" is.

I haven't played with their Prolog system written in Smalltalk.
--
Steve Dyer
dyer@harvard.HARVARD.EDU
{linus,wanginst,bbnccv,harvard,ima,ihnp4}!spdcc!dyer

------------------------------

Date: 3 Oct 1986 13:30:37 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: Communications Experts

I was curious to find out about Werner Uhrig's question (9/10) relating
to an Infoworld article from Smyrna, Ga since Ga is not exactly a
hotbed of AI activity.  I spoke to Nat Atwell of Concept
Development Systems about Line Expert ($49.95).  It is apparently
an off-line Turbo Prolog application with knowledge about data
set interfacing, known problems etc.,  including the ability to draw
schematics of cables on the screen for you.  For more info,
call nat at (404) 434-4813.

------------------------------

Date: Fri, 26 Sep 86 11:06 PDT
From: JREECE%sc.intel.com@CSNET-RELAY.ARPA
Subject: XLISP Availability

Although XLISP is available on a number of PC bulletin boards, your best bet
for the PC version would be the BIX network run by Byte magazine.  It has its
own forum run by the author, David Betz, and you can turn around a message
to him in 1-2 days.  Information on how to sign up has been in most of the
recent issues of Byte.  Also, the latest version is 1.7, and there is talk
of a compiler coming out in the future.

John Reece
Intel

------------------------------

Date: Mon, 29 Sep 86 0:30:53 BST
From: Fitch@Cs.Ucl.AC.UK
Subject: OPS5 on small machines (re V4 #183)

There is OPS5 for the IBM-PC running UOLISP, from North West Computer
Algorithms.  It is the franz version slightly modified.
  I have run OPS5 on an Atari and on an Amiga.  It does not need a very big
system to do some things.
==John Fitch

------------------------------

Date: Mon 29 Sep 86 15:40:24-CDT
From: Charles Petrie <AI.PETRIE@MCC.COM>
Reply-to: Petrie@MCC
Subject: TMS Query Response

More detail on Don Rose's TMS query:

  Does anyone know whether the standard algorithms for belief revision
  (e.g. dependency-directed backtracking in TMS-like systems) are
  guaranteed to halt?  That is, is it possible for certain belief networks
  to be arranged such that no set of mutually consistent beliefs can be found
  (without outside influence)?

There are at least three distinct Doyle-style algorithms. Doyle's doesn't
terminate on unsatisfiable circularities.  James Goodwin's algorithm
does.  This algorithm is proved correct in "An Improved Algorithm for
Non-monotonic Dependency Net Update", LITH-MAT-R-82-23, Linkoping
Institute of Technology. David Russinoff's algorithm not only halts
given an unsatisfiable circularity, but is guaranteed to find a
well-founded, consistent set of status assignments, even if there are
odd loops, if such a set is possible. There are dependency nets for
which Russinoff's algorithm will properly assign statuses and Goodwin's
may not.  An example and a proof of correctness for this algorithm are
given in "An Algorithm for Truth Maintenance", AI-068-85,
Microelectronics and Computer Technology Corporation.  Also, Doyle made
the claim that an unsatisfiable circularity can be detected if a node is
its own ancestor after finding a valid justification with a NIL status
in the Outlist. Detection of unsatisfiable circularities turns out to be
more difficult than this. This is noted in "A Diffusing Computation for
Truth Maintenance" wherein I give a distributed computation for status
assignment (published in the Proc. 1986 Internat. Conf. on Parallel
Processing, IEEE) that halts on unsatisfiable circularities.

The term "unsatisfiable circularity" was introduced by Doyle and refers
to a dependency network that has no correct status labeling.  The term
"odd loop" was introduced by Charniak, Riesbeck, and McDermott in
section 16.7 of "Artificial Intelligence Programming".  An equivalent
definition is given by Goodwin.  In both, an odd loop refers to a
particular circular path in a dependency net.  As Goodwin notes, such
odd loops are a necessary, but not sufficient, condition for an unsatisfiable
circularity.

All of the algorithms mentioned above are for finding a proper set of
status assignments for a dependency network.  A distinct issue is the
avoidance of the creation of odd loops, which may introduce
unsatisfiable circularities, by Doyle-style dependency-directed
backtracking.  Examples of creation of such odd loops and algorithms to
avoid such are described in my technical reports on DDB. Michael
Reinfrank's report on the KAPRI system also notes the possibility of
odd loops created by DDB. (DDB references on request to avoid an even
longer note.)
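The smallest odd loop makes the termination problem concrete: a node whose only justification lists that same node in its outlist, so the node is IN exactly when it is OUT.  A toy labeler (illustrative only; it is none of the cited algorithms) shows why naive relabeling never settles on such a net:

```python
# Minimal "odd loop" in a Doyle-style dependency net: node A has one
# justification whose outlist contains A itself, so A should be IN iff
# A is OUT.  Repeated relabeling oscillates forever -- there is no
# well-founded status assignment, i.e. an unsatisfiable circularity.

def valid(justification, status):
    """A justification is valid when all inlist nodes are IN and all
    outlist nodes are OUT."""
    inlist, outlist = justification
    return (all(status[n] == "IN" for n in inlist) and
            all(status[n] == "OUT" for n in outlist))

justifications = {"A": [((), ("A",))]}   # A supported iff A is OUT

status = {"A": "OUT"}
for step in range(4):
    new = "IN" if any(valid(j, status) for j in justifications["A"]) else "OUT"
    print(step, status["A"], "->", new)   # 0 OUT -> IN, 1 IN -> OUT, ...
    status["A"] = new
```

An algorithm that halts on this net must detect the cycle rather than keep relabeling, which is the distinction drawn above between Doyle's procedure and Goodwin's and Russinoff's.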

Charles Petrie
PETRIE@MCC

------------------------------

Date: 3 Oct 1986 13:36:55 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: Computer Vision

Peter Snilovicz recently asked about recognizing faces.  I saw a really
interesting presentation on the subject of Cortical Thought Theory by Rick Routh,
ex-AFIT now with the Army at Fort Gordon.  He can be reached at (404)791-3011.

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Oct  7 07:03:50 1986
Date: Tue, 7 Oct 86 07:03:39 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #204
Status: RO


AIList Digest             Monday, 6 Oct 1986      Volume 4 : Issue 204

Today's Topics:
  Seminars - Connectionist Networks (UPenn) &
    Automatic Class Formation (SRI) &
    Computers are not Omnipotent (CMU) &
    Automating Diagnosis (CMU) &
    Temporal Logic (MIT) &
    Program Transformations and Parallel Lisp (SU) &
    Temporal Theorem Proving (SU) &
    Efficient Unification of Quantified Terms (MIT) &
    Planning Simultaneous Actions (BBN) &
    Cognitive Architecture (UPenn)

----------------------------------------------------------------------

Date: Mon, 29 Sep 86 14:52 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Connectionist Networks (UPenn)


                            CONNECTIONIST NETWORKS
                             Jerome A. Feldman
                        Computer Science Department
                          University of Rochester

There is a growing interest in highly interconnected networks of very simple
processing elements within artificial intelligence circles.  These networks
are referred to as connectionist networks and are playing an increasingly
important role in artificial intelligence and cognitive science.  This talk
briefly discusses the motivation behind pursuing the connectionist approach,
and discusses a connectionist model of how mammals are able to deal with
visual objects and environments.  The problems addressed include perceptual
constancies, eye movements and the stable visual world, object descriptions,
perceptual generalizations, and the representation of extrapersonal space.

The development is based on an action-oriented notion of perception.  The
observer is assumed to be continuously sampling the ambient light for
information of current value.  The central problem of vision is taken to be
categorizing and locating objects in the environment.  The critical step in
this process is the linking of visual information to symbolic object
descriptions, i.e., indexing.  The treatment focuses on the different
representations of information used in the visual system.  The model employs
four representation frames that capture information in the following forms:
retinotopic, head-based, symbolic, and allocentric.

The talk ends with a discussion of how connectionist models are being
realized on existing architectures such as large multiprocessors.

                         Thursday, October 2, 1986
                          Room 216 - Moore School
                             3:00 - 4:30 p.m.

                          Refreshments Available
                     Faculty Lounge - 2:00 - 3:00 p.m.

------------------------------

Date: Wed 1 Oct 86 11:46:40-PDT
From: Amy Lansky <LANSKY@SRI-VENICE.ARPA>
Subject: Seminar - Automatic Class Formation (SRI)


         PROBABILISTIC PREDICTION THROUGH AUTOMATIC CLASS FORMATION

                Peter Cheeseman (CHEESEMAN@AMES-PLUTO)
                      NASA Ames Research Center

                    11:00 AM, MONDAY, October 6
               SRI International, Building E, Room EJ228


A probabilistic expert system is a set of probabilistic connections
(e.g. conditional or joint probabilities) between the known variables.
These connections can be used to make (conditional) probabilistic
predictions for variables of interest given any combination of known
variable values.  Such systems suffer a major computational problem---
once the probabilistic connections form a complex inter-connected
network, the cost of performing the necessary probability calculations
becomes excessive.  One approach to reducing the computational
complexity is to introduce new "variables" (hidden causes or dummy
nodes) that decouple the interactions between the variables.  Judea
Pearl has described an algorithm for introducing sufficient dummy
nodes to create a tree structure, provided the probabilistic
connections satisfy certain (strong) restrictions.  This talk will
describe a procedure for finding only the significant "hidden causes",
that not only lead to a computationally simple procedure, but subsume
all the significant interactions between the variables.

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: 28 Sep 1986 1228-EDT
From: David A. Evans <DAE@C.CS.CMU.EDU>
Subject: Seminar - Computers are not Omnipotent (CMU)


                  PHILOSOPHY COLLOQUIUM ANNOUNCEMENT:

                    COMPUTERS ARE NOT OMNIPOTENT

                            David Harel

                         Weizmann Institute
                                and
                       Carnegie Mellon University


                     Monday, October 6   4:00 p.m.
                            Porter Hall 223D


In April, 1984, TIME magazine quoted a computer professional as saying:

  "Put the right kind of software into a computer and it will do
   whatever you want it to. There may be limits on what you can
   do with the machines themselves, but there are no limits on
   what you can do with the software."

In the talk we shall disprove this contention outright, by exhibiting a
wide array of results obtained by mathematicians and computer scientists
between 1935 and 1983. Since the results point to inherent limitations of
any kind of computing device, even with unlimited resources, they appear
to have interesting philosophical implications concerning our own
limitations as entities with finite mass.

------------------------------

Date: 29 September 1986 2247-EDT
From: Masaru Tomita@A.CS.CMU.EDU
Subject: Seminar - Automating Diagnosis (CMU)

Date:  10/7 (Tuesday)
Time: 3:30
Place: WeH 5409

               Some AI Applications at Digital
              Automating Diagnosis: A case study

                       Neil Pundit
                    Kamesh Ramakrishna

           Artificial Intelligence Applications Group
                Digital Equipment Corporation
                 77 Reed Road (HLO2-3/M10)
                Hudson, Massachusetts, 01749


The Artificial Intelligence Applications Group at Digital is engaged in the
development of expert systems technology in the context of many real-life
problems drawn from within the corporation and those of customers. In
addition, the group fosters basic research in AI by arrangements with
leading universities. We plan to briefly describe some interesting
applications. However, to satisfy your appetite for technical content, we
will describe in some detail our progress on Beta, a tool for automating
diagnosis.

The communication structure level is a knowledge level at which certain
kinds of diagnostic reasoning can occur. It is an intermediate level between
the level at which current expert systems are designed (using knowledge
acquired from experts) and the level at which ``deep reasoning'' systems
perform (based on knowledge of structure, function, and behavior of the
system being diagnosed). We present an example of an expert system that was
designed the old-fashioned way and the heuristics that were the basis for
recognizing the existence of the communication structure level.

Beta is a language for specifying the communication structure of a system so
that these heuristics can be compiled into a partially automatically
generated program for diagnosing system problems. The current version of
Beta can handle a specific class of communication structure that we call a
``control hierarchy'' and can analyze historical usage and error data
maintained as a log file. The compiler embeds the heuristics in a generated
mix of OPS5 and C code. We believe that Beta is a better way for designers
and programmers who are not AI experts to express their knowledge of a
system than the current rule-based or frame-based formalisms.

------------------------------

Date: Thu,  2 Oct 86 15:12:49 EDT
From: "Elisha P. Sacks" <elisha%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Temporal Logic (MIT)


                             E. Taatnoon

              "The Third Cybernetics and Temporal Logic"


     I aim to link up the concepts of system bifurcation and system
catastrophe with temporal logic in order to show the applicability of
dialectical reasoning to metamorphic system transformations.  A system
catastrophe is an innovation resulting from reorganization resulting
from a switch from positive to negative feedback or vice versa.  The
subsystems would then be oscillators and the truth of any descriptive
statement is then distributive.  Such oscillations would produce an
uncertainty in the temporal trajectory of the system which would
increase both towards the past and the future.  This means that time
is not a scalar dimension, but a quadratic paraboloid distribution of
converging and diverging transition probabilities.  A social system
composed of such oscillators would be heterarchical rather than
hierarchical.

Refreshments.
Hosts: Dennis Fogg and Boaz Ben-Zvi

Place: 8th Floor Playroom
Time:  Noon

------------------------------

Date: 30 Sep 86  0947 PDT
From: Carolyn Talcott <CLT@SAIL.STANFORD.EDU>
Subject: Seminar - Program Transformations and Parallel Lisp (SU)


Speaker: James M. Boyle, Argonne National Laboratory

Time: Monday, October 6, 4pm
Place: 252 Margaret Jacks (Stanford Computer Science Dept)

                        Deriving Parallel Programs
                      from Pure LISP Specifications
                         by Program Transformation


                            Dr. James M. Boyle

                 Mathematics and Computer Science Division
                        Argonne National Laboratory
                          Argonne, IL 60439-4844
                            boyle@anl-mcs.arpa




     How can one implement a "dusty deck"  pure  Lisp  program  on  global-
memory parallel computers?  Fortunately, pure Lisp programs have a declara-
tive interpretation, which protects their decks from becoming too dusty!

     This declarative interpretation means that a pure Lisp program is  not
over-specified  in  the  direction  of sequential execution.  Thus there is
hope to detect parallelism automatically in pure Lisp programs.

     In this talk I shall describe a stepwise refinement of pure Lisp  pro-
grams  that  leads  to a parallel implementation.  From this point of view,
the pure Lisp program is an abstract specification, which program transfor-
mations  can  refine  in  several  steps  to  a  parallel program.  I shall
describe some of the transformations--correctness preserving rewrite  rules
--used to carry out the implementation.

     An important  property  of  a  parallel  program  is  whether  it  can
deadlock.  I shall discuss a number of the design decisions involved in the
refinement and their role in preserving the correctness of the  transformed
program, especially with regard to deadlock.

     Implementing a transformational refinement often leads to  interesting
insights  about  programming.   I  shall  discuss  some  of these insights,
including one about the compilation of recursive programs,  and  some  that
suggest  ways  to systematically relax the "purity" requirement on the Lisp
program being implemented.

     We have used this approach to implement a moderately large  pure  Lisp
program  (1300 lines, 42 functions) on several parallel machines, including
the Denelcor HEP (r.i.p.), the Encore Multimax, the Sequent  Balance  8000,
and the Alliant FX/8.  I shall discuss some measurements of the performance
of this program, which has achieved a speedup of 12.5 for 16 processors  on
realistic  data,  and some of the optimizations used to obtain this perfor-
mance.

     Oh, yes, and by the way, the transformations produce a  parallel  pro-
gram in FORTRAN!
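The property the talk exploits is that a pure function's value depends only
on its arguments, so a map over independent data imposes no evaluation
order. A minimal sketch of that idea in modern Python (an illustration
only; the talk's transformations operate on Lisp and emit FORTRAN):

```python
from concurrent.futures import ThreadPoolExecutor

def collatz_steps(n):
    """A pure function: its result depends only on its argument,
    so separate calls may be evaluated in any order, or at once."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

data = list(range(1, 200))

# Sequential interpretation of the specification.
sequential = [collatz_steps(n) for n in data]

# Parallel interpretation: legal only because collatz_steps is pure,
# i.e. the specification is not over-specified toward sequential order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(collatz_steps, data))
```

The point here is only that the two interpretations must agree; actual
speedup (the talk reports 12.5 on 16 processors) depends on a runtime
that executes the calls truly in parallel.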

------------------------------

Date: 01 Oct 86  1134 PDT
From: Martin Abadi <MA@SAIL.STANFORD.EDU>
Subject: Seminar - Temporal Theorem Proving (SU)


PhD Oral Examination
Wednesday, October 8, 2:15 PM
Margaret Jacks Hall 146

                        Temporal Theorem Proving

                                Martin Abadi
                        Computer Science Department

In the last few years, temporal logic has been applied in the
specification, verification, and synthesis of concurrent programs, as
well as in the synthesis of robot plans and in the verification of
hardware devices. Nevertheless, proof techniques for temporal logic
have been quite limited up to now.

This talk presents a novel proof system R for temporal logic. Proofs are
generally short and natural. The system is based on nonclausal resolution,
an attractive classical logic method, and involves a special treatment of
quantifiers and modal operators.

Unfortunately, no effective proof system for temporal logic is
complete. We examine soundness and completeness issues for R and other
systems. For example, a simple extension of our resolution system is
as powerful as Peano Arithmetic.  (Fortunately, refreshments will
follow the talk.)

Like classical resolution, temporal resolution suggests an approach to
logic programming. We explore temporal logic as a programming language
and a temporal resolution theorem prover as an interpreter for
programs in this language.

Other modal logics have found a variety of uses in artificial
intelligence and in the analysis of distributed systems. We discuss
resolution systems analogous to R for the modal logics K, T, K4, S4,
S5, D, D4, and G.

------------------------------

Date: Sat,  4 Oct 86 12:30:41 EDT
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Efficient Unification of Quantified Terms (MIT)

From: Susan Hardy <SH at XX.LCS.MIT.EDU>


                        JOHN STAPLES
                   University of Queensland

        Efficient unification of quantified terms


               DATE:  Tuesday, October 7, 1986
               TIME:  2:45 pm. - Refreshments
                      3:00 pm. - Talk
              PLACE:  2nd Floor Lounge


Quantifiers such as for-all, integral signs, and block headings would be
a valuable enrichment of the vocabulary of a logic programming language
or other computational logic. The basic technical prerequisite is a
suitable unification algorithm. A programme is sketched for the
development of data structures and algorithms which efficiently
support the use of quantified terms. Progress in carrying out this
programme is reviewed. Both structure-sharing and non-structure-sharing
representations of quantified terms are described, together
with a unification algorithm for each case. The efficiency of the
approach results from the techniques used to represent terms,
which enable naive substitution to implement correct substitution
for quantified terms. The work is joint with Peter J. Robinson.
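For readers unfamiliar with the prerequisite: ordinary (unquantified)
first-order unification is itself a small algorithm. A standard
Robinson-style sketch in Python -- background only, not the quantified-term
algorithm of the talk:

```python
def unify(x, y, subst=None):
    """Syntactic unification of first-order terms.
    Variables are strings beginning with '?', compound terms are
    tuples ('f', arg1, ...), and constants are other strings.
    Returns a substitution dict, or None if unification fails."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return bind(x, y, subst)
    if is_var(y):
        return bind(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None            # occurs check: reject ?x = f(?x)
    s = dict(subst)
    s[v] = t
    return s

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)
```

For example, unifying f(?x, b) with f(a, ?y) yields the substitution
{?x -> a, ?y -> b}, while ?x against f(?x) fails the occurs check.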



HOST:  Arvind

------------------------------

Date: Sat,  4 Oct 86 13:04:33 EDT
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Planning Simultaneous Actions (BBN)

From: Brad Goodman <BGOODMAN at BBNG.ARPA>

                         BBN Laboratories
                     Science Development Program
                        AI/Education Seminar


Speaker:  Professor James Allen
          University of Rochester
          (james@rochester)

Title:    Planning Simultaneous Actions in Temporally Rich Worlds

Date:     10:30a.m., Monday, October 6th
Location: 3rd floor large conference room,
          BBN Labs, 10 Moulton Street, Cambridge


    This talk describes work done with Richard Pelavin over the last few
years. We have developed a formal logic of action that allows us to
represent knowledge and reason about the interactions between events
that occur simultaneously or overlap in time. This includes interactions
between two (or more) actions that a single agent might perform
simultaneously, as well as interactions between an agent's actions and
events occurring in the external world. The logic is built upon an
interval-based temporal logic extended with modal operators similar to
temporal necessity and a counterfactual operator. Using this formalism,
we can represent a wide range of possible ways in which actions may
interact.

------------------------------

Date: Thu, 2 Oct 86 11:34 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Cognitive Architecture (UPenn)

             WHAT IS THE SHAPE OF THE COGNITIVE ARCHITECTURE?

                               Allen Newell
                        Computer Science Department
                        Carnegie Mellon University

                          12:00 noon, October 17
                        Alumni Hall, Towne Building
                        University of Pennsylvania

The architecture plays a critical role in computational systems, defining
the separation between structure and content, and hence the capability of
being programmed. All architectures have much in common. However, important
characteristics depend on which mechanisms occur in the architecture (rather
than in software) and what shape they take. There has been much research
recently on architectures throughout computer and cognitive science. Within
computer science the main drivers have been new hardware technologies (VLSI)
and the felt need for parallelism. Within cognitive science the main drivers
have been the hope of comprehensive psychological models (ACT*), the urge to
ground the architecture in neurophysiological mechanisms (the
connectionists) and the proposal of modularity as a general architectural
principle (from linguistics). The talk will be on human cognitive
architecture, but considerations will be brought to bear from everywhere.

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Oct  7 07:04:05 1986
Date: Tue, 7 Oct 86 07:03:51 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #205
Status: RO


AIList Digest             Monday, 6 Oct 1986      Volume 4 : Issue 205

Today's Topics:
  Humor - AI Limericks by Henry Kautz,
  AI Tools - Turbo Prolog &  Reference Counts vs Garbage Collection,
  Philosophy - Emergent Consciousness & Perception

----------------------------------------------------------------------

Date: Fri, 3 Oct 86 14:58:37 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: AI Limericks by Henry Kautz

gleaned from the pages of CANADIAN ARTIFICIAL INTELLIGENCE
September 1986 No. 9 page 6:


        AI Limericks

       by Henry Kautz
   University of Rochester

        ***     ***

If you're dull as a napkin, don't sigh;
Make your name as a "deep" sort of guy.
  You just have to crib, see
  Any old book by Kripke
And publish in AAAI.

        ***     ***

A hacker who studied ontology
Was famed for his sense of frivolity.
  When his program inferred
  That Clyde is a bird
He blamed not his code but zoology.

        ***     ***

If your thesis is utterly vacuous
Use first-order predicate calculus.
  With sufficient formality
  The sheerest banality
Will be hailed by the critics: "Miraculous!"

If your thesis is quite indefensible
Reach for semantics intensional.
  Your committee will stammer
  Over Montague grammar
Not admitting it's incomprehensible.


------------------------------

Date: Fri, 26 Sep 86 11:40 PDT
From: JREECE%sc.intel.com@CSNET-RELAY.ARPA
Subject: Turbo Prolog - Yet Another Opinion

Although Turbo Prolog has been characterized by some wags as a "brain-dead
implementation" I think its mixture of strengths and weaknesses would be more
accurately described as those of an idiot savant.  Some of the extensions,
such as the built-in string editor predicates, are positively serendipitous,
and you get most of the development time advantages of a fifth generation
language for a conventional application plus good runtime performance for
only $70.  On the other hand, one tires quickly of writing NP-incomplete sets
of type declarations which are unnecessary in any other implementation....

If nothing else, for $70 you can prototype something that can be used to justify
spending $700 for a real PC Prolog compiler, or $18,000 for a VAX
implementation.

John Reece
Intel

------------------------------

Date: Fri, 26 Sep 86 18:52:26 CDT
From: neves@ai.wisc.edu (David M. Neves)
Reply-to: neves@ai.wisc.edu (David M. Neves)
Subject: Re: Xerox vs Symbolics -- Reference counts vs Garbage collection

When I was using MIT Lisp Machines (soon to become Symbolics) years
ago nobody used the garbage collector because it slowed down the
machine and was somewhat buggy.  Instead people operated for hours/days
until they ran out of space and then rebooted the machine.  The only
time I turned on the garbage collector was to compute 10000 factorial.
Do current Symbolics users use the garbage collector?

   "However, it is apparent that reference counters will never
   reclaim circular list structure."

This is a common complaint about reference counters.  However, I don't
believe there are very many circular data structures in real Lisp code.
Has anyone looked into this?  Has any Xerox user run out of space
because of circular data structures in their environment?
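The failure mode in question is easy to exhibit with a toy
reference-counted cell (a sketch for illustration, not Interlisp-D's
actual mechanism):

```python
class Cell:
    """A toy reference-counted object with one outgoing reference."""
    def __init__(self):
        self.refcount = 0
        self.ref = None

    def set_ref(self, other):
        if self.ref is not None:
            self.ref.decref()
        self.ref = other
        if other is not None:
            other.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:     # reclaim: drop our own reference
            self.set_ref(None)

a, b = Cell(), Cell()
a.refcount = 1        # simulate an external root pointer to a
a.set_ref(b)          # a -> b           (b.refcount becomes 1)
b.set_ref(a)          # b -> a: a cycle  (a.refcount becomes 2)

a.decref()            # drop the external root: a.refcount falls to 1
# Both cells are now unreachable, yet neither count reaches zero:
# each still holds the other up.  A tracing collector would reclaim
# them; pure reference counting never will.
```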

--
David Neves, Computer Sciences Department, University of Wisconsin-Madison
Usenet:  {allegra,heurikon,ihnp4,seismo}!uwvax!neves
Arpanet: neves@rsch.wisc.edu

------------------------------

Date: 26 Sep 86 15:35:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Garb Collect Symb vs Xerox


I received mail that apparently also went onto the net, from Dan Hoey
(hoey@nrl-aic.ARPA).  He discussed garbage collection in response to
my unsupported allegation that, "S[ymbolics] talks about their garbage
collection more, but X[erox]'s is better."  I am glad to see someone
taking up an informed discussion in this area.

First, I briefly recap his letter, eliding (well-put) flames:

+ In the language of computer
+ science, Xerox reclaims storage using a ``reference counter''
+ technique, rather than a ``garbage collector.''

+ If we are to believe Xerox, the reference counter
+ technique is fundamentally faster, and reclaims acceptable amounts of
+ storage.  However, it is apparent that reference counters will never
+ reclaim circular list structure.  As a frequent user of circular list
+ structure (doubly-linked lists, anyone?), I find the lack tantamount to
+ a failure to reclaim storage.

+ I have never understood why Xerox continues to neglect to write a
+ garbage collector.  It is not necessary to stop using reference counts,
+ but simply to have a garbage collector available for those putatively
+ rare occasions when they run out of memory.

+ Dan Hoey

Xerox's system is designed for highly interactive use on a personal
workstation (sound familiar?).  They spread the work of storage reclamation
evenly throughout the computation by keeping reference counts.  Note that
they have many extra tricks such as "References from the stack are not
counted, but are handled separately at "sweep" time; thus the vast majority
of data manipulations do not cause updates to [the reference counts]"
(Interlisp-D Reference Manual, October, 1985).  Even if this scheme were
to use a greater total amount of CPU time than typical garbage collection,
it would remain more acceptable for use on a personal, highly interactive
workstation.  I have no idea how it can be compared to Symbolics for overall
performance, without comparing the entire Interlisp vs. Zetalisp systems.

Nevertheless, I can say that my experience is that Interlisp runs a "G.C."
every few seconds and it lasts, subjectively, an eyeblink.  Occasionally
I get it to take longer, for example when I zero my pointers to 1500 arrays
in one fell swoop.  I have some figures from one application, too.  An
old, shoddy implementation ran 113 seconds CPU and 37.5 seconds GC (25% GC).
A decent implementation of the same program, running a similar problem twice,
got 145 seconds CPU, but 10.8 and 20.3 seconds GC (6.9% and 12% GC).  (The
good implementation still doesn't have a good hashing function so it's still
slower.)  I cannot claim that these figures are representative.  I have
heard horror stories about other Lisps' GCs,
although I don't have any feel for Symbolics's "Ephemeral GC".

I have a strong feeling Xerox has other tricks besides the one about the
stack, which they don't want to tell anyone.  I know they recently fixed
the reference counter from counting 16 or more references as "infinity"
(and thus never reclaimable) to an overflow scheme where the reference
count gets squirreled away somewhere else when it gets bigger.
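That overflow scheme can be sketched as a small in-object count field that
spills to a side count when it saturates (an illustration of the idea, not
Xerox's actual implementation):

```python
FIELD_MAX = 15                 # e.g. a 4-bit in-object count field

class RefCount:
    def __init__(self):
        self.field = 0         # small per-object counter
        self.overflow = 0      # spilled count kept "somewhere else"

    def incref(self):
        if self.field < FIELD_MAX:
            self.field += 1
        else:
            self.overflow += 1   # instead of sticking at "infinity"

    def decref(self):
        if self.overflow > 0:
            self.overflow -= 1
        else:
            self.field -= 1

    @property
    def count(self):
        return self.field + self.overflow

rc = RefCount()
for _ in range(20):
    rc.incref()
for _ in range(20):
    rc.decref()
# Under a "count 16+ as infinity" rule this object could never be
# reclaimed; with the overflow count the total correctly returns to 0.
```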

Finally, normally the amount of unreclaimed garbage (e.g. circular lists)
grows much slower than memory fragments, so you have to rebuild your
world before unreclaimed garbage becomes a problem anyway.

Postfinally, Xerox makes a big deal that their scheme takes time proportional
to the number of objects reclaimed, while traditional systems take time
proportional to the number of objects allocated.  I think Symbolics's
ephemeral scheme is a clever way to consider only subsets of the universe
of allocated objects, that are most likely to have garbage.  I wish I knew
whether it is a band-aid or an advance in the state-of-the-art.

Absolutely ultimately: the "traditional" GC I refer to above is the
algorithm known as "mark-and-sweep".
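Mark-and-sweep in miniature (a sketch; real collectors are far more
elaborate, and names here are invented for illustration):

```python
def mark_and_sweep(roots, heap, edges):
    """heap: set of object ids; edges: id -> list of ids it references.
    Returns the set of reclaimed (unreachable) ids."""
    marked = set()
    stack = list(roots)
    while stack:                       # mark: trace everything reachable
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(edges.get(obj, []))
    return heap - marked               # sweep: the rest is garbage

# A two-cell cycle unreachable from the roots is reclaimed here --
# precisely the case a pure reference counter cannot handle.
heap = {'root', 'x', 'a', 'b'}
edges = {'root': ['x'], 'a': ['b'], 'b': ['a']}
print(sorted(mark_and_sweep({'root'}, heap, edges)))   # -> ['a', 'b']
```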

Steve Clark    {topaz or ihnp4}!princeton!siemens!steve

------------------------------

Date: Mon, 29 Sep 86 14:07:12 edt
From: segall@caip.rutgers.edu (Ed Segall)
Subject: Re: Emergent Consciousness

Why must we presume that the seat of consciousness must be in the form
of neural "circuits"? What's to prevent it from being a symbolic,
logical entity, rather than a physical entity? After all, the "center
of control" of most computers is some sort of kernel program, running
on the exact same hardware as the other programs. (Don't try to push
the analogy too far, you can probably find a hole in it.) Perhaps the
hierarchical system referred to is also not structural.

Might the brain operate even more like a conventional computer than we
realize, taking the role of an extremely sophisticated
(self-modifying) interpreter? The "program" that is interpreted is the
pattern of firings occurring at any given time. If this is so, then
moment-to-moment thought is almost completely in terms of the dynamic
information contained in neural signals, rather than the quasi-static
information contained in neural interconnections. The neurons simply
serve to "run" the thoughts.  This seems obvious to me, since I am
assuming that neural firings can process information much faster than
structural changes in neurons.

 I'd be interested to know about what rate neuron firings occur in the
brain, and if anyone has an intelligent guess as to how much
information can be stored at once in the "dynamic" form of firings
rather than the "static" form of interconnections.

I apologize in advance if what I suggest goes against well-understood
knowledge (not theory) of how the brain operates. My information is
from the perspective of a lay person, not a cognitive scientist.

------------------------------

Date: Mon, 29 Sep 86 09:34:01 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Consciousness as bureaucracy

Ken Laws' analogy between Bureaucracy and Man--more precisely, Man's
Mind--has been anticipated by Marvin Minsky.  I do not have the reference;
I think it was a rather broad article in a general science journal.

As I recall, the theory that Minsky proposed lay somewhere between the
lay concept of self and the Zen concept.  It seemed to suggest that
consciousness is an illusion to itself, but a genuine and observable
phenomenon to an outside observer, characterizable with the metaphor
of bureaucracy.  Perhaps some Ailist reader can identify the article.

Emergent consciousness has always been a hope of A.I.  I side with
those who suggest that consciousness depends on contact with the world
... even though I know some professors who seem to be counter-examples!
 :-)

------------------------------

Date: 2 Oct 86 17:14:00 EDT
From: "FENG, THEO-DRIC" <theo@ari-hq1.ARPA>
Reply-to: "FENG, THEO-DRIC" <theo@ari-hq1.ARPA>
Subject: Perception

I just ran across the following report and thought it might contribute some-
thing to the discussion on the "perception" of reality. (I'll try to summarize
the report where I can.)

according to
Thomas Saum in the
German Research Service,
Special Press Reports,
Vol. II, No. 7/86

A group of biologists in Bremen University has been furthering the theory devel-
oped by Maturana and Varela (both from Chile) in the late '70's, that the brain
neither reflects nor reproduces reality. They suggest that the brain creates its
own reality.
Gerhard Roth, a prof. of behavioral physiology at Bremen (with doctorates in
philosophy and biology), has written several essays on the subject. In one, he

        ...writes that in the past the "aesthesio-psychological perspective"
        of the psychomatic problem was commonly held by both laypersons
        and scientists. This train of thought claims that the sensory organs
        reproduce the world at least partially and convey this image to the
        brain, where it is then reassembled ("reconstructed") in a uniform
        concept. In other words, this theory maintains that the sense organs
        are the brain's gateway to the world.
             In order to illustrate clearly the incorrectness of this view,
        Roth suggests that the perspectives be exchanged: if one looks at
        the problem of perception from the brain's angle, instead of the
        sense organs, the brain merely receives uniform and basically homo-
        geneous bioelectric signals from the nerve tracks. It is capable of
        determining the intensity of the sensory agitation by the frequency
        of these signals, but this is all it can do. The signals provide no
        information on the quality of the stimulation, for instance, on whe-
        ther an object is red or green. Indeed, they do not even say any-
        thing about the modality of the stimulus, i.e. whether it is an
        optical, acoustical, or chemical stimulation.
             The constructivists [as these new theoreticians are labeled],
        believe that the brain is a self-contained system. Its only access
        to the world consists of the uniform code of the nerve signals which
        have nothing in common with the original stimuli. Since the brain
        has no original image, it cannot possibly "reproduce" reality; it
        has to create it itself. "It (the brain) has to reconstruct the di-
        versity of the outside world from the uniform language of the neu-
        rons", Roth claims. The brain accomplishes this task by "interpret-
        ing itself", i.e. by construing what is going on inside itself.
        Thus, the brain "draws conclusions" from the degree to which it is
        agitated by the modality of the original stimulus: all neuronal im-
        pulses reaching the occipital cortex, for example, are visual im-
        pressions.
             This isolated nature of the brain and its reality, however, are
        by no means a blunder on the part of nature; indeed, they are not
        even a necessary evil, Roth explains. On the contrary, it is an
        adaptive advantage acquired by more highly developed creatures dur-
        ing the course of their phylogenic development. If the brain had di-
        rect access to the environment, Roth argues, then one and the same
        stimulus would necessarily always result in one and the same reac-
        tion by the organism. Since, however, the human brain has retained
        a certain amount of creative scope for its reconstruction of reality,
        it is in a position to master complicated situations and adapt itself
        to unforeseen circumstances.
             Only in this way is it possible to recognize an object in differ-
        ent light intensities, from a new angle of vision, or at a distance.
        Even experiments with "reversal spectacles" demonstrate man's powers
        of adaptation in interpreting reality: after a little while, test
        persons, who see the world upside down with special glasses, simply
        turn their environment around again in their "mind". When, after a
        few days, they remove the spectacles, the "real" world suddenly seems
        to be standing on its head.
             This mobility and adaptability on the part of our perceptive fa-
        culties were obviously much more important for the evolution of more
        highly developed vertebrates than was a further intensification of
        the signal input by the sense organs. The million fibers in man's
        optic nerve are only double the number of a frog's; the human brain,
        on the other hand, has one hundred thousand times more nerve cells
        than a frog brain. But first and foremost, the "reality workshop",
        i.e., the cerebral area not tied to a specific sense, has expanded
        during the evolution of man's brain, apparently to the benefit of
        our species.

Contact: Prof. Dr. Gerhard Roth, Forschungsschwerpunkt Biosystemforschung,
            Universität Bremen, Postfach 330 440,
                D-2800 Bremen 33, West Germany.


[conveyed by Theo@ARI]

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Oct  7 07:04:25 1986
Date: Tue, 7 Oct 86 07:04:11 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #206
Status: RO


AIList Digest             Monday, 6 Oct 1986      Volume 4 : Issue 206

Today's Topics:
  Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 27 Sep 86 14:20:21 GMT
From: princeton!mind!harnad@caip.rutgers.edu  (Stevan Harnad)
Subject: Searle, Turing, Symbols, Categories

The following are the Summary and Abstract, respectively, of two papers
I've been giving for the past year on the colloquium circuit. The first
is a joint critique of Searle's argument AND of the symbolic approach
to mind-modelling, and the second is an alternative proposal and a
synthesis of the symbolic and nonsymbolic approach to the induction
and representation of categories.

I'm about to publish both papers, but on the off chance that
there is a still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's
interested).

***********************************************************
Paper #1:
(Preprint available from author)

                 MINDS, MACHINES AND SEARLE

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton, NJ 08542

Summary and Conclusions:

Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:

1.  Simulation versus Implementation:

Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.

2.  Theory-Testing versus Turing-Testing:

Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.

3.  The Convergence Argument:

Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not
modules.

4.  Brain Modeling versus Mind Modeling:

Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.

5.  The Modularity Assumption:

Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.

6.  The Teletype versus the Robot Turing Test:

Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).

7.  The Transducer/Effector Argument:

Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented,
symbolically.

8.  Robotics and Causality:

Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).

9.  Symbolic Functionalism versus Robotic Functionalism:

If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either.  Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.

10.  "Strong" versus "Weak" AI:

Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive
scientists.

Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."

*************************************************************

Paper #2:
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)

            CATEGORY INDUCTION AND REPRESENTATION

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton NJ 08542

Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs.  Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.

"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.

There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color?  (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?

Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
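
As an illustrative sketch only (the stimuli, the toy feature filter, and all names below are invented for illustration, not taken from the paper), the three representational levels might be rendered as:

```python
# Toy sketch of the three representational levels described above.
# The stimuli, the feature filter, and all names are illustrative
# assumptions, not the paper's actual model.

def iconic(stimulus):
    # (1) Iconic representation: an analog copy that preserves
    # similarity structure -- here, just the raw measurement vector.
    return list(stimulus)

def categorical(stimulus, threshold=0.5):
    # (2) Categorical representation: an A/D feature filter that keeps
    # only the invariant information needed to sort an instance into
    # its category, discarding within-category detail.
    return "red" if stimulus[0] > threshold else "yellow"

def symbolic(name_a, name_b):
    # (3) Symbolic representation: category names serve as atomic
    # symbols that combine into descriptions, enabling learning
    # "by description."
    return f"every {name_a} thing is not a {name_b} thing"

s1, s2 = (0.9, 0.2), (0.1, 0.7)
assert iconic(s1) != iconic(s2)       # icons preserve raw differences
assert categorical(s1) == "red"
assert categorical(s2) == "yellow"
assert symbolic("red", "yellow") == "every red thing is not a yellow thing"
```

The point of the sketch is the ordering: the categorical filter discards within-category detail that the iconic copy preserves, and only the resulting category names enter the symbolic level.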

This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.

------------------------------

Date: Mon 29 Sep 86 09:55:11-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: Searle's logic

I try not to get involved in these arguments, but Bruce Krulwich's assertion
that Searle 'bases all his logic on' the binary nature of computers is
seriously wrong.  We could have hardware which worked with direct, physical
embodiments of all of Shakespeare, and Searle's arguments would apply to it
just as well.  What bothers him (and many other philosophers) is the idea
that the machine works by manipulating SYMBOLIC descriptions of its
environment (or whatever it happens to be thinking about).  It's the
internal representation idea, which we AIers take in with our mother's
milk, that he finds so silly and directs his arguments against.
Look, I also don't think there's any real difference between a human's
knowledge of a horse and a machine's manipulation of the symbol it is using
to represent it.  But Searle has some very penetrating arguments against
this idea, and one doesn't make progress by just repeating one's
intuitions; one has to understand his arguments and explain what is wrong
with them.  Start with the Chinese room, and read all his replies to the
simple counterarguments as well, THEN come back and help us.
Pat Hayes

------------------------------

Date: 1 Oct 86 18:25:16 GMT
From: cbatt!cwruecmp!cwrudg!rush@ucbvax.Berkeley.EDU  (rush)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)

In article <158@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>6.  The Teletype versus the Robot Turing Test:
>
>For example, the "teletype" (linguistic) version of the Turing...
> whereas the robot version necessarily
>calls for full causal powers of interaction with the outside
>world (seeing, doing AND linguistic understanding).
>
Uh...I never heard of the "robot version" of the Turing Test,
could someone please fill me in?? I think that understanding
the reasons for such a test would help me (I make
no claims for anyone else) make some sense out of the rest
of this article. In light of my lack of knowledge, please forgive
my presumption in the following comment.

>7.  The Transducer/Effector Argument:
>
>A principled
>"transducer/effector" counterargument, however, can be based
>on the logical fact that transduction is necessarily
>nonsymbolic, drawing on analog and analog-to-digital
>functions that can only be simulated, but not implemented,
>symbolically.
>
[ I know I claimed no commentary, but it seems that this argument
  depends heavily on the meaning of the term "symbol". This could
  be a problem that only arises when one attempts to implement some
  of the stranger possibilities for symbolic entities. ]

        Richard Rush    - Just another Jesus freak in computer science
        decvax!cwruecmp!cwrudg!rush

------------------------------

Date: 2 Oct 86 16:05:28 GMT
From: princeton!mind!harnad@caip.rutgers.edu  (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)


In his commentary-not-reply to my <158@mind.UUCP>, Richard Rush
<150@cwrudge.UUCP> asks:

(1)
>     I never heard of the "robot version" of the Turing Test,
>     could someone please fill me in?

He also asks (in connection with my "transducer/effector" argument)
about the analog/symbolic distinction:

(2)
>     I know I claimed no commentary, but it seems that this argument
>     depends heavily on the meaning of the term "symbol". This could
>     be a problem that only arises when one attempts to implement some
>     of the stranger possibilities for symbolic entities.

In reply to (1): The linguistic version of the Turing Test (Turing's
original version) is restricted to linguistic interactions:
Language-in/Language-out. The robotic version requires the candidate
system to operate on objects in the world. In both cases the (Turing)
criterion is whether the system can PERFORM indistinguishably from a human
being. (The original version was proposed largely so that your
judgment would not be prejudiced by the system's nonhuman appearance.)

On my argument the distinction between the two versions is critical,
because the linguistic version can (in principle) be accomplished by
nothing but symbols-in/symbols-out (and symbols in between) whereas
the robotic version necessarily calls for non-symbolic processes
(transducer, effector, analog and A/D). This may represent a
substantive functional limitation on the symbol-manipulative approach
to the modeling of mind (what Searle calls "Strong AI").

In reply to (2): I don't know what "some of the stranger possibilities
for symbolic entities" are. I take symbol-manipulation to be
syntactic: Symbols are arbitrary tokens manipulated in accordance with
certain formal rules on the basis of their form rather than their meaning.
That's symbolic computation, whether it's done by computer or by
paper-and-pencil. The interpretations of the symbols (and indeed of
the manipulations and their outcomes) are ours, and are not part of
the computation. Informal and figurative meanings of "symbol" have
little to do with this technical concept.
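
For concreteness, such purely syntactic symbol manipulation can be sketched as follows; the rewrite rules and tokens are invented for illustration and carry no meaning by design:

```python
# Toy symbol system: arbitrary tokens rewritten by formal rules that
# consult only the tokens' shapes, never what they "stand for."
# The rules and tokens here are invented for illustration.

RULES = {
    ("A", "B"): ("C",),   # the pair A B rewrites to the token C
    ("C", "C"): ("D",),   # the pair C C rewrites to the token D
}

def rewrite(tokens):
    """Apply the first applicable rule to the leftmost matching pair."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in RULES:
            return tokens[:i] + list(RULES[pair]) + tokens[i + 2:]
    return tokens  # no rule applies; the tape is left unchanged

tape = ["A", "B", "A", "B"]
tape = rewrite(tape)   # ["C", "A", "B"]
tape = rewrite(tape)   # ["C", "C"]
tape = rewrite(tape)   # ["D"]
assert tape == ["D"]
```

Whatever interpretation one assigns to A, B, C, and D is external to the computation, which in this sense proceeds entirely on the basis of form rather than meaning.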

Symbols as arbitrary syntactic tokens in a formal system can be
contrasted with other kinds of objects. The ones I singled out in my
papers were "icons" or analogs of physical objects, as they occur in
the proximal physical input/output in transduction, as they occur in
the A-side of A/D and D/A transformations, and as they may function in
any part of a hybrid system to the extent that their functional role
is not merely formal and syntactic (i.e., to the extent that their
form is not arbitrary and dependent on convention and interpretation
to link it to the objects they "stand for," but rather, the link is
one of physical resemblance and causality).

The category-representation paper proposes an architecture for such a
hybrid system.

Stevan Harnad
princeton!mind!harnad

------------------------------

End of AIList Digest
********************

From csnet_gateway Tue Oct  7 18:47:03 1986
Date: Tue, 7 Oct 86 18:46:55 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #207
Status: R


AIList Digest            Tuesday, 7 Oct 1986      Volume 4 : Issue 207

Today's Topics:
  Seminars - Cross-Talk in Mental Operations (UCB) &
    Deductive Databases (UPenn) &
    Concept Acquisition in Noisy Environments (SRI) &
    Prolog without Horns (CMU) &
    Knowledge Engineering and Ontological Structure (SU),
  Conferences - AAAI-87 Tutorials &
    1st Conf. on Neural Networks &
    Workshop on Qualitative Physics

----------------------------------------------------------------------

Date: Mon, 6 Oct 86 15:38:02 PDT
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Cross-Talk in Mental Operations (UCB)


                         BERKELEY COGNITIVE SCIENCE PROGRAM


                     Cognitive Science Seminar - IDS 237A


                         Tuesday, October 14, 11:00 - 12:30
                                  2515 Tolman Hall
                              Discussion: 12:30 - 1:30
                                  2515 Tolman Hall


             ``Cross-Talk and Backward Processing in Mental Operations''

                                   Daniel Kahneman
                                Psychology Department



               There are many indications that  we  only  have  imperfect
           control of the operations of our mind.  It is common to compute
           far more than is necessary for the task at hand.  An  operation
           of  cleaning-up  and  inhibition  of inappropriate responses is
           often required, and this operation is often only partially
           successful.  For example, we cannot stop ourselves from reading
           words that we attend to; when asked to assess the similarity of
           two objects on a specified attribute, we apparently compute many
           similarity relations in addition to the requisite one.  The
           prevalence of such cross-talk has significant implications for a
           psychologically realistic notion of meaning and for the
           interpretation of incoherence in judgments.

                A standard view of cognitive function is that the  objects
           and events of experience are assimilated, more or less
           successfully, to existing schemas and expectations.  Some perceptual
           and  cognitive  phenomena  seem  to fit another model, in which
           objects and events elicit their own context  and  define  their
           own alternatives.  Surprise, for example, is better viewed as a
           failure to make sense of an event post hoc  than as a  violation
           of  expectations.  Some rules by which events evoke counterfac-
           tual alternatives to themselves will be described.

------------------------------

Date: Sun, 5 Oct 86 11:15 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Deductive Databases (UPenn)

                     3:00pm, Tuesday, October 7, 1986
                23 Moore School, University of Pennsylvania


                        EFFICIENT DEDUCTIVE DATABASES
                        WILL THEY EVER BE CONSTRUCTED?

                             Tomasz Imielinski
                            Rutgers University

The area of deductive databases is a rapidly growing field concerned with
enhancing traditional relational databases with automated deduction
capabilities.  Because of the large amounts of data involved here, the
complexity issues become critical.  We present a number of results related
to the complexity of query processing in deductive databases, both with
complete and incomplete information.

In an attempt to answer the question of whether efficient  deductive  databases
will  ever  be constructed we demonstrate an idea of the "deductive database of
the future". In such a system the concept of an answer to a query  is  tailored
to the various limitations of computational resources.

------------------------------

Date: Mon 6 Oct 86 16:25:34-PDT
From: Joani Ichiki <ICHIKI@SRI-STRIPE.ARPA>
Subject: Seminar - Concept Acquisition in Noisy Environments (SRI)


L. Saitta (Dipartimento di Informatica, Universita di Torino, Italy)
will present his talk entitled, "AUTOMATED CONCEPT ACQUISITION IN
NOISY ENVIRONMENTS," 10/7/86 in EK242 at 11:00am.  Abstract follows.

This paper presents a system which performs automated concept
acquisition from examples and has been especially designed to work in
errorful and noisy environments.

The adopted learning methodology is aimed at the target problem of
finding discriminant descriptions of a given set of concepts; both
examples and counterexamples are used.

The learning knowledge is expressed in the form of production rules,
organized into separate clusters, linked together in a graph
structure; the condition part of the rules, corresponding to
descriptions of relevant aspects of the concepts, is expressed by
means of a first order logic based language, enriched with constructs
suitable to handle uncertainty and vagueness and to increase
readability by a human user.  A continuous-valued semantics is
associated with this language, and each rule is assigned a certainty
factor.
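
As a rough sketch of the general idea only (the rule contents, the phonetic predicates, and the min-based evidence combination below are assumptions, not the system's actual first-order language):

```python
# Illustrative sketch of production rules carrying certainty factors.
# The rule contents, predicates, and the min-based combination are
# assumptions for illustration, not the actual language of the system.

RULES = [
    # (condition predicates, concept, certainty factor)
    (("voiced", "high_energy"), "vowel", 0.9),
    (("burst", "silence_before"), "stop_consonant", 0.8),
]

def classify(observed):
    """Return (concept, certainty) pairs for rules whose conditions all
    hold, scaling each rule's certainty by its weakest piece of evidence."""
    results = []
    for conditions, concept, cf in RULES:
        if all(c in observed for c in conditions):
            evidence = min(observed[c] for c in conditions)
            results.append((concept, cf * evidence))
    return results

out = classify({"voiced": 1.0, "high_energy": 0.5})
assert out[0][0] == "vowel"
assert abs(out[0][1] - 0.45) < 1e-9   # 0.9 * min(1.0, 0.5)
```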

Learning is considered as a cyclic process of knowledge extraction,
validation and refinement; the control of the cycle is left to the
teacher.

Knowledge extraction proceeds through a process of specialization,
rather than generalization, and utilizes a technique of problem
reduction to contain the computational complexity.  Moreover, the
search strategy is strongly focused by means of task-oriented but
domain-independent heuristics, in an attempt to emulate the learning
mechanism of a human being faced with finding discrimination rules
from a set of examples.

Several criteria are proposed for evaluating the acquired knowledge;
these criteria are used to guide the process of knowledge refinement.

The methodology has been tested on a problem in the field of speech
recognition, and the experimental results obtained are reported and
discussed.

------------------------------

Date: 6 October 1986 1411-EDT
From: Peter Andrews@A.CS.CMU.EDU
Subject: Seminar - Prolog without Horns (CMU)

   The following talk will be given in the Seminar on Automated
Reasoning Wednesday, Oct. 15, at 4:30p.m. in room PH125C. The talk
is independent of preceding material in the seminar.

                  Prolog without Horns
                     D. W. Loveland
An extension to Prolog is defined that handles non-Horn clause sets
(programs) in a manner closer to standard Prolog than previously
proposed. Neither the negation symbol or a symbol for false are
formally introduced in the system, although the system is
conjectured to be propositionally complete. The intention of the
extension is to provide processing of "nearly Horn" programs with
minimal deviation from the Prolog format. Although knowledge of
Prolog is not essential, some prior exposure to Prolog will be helpful.

------------------------------

Date: Mon 6 Oct 86 16:55:52-PDT
From: Lynne Hollander <HOLLANDER@SUMEX-AIM.ARPA>
Subject: Seminar - Knowledge Engineering and Ontological Structure (SU)

                                SIGLUNCH

        Title:   KNOWLEDGE ENGINEERING AS THE INVESTIGATION OF
                 ONTOLOGICAL STRUCTURE

        Speaker: Michael J. Freiling
                 Computer Research Laboratory
                 Tektronix Laboratories

        Place:   Chemistry Gazebo

        Time:     12:05-1:15, Friday, October 10


Experience has shown that much of the difficulty of learning to build
knowledge-based systems lies in designing representation structures that
adequately capture the necessary forms of knowledge.  Ontological analysis
is a method we have found quite useful at Tektronix for analyzing and
designing knowledge-based systems.  The basic approach of ontological
analysis is a step-by-step construction of knowledge structures beginning
with simple objects and relationships in the task domain, and continuing
through representations of state, state transformations, and heuristics
for selecting transformations.  Formal tools that can be usefully employed
in ontological analysis include domain equations, semantic grammars, and
full-scale specification languages.  The principles and tools of
ontological analysis are illustrated with actual examples from
knowledge-based systems we have built or analyzed with this method.

------------------------------

Date: Mon 29 Sep 86 10:39:41-PDT
From: William J. Clancey <CLANCEY@SUMEX-AIM.ARPA>
Subject: AAAI-87 Tutorials

AAAI-87 Tutorials -- Request for Proposals

Tutorials will be presented at AAAI-87/Seattle on Monday, Tuesday, and
Thursday, July 13, 14, and 16.  Anyone interested in presenting a tutorial
on a new or standard topic should contact the Tutorial Chair, Bill Clancey.
Topic suggestions from tutorial attendees are also welcome.

Potential speakers should submit a brief resume covering relevant background
(primarily teaching experience) and any available examples of work (ideally,
a published tutorial-level article on the subject).  In addition, those
people suggesting a new or revised topic should offer a 1-page summary of
the idea, outlining the proposed subject and depth of coverage, identifying
the necessary background, and indicating why it is felt that the topic would
be well attended.

With regard to new courses, please keep in mind that tutorials are intended
to provide dissemination of reasonably well-agreed-upon information, that
is, there should be a substantial body of accepted material.  We especially
encourage submission of proposals for new advanced topics, which in 1986
included "Qualitative Simulation," "AI Machines," and "Uncertainty
Management."

Decisions about topics and speakers will be made by November 1.  Speakers
should be prepared to submit completed course material by December 15.

Bill Clancey
Stanford Knowledge Systems Laboratory
701 Welch Road, Building C
Palo Alto, CA 94304

Clancey@SUMEX

------------------------------

Date: Tue, 30 Sep 86 11:43:56 pdt
From: mikeb@nprdc.arpa (Mike Blackburn)
Subject: 1st Conf. on Neural Networks


           CONFERENCE ANNOUNCEMENT: FIRST ANNUAL
        INTERNATIONAL CONFERENCE ON NEURAL NETWORKS


                   San Diego, California

                      21-24 June 1987



The  San  Diego  IEEE  Section   welcomes   neural   network
enthusiasts in industry, academia, and government world-wide
to participate in the inaugural annual  ICNN  conference  in
San Diego.

Papers are solicited on the following topics:

     * Network Architectures * Learning Algorithms * Self-Organization
     * Adaptive Resonance * Dynamical Network Stability
     * Neurobiological Connections * Cognitive Science Connections
     * Electrical Neurocomputers * Optical Neurocomputers
     * Knowledge Processing * Vision * Speech Recognition & Synthesis
     * Robotics * Novel Applications

Contributed Papers: Extended abstracts should be submitted by
1 February 1987 for conference presentation.  Abstracts must be
single spaced, three to four pages on 8.5 x 11 inch paper with
1.5 inch margins, and will be carefully refereed.  Accepted
abstracts will be distributed at the conference.  Final papers
are due 1 June 1987.

FINAL RELEASE  OF  ABSTRACTS  AND  PAPERS  WITH  RESPECT  TO
PROPRIETARY  RIGHTS  AND  CLASSIFICATION  MUST  BE  OBTAINED
BEFORE SUBMITTAL.

Address all correspondence to: Maureen Caudill - ICNN,
10615G Tierrasanta Blvd., Suite 346, San Diego, CA 92124.

Registration Fee: $350 if received by 1 December 1986,  $450
thereafter.

Conference Venue: Sheraton Harbor Island Hotel (approx. $95,
single), space limited, phone (619) 291-6400.  Other lodging
within 10 minutes.

Tutorials and Exhibits: Several tutorials are planned.  Vendor
exhibit space available - make reservations early.


Conference Chairman: Stephen Grossberg

International Chairman: Teuvo Kohonen

Organizing  Committee:  Kunihiko  Fukushima,  Clark   Guest,
Robert  Hecht-Nielsen,  Morris  Hirsch, Bart Kosko (Chairman
619-457-5550), Bernard Widrow.




                     September 30, 1986

------------------------------

Date: 5 Oct 1986  13:16 EDT (Sun)
From: "Daniel S. Weld" <WELD%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Workshop on Qualitative Physics

Call for Participation

Workshop on Qualitative Physics
May 27-29, 1987
Urbana, Illinois

Sponsored by:
        the American Association for Artificial Intelligence
                and
        Qualitative Reasoning Group
        University of Illinois at Urbana-Champaign

Organizing Committee:
        Ken Forbus (University of Illinois)
        Johan de Kleer (Xerox PARC)
        Jeff Shrager (Xerox PARC)
        Dan Weld (MIT AI Lab)

Objectives:
Qualitative Physics, the subarea of artificial intelligence concerned with
formalizing reasoning about the physical world, has become an important and
rapidly expanding topic of research.  The goal of this workshop is to
provide an opportunity for researchers in the area to communicate results
and exchange ideas.  Relevant topics of discussion include:

        -- Foundational research in qualitative physics
        -- Implementation techniques
        -- Applications of qualitative physics
        -- Connections with other areas of AI
                 (e.g., machine learning, robotics)

Attendance:  Attendance at the workshop will be limited in order to maximize
interaction.  Consequently, attendance will be by invitation only.  If you
are interested in attending, please submit an extended abstract (no more
than six pages) describing the work you wish to present.  The extended
abstracts will be reviewed by the organizing committee.  No proceedings will
be published; however, a selected subset of attendees will be invited to
contribute papers to a special issue of the International Journal of
Artificial Intelligence in Engineering.

Requirements:  The deadline for submitting extended abstracts is February
10th.  On-line submissions are not allowed; hard copy only please.  Since
no proceedings will be produced, abstracts describing papers submitted to
AAAI-87 are acceptable.  Invitations will be sent out on March 1st.  Please
send 6 copies of your extended abstracts to:

        Kenneth D. Forbus
        Qualitative Reasoning Group
        University of Illinois
        1304 W. Springfield Avenue
        Urbana, Illinois, 61801

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Oct 10 02:46:30 1986
Date: Fri, 10 Oct 86 02:46:21 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #208
Status: R


AIList Digest            Thursday, 9 Oct 1986     Volume 4 : Issue 208

Today's Topics:
  Bibliography - News and Recent Articles

----------------------------------------------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: News and Recent Articles

%A Paul A. Eisenstein
%T Detroit Finds Robots Aren't Living Up to Expectations
%J Investor's Daily
%D April 21, 1986
%P 12
%K AI07 Chrysler General Motors AA25
%X Chrysler said that automation was one of the major reasons its
productivity has doubled since 1980.  GM's Lake Orion plant, a "factory of
the future" with 157 automated robots, is providing the lowest quality and
productivity of any GM plant instead of the best.
Two other plants have been giving GM the same problems.

%A Mary Petrosky
%T Expert Software Aids Large System Design
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P 1+
%K AA08 AI01 H01 AT02 AT03 Arthur Young Knowledge Ware
%X Knowledge-Ware is selling the Information Engineering Workbench
which provides tools to support developing business programs.  It has
features for supporting entity diagrams, data flow diagrams, etc.  I
cannot find any indication from this article where AI is actually
used.

%A John Gantz
%T No Market Developing for Artificial Intelligence
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P  27
%K AT04 AT14
%X D. M. Data predicts that the market for AI software will be $605
million this year and $2.65 billion in 1990.  Arthur D. Little says it
might be twice this.  He argues that when you look at the companies, most
of them are selling primarily to the research market and not to the
commercial data processing market.  Intellicorp had $3.3 million in
revenues for the 1984-1985 fiscal year and made a profit.  However, a full
third of its systems go to academics and 20 percent go to Sperry for use
in its own AI labs.

%A Jay Eisenlohr
%T Bug Debate
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P 58
%K AT13 AT12 Airus AI Typist AT03
%X Response to harsh review of AI Typist by Infoworld from an employee
of the company selling it.

%A Eddy Goldberg
%T AI offerings Aim to Accelerate Adoption of Expert Systems
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P 24
%K Teknowledge Carnegie Group Intel Hypercube Gold Hill Common Lisp AT02
H03 T03 T01
%X Teknowledge has rewritten S.1 in the C language.  Intel has introduced
Concurrent Common Lisp for its hypercube-based machine.

%T New Products/Microcomputers
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P  94
%K AT04 AI06 H01 Digital Vision Computereyes
%X Digital Vision introduced Computereyes video acquisition system for IBM PC.
Cost is $249.95 without camera and $529.95 with one.

%T New Products/Software and Services
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P 90
%K T03 AT02
%X LS/Werner has introduced a package containing four expert system tools
for $1995.  A guide to AI is also included.

%A Douglas Barney
%T AT&T Conversant Systems Unveils Voice Recognition Model
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 13
%K AI05 AT02
%X AT&T Conversant Systems has two speech-recognition products: the
Model 80, which handles 80 simultaneous callers for $50,000 to $100,000,
and the Model 32, which handles 32 simultaneous callers for $25,000 to
$50,000.  Each handles "yes," "no" and the numbers zero through nine.

%A Charles Babcock
%A James Martin
%T MSA Users Give High Marks, Few Dollars to Information Expert
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 15
%K AA06 AT03
%X MSA has a product called Information Expert which integrates a variety
of business applications through a shared dictionary and also provides
reporting.  However, the "expert system components" failed to live up
to the "standard definition of expert systems."

%A Alan Alper
%T IBM Trumpets Experimental Speech Recognition System
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 25+
%K AI05 H01 Dragon Systems Kurzweil Products
%X IBM's speech recognition system can recognize utterances in real time
from a 5000-word pre-programmed vocabulary and can transcribe sentences
with 95 percent accuracy.  The system may become a product.  It can handle
office correspondence in its present form.  The system requires that the
user speak slowly and with pauses.  The system runs on a PC/AT with
specialized speech-recognition circuits.  Kurzweil Applied Intelligence has
a 1000-word recognition system selling for $65,000 that has been delivered
to several hundred customers.  They have working prototypes of systems with
5000-word vocabularies which require only a 1/10 of a second pause.  Dragon
Systems has a system that can recognize up to 1000 words.

%A Stephen F. Fickas
%T Automating the Transformational Development of Software
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1268-1277
%K AA08 Glitter  package routing
%X Describes a system to automate the selection of transformations to
be applied in creating a program from a specification.  Goes through an
example to route packages through a network consisting of binary trees.

%A Douglas R. Smith
%A Gordon B. Kotik
%A Stephen J. Westfold
%T Research on Knowledge-Based Software Environments at Kestrel Institute
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1278-1295
%K AA08 CHI
%X Describes the CHI project.  REFINE, developed by Reasoning Systems Inc.,
is based on the principles and ideas demonstrated in the CHI prototype.
CHI has bootstrapped itself.  This is a transformation-based system.  Its
specification language, V, takes 1/5 to 1/10 as many lines as the
equivalent program written in LISP.

%A Richard C. Waters
%T The Programmer's Apprentice: A Session with KBEmacs
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1296-1320
%K AA08 Ada Lisp
%X This system, which uses plans to work hand-in-hand with a programmer
in constructing a piece of software, is now being used to work with
Ada programs.  The example used is that of a simple report.  Currently,
KBEmacs knows only a few dozen types of plans, out of the few hundred to
few thousand needed for real work.  Some operations take five minutes, but
a speedup by a factor of 30 is expected from straightforward optimizations.
The system is currently 40,000 lines of LISP code.

%A David R. Barstow
%T Domain-Specific Automatic Programming
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1321-1336
%K AA08 AA03 well-log Schlumberger-Doll
%X This paper describes a system to write programs for
well-log interpretation.  The system contains knowledge about well-logs
as well as about programming.

%A Robert Neches
%A William R. Swartout
%A Johanna D. Moore
%T Enhanced Maintenance and Explanation of Expert Systems Through Explicit
Models of Their Development
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1337-1350
%K AA08 AI01
%X Describes a system for applying various transformations to improve
the readability of a LISP program.  Also discusses techniques for providing
explanations of the operation of the expert system by examining data
structures created as the expert system is built.

%A Beth Adelson
%A Elliot Soloway
%T The Role of Domain Experience in Software Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1351-1360
%K AA08 AI08
%X Discusses protocol analysis of designers designing software systems.
Tries to show the effect of previous experience in the domain on these
design operations.

%A Elaine Kant
%T Understanding and Automating Algorithm Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1361-1374
%K AA08 AI08
%X Protocol analysis of algorithm designers faced with the convex hull
problem.  Discusses AI programs that design algorithms.

%A David M. Steier
%A Elaine Kant
%T The Roles of Execution and Analysis in Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1375-1386
%K AA08

%A J. Doyle
%T Expert Systems and the Myth of Symbolic Reasoning
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1386-1390
%K AI01 O02
%X Compares traditional software engineering approaches to application
development with those taken by the AI community.

%A P. A. Subrahmanyam
%T The "Software Engineering" of Expert Systems: Is Prolog Appropriate?
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1391-1400
%K T02 O02 AI01
%X discusses developing expert systems in PROLOG

%A Daniel G. Bobrow
%T If Prolog is the Answer, What is the Question? or What it Takes to
Support AI Programming Paradigms
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1401-1408
%K T02 AI01

%T Japanese Urge Colleges to Teach Programmers
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 18
%K GA01
%X "A panel of experts at the Japanese Ministry of Education has urged that
enrollment in computer software-related departments at Japanese universities
and colleges be doubled by 1992.  The panel hopes to ensure that more systems
engineers and software specialists are trained to offset the shortage of
Japanese programmers.  An estimated 600,000 additional programmers will be
needed by 1990, the panel projected."

%T Germans Begin AI Work with $53 Million Grant
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 18
%K Siemens West Germany GA03 AT19
%X The West German government will be giving $53.8 million in grants for AI
research.

%T Resources
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 19
%X New newsletter: "AI Capsule," costing $195 a year for 12 issues.
Winters Group, Suite 920 Building, 14 Franklin Street, Rochester, New York 14604

%J Electronic News
%V 32
%N 1603
%D MAY 26, 1986
%P 25
%K GA01 H02 T02 Mitsubishi
%X Mitsubishi Electric announces an AI workstation doing 40,000 Prolog LIPS,
costing $118,941.

%T Image-Processing Module Works like a VMEBUS CPU
%J Electronics
%D JUN 16, 1986
%P 74
%V 59
%N 24
%K AI06 AT02 Datacube VMEbus Analog Devices
%X Product Announcement: VMEbus CPU card containing a digital signal-processing
chip supporting 8 MIPS

%T Robot Info Automatically
%J IEEE Spectrum
%D JAN 1986
%V 23
%N 1
%P 96
%K AT09 AT02 AI07
%X Robotics database available on diskette, containing articles on robots.
Cost is $90.00 per year.  Robotics Database, PO Box 3004-17, Corvallis, Ore. 97339

%A John A. Adams
%T Aerospace and Military
%J IEEE Spectrum
%D JAN 1986
%V 23
%N 1
%P 76-81
%K AA19 AI06 AI07 AA18 AI01
%X DARPA's Autonomous Land Vehicle succeeded in guiding itself along a
paved road at 5 kilometers per hour using a vision system.


%A Richard L. Henneman
%A William B. Rouse
%T On Measuring the Complexity of Monitoring and Controlling Large-Scale
Systems
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D March/April 1986
%P 193-207
%K AI08 AA20
%X Discusses the effect of the number of levels of hierarchy, redundancy,
and the number of nodes on a display page on the ability of human operators
to find errors in a simulated system.

%A G. R. Dattatreya
%A L. N. Kanal
%T Adaptive Pattern Recognition with Random Costs and Its Applications to
Decision Trees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 208-218
%K AI06  AA01 AI04 AI01 clustering spina bifida bladder radiology
%X Applies a clustering algorithm to the results of reading radiographs of
the bladder.  The system was able to determine clusters that corresponded
to those of patients with spina bifida.

%A Klaus-Peter Adlassnig
%T Fuzzy Set Theory in Medical Diagnosis
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 260-265
%K AA01 AI01 O04
%X They developed systems for diagnosing rheumatologic diseases and pancreatic
disorders.  They achieved 94.5 and 100 percent accuracy, respectively.

%A William E. Pracht
%T GISMO: A Visual Problem Structuring and Knowledge-Organization Tool
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 265-270
%K AI13 AI08 Witkin Geft AA06
%X Discusses the use of a system for displaying effect diagrams on
decision making in a simulated business environment.  The tool improved
net income production.  It provided more assistance to those who were
more analytical than to those who used heuristic reasoning, as measured
by the Witkin GEFT.

%A Henri Farreny
%A Henri Prade
%T Default and Inexact Reasoning with Possibility Degrees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 270-276
%K O04 AI01 AA06
%X Discusses storing, for each proposition, a pair consisting of the
probability that it is true and the probability that it is false, where
the two probabilities do not necessarily add up to 1.  Inference rules
have been developed for such a system, including analogs of modus ponens
and modus tollens, and a method for combining two such ordered pairs
applying to the same fact.  These have been applied to an expert system
in financial analysis.

%A Chelsea C. White, III
%A Edward A. Sykes
%T A User Preference Guided Approach to Conflict Resolution in
Rule-Based Expert Systems
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 276-278
%K AI01 multiattribute utility theory
%X discusses an application of multiattribute utility theory to
resolve conflicts between rules in an expert system.


%A David Bright
%T Chip Triggers Software Race
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 1+
%K Intel 80386 T01 T03 H01 Gold Hill Computers Arity Lucid T02 Hummingbird Franz
%X Gold Hill Computers, Franz, Arity, Lucid, Quintus and Teknowledge have agreed
to port their AI software to the 80386.

%A David Bright
%T Voice-activated Writer's Block
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 23+
%K AI05 Kurzweil Victor Zue
%X MIT's Victor Zue says that current voice recognition technology is not
ready to be extended to "complex tasks."  MIT has been able to train
researchers to transcribe unknown sentences from spectrograms with 85%
success.  A Votan survey showed that 87% of office workers require only
45 words to run their typical applications.  Votan's add-in boards
can recognize 150 words at a time.

%A David Bright
%T Nestor Software Translates Handwriting to ASCII code
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 23+
%K AI06 Brown University
%X Nestor has commercial software that converts handwriting entered via
a digitizing tablet into ASCII text.  First user: a French insurance firm.
The system has been trained to recognize Japanese kanji characters, and
Nestor will develop a video system to read handwritten checks.

%A Namir Clement Shammas
%T Turbo Prolog
%J Byte
%D SEP 1986
%V 11
%N 9
%P 293-295
%K T02 H01 AT17
%X  another review of Turbo-Prolog

%A Bruce Webster
%T Two Fine Products
%J Byte
%D SEP 1986
%V 11
%N 9
%P 335-347
%K T02 H01 AT17 Turbo-Prolog
%X yet another review of Turbo-Prolog

%A Karen Sorensen
%T Expert Systems Emerging as Real Tools
%J Infoworld
%V 8
%N 16
%P 33
%D APR 21, 1986
%K AI01 AT08

%A Rosemary Hamilton
%T MVS Gets Own Expert System
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 1
%K T03 IBM
%X IBM introduced expert system tools for the MVS operating system similar
to those already introduced for VM.  The run-time system is $25,000 per month,
while the development environment is $35,000 per month.

%A Amy D. Wohl
%T On Writing Keynotes: Try Artificial Intelligence
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 17
%X Tongue-in-cheek article about the "keynote" speech that appears at
many conferences.  (Not really about AI.)

%A Elisabeth Horwitt
%T Hybrid Net Management Pending
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 19
%K AA08 AI01 AT02 Symbolics Avant-Garde networks AA15 H02
%X Avant-Garde Computer is developing an interface to networks to assist in
their management.  An expert system on a Symbolics machine will later be
connected to it to assist the user of the system.

%T Software Notes
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 29+
%K ultrix DEC VAX AT02 T01
%X DEC has announced a supported version of VAX Lisp for Ultrix.

%A Jeffrey Tarter
%T Master Programmers: Insights on Style from Four of the Best
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 41+
%K Jeff Gibbons O02 Palladian AA06
%X Contains information on Jeff Gibbons, a programmer at Palladian, which
develops financial expert systems.

%T Software and Services
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 76
%K T02 Quintus PC/RT AT02
%X Quintus has ported its Prolog to the IBM PC/RT.  It costs $8000.00


%T New Products/Microcomputers
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 81-82
%K AT02 AI06
%X ADS has announced a real-time digitizer for use with micros costing between
$15,000 and $25,000

%A David Bright
%T Datacopy Presents Text, Image Scanner for IBM PC Family
%J ComputerWorld
%D APR 28, 1986
%V 20
%N 17
%P 36
%K H02 AT02 AI06
%X For $2950 you can get an integrated text and image scanner which can
convert typewritten text to ASCII code.  It can be trained to recognize
an unlimited number of fonts.  It can also be used to input images at
200 x 200 or 300 x 300 dots per inch resolution.

%T Lisp to Separate Sales, Marketing
%J Electronic News
%P 27
%D APR 14, 1986
%V 32
%N 1597
%K H02 LMI AT11
%X Lisp Machines is separating sales and marketing.  Ken Johnson, the former
vice-president of sales and marketing, has left LMI for VG Systems

%A Steven Burke
%T Englishlike 1-2-3 Interface Shown
%J InfoWorld
%D APR 28, 1986
%P 5
%V 8
%N 17
%K Lotus AI02 H01 AA15
%X Lotus is selling HAL, which allows users to access 1-2-3 using English
commands

%T TI Sells Japan Lisp Computer
%J Electronics
%D JUN 2, 1986
%P 60
%V 59
%N 22
%K GA02 GA01  H02 AT16
%X C. Itoh has agreed to market TI's Lisp Machine

%A Larry Waller
%T Tseng Sees Peril in Hyping of AI
%J Electronics
%D APR 21, 1986
%P 73
%V 59
%N 16
%K Hughes AT06 AI06 AI07
%X Interview with David Y. Tseng, head of the Exploratory
Studies Department at Malibu Research Laboratories.

%T Image Processor Beats 'Real Time'
%J Electronics
%P 54+
%D APR 14, 1986
%V 59
%N 15
%K AI06 AT02 H01 Imaging Technology
%X Imaging Technology's Series 151 will process an image
in 27 milliseconds and offers the user the ability to
select an area to be processed.  It interfaces to a PC/AT.
It costs $11,495 with an optional convolution board for
$3,995.


%A A. P. Sage
%A C. C. White, III
%T Ariadne: A Knowledge Based Interactive System for Planning and Decision
Support
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 48-54
%K AI13

%A R. M. Hunt
%A W. B. Rouse
%T A Fuzzy Rule-Based Model of Human Problem Solving
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 112-119
%K AI08 AI01 AA21
%X attempt to develop a model of how people diagnose engine performance

%A I. B. Turksen
%A D. D. W. Yao
%T Representations of Connectives in Fuzzy Reasoning: The View Through
Normal Forms
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 146-151
%K O04

%A W. X. Xie
%A S. D. Bedrosian
%T An Information Measure for Fuzzy Sets
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 151-157
%K O04

%A S. Miyamoto
%A K. Nakayama
%T Fuzzy Information Retrieval Based on a Fuzzy Pseudothesaurus
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D MAR/APR 1986
%P 278-282
%K AA14 O04
%X A fuzzy bibliographic information retrieval system based on a fuzzy
thesaurus or on a fuzzy pseudothesaurus is described.  A fuzzy thesaurus
consists of two fuzzy relations defined on a set of keywords for the
bibliography.  The fuzzy relations are generated from a fuzzy set model
which describes the association of a keyword with its concepts.  If the set
of concepts in the fuzzy set model is replaced by the set of documents,
the fuzzy relations are called a pseudothesaurus, which is generated
automatically using occurrence frequencies of the keywords in the set of
documents.  The fuzzy retrieval uses two additional fuzzy relations,
a fuzzy indexing and a fuzzy inverted file; the latter is the inverse
relation of the former.  They are, however, related to different
algorithms for indexing and retrieval, respectively.  An algorithm for
ordering retrieved documents according to the values of the fuzzy
thesaurus is proposed.  This ordering is optimal in the sense that one
can obtain the documents of maximum relevance in a fixed time interval.
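[The pseudothesaurus construction in the abstract above can be sketched in a few lines of Python.  The membership formula below (overlap of document sets, normalized by the first keyword's document count) is one common choice and an assumption on my part, not necessarily the paper's exact definition; all names are hypothetical.  -- Ed.]

```python
# Hedged sketch of a fuzzy pseudothesaurus built from keyword/document
# co-occurrence.  The membership formula is an illustrative assumption.

def pseudothesaurus(docs_by_keyword):
    """docs_by_keyword: keyword -> set of document ids.
    Returns a fuzzy relation rel[(a, b)] with values in [0, 1]."""
    rel = {}
    for a, da in docs_by_keyword.items():
        for b, db in docs_by_keyword.items():
            if da:
                # Degree to which keyword a is associated with keyword b:
                # fraction of a's documents that also contain b.
                rel[(a, b)] = len(da & db) / len(da)
    return rel

def retrieve(query_keywords, rel, docs_by_keyword, threshold=0.5):
    """Expand the query through the fuzzy relation, then collect the
    documents indexed by any sufficiently related keyword."""
    hits = set()
    for a in query_keywords:
        for (x, b), mu in rel.items():
            if x == a and mu >= threshold:
                hits |= docs_by_keyword.get(b, set())
    return hits
```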

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Oct 10 02:46:49 1986
Date: Fri, 10 Oct 86 02:46:34 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #209
Status: R


AIList Digest            Thursday, 9 Oct 1986     Volume 4 : Issue 209

Today's Topics:
  Bibliographies - Correction and Future SMU Bibliography Labels &
    Recent Kansas Technical Reports & UCLA Technical Reports

----------------------------------------------------------------------

Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Correction and Future SMU Bibliography Labels

[Lawrence Leff at SMU, who provides all those lengthy bibliographies and
article summaries, has sent the following correction for the Subject line
I added to one of the bibliographies.  -- KIL]

ai.bib35 was mistitled as references on computer vision/robotics.
This reference list contained articles on such subjects as neural
networks, urban planning, logic programming, and theorem proving as
well as vision/robotics.

In order to prevent this problem in the future, I will be entitling the
materials as ai.bibnnxx,
where nn is a consecutive number and
      xx is C for citations without descriptions,
            TR for technical reports, or
            AB for citations with descriptions
                   (annotated bibliographies).
Thus ai.bib40C means the 40th AI list in bibliography format
and the C indicates that we have a bunch of bib format references
without significant commentary.

The nn is unique over all types of bibliographies.  Thus, if there
were an ai.bib40C, then there will NOT be an ai.bib40TR or ai.bib40AB.

These designations are actually the file names for the list on my hard disk.
The shell script that wraps up the item for mailing will automatically put
the file name in the subject field.  If one of your readers uses this to
designate a file in mail to me, I can thus trivially match their query
against a specific file.
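[A filename scheme this regular is easy to match mechanically.  Below is a minimal Python sketch of the convention described above; the pattern is my reading of the description, not the author's actual shell script.  -- Ed.]

```python
import re

# Pattern for the ai.bibnnxx scheme: nn = consecutive number,
# xx = C (citations only) | TR (technical reports) | AB (annotated).
BIB_NAME = re.compile(r"^ai\.bib(\d+)(C|TR|AB)$")

TYPE_MEANING = {
    "C": "citations without descriptions",
    "TR": "technical reports",
    "AB": "annotated bibliographies (citations with descriptions)",
}

def parse_bib_name(name):
    """Return (number, type description) for a valid name, else None."""
    m = BIB_NAME.match(name)
    if not m:
        return None
    return int(m.group(1)), TYPE_MEANING[m.group(2)]
```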


Note that I no longer will be separating out references by subject
matter.  The keyword system is much more effective for allowing people
interested in specific subfields of AI to see the articles they find relevant.

Sadly the bib system program "listrefs" is having problems with citations
that contain long abstracts or commentary information.  Thus TR and AB
type references will probably cause this program to spec check.  I spent
a whole day trying to isolate the problem but have been unsuccessful.
One other self-described bib expert has the same problem.  All references
are indexable by "invert".

TR and AB type references will not use bib definition files and thus
are usable with the refer package from AT&T.  If I were not to use bib
definition files with C type reference lists, the number of bytes transmitted
for their mailing would triple.

------------------------------

Date: Fri, 5 Sep 86 15:05:40 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Recent Kansas Technical Reports


Following is a list of technical reports which have recently
been issued by the department of Computer Science of The
University of Kansas in conjunction with research done in
the department's Artificial Intelligence Laboratory.

Requests for any and all Technical Reports from the Department of
Computer Science and its various laboratories at The University
of Kansas should be sent to the following address:

Linda Decelles, Office Manager
110 Strong Hall
Department of Computer Science
The University of Kansas
Lawrence, KS  66045
U.S.A.

%A Glenn O. Veach
%T The Belief of Knowledge: Preliminary Report
%I Department of Computer Science, The University of Kansas
%R TR-86-15
%X As various researchers have attempted to present logics which
capture epistemic concepts they have encountered several difficulties.
After surveying the critiques of past efforts we propose a logic which
avoids these same faults.  We also closely explore fundamental issues
involved in representing knowledge in ideal and rational agents and
show how the similarities and differences are preserved in the logic
we present.  Several examples are given as supporting evidence for our
conclusions.  To be published in the proceedings of the 2nd Kansas
Conference: Knowledge-Based Software Development.  12 pp.

%A Glenn O. Veach
%T An Annotated Bibliography of Systems and Theory for Distributed
Artificial Intelligence
%I Department of Computer Science, The University of Kansas
%R TR-86-16
%X This paper summarizes, with extensive comment, the results of an
initial investigation of the work in distributed AI.  Some forty-plus
articles representing the major schools of thought and development are
cited and commented upon.

%A Frank M. Brown
%T Semantical Systems for Intensional Logics Based on the Modal
Logic S5+Leib
%I Department of Computer Science, The University of Kansas
%R TR-86-17
%X This paper contains two new results.  First it describes how
semantical systems for intensional logics can be represented in
the particular modal logic which captures the notion of logical
truth.  In particular, Kripke semantics is developed from this
modal logic.  The second result is the development in the modal
logic of a new semantical system for intensional logics called
B-semantics.  B-semantics is compared to Kripke semantics and it
is suggested that it is a better system in a number of ways.

------------------------------

Date: Tue, 7 Oct 86 13:32:32 PDT
From: Judea Pearl <judea@LOCUS.UCLA.EDU>
Subject: new Technical Reports

The following technical reports are now available from the
                 Cognitive Systems Laboratory
                 Room 4712, Boelter Hall
                 University of California
                 Los-Angeles, CA, 90024
           or: judea@locus.ucla.edu

_______

Pearl, J., ``Bayes and Markov Networks:    a  Comparison  of  Two
Graphical Representations of Probabilistic Knowledge,'' Cognitive
Systems Laboratory Technical Report (R-46), September 1986.

                        ABSTRACT

This paper deals with the task of configuring effective graphical
representation  for intervariable dependencies which are embedded
in a probabilistic model.  It first uncovers the axiomatic  basis
for the probabilistic relation ``  x  is independent of y , given
 z  ,'' and offers it as a formal definition for  the qualitative
notion of informational dependency.  Given an initial set of such
independence relationships, the axioms established permit  us  to
infer  new  independencies by non-numeric, logical manipulations.
Using this axiomatic basis, the paper determines those properties
of  probabilistic  models  that  can  be  captured  by  graphical
representations and compares  the  characteristics  of  two  such
representations,  Markov  Networks  and  Bayes Networks. A Markov
network  is  an  undirected  graph  where  the  links   represent
symmetrical  probabilistic dependencies, while a Bayes network is
a directed  acyclic  graph  where  the  arrows  represent  causal
influences  or  object-property relationships.  For each of these
two network types, we establish:  1) a  formal  semantic  of  the
dependencies   portrayed   by   the  networks,  2)  an  axiomatic
characterization of the class of dependencies capturable  by  the
network, 3) a method of constructing the network from either hard
data or expert judgments and 4) a summary of properties  relevant
to  its  use  as  a  knowledge representation scheme in inference
systems.

_______

Zukerman, I. & Pearl, J.,  ``Comprehension-Driven  Generation  of
Meta-Technical  Utterances  in  Math  Tutoring,''  UCLA  Computer
Science Department Technical Report CSD-860097  (R-61).

                        ABSTRACT

A technical discussion often contains conversational  expressions
like  ``however,''  ``as  I  have stated before,'' ``next,'' etc.
These expressions, denoted Meta-technical Utterances (MTUs), carry
important information which the listener uses to speed up the
comprehension process. In this research we model the meaning of
MTUs in terms of their anticipated effect on the listener's
comprehension, and use these predictions to select MTUs and weave
them  into  a  computer  generated  discourse.  This paradigm was
implemented  in  a  system  called   FIGMENT,   which   generates
commentaries on the solution of algebraic equations.

_______

Pearl,  J.,  ``Jeffrey's  Rule  and  the  Problem  of  Autonomous
Inference  Agents,''  UCLA Cognitive Systems Laboratory Technical
Report (R-62), June 1986, UCLA CSD #860099, June 1986.

                        ABSTRACT

Jeffrey's rule of belief revision was devised by philosophers  to
replace  Bayes conditioning in cases where the evidence cannot be
articulated propositionally.  This paper shows  that  unqualified
application  of this rule often leads to paradoxical conclusions,
and that to determine whether or not the rule  is  valid  in  any
specific  case,  one  must first have topological knowledge about
one's belief structure.  However, if such  topological  knowledge
is, indeed, available, belief updating can be done by traditional
Bayes conditioning; thus arises the question of whether it is
ever  necessary  to  use  Jeffrey's  rule  in  formalizing belief
revision.

_______

Pearl, J., ``Distributed Revision of Belief Commitment in  Multi-
Hypotheses  Interpretation,''  UCLA  Computer  Science Department
Technical Report CSD-860045 (R-64), June 1986; presented  at  the
2nd  AAAI  Workshop  on  Uncertainty  in Artificial Intelligence,
Philadelphia, PA., August 1986.

                        ABSTRACT

This paper extends the applications of belief-networks models  to
include the revision of belief commitments, i.e., the categorical
instantiation of a subset of hypotheses which constitute the most
satisfactory  explanation of the evidence at hand.  We show that,
in singly-connected networks, the most  satisfactory  explanation
can  be  found  in  linear  time  by  a message-passing algorithm
similar to  the  one  used  in  belief  updating.   In  multiply-
connected networks, the problem may be exponentially hard but, if
the network is sparse, topological considerations can be used  to
render  the  interpretation  task tractable.  In general, finding
the most probable combination of hypotheses is  no  more  complex
than   computing   the   degree  of  belief  for  any  individual
hypothesis.

_______

Geffner, H. & Pearl, J., ``A Distributed Approach to Diagnosis,''
UCLA   Cognitive  Systems  Laboratory  Technical  Report  (R-66),
October 1986;

                        ABSTRACT

The paper describes a distributed scheme  for  finding  the  most
likely  diagnosis  of  systems  with multiple faults.  The scheme
uses the independencies embedded in a  system  to  decompose  the
task  of  finding a best overall interpretation into smaller sub-
tasks of finding the best interpretations  for  subparts  of  the
net,  then  combining them together.  This decomposition yields a
globally-optimum diagnosis by local and  concurrent  computations
using  a message-passing algorithm.  The proposed scheme offers a
drastic   reduction   in   complexity   compared    with    other
methods:  attaining linear time in singly-connected networks and,
at worst,  exp ( |  cycle-cutset  | )  time in multiply-connected
networks.

_______

Pearl, J., ``Evidential Reasoning Using Stochastic Simulation  of
Causal  Models,''  UCLA  Cognitive  Systems  Laboratory Technical
Report (R-68-I), October 1986.

                        ABSTRACT

Stochastic simulation is a method of computing  probabilities  by
recording  the  fraction  of  time  that events occur in a random
series of scenarios generated from some causal model.  This paper
presents  an  efficient,  concurrent  method  of  conducting  the
simulation which guarantees that all generated scenarios will  be
consistent  with  the  observed  data.   It  is  shown  that  the
simulation can  be  performed  by  purely  local  computations,
involving   products   of   parameters   given
with the initial specification of the model.   Thus,  the  method
proposed  renders  stochastic  simulation a powerful technique of
coherent  inferencing,  especially  suited  for  tasks  involving
complex,  non-decomposable models where ``ballpark'' estimates of
probabilities will suffice.
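[The idea of computing probabilities from the fraction of generated scenarios in which an event occurs can be illustrated on a toy two-variable causal model.  The sketch below uses simple rejection of scenarios inconsistent with the observation, whereas the paper's method guarantees consistency by construction; the model and numbers are invented for illustration.  -- Ed.]

```python
import random

# Toy causal model: rain -> wet grass.  All parameters are illustrative.
P_RAIN = 0.3
P_WET_GIVEN_RAIN = 0.9
P_WET_GIVEN_DRY = 0.1

def simulate(n_scenarios=100_000, observed_wet=True, seed=0):
    """Estimate P(rain | wet) as the fraction of scenarios, among those
    consistent with the observation, in which it rained."""
    rng = random.Random(seed)
    consistent = rained = 0
    for _ in range(n_scenarios):
        rain = rng.random() < P_RAIN
        p_wet = P_WET_GIVEN_RAIN if rain else P_WET_GIVEN_DRY
        wet = rng.random() < p_wet
        if wet == observed_wet:  # keep only consistent scenarios
            consistent += 1
            rained += rain
    return rained / consistent
```

Exact Bayes conditioning gives P(rain | wet) = 0.27 / 0.34, roughly 0.79; the simulation converges to that value as the number of scenarios grows.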

_______

Pearl, J., ``Legitimizing Causal Reasoning  in  Default  Logics''
(note),  UCLA  Cognitive  Systems Laboratory Technical Report (R-
69), September 1986.

                        ABSTRACT

The purpose of this note is to draw attention to certain  aspects
of  causal  reasoning  which  are pervasive in ordinary discourse
yet, based on the author's  scan  of  the  literature,  have  not
received  due  treatment  by  logical  formalisms of common-sense
reasoning. In a nutshell, it appears that  almost  every  default
rule  falls  into one of two categories:   expectation-evoking or
explanation-evoking.  The  former  describes  association   among
events  in the outside world (e.g., Fire is typically accompanied
by smoke.); the latter describes how we reason  about  the  world
(e.g.,  Smoke  normally  suggests  fire.).   This  distinction is
clearly and reliably recognized by all people and  serves  as  an
indispensable tool for controlling the invocation of new default
rules. This note questions  the  ability  of  formal  systems  to
reflect   common-sense   inferences  without  acknowledging  such
distinction and outlines a way in which the flow of causation can
be summoned within the formal framework of default logic.

_______

Dechter, R. & Pearl, J., ``The Cycle-Cutset Method for  Improving
Search  Performance  in AI Applications,'' UCLA Cognitive Systems
Laboratory Technical Report (R-67); submitted  to  the  3rd  IEEE
Conference on Artificial Intelligence Applications.

                        ABSTRACT

This paper introduces a new way of improving search performance by
exploiting an efficient method available for solving tree-structured
problems.  The scheme is based on the following observation:  If, in
the course of a backtrack search, we remove from the constraint-graph
the nodes corresponding to instantiated variables and find that the
remaining subgraph is a tree, then the rest of the search can be
completed in linear time.  Thus, rather than continue the search
blindly, we invoke a tree-searching algorithm tailored to the topology
of the remaining subproblem.  The paper presents this  method in detail
and evaluates its merit both theoretically and experimentally.
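The key observation (once the uninstantiated variables form a tree, the
residual problem is solvable without further backtracking) can be sketched
as follows.  This is an illustrative sketch, not the authors' code: the
tree routine is the standard arc-consistency-then-assign procedure for
tree-structured binary CSPs, and all names are invented.

```python
from collections import defaultdict

def remaining_is_tree(nodes, edges):
    """A graph is a tree iff it is connected and has exactly n-1 edges."""
    if not nodes:
        return True
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(nodes) and len(edges) == len(nodes) - 1

def solve_tree_csp(nodes, edges, domains, check):
    """Solve a tree-structured binary CSP in O(n * d^2).

    check(u, v, a, b) says whether u=a is compatible with v=b."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    root = nodes[0]
    order, parent = [root], {root: None}
    i = 0
    while i < len(order):                 # BFS ordering from the root
        u = order[i]; i += 1
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    dom = {v: list(domains[v]) for v in nodes}
    for u in reversed(order[1:]):         # make parent arcs consistent, leaves first
        p = parent[u]
        dom[p] = [a for a in dom[p] if any(check(p, u, a, b) for b in dom[u])]
        if not dom[p]:
            return None                   # residual subproblem is unsolvable
    sol = {root: dom[root][0]}            # a backtrack-free sweep now suffices
    for u in order[1:]:
        p = parent[u]
        sol[u] = next(b for b in dom[u] if check(p, u, sol[p], b))
    return sol
```

In the cycle-cutset setting, remaining_is_tree would be checked on the
constraint graph minus the instantiated variables, and solve_tree_csp
invoked on the remaining subproblem instead of continuing blind backtracking.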

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Oct 10 02:48:06 1986
Date: Fri, 10 Oct 86 02:47:53 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe
Subject: AIList Digest   V4 #210
Status: R


AIList Digest            Thursday, 9 Oct 1986     Volume 4 : Issue 210

Today's Topics:
  Conferences -  Expert Systems in Government &
    IEEE Systems, Man and Cybernetics

----------------------------------------------------------------------

Date: Wed, 01 Oct 86 13:03:52 -0500
From: Duke Briscoe <duke@mitre.ARPA>
Subject: Final Program for Expert Systems in Government Conference

The Second Annual Expert Systems in Government Conference, sponsored by
the Mitre Corporation and the IEEE Computer Society in association with
the AIAA National Capital Section will be held October 20-24, 1986 at
the Tyson's Westpark Hotel in McLean, VA.  There is still time to register,
but late registration charges will be added after October 6.

October 20-21  Tutorials

Monday, October 20

Full Day Tutorial:      Advanced Topics in Expert Systems
                        by Kamran Parsaye, IntelligenceWare, Inc.

Morning Tutorial:       Knowledge Base Design for Rule Based Expert Systems
                        by Casimir Kulikowski, Rutgers University

Afternoon Tutorial:     Knowledge Base Acquisition and Refinement
                        by Casimir Kulikowski, Rutgers University

Tuesday, October 21

Morning Tutorial:       Distributed Artificial Intelligence
                        by Barry Silverman, George Washington University

Morning Tutorial:       Introduction to Common Lisp
                        by Roy Harkow, Gold Hill

Afternoon Tutorial:     Lisp for Advanced Users
                        by Roy Harkow, Gold Hill

Afternoon Tutorial:     The Management of Expert System Development
                        by Nancy Martin, Softpert Systems


October 22-24  Technical Program

Wednesday, October 22

9 - 10:30
Conference Chairman's Welcome
Keynote Address: Douglas Lenat, MCC
Program Agenda

11am - 12pm

Track A: Military Applications I

K. Michels, J. Burger; Missile and Space Mission Determination

Major R. Bahnij, Major S. Cross;
A Fighter Pilot's Intelligent Aide for Tactical Mission Planning

Track B: Systems Engineering

R. Entner, D. Tosh; Expert Systems Architecture for Battle Management

H. Hertz; An Attribute Referenced Production System

B. Silverman; Facility Advisor: A Distributed Expert System Testbed for
Spacecraft Ground Facilities

12pm - 1pm Lunch, Distinguished Guest Address,
           Harry Pople, University of Pittsburgh

1pm - 2:30pm

Track A: Knowledge Acquisition

G. Loberg, G. Powell
Acquiring Expertise in Operational Planning: A Beginning

J. Boose, J. Bradshaw; NeoETS: Capturing Expert System Knowledge

K. Kitto, J. Boose; Heuristics for Expertise Transfer

M. Chignell; The Use of Ranking and Scaling in Knowledge Acquisition

Track B: Expert Systems in the Nuclear Industry

D. Sebo et al.; An Expert System for USNRC Emergency Response

D. Corsberg; An Object-Oriented Alarm Filtering System

J. Jenkins, W. Nelson; Expert Systems and Accident Management

3pm - 5pm

Track A: Expert Systems Applications I

W. Vera, R. Bolczac; AI Techniques Applied to Claims Processing

R. Tong, et al.; An Object-Oriented System for Information Retrieval

D. Niyogi, S. Srihari; A Knowledge-based System for Document Understanding

R. France, E. Fox; Knowledge Representation in CODER

Track B: Diagnosis and Fault Analysis

M. Taie, S. Srihari; Device Modeling for Fault Diagnosis

Z. Xiang, S. Srihari; Diagnosis Using Multi-level Reasoning

B. Dixon; A Lisp-Based Fault Tree Development Environment

Panel Track:
1pm - 5pm       Management of Uncertainty in Expert Systems
Chair:          Ronald Yager, Iona College
Participants:   Lotfi Zadeh, UC Berkeley
                Piero Bonissone, G.E.
                Laveen Kanal, University of Maryland
                Peter Cheeseman, NASA-Ames Research Center
                Prakash Shenoy, University of Kansas


Thursday, October 23

9am - 10:30am

Track A: Knowledge Acquisition and Applications

E. Tello; DIPOLE - An Integrated AI Architecture

H. Chung; Experimental Evaluation of Knowledge Acquisition Methods

H. Gabler; IGOR - An Expert System for Crash Trauma Assessment

K. Chhabra, K. Karna; Expert Systems in Electronic Filings

Track B: Aerospace Applications of Expert Systems

D. Zoch; A Real-time Production System for Telemetry Analysis

J. Schuetzle; A Mission Operations Planning Assistant

D. Brauer, P. Roach; Ada Knowledge Based Systems

F. Rook, T. Rubin; An Expert System for Conducting a
Satellite Stationkeeping Maneuver

Panel Track: Star Wars and AI
Chair: John Quilty, Mitre Corp.
Participants:   Brian P. McCune, Advanced Decision Systems
                Lance A. Miller, IBM
                Edward C. Taylor, TRW

11am - 12pm
Plenary Address:
B. Chandrasekaran; The Future of Knowledge Acquisition

12pm - 1pm Lunch

1pm - 2:30pm

Track A: Inexact and Statistical Measures

K. Lecot; Logic Programs with Uncertainties

N. Lee; Fuzzy Inference Engines in Prolog/P-Shell

J. Blumberg; Statistical Entropy as a Measure of Diagnostic Uncertainty

Track B: High Level Tools for Expert Systems

S. Shum, J. Davis; Use of CSRL for Diagnostic Expert Systems

E. Dudzinski, J. Brink; CSRL: From Laboratory to Industry

D. Herman, J. Josephson, R. Hartung; Use of the DSPL
for the Design of a Mission Planning Assistant

J. Josephson, B. Punch, M. Tanner; PEIRCE: Design Considerations
for a Tool for Abductive Assembly for Best Explanation

Panel Track: Application of AI in Telecommunications
Chair: Shri Goyal, GTE Labs
Participants:   Susan Conary, Clarkson University
                Richard Gilbert, IBM Watson Research Center
                Raymond Hanson, Telenet Communications
                Edward Walker, BBN
                Richard Wolfe, AT&T Bell Labs

3pm - 5pm

Track A: Expert System Implementations

S. Post; Simultaneous Evaluation of Rules to Find Most Likely Solutions

L. Fu; An Implementation of an Expert System that Learns

R. Frail, R. Freedman; OPGEN Revisited

R. Ahad, A. Basu; Explanation in an Expert System

Track B: Expert System Applications II

R. Holt; An Expert System for Finite Element Modeling

A. Courtemanche; A Rule-based System for Sonar Data Analysis

F. Merrem; A Weather Forecasting Expert System


Panel Track: Command and Control Expert Systems
Chair:          Andrew Sage, George Mason University

Participants:   Peter Bonasso, Mitre
                Stephen Andriole, International Information Systems
                Paul Lehner, PAR
                Leonard Adelman, Government Systems Corporation
                Walter Beam, George Mason University
                Jude Franklin, PRC


Friday, October 24

9am - 12pm: Expert Systems in the Classified Community
The community building expert systems for
classified applications is unsure of the value and feasibility of some
form of communication within the community.  This will be a session
consisting of discussions and working sessions, as appropriate, to
explore these issues in some depth for the first time, and to make
recommendations for future directions for the classified community.

9am - 10:30am

Track A: Military Applications

Bonasso, Benoit, et al.;
An Experiment in Cooperating Expert Systems for Command and Control

J. Baylog; An Intelligent System for Underwater Tracking

J. Neal et al.; An Expert Advisor on Tactical Support Jammer Configuration

Track B: Expert Systems in the Software Lifecycle

D. Rolston; An Expert System for Reducing Software Maintenance Costs

M. Rousseau, M. Kutzik; A Software Acquisition Consultant

R. Hobbs, P. Gorman; Extraction of Data System Requirements

Panel Track: Next Generation Expert System Shells
Chair: Art Murray, George Washington University
Participants:   Joseph Fox, Software A&E
                Barry Silverman, George Washington University
                Chuck Williams, Inference
                John Lewis, Martin Marietta Research Labs

11am - 12pm

Track A: Spacecraft Applications

D. Rosenthal; Transformation of Scientific Objectives
into Spacecraft Activities

M. Hamilton et al.; A Spacecraft Control Anomaly Resolution Expert System

Track B: Parallel Architectures

L. Sokol, D. Briscoe; Object-Oriented Simulation on a
Shared Memory Parallel Architecture

J. Gilmer; Parallelism Issues in the CORBAN C2I Representation

Panel Track: Government Funding of Expert Systems
Chair: Commander Allen Sears, DARPA
Participants: Randall Shumaker, and others

Conference Chairman: Kamal Karna
Unclassified Program Chairman: Kamran Parsaye
Classified Program Chairman: Richard Martin
Panels Chairman: Barry Silverman
Tutorials Chairman: Steven Oxman


Registration information can be requested from
Ms. Gerrie Katz
IEEE Computer Society
1730 Massachusetts Ave. N.W.
Washington, D.C.  20036-1903
(202) 371-0101

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Conference - IEEE Systems, Man and Cybernetics

1986 IEEE International Conference on Systems, Man and Cybernetics, AI papers
October 14-17, 1986 Pierremont Plaza Hotel, Atlanta, GA 30308

Wednesday October 15 8AM - 9:40 AM

On Neural-Model Based Cognitive Theory and Engineering: Introduction
  N. DeClaris
Matrix and Convolution Models of Brain Organization in Cognition
  K. H. Pribram
Explorations in Brain Style Computations
  D. E. Rumelhart
Representing and Transforming Recursive Objects in a Neural Network, or "Trees
Do Grow on Boltzmann Machines"
  D. S. Touretzky
Competition-Based Connectionist Models of Associative Memory
  J. A. Reggia, S. Millheim, A. Freeman
A Parallel Network that Learns to Read Aloud
  T. J. Sejnowski
A Theory of Dialogue Structures to Help Manage Human Computer Interaction
  D. L. Sanford, J. W. Roach
A User Interface for a Knowledge-Based Planning and Scheduling System
  A. M. Mulvehill
Orthonormal Decompositions in Adaptive Systems
  L. H. Sibul
An "Evolving Frame" Approach to Learning with Application to Adaptive
Navigation
  R. J. P. de Figueredo, K. H. Wang
Approaches to Machine Learning with Genetic Algorithms
  J. Grefenstette, C. B. Pettey
Use of Voice Recognition for Control of a Robotic Welding Workcell
  J. K. Watson, D. M. Todd, C. S. Jones
A Knowledge Based System for the CST Diagnosis
  C. Hernandez, A. Alonso, Z. Wu
A Qualitative Model of Human Interaction with Complex Dynamic Systems
  R. A. Hess
Evaluating Natural Language Interfaces to Expert Systems
  R. M. Weischedel
Expert System Metrics
  S. Kaisler
Global Issues in Evaluation of Expert Systems
  N. E. Lane
A Scenario-Based Test Tool for Examining Expert Systems
  E. T. Scambos

10AM - 11:40AM

A Comparison of Some Inductive Learning Methodologies
  D. W. Patterson
Induction of Finite Automata by Genetic Algorithms
  H. Y. Zhou, J. J. Grefenstette
NUNS: A Machine Intelligence Concept for Learning Object Class Domains
  B. V. Dasarathy
Toward a Paradigm for Automated Understanding of 3-D Medical Images
  E. M. Stokely, T. L. Faber
Development of an Expert System for Interpreting Medical Images
  N. F. Ezquerra, E. V. Garcia, E. G. DePuey, W. L. Robbins
Edge Enhancement in Medical Images by 3D Processing
  J. E. Boyd, R. A. Stein
Scheme for Three Dimensional Reconstruction of Surfaces from CT and MRI Images
of the Human Body
  R. Tello, R. W. Mann, D. Rowell
Using the Walsh-Hadamard Phase Spectrum to Generate Cardiac Activation Movies-
A Feasibility Study
  H. Li
Things We Learned by Making Expert Systems to Give Installation Advice for
UNIX 4.2BSD and to Help Connect a Terminal to a Computer
  A. T. Bahill, E. Senn, P. Harris
A Heuristic Search/Information Theory Approach to Near Optimal Diagnostic
Test Sequencing
  K. R. Pattipati, J. C. Deckert, M. G. Alexandris
An Expert System for the Estimation of Parameter Values of Water Quality
Model
  W. J. Chen
Application of an Expert System to Error Detection and Correction in a
Speech Recognition System
  K. H. Loken-Kim, M. Joost, E. Fisher

1PM - 1:50PM
Topic: Holonomic Brain Theory and The Concept of Information
  Karl H. Pribram

2-3:40PM
An Interactive Machine Intelligence Development System for Generalized 2-D
Shapes Recognition
  B. V. Dasarathy
Modelling of Skilled Behaviour and Learning
  T. Bosser
Design of a User Interface for Automated Knowledge Acquisition
  A. S. Wolff, B. L. Hutchins, E. L. Cochran, J. R. Allard, P. J. Ludlow
OFMspert: An Operator Function Model Expert System
  C. M. Mitchell, K. S. Rubin, T. Govindaraj
An Adaptive Medical Information System
  N. DeClaris
Intermediate Level Heuristics for Road-finding Algorithms
  S. Vasudevan, R. L. Cannon, J. C. Bezdek, W. C. Cameron
Computer-Disambiguation of Multi-Character Key Text Entry: An Adaptive Design
Approach
  S. H. Levine, S. Minneman, C. Getschow, C. Goodenough-Trepaigner
An Interactive and Data Adaptive Spectrum Analysis System
  C. H. Chen, A. H. Costa
On How Two-Action Ergodic Learning Automata can Utilize A Priori Information
  B. J. Oommen
VLSI Implementation of an Iterative Image Restoration Algorithm
  A. K. Katsaggelos, P. R. Kumar, M. Samanthan
Development of Automated Health Testing and Services System via Fuzzy Reasoning
  E. Tazaki, Y. Hayashi, K. Yoshida, A. Koiwa
Knowledge-Based Interaction Tools
  R. Neches
Bibliographic Information Retrieval Systems: Using AI Techniques to Improve
Cognitive Compatibility and Performance
  P. J. Smith, D. A. Krawczak, S. J. Shute, M. H. Chignell, M. Sater

4PM-5:40PM
An Evidential Approach to Robot Sensory Fusion
  J. H. Graham
Retinal Ganglion Cell Processing of Spatial Information in Cats
  J. Troy, J. G. Robson, C. Enroth-Cugell
Texture Discriminants from Spatial Frequency Channels
  G. A. Wright, M. E. Jernigan
Contextual Filters for Image Processing
  G. F. McLean, M. E. Jernigan
Using Cognitive Psychology Techniques for Knowledge Acquisition
  A. H. Silva, D. C. Regan
Transfer of Knowledge from Domain Expert to Expert System: Experience
Gained from JAMEX
  J. G. Neal, D. J. Funke
Methodological Tools for Knowledge Acquisition
  K. L. Kessel
Downloading the Expert: Efficient Knowledge Acquisition for Expert Systems
  J. H. Lind
Integration of Phenomenological and Fundamental Knowledge in Diagnostic
Expert Systems
  L. M. Fu
Integrating Knowledge Acquisitions Methods
  P. Dey, K. D. Reilly
Multi Processing of Logic Programs
  G. J. Li, B. W. Wah
A Model for Parallel Processing of Production Systems
  D. I. Moldovan
Several Implementations of Prolog, the Microarchitecture Perspective
  Y. N. Patt
A Parallel Symbol-Matching Co-processor for Rule Processing Systems
  D. F. Newport, G. T. Alley, W. L. Bryan, R. O. Eason, D. W. Bouldin
The Connection Machine Architecture
  W. D. Hillis, B. S. Kahle

Thursday, October 16th 8AM-9:40AM
Transformation Invariance Using High Order Correlations in Neural Net
Architectures
  T. P. Maxwell, C. L. Giles, Y. C. Lee, H. H. Chen
A Neural Network Digit Recognizer
  D. J. Burr
Computational Properties of a Neural Net with a Triangular Lattice Structure
and a Traveling Activity Peak
  R. Eckmiller
Fuzzy Multiobjective Mathematical Programming's Application to Cost Benefit
Analysis
  L. Xu
Evaluation of the Cause Diagnosis Function of a Prototype Fuzzy-Logic-Based
Knowledge System for Financial Ratio Analysis
  F. J. Ganoe, T. H. Whalen, C. D. Tabor
Knowledge Integration in Financial Expert Systems
  P. D. Crigler, P. Dey
Pyramid and Quadtree Architectures in Point Pattern Segmentation and Boundary
Extraction
 B. G. Mobasseri
Causality in Pattern Recognition
  S. Vishnubhatla
Network Biovisitrons for High-Level Pattern Recognition
  D. M. Clark, F. Vaziri
Giving Advice as Extemporaneous Elaboration
  M. A. Bienkowski
Dynamics of Man-Machine Interaction in a Conversational Advisory System
  A. V. Gershman, T. Wolf

10AM -11:40AM
A Method for Medial Line Transformation
  E. Salari
An Alternative Implementation Strategy for a Variety of Image Processing
Algorithms
  R. Saper, M. E. Jernigan
A Semantic Approach to Image Segmentation
  S. Basu
A Skeletonizing Algorithm with Improved Isotropy
  D. J. Healy
The Application of Artificial Intelligence to Manufacturing Control
  P. J. O'Grady, K. H. Lee, M. Brightman
An Expert System for Design of Flexible Manufacturing Systems
  D. E. Brown, G. Anandalingam
A Derivational Approach to Plan Refinement for Advice Giving
  R. Turner
The Role of Plan Recognition in Design of an Intelligent User Interface
  C. A. Broverman, K. E. Khuff, V. Lesser
Discussant
  J. L. Kolodner
Voice Input in Real-time Decision Making
  M. G. Forren, C. M. Mitchell

2:00PM-3:40PM
The Use of Artificial Intelligence in CAI for Science Education
  G. S. Owen
Design of an Intelligent Tutoring System (ITS) for Aircraft Recognition
  D. R. Polwell, A. E. Andrews
A Rule-Based Bayesian Architecture for Monitoring Learning Process in ICAI
Systems
  T. R. Sivasankaran, T. Bui
A Knowledge Based System for Transit Planning
  A. Mallick, A. Boularas, F. DiCesare
On the Acquisition and Processing of Uncertain Information in Rule-Based
Decision Support Systems
  S. Gaglio, R. Minciardi, P. P. Puliafito
Lambertian Spheres Parameter Estimation from a Single 2-D Image
  B. Cernuschi-Frias, D. B. Cooper
A Solution to the Stereo Correspondence Problem using Disparity Smoothness
Constraints
  N. H. Kim, A. C. Bovik
Registration of Serial Sectional Images for 3-D Reconstruction
  M. Sun, C. C. Li
Rotation-Invariant Contour DP Matching Method for 3D Object Recognition
  H. Yamada, M. Hospital, T. Kasvand
CAD Based 3-D Models for Computer Vision
  B. Bhanu, C. C. Ho, S. Lee
A Rule-Based System for Forming Sequence Design for Multistage Cold Forging
  K. Sevenler, T. Altan, P. S. Raghupathi, R. A. Miller
Automated Forging Design
  A. Tang
Geometry Representation to Aid Automated Design on Blocker Forging
  K. R. Vemuri
Intelligent Computing ("The Sixth Generation"): A Japanese Initiative
  R. E. Chapman
The Influence of the United States and Japan on Knowledge Systems of the Future
  B. A. Galler
Knowledge is Structured in Consciousness
  T. N. Scott, D. D. Scott
Knowledge Science-Towards the Prosthetic Brain
  M. L. Shaw
Socio-Economic Foundations of Knowledge Science
  B. R. Gaines

4:00 PM - 5:40PM
Fuzzy and Vector Measurement of Workload
  N. Moray, P. Eisen, G. Greco, E. Krushelnycky, L. Money, B. Muir, I. Noy,
  F. Shein, B. Turksen, L. Waldon
Toward an Empirically-based Process Model for a Machine Programming Tutor
  D. Littman, E. Soloway
An Intelligent Tutor for Thinking about Programming
  J. Bonar
An Expert System for Partitioning and Allocating Algorithms
  M. M. Jamali, G. A. Julien, S. L. Ahmad
A Knowledge Increasing Model of Image Understanding
  G. Tascini, P. Puliti
An Artificial Intelligence Approach for Robot-Vision in Assembly Applications
Environment
  K. Ouriachi, M. Bourton
Visible Surface Reconstruction under a Minimax Criterion
  C. Chu, A. C. Bovik
A Measurement of Image Concordance Using Replacement Rules
  R. Lauzzana
High-Level Vision Using a Rule-Based Language
  M. Conlin
An Expert Consultant for Manufacturing Process Selection
  A. Kar
A Knowledge Representation Scheme for Processes in an Automated Manufacturing
Environment
  S. R. Ray
Making Scheduling Decisions in an F. M. S. Using the State-Operator Framework
in A. I.
  S. De, A. Lee
Intelligent Exception Processing for Manufacturing Workstation Control
  F. DiCesare, A. Desrochers, G. Goldbergen
Knowledge of Knowledge and the Computer
  J. A. Wojciechowski
Paradigm Change in the Sixth Generation Approach
  W. H. C. Simmonds
Educational Implications of Knowledge Science
  P. Zorkoczy
From Brain Theory to the Sixth Generation Computer
  M. A. Arbib

Friday, October 17 8:00 AM - 9:40 AM
Development of an Intelligent Tutoring System
  K. Kawamura, J. R. Bourne, C. Kinzer, L. Cozean, N. Myasaka, M. Inui
CALEB: An Intelligent Second Language Tutor
  P. Cunningham, T. Iberall, B. Woolf
A Methodology for Development of a Computer-Aided Instruction Program in
Complex, Dynamic Systems
  J. L. Fath, C. M. Mitchell, T. Govindaraj
Matching Strategies in Error Diagnosis: A Statistics Tutoring Aid
  M. M. Sebrechts, L. J. Schooler, L. LaClaire
Using Prolog for Signal Flow Graph Reduction
  C. P. Jobling, P. Grant
A Self-Organizing Soft Clustering Algorithm
  M. A. Ismail
A Modified Fisher Criterion for Feature Extraction
  A. Atiya
A Model of Human Kanji Character Recognition
  K. Yokosawa, M. Umeda, E. Yodogawa
Efficient Recognition of Omni-Font Characters using Models of Human
Pattern Perception
  D. A. Kerrick, A. C. Bovik
Printed Character Recognition Using an Artificial Visual System
  J. M. Coggins, J. T. Poole
Multiobjective Intelligent Computer Aided Design
  E. A. Sykes, C. C. White
Knowledge Engineering for Interactive Tactical Planning: A Tested Approach
with General Purpose Potential
  S. J. Andriole
ESP- A Knowledge-Aided Design Tool
  J. F. King, E. Hushebeck
A Study of Expert Decision Making in Design Processes
  R. M. Cohen, J. H. May, H. E. Pople
An Intelligent Design Aid for Large Scale Systems with Quantity Discount
Pricing
  A. R. Spillane, D. E. Brown

10AM - 11:40AM
NeoETS: Interactive Expertise Transfer for Knowledge-Based Systems
  J. H. Boose, J. M. Bradshaw
PCS: A Knowledge-Based Interactive System for Group Problem Solving
  M. L. Shaw
Cognitive Models of Human-Computer Interaction in Distributed Systems
  B. R. Gaines
The Use of Expert Systems to Reduce Software Specification Errors
  S. B. Ahmed, K. Reside
Structure Analysis for Gray Level Pictures on a Mesh Connected Computer
  J. El Mesbahi, J. S. Cherkaoui
Pattern Classification on the Cartesian Join System: A General Tool for
Feature Selection
  M. Ichino
Texture Discrimination using a Model of the Visual Cortex
  M. Clark, A. C. Bovik
Surface Orientation from Texture
  J. M. Coggins, A. K. Jain
Classification of Surface Defects on Wood Boards
  A. J. Koivo, C. Kim
ADEPT: An Expert System for Finite Element Modeling
  R. H. Holt, U. Narayana
KADD: An Environment for Interactive Knowledge Aided Display Design
  P. R. Frey, B. J. Widerholt

3PM - 3:40PM
Assigning Weights and Ranking Information Importance in an Object Identification
Task
  D. M. Allen
Third Generation Expert Systems
  J. H. Murphy, S. C. Chay, M. M. Downs
Reasoning with Comparative Uncertainty
  B. K. Moore
On a Blackboard Architecture for an Object-Oriented Production System
  D. Doty, R. Wachter
Pattern Analysis of N-dimensional Digital Images
  E. Khalimsky

------------------------------

End of AIList Digest
********************

From csnet_gateway Fri Oct 10 04:43:21 1986
Date: Fri, 10 Oct 86 04:43:07 edt
From: csnet_gateway (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #211
Status: RO


AIList Digest            Friday, 10 Oct 1986      Volume 4 : Issue 211

Today's Topics:
  Queries - Line-Drawing Recognition & Cognitive Neuroscience,
  Schools - Cognitive Science at SUNY,
  AI Tools - XILOG & Public-Domain Prolog,
  Review - Canadian Artificial Intelligence,
  Logic Programming - Prolog Multiprocessors Book,
  Learning - Multilayer Connectionist Learning Dissertation

----------------------------------------------------------------------

Date: 9 Oct 86 07:52:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: request for references on drawings


I'd appreciate getting references to any work on automatic
comparison or classification of drawings, especially technical
drawings and blueprints.  For instance, a system which, when
presented with a blueprint, can recognize it as a left-handed
widget, etc.  Please send replies directly to me - thanks.

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: Mon, 6 Oct 86 13:17:40 edt
From: klahr@nyu-csd2.arpa (Phillip Klahr)
Subject: Cognitive Neuroscience


        For my Neuroscience qualifying exam, I am looking for articles,
books, or reviews that discuss the interface/contribution of AI research on
vision and memory to "Cognitive Neuroscience".  By Cognitive Neuroscience, I
mean the study of theories and methods by which the different parts of the
brain go about processing information, such as vision and memory.  To give you
an idea of "ancient works" I am starting with, I am already looking at:

        Wiener's "Cybernetics", von Neumann's "The Computer and the Brain",
Rosenblatt's "Principles of Neurodynamics", Arbib's "Metaphorical Brain", and
Hebb's "The Organization of Behavior".

Some of the neurophysiology work I am looking at already includes work by
Mortimer Mishkin and Larry Squire on memory in the monkey.

Any pertinent references you can think of will be very much appreciated, and,
if there is any interest, I will post a summary of any responses I get.

Thank you very much.
                Phillip Klahr, Albert Einstein College of Medicine
 klahr@NYU-CSD2.ARPA           UUCP: {allegra,seismo,ihnp4}!cmcl2!csd2!klahr

------------------------------

Date: Mon, 29 Sep 86 10:46:55 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Cognitive Science Schools

In article <8609221503.AA15901@mitre.ARPA> schwamb@MITRE.ARPA writes:
>Well, now that some folks have commented on the best AI schools in
>the country, could we also hear about the best Cognitive Science
>programs?  Cog Sci has been providing a lot of fuel for thought to
>the AI community and I'd like to know where one might specialize
>in this.
>
>Thanks, Karl   (schwamb@mitre)

The SUNY Buffalo Graduate Group in Cognitive Science was formed to
facilitate cognitive-science research at SUNY Buffalo.  Its activities
have focused on language-related issues and knowledge representation.
These two areas are well-represented at SUNY Buffalo by the research
interests of faculty and graduate students in the Group.

The Group draws its membership primarily from the Departments of
Computer Science, Linguistics, Philosophy, Psychology, and Communicative
Disorders, with many faculty from other departments (e.g., Geography,
Education) involved on a more informal basis.  A current research project
on deixis in narrative is being undertaken by a research subgroup.

While the Group does not offer any degrees by itself, a Cognitive
Science "focus" in a Ph.D. program in one of the participating
disciplines is available.

There is also a Graduate Group in Vision.

For further details, see AI Magazine, Summer 1986, or contact:

                William J. Rapaport
                Assistant Professor of Computer Science
                Co-Director, Graduate Group in Cognitive Science

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3180

uucp:   ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet:  rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet

------------------------------

Date: 1 Oct 86 20:02:38 GMT
From: mcvax!unido!ecrcvax!bruno@seismo.css.gov  (Bruno Poterie)
Subject: XILOG

Well, I know of at least one Prolog system on PC/AT which is:

        - fully C&M compatible,
                with the exception of the top-level mode
                        (consults by default (terms ended with a dot),
                         executes on request (terms ended with a question mark))

        - all defined C&M predicates,
                i/o, program manipulation, term scanning & construction,
                integer and float arithmetics, ...

        plus the following features:

        - full window, semi-graphics & color management

        - modularity for debugging, program handling, etc..
                ( but *no* separate dictionaries)
          through a hierarchy of "zones"

        - on-line precise help

        - on-line clause editor

        - complete typing mechanism, allowing full object definition,
          constraint checking, etc...

        - functional mechanism, allowing each call to return any Prolog term
          as a return value through an accumulator (backtracked/trailed)
                (the arithmetic is implemented using this mechanism,
                 and you may extend it as you want)

        - non-backtrackable global cells and arrays

        - backtracking arrays, with functional notation and access

        - access to MSDOS

        - sound system

        and some other less (sic) important goodies, like a debugger based
        on the Box model, etc...


        oh, I forgot:

        under development, and rather advanced by now, are:

        - an incremental compiler to native code with incremental linking
          (with full integration with the interpreter, of course)

        - an interface to C programs

        - a toolkit for development of applications, with an utilities library

        - and maybe a message sending mechanism (but I'm not sure for it)


The name of this system is:

                XILOG

        and it is made and distributed by (the Research Center of) BULL,
        the biggest French computer company.

        if interested, contact:

                CEDIAG
                BULL
                68, route de Versailles
                F-78430 Louveciennes
                FRANCE

        or:

                Dominique Sciamma
                (same address)


        don't fear, they do speak English there! :-)


P.S.: I should point out that I have no commercial interest at all in this
product, but I really think that XILOG is the best Prolog for micros I have
ever come across.

================================================================================
  Bruno Poterie         # ... a life is very little, compared to a cat ...
  ECRC GmbH             #               tel: (49)89/92699-161
  Arabellastrasse 17    #               Tx: 5 216 910
  D-8000 MUNICH 90      #               mcvax!unido!ecrcvax!bruno
  West Germany          #               bruno%ecrcvax.UUCP@Germany.CSNET
================================================================================

------------------------------

Date: 8 Oct 86 20:26:15 GMT
From: ucdavis!ucrmath!hope!fiore@ucbvax.Berkeley.EDU  (David Fiore)
Subject: Re: pd prolog

> Xref: ucbvax net.micro:451 net.micro.pc:821 net.ai:91
>
> Does anyone have the public domain prolog package discussed in this month's
> BYTE magazine?
>
> John E. Jacobsen
> University of Wisconsin -- Madison Academic Computing Center

   I have a copy of pdprolog here with me.  It is the educational version.
   I don't know if that is the one described in BYTE as I haven't read that
   magazine lately.


      ||
      ||         David Fiore, University of California at Riverside.
 =============
      ||         Slow mail   :  1326 Wheaton Way
      ||                        Riverside, Ca.  92507
      ||         E-Mail
      ||            UseNet   : ...!ucdavis!ucrmath!hope!fiore
      ||            BITNET   : consult@ucrvms

                Have another day!

    "...and at warp eight, we're going nowhere mighty fast"

------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Canadian Artificial Intelligence/ September 1986

Summary No. 9

Report on current budget and increase in dues

The Dalhousie Business School got a Sperry Explorer Lisp Machine
and a copy of KEE.  They are developing a system to manage foreign
debts and plan an estimator for R&D projects, intelligent computer
aided instruction and auditing.

Xerox Canada has set up an AI support work

Logicware has been acquired by the Nexa Group

The British Columbia Advanced Systems Institute will be set up to do
research on AI, robotics, and microelectronics.

__________________________________________________________________________

Two assessments on the Japanese Fifth Generation project:

ICOT is developing AI systems for fishing fleets, train control,
microchip design, and natural language translation.  There are 600
researchers working on fifth generation projects and 600 on robotics.

1986-1988 funding is 102 billion yen and 1982-92 funding is 288 billion.

The English-to-Japanese translation system applies standard techniques
and will require post-editing.

The Japanese have abandoned 'Delta'; their parallel inference engine
is 'gathering dust'.  They allegedly threw 'hardware engineers' into
a Prolog environment for which they 'had no background or interest'.

__________________________________________________________________________

Report on Natural Language Understanding Research at University of
Toronto

Reviews of Berthold Klaus Paul Horn's "Robot Vision" and of
"Theoretical Aspects of Reasoning About Knowledge: Proceedings of the
1986 Conference"

------------------------------

Date: Tue, 7 Oct 86 16:08:19 EST
From: munnari!nswitgould.oz!michaelw@seismo.css.gov
Subject: Book - Prolog Multiprocessors


  A book is soon  to  appear,  by  Michael  J.  Wise,  entitled  "Prolog
  Multiprocessors".  It is being published by Prentice-Hall (Australia).
  In a nutshell,  the  book  examines  the  execution  of  Prolog  on  a
  multiprocessor.

       Starting  from  a   survey   of   some   current   multiprocessor
  architectures,  and  a review of what is arguably the most influential
  counter-proposal - the "data-flow" model,  a  model  is  proposed  for
  executing  Prolog  on  a  multiprocessor.  Along with the model goes a
  language based on Prolog.  The  model  and  the  language  are  called
  EPILOG.  EPILOG employs both AND and OR parallelism.  Results are then
  reported for the simulated execution of some Prolog programs rewritten
  in  the  EPILOG language.  The book concludes with an extensive survey
  of other multiprocessor implementations of Prolog.
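
  The AND/OR distinction mentioned above can be illustrated with a toy
  example.  The following Python sketch (threads over a small set of
  ground facts) is a generic illustration of the two kinds of
  parallelism, not the EPILOG model itself; the predicates and
  thread-pool mechanics are invented for the illustration.

```python
# OR-parallelism: alternative solutions to one goal are tried at once.
# AND-parallelism: independent conjunct goals are proved concurrently.
from concurrent.futures import ThreadPoolExecutor

# Ground facts: parent(X, Y) means X is a parent of Y.
parent = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def grandparent(x, z, pool):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # OR-parallelism: every candidate binding for Y is checked at once.
    ys = [b for (a, b) in parent if a == x]
    futures = [pool.submit(lambda y=y: (y, z) in parent) for y in ys]
    return any(f.result() for f in futures)

def both(goal1, goal2, pool):
    # AND-parallelism: two independent conjunct goals run concurrently;
    # the conjunction succeeds only if both do.  (A real system must
    # also reconcile variable bindings shared between conjuncts.)
    f1, f2 = pool.submit(goal1), pool.submit(goal2)
    return f1.result() and f2.result()

with ThreadPoolExecutor() as pool:
    print(grandparent("tom", "ann", pool))                    # True
    print(both(lambda: ("tom", "bob") in parent,
               lambda: ("bob", "ann") in parent, pool))       # True
```

  The sketch parallelizes only the easy part; as the book's results
  chapters suggest, the hard part is doing this without the bindings
  produced by concurrent branches interfering with one another.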

       The book will be available in Australia from mid November, and in
  US/UK/Europe  roughly  eight  weeks  later.   A  list  of  the Chapter
  headings follows.  A more detailed list  can  be  obtained  from  your
  local P-H representative, or by e-mailing to me directly.

                            TABLE OF CONTENTS

  Foreword by J. Alan Robinson
  Preface

    1.  Parallel Computation  and the Data-Flow Alternative
    2.  Informal Introduction to Prolog
    3.  Data-Flow Problems and a Prolog Solution
    4.  EPILOG Language and Model
    5.  Architectures for EPILOG
    6.  Experimenting with  EPILOG  Architectures  -  Results  and  Some
        Conclusions
    7.  Related Work

  Appendix 1 Data-Flow Research - the  First Generation

  Appendix 2 EBNF Specification for EPILOG

  Appendix 3 EPILOG Test Programs

  Appendix 4 Table of Results

------------------------------

Date: Thu, 9 Oct 86 10:21:18 EDT
From: "Charles W. Anderson" <cwa0%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Dissertation - Multilayer Connectionist Learning

        The following is the abstract from my Ph.D. dissertation
completed in August, 1986, at the University of Massachusetts, Amherst.
Members of my committee are Andrew Barto, Michael Arbib, Paul Utgoff,
and William Kilmer.  I welcome all comments and questions.

                                    Chuck Anderson
                                    GTE Laboratories Inc.
                                    40 Sylvan Road
                                    Waltham, MA 02254
                                    617-466-4157
                                    cwa0@gte-labs



                     Learning and Problem Solving
                with Multilayer Connectionist Systems

        The difficulties of learning in multilayered networks of
computational units have limited the use of connectionist systems in
complex domains.  This dissertation elucidates the issues of learning in
a network's hidden units, and reviews methods for addressing these
issues that have been developed through the years.  Issues of learning
in hidden units are shown to be analogous to learning issues for
multilayer systems employing symbolic representations.

        Comparisons of a number of algorithms for learning in hidden
units are made by applying them in a consistent manner to several tasks.
Recently developed algorithms, including Rumelhart et al.'s error
back-propagation algorithm and Barto et al.'s reinforcement-learning
algorithms, learn the solutions to the tasks much more successfully than
earlier methods.  A novel algorithm is examined that combines aspects of
reinforcement learning with a data-directed search for useful weights,
and is shown to outperform reinforcement-learning algorithms.

        A connectionist framework for the learning of strategies is
described which combines the error back-propagation algorithm for
learning in hidden units with Sutton's AHC algorithm to learn evaluation
functions and with a reinforcement-learning algorithm to learn search
heuristics.  The generality of this hybrid system is demonstrated
through successful applications to a numerical, pole-balancing task and
to the Tower of Hanoi puzzle.  Features developed by the hidden units in
solving these tasks are analyzed.  Comparisons with other approaches to
each task are made.
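
        As a concrete illustration of "learning in hidden units", the
following Python sketch performs one error back-propagation step on a
small network with a hidden layer.  The network size, weights, and
learning rate are illustrative choices, not taken from the
dissertation.

```python
# One back-propagation step on a 2-input -> 2-hidden -> 1-output
# network with bias terms; returns the squared error before the update.
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

w_h = [[0.5, -0.4, 0.1], [-0.3, 0.8, -0.2]]   # input -> hidden (+ bias)
w_o = [0.7, -0.6, 0.05]                        # hidden -> output (+ bias)

def forward(x):
    h = [sig(w[0]*x[0] + w[1]*x[1] + w[2]) for w in w_h]
    o = sig(w_o[0]*h[0] + w_o[1]*h[1] + w_o[2])
    return h, o

def backprop_step(x, t, lr=0.1):
    h, o = forward(x)
    d_o = (t - o) * o * (1.0 - o)                         # output delta
    d_h = [d_o * w_o[i] * h[i] * (1.0 - h[i]) for i in range(2)]
    for i in range(2):
        w_o[i] += lr * d_o * h[i]                         # hidden -> output
        for j in range(2):
            w_h[i][j] += lr * d_h[i] * x[j]               # input -> hidden
        w_h[i][2] += lr * d_h[i]                          # hidden bias
    w_o[2] += lr * d_o                                    # output bias
    return (t - o) ** 2
```

Repeating such steps over many patterns and epochs is ordinary
gradient-descent training; the hidden layer is what lets a network of
this kind represent tasks, such as XOR, that no single-layer network
can solve.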

------------------------------

End of AIList Digest
********************