From in%@vtcs1 Thu Dec 4 01:16:11 1986 Date: Thu, 4 Dec 86 01:16:04 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #273 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 273 Today's Topics: Bibliography - ai.bib42AB ---------------------------------------------------------------------- Date: WED, 10 oct 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: ai.bib42AB %A Ralph Grishman %A Richard Kittredge %T Analyzing Language in Restricted Domains %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %K AI02 AA01 %D 1986 %X 0-89859-620-3 1986 264 pages $29.95 .TS tab(~); l l. N.Sager~T{ Sublanguage: Linguistic Phenomenon, Computational Tool T} J. Lehrberger~Sublanguage Analysis E. Fitzpatrick~T{ The Status of Telegraphic Sublanguages T} J. Bachenko D. Hindle J. R. Hobbs~Sublanguage and Knowledge D. E. Walker~T{ The Use of Machine Readable Dictionaries in Sublanguage Analysis T} R. A. Amsler C. Friedman~T{ Automatic Structuring of Sublanguage Information: Application to Medical Narrative T} E. Marsh~T{ General Semantic Patterns in Different Sublanguages T} C. A. Montgomery~T{ A Sublanguage for Reporting and Analysis of Space Events T} B. C. Glover T. W. Finin~T{ Constraining the Interpretation of Nominal Compounds in a Limited Context T} G. Dunham~T{ The Role of Syntax in the Sublanguage of Medical Diagnostic Statements T} J. Slocum~T{ How One Might Automatically Identify and Adapt to a Sublanguage T} L. Hirschman~T{ Discovering Sublanguage Structures T} .TE %A Janet L. Kolodner %A Christopher K. Riesbeck %T Experience, Memory and Reasoning %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %D 1986 %K AT15 %X 0-89859-664-0 1986 272 pages $29.95 .TS tab(~); l l. R.Wilensky~T{ Knowledge Representation - A critique and a Proposal T} T{ R. H. Granger .br D. M. McNulty T}~T{ Learning and Memory in Machines and Animals that Accounts for Some Neurobiological Data T} T{ V. Sembugamoorthy .br B. 
Chandrasekaran T}~T{ Functional Representation of Devices and Compilation of Diagnostic Problem Solving Systems T} .TE %T Recovering from Execution Errors in \s-1SIPE\s0 %A David E. Wilkins %J Computational Intelligence %V 1 %D 1985 %K AI07 AI09 %X In real-world domains (a mobile robot is used as a motivating example), things do not always proceed as planned. Therefore it is important to develop better execution-monitoring techniques and replanning capabilities. This paper describes the execution-monitoring and replanning capabilities of the \s-1SIPE\s0 planning system. (\s-1SIPE\s0 assumes that new information to the execution monitor is in the form of predicates, thus avoiding the difficult problem of how to generate these predicates from information provided by sensors.) The execution-monitoring module takes advantage of the rich structure of \s-1SIPE\s0 plans (including a description of the plan rationale), and is intimately connected with the planner, which can be called as a subroutine. The major advantages of embedding the replanner within the planning system itself are: .IP 1. The replanning module can take advantage of the efficient frame reasoning mechanisms in \s-1SIPE\s0 to quickly discover problems and potential fixes. .IP 2. The deductive capabilities of \s-1SIPE\s0 are used to provide a reasonable solution to the truth maintenance problem. .IP 3. The planner can be called as a subroutine to solve problems after the replanning module has inserted new goals in the plan. .LP Another important contribution is the development of a general set of replanning actions that will form the basis for a language capable of specifying error-recovery operators, and a general replanning capability that has been implemented using these actions. %T Plan Parsing for Intended Response Recognition in Discourse %A Candace L. 
Sidner %J Computational Intelligence %V 1 %D 1985 %K Discourse task-oriented dialogues intended meaning AI02 speaker's plans discourse understanding plan parsing discourse markers %X In a discourse, the hearer must recognize the response intended by the speaker. To perform this recognition, the hearer must ascertain what plans the speaker is undertaking and how the utterances in the discourse further that plan. To do so, the hearer can parse the initial intentions (recoverable from the utterance) and recognize the plans the speaker has in mind and intends the hearer to know about. This paper reports on a theory of parsing the intentions in discourse. It also discusses the role of another aspect of discourse, discourse markers, that are valuable to intended response recognition. %T Knowledge Organization and its Role in Representation and Interpretation for Time-Varying Data: The \s-1ALVEN\s0 System %A John K. Tsotsos %J Computational Intelligence %V 1 %D 1985 %K Knowledge Representation, Expert Systems, Medical Consultation Systems, Time-Varying Interpretation, Knowledge-Based Vision. AI01 AI06 AA01 %X The so-called ``first generation'' expert systems were rule-based and offered a successful framework for building applications systems for certain kinds of tasks. Spatial, temporal and causal reasoning, knowledge abstractions, and structuring are among topics of research for ``second generation'' expert systems. .sp It is proposed that one of the keys for such research is \fIknowledge organization\fP. Knowledge organization determines control structure design, explanation and evaluation capabilities for the resultant knowledge base, and has strong influence on system performance. We are exploring a framework for expert system design that focuses on knowledge organization for a specific class of input data, namely, continuous, time-varying data (image sequences or other signal forms).
Such data is rich in temporal relationships as well as temporal changes of spatial relations and is thus a very appropriate testbed for studies involving spatio-temporal reasoning. In particular, the representation facilitates and enforces the semantics of the organization of knowledge classes along the relationships of generalization / specification, decomposition / aggregation, temporal precedence, instantiation, and expectation-activated similarity. .sp A hypothesize-and-test control structure is driven by the class organizational principles, and includes several interacting dimensions of research (data-driven, model-driven, goal-driven temporal, and failure-driven search). The hypothesis ranking scheme is based on temporal cooperative computation with hypothesis ``fields of influence'' being defined by the hypotheses' organizational relationships. This control structure has proven to be robust enough to handle a variety of interpretation tasks for continuous temporal data. .sp A particular incarnation, the \s-1ALVEN\s0 system, for left ventricular performance assessment from X-ray image sequences, will be highlighted in this paper. %T On the Adequacy of Predicate Circumscription for Closed-World Reasoning %A David W. Etherington %A Robert E. Mercer %A Raymond Reiter %J Computational Intelligence %V 1 %D 1985 %K AI15 AI16 %X We focus on McCarthy's method of predicate circumscription in order to establish various results about its consistency, and about its ability to conjecture new information. A basic result is that predicate circumscription cannot account for the standard kinds of default reasoning. Another is that predicate circumscription yields no new information about the equality predicate. This has important consequences for the unique names and domain closure assumptions. %T What is a Heuristic? %A Je\(ffry Francis Pelletier %A Marc H.J.
Romanycia %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI16 %X From the mid-1950's to the present, the notion of a heuristic has played a crucial role in AI researchers' descriptions of their work. What has not been generally noticed is that different researchers have often applied the term to rather different aspects of their programs. Things that would be called a heuristic by one researcher would not be so called by others. This is because many heuristics embody a variety of different features, and the various researchers have emphasized different ones of these features as being essential to being a heuristic. This paper steps back from any particular research programme and investigates the question of what things, historically, have been thought to be central to the notion of a heuristic, and which ones conflict with others. After analyzing the previous definitions and examining current usage of the term, a synthesizing definition is provided. The hope is that with this broader account of `heuristic' in hand, researchers can benefit more fully from the insights of others, even if those insights are couched in a somewhat alien vocabulary. %T Analysis by Synthesis in Computational Vision with Application to Remote Sensing %A Robert Woodham %A E. Catanzariti %A Alan Mackworth %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI06 %X The problem in vision is to determine surface properties from image properties. This is difficult because the problem, formally posed, is underconstrained. Methods that infer scene properties from image properties make assumptions about how the world determines what we see. In this paper, some of these assumptions are dealt with explicitly, using examples from remote sensing. Ancillary knowledge of the scene domain, in the form of a digital terrain model and a ground cover map, is used to synthesize an image for a given date and time. 
The synthesis process assumes that surface material is lambertian and is based on simple models of direct sun illumination, diffuse sky illumination and atmospheric path radiance. Parameters of the model are estimated from the real image. A statistical comparison of the real image and the synthetic image is used to judge how well the model represents the mapping from scene domain to image domain. .sp 1 The methods presented for image synthesis are similar to those used in computer graphics. The motivation, however, is different. In graphics, the goal is to produce an effective rendering of the scene domain. Here, the goal is to predict properties of real images. In vision, one must deal with a confounding of effects due to surface shape, surface material, illumination, shadows and atmosphere. These effects often detract from, rather than enhance, the determination of invariant scene characteristics. %T A Functional Approach to Non-Monotonic Logic %A Erik Sandewall %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI15 AI16 %X Axiom sets and their extensions are viewed as functions from the set of formulas in the language, to a set of four truth-values \fIt\fP, \fIf\fP, \fIu\fP for undefined, and \fIk\fP for contradiction. Such functions form a lattice with `contains less information' as the partial order \(ib, and `combination of several sources of knowledge' as the least-upper-bound operation \(IP. We demonstrate the relevance of this approach by giving concise proofs for some previously known results about normal default rules. For non-monotonic rules in general (not only normal default rules), we define a stronger version of the minimality requirement on consistent fixpoints, and prove that it is sufficient for the existence of a derivation of the fixpoint.
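The four truth-values in the Sandewall abstract form the familiar information lattice: \fIu\fP (undefined) carries the least information, \fIt\fP and \fIf\fP are incomparable, and \fIk\fP (contradiction) sits on top. The following is a minimal sketch of that structure and its least-upper-bound combination; it is a reconstruction for illustration, not code from the paper, and all names are mine.

```python
# Sketch of the four-valued information lattice from the abstract:
# u (undefined) < t, f < k (contradiction). Reconstruction only.

VALUES = ('u', 't', 'f', 'k')  # listed bottom-up (a linear extension)

# x <= y in the "contains less information" partial order
LEQ = {('u', 't'), ('u', 'f'), ('u', 'k'), ('t', 'k'), ('f', 'k')}
LEQ |= {(v, v) for v in VALUES}  # reflexivity

def leq(x, y):
    return (x, y) in LEQ

def join(x, y):
    """Least upper bound: combine two sources of knowledge."""
    # In a bottom-up scan the first common upper bound is the least one.
    return next(z for z in VALUES if leq(x, z) and leq(y, z))

def combine(src1, src2, formulas):
    """Pointwise join of two formula -> truth-value functions."""
    return {p: join(src1.get(p, 'u'), src2.get(p, 'u')) for p in formulas}

# Two sources that disagree about p yield the contradiction value k,
# while q, known to only one source, keeps its value.
print(combine({'p': 't'}, {'p': 'f', 'q': 'f'}, ['p', 'q']))
# -> {'p': 'k', 'q': 'f'}
```

This pointwise join is what the abstract's "combination of several sources of knowledge" operation amounts to on individual formulas; the minimality conditions on fixpoints discussed there are, of course, beyond this sketch.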
%J Computational Intelligence %V 1 %N 3-4 %D August 1985 %T Generating paraphrases from meaning-text semantic networks %A Michel Boyer %A Guy Lapalme %K T02 %X This paper describes a first attempt to base a paraphrase generation system upon Mel'cuk and Zolkovskij's linguistic Meaning-Text (\s-1MT\s0) model whose purpose is to establish correspondences between meanings, represented by networks, and (ideally) all synonymous texts having this meaning. The system described in the paper contains a Prolog implementation of a small explanatory and combinatorial dictionary (the \s-1MT\s0 lexicon) and, using unification and backtracking, generates from a given network the sentences allowed by the dictionary and the lexical transformations of the model. The passage from the net to the final texts is done through a series of transformations of intermediary structures that closely correspond to \s-1MT\s0 utterance representations (semantic, deep-syntax, surface-syntax and morphological representations). These are graphs and trees with labeled arcs. The Prolog unification (equality predicate) was extended to extract information from these representations and build new ones. The notion of utterance path, used by many authors, is replaced by that of ``covering by defining subnetworks''. %T Spatiotemporal inseparability in early vision: Centre-surround models and velocity selectivity %A David J. Fleet %A Allan D. Jepson %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI08 AI06 %X Several computational theories of early visual processing, such as Marr's zero-crossing theory, are biologically motivated and based largely on the well-known difference of Gaussians (\s-1DOG\s0) receptive field model of early retinal processing. We examine the physiological relevance of the \s-1DOG\s0, particularly in the light of evidence indicating significant spatiotemporal inseparability in the behaviour of retinal cell types.
.LP From the form of the inseparability we find that commonly accepted functional interpretations of retinal processing based on the \s-1DOG\s0, such as the Laplacian of a Gaussian and zero-crossings, are not valid for time-varying images. In contrast to current machine-vision approaches, which attempt to separate form and motion information at an early stage, it appears that this is not the case in biological systems. It is further shown that the qualitative form of this inseparability provides a convenient precursor to the extraction of both form and motion information. We show the construction of efficient mechanisms for the extraction of orientation and 2-D normal velocity through the use of a hierarchical computational framework. The resultant mechanisms are well localized in space-time, and can be easily tuned to various degrees of orientation and speed specificity. %T A theory of schema labelling %A William Havens %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI16 AI06 AA04 %X Schema labelling is a representation theory that focuses on composition and specialization as two major aspects of machine perception. Previous research in computer vision and knowledge representation has identified computational mechanisms for these tasks. We show that the representational adequacy of schema knowledge structures can be combined advantageously with the constraint propagation capabilities of network consistency techniques. In particular, composition and specialization can be realized as mutually interdependent cooperative processes which operate on the same underlying knowledge representation. In this theory, a schema is a generative representation for a class of semantically related objects. Composition builds a structural description of the scene from rules defined in each schema. The scene description is represented as a network consistency graph which makes explicit the objects found in the scene and their semantic relationships.
The graph is hierarchical and describes the input scene at varying levels of detail. Specialization applies network consistency techniques to refine the graph towards a global scene description. Schema labelling is being used for interpreting hand-printed Chinese characters, and for recognizing \s-1VLSI\s0 circuit designs from their mask layouts. %T Hierarchical arc consistency: Exploring structured domains in constraint satisfaction problems %A Alan K. Mackworth %A Jan A. Mulder %A William S. Havens %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI03 AI16 AI06 %X Constraint satisfaction problems can be solved by network consistency algorithms that eliminate local inconsistencies before constructing global solutions. We describe a new algorithm that is useful when the variable domains can be structured hierarchically into recursive subsets with common properties and common relationships to subsets of the domain values for related variables. The algorithm, \s-1HAC\s0, uses a technique known as hierarchical arc consistency. Its performance is analyzed theoretically and the conditions under which it is an improvement are outlined. The use of \s-1HAC\s0 in a program for understanding sketch maps, Mapsee3, is briefly discussed and experimental results consistent with the theory are reported. %T Expression of Syntactic and Semantic Features in Logic-Based Grammars %A Patrick Saint-Dizier %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X In this paper we introduce and motivate a formalism to represent syntactic and semantic features in logic-based grammars. We also introduce technical devices to express relations between features and inheritance mechanisms. This leads us to propose some extensions to the basic unification mechanism of Prolog.
Finally, we consider the problem of long-distance dependency relations between constituents in Gapping Grammar rules from the point of view of morphosyntactic features that may change depending on the position occupied by the ``moved'' constituents. What we propose is not a new linguistic theory about features, but rather a formalism and a set of tools that we think to be useful to grammar writers to describe features and their relations in grammar rules. %T Natural Language Understanding and Theories of Natural Language Semantics %A Per-Kristian Halvorsen %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X In these short remarks, I examine the connection between Montague grammar, one of the most influential theories of natural language semantics during the past decade, and natural language understanding, one of the most recalcitrant problems in AI and computational linguistics for more than the last decade. When we view Montague grammar in light of the requirements of a theory of natural language understanding, new traits become prominent, and highly touted advantages of the approach become less significant. What emerges is a new set of criteria to apply to theories of natural language understanding. Once one has this measuring stick in hand, it is impossible to withstand the temptation of also applying it to the emerging contender to Montague grammar as a semantic theory, namely situation semantics. %T Unrestricted Gapping Grammars %A Fred Popowich %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X Since Colmerauer's introduction of metamorphosis grammars (MGs), with their associated type \fI0\fP\(milike grammar rules, there has been a desire to allow more general rule formats in logic grammars. Gap symbols were added to the MG rule by Pereira, resulting in extraposition grammars (XGs).
Gaps, which are referenced by gap symbols, are sequences of zero or more unspecified symbols which may be present anywhere in a sentence or in a sentential form. However, XGs imposed restrictions on the position of gap symbols and on the contents of gaps. With the introduction of gapping grammars (GGs) by Dahl, these restrictions were removed, but the rule was still required to possess a nonterminal symbol as the first symbol on the left-hand side. This restriction is removed with the introduction of unrestricted gapping grammars. FIGG, a Flexible Implementation of Gapping Grammars, possesses a bottom-up parser which can process a large subset of unrestricted GGs for describing phenomena of natural languages such as free word order, and partially free word or constituent order. It can also be used as a programming language to implement natural language systems which are based on grammars (or metagrammars) that use the gap concept, such as Gazdar's generalized phrase structure grammars. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 4 01:16:23 1986 Date: Thu, 4 Dec 86 01:16:15 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #274 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 274 Today's Topics: Philosophy - Searle, Turing, Symbols, Categories & Turing Tests and Chinese Rooms ---------------------------------------------------------------------- Date: 26 Nov 86 12:41:50 GMT From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis) Subject: Re: Searle, Turing, Symbols, Categories > Steve Harnad >> Keith Dancey >> [The turing test] should be timed as well as checked for accuracy... >> Turing would want a degree of humor... >> check for `personal values,' `compassion,'... >> should have a degree of dynamic problem solving... >> a whole body of psychometric literature which Turing did not consult. > >I think that these details are premature and arbitrary. 
We all know >(well enough) what people can DO: They can discriminate, categorize, >manipulate, identify and describe objects and events in the world, and >they can respond appropriately to such descriptions. Just who is being arbitrary here? Qualities like humor, compassion, artistic creativity and the like are precisely those which many of us consider to be those most characteristic of mind! As to the "prematurity" of all this, you seem to have suddenly and most conveniently forgotten that you were speaking of a "total turing test" -- I presume an ultimate test that would encompass all that we mean when we speak of something as having a "mind", a test that is actually a generations-long research program. As to whether or not "we all know what people do", I'm sure our cognitive science people are just *aching* to have you come and tell them that us humans "discriminate, categorize, manipulate, identify, and describe". Just attach those pretty labels and the enormous preverbal substratum of our consciousness just vanishes! Right? Oh yeah, I suppose you provide rigorous definitions for these terms -- in your as yet unpublished paper... >Now let's get devices to (1) do it all (formal component) and then >let's see whether (2) there's anything that we can detect informally >that distinguishes these devices from other people we judge to have >minds BY EXACTLY THE SAME CRITERIA (namely, total performance >capacity). If not, they are turing-indistinguishable and we have no >non-arbitrary basis for singling them out as not having minds. You have an awfully peculiar notion of what "total" and "arbitrary" mean, Steve: it's not "arbitrary" to exclude those traits that most of us regard highly in other beings whom we presume to have minds. Nor is it "arbitrary" to exclude the future findings of brain research concerning the nature of our so-called "minds". Yet you presume to be describing a "total turing test".
May I suggest that what you are describing is not a "test for mind", but rather a "test for simulated intelligence", and the reason you will not or cannot distinguish between the two is that you would elevate today's primitive state of technology to a fixed methodological standard for future generations. If we cannot cope with the problem, why, we'll just define it away! Right? Is this not, to paraphrase Paul Feyerabend, incompetence upheld as a standard of excellence? -michael Blessed be you, mighty matter, irresistible march of evolution, reality ever new born; you who by constantly shattering our mental categories force us to go further and further in our pursuit of the truth. -Pierre Teilhard de Chardin "Hymn of the Universe" ------------------------------ Date: 27 Nov 86 12:02:50 GMT From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis) Subject: Re: Turing Tests and Chinese Rooms > Ray Trent > 1) I've always been somewhat suspicious about the Turing Test. (1/2 :-) > > a) does anyone out there have any good references regarding > its shortcomings. :-| John Searle's notorious "Chinese Room" argument has probably drawn out more discussion on this topic in recent times than anything else I can think of. As far as I can tell, there seems to be no consensus of opinion on this issue, only a broad spectrum of philosophical stances, some of them apparently quite angry (Hofstadter, for example). The most complete presentation I have yet encountered is in the journal for the Behavioral and Brain Sciences 1980, with a complete statement of Searle's original argument, responses by folks like Fodor, Rorty, McCarthy, Dennett, Hofstadter, Eccles, etc, and Searle's counterresponse. People frequently have misconceptions of just what Searle is arguing, the most common of these being: Machines cannot have minds. What Searle really argues is that: The relation (mind:brain :: software:hardware) is fallacious.
Computers cannot have minds solely by virtue of their running the correct program. His position seems to derive from his thoughts in the philosophy of language, and in particular his notion of Intentionality. Familiarity with the work of Frege, Russell, Wittgenstein, Quine, Austin, Putnam, and Kripke would really be helpful if you are interested in the motivation behind this concept, but Searle maintains that his Chinese room argument makes sense without any of that background. -michael ------------------------------ Date: 29 Nov 86 06:52:21 GMT From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories Peter O. Mikes at S-1 Project, LLNL wrote: > An example of ["unexperienced experience"] is subliminal perception. > Similar case is perception of outside world during > dream, which can be recalled under hypnosis. Perception > is not same as experience, and sensation is an ambiguous word. Subliminal perception can hardly serve as a clarifying example since its own existence and nature are anything but clearly established. (See D. Holender (1986) "Semantic activation without conscious identification," Behavioral and Brain Sciences 9: 1 - 66.) If subliminal perception exists, the question is whether it is just a case of dim or weak awareness, quickly forgotten, or the unconscious registration of information. If it is the former, then it is merely a case of a weak and subsequently forgotten conscious experience. If it is the latter, then it is a case of unconscious processing -- one of many, for most processes are unconscious (and studying them is the theoretical burden of cognitive science). Dreaming is a similar case. It is generally agreed (from studies in which subjects are awakened during dreams) that subjects are conscious during their dreams, although they remain asleep.
This state is called "paradoxical sleep," because the EEG shows signs of active, waking activity even though the subject's eyes are closed and he continues to sleep. Easily awakened in that stage of sleep, the subject can report the contents of his dream, and indicates that he has been consciously undergoing the experience, like a vivid day-dream or a hallucination. If the subject is not awakened, however, the dream is usually forgotten, and difficult if not impossible to recall. (As usual, recognition memory is stronger than recall, so sometimes cues will be recognized as having occurred in a forgotten dream.) None of this bears on the issue of consciousness, since the consciousness during dreams is relatively unproblematic, and the only other phenomenon involved is simply the forgetting of an experience. A third hypothetical possibility is slightly more interesting, but, unfortunately, virtually untestable: Can there be unconscious registration of information at time T, and then, at a later time, T1, conscious recall of that information AS IF it had been experienced consciously at T? This is a theoretical possibility. It would still not make the event at T a conscious experience, but it would mean that input information can be put on "hold" in such a way as to be retrospectively experienced at a later time. The later experience would still be a kind of illusion, in that the original event was NOT actually experienced at T, as it appears to have been upon reflection. The nervous system is probably playing many temporal (and causal) tricks like that within very short time intervals; the question only becomes dramatic when longer intervals (minutes, hours, days) are interposed between T and T1. None of these issues are merely definitional ones. It is true that "perception" and "sensation" are ambiguous, but, fortunately, "experience" seems to be less so. So one may want to separate sensations and perceptions into the conscious and unconscious ones. 
The conscious ones are the ones that we were consciously aware of -- i.e., that we experienced -- when they occurred in real time. The unconscious ones simply registered information in our brains at their moment of real-time occurrence (without being experienced), and the awareness, if any, came only later. > suggest that we follow the example of acoustics, which solved the > 'riddle' of falling tree by defining 'sound' as physical effect > (density wave) and noise as 'unwanted sound' - so that The tree > which falls in deserted place makes sound but does not make noise. > Accordingly, perception can be unconscious but experience can't. Based on the account you give, acoustics solved no problem. It merely missed the point. Again, the issue is not a definitional one. When a tree falls, all you have is acoustic events. If an organism is nearby, you have acoustic events and auditory events (i.e., physiological events in its nervous system). If the organism is conscious, it hears a sound. But, unless you are that organism, you can't know for sure about that. This is called the mind/body problem. "Noise" and "unwanted sound" have absolutely nothing to do with it. > mind and consciousness (or something like that) should be a universal > quantity, which could be applied to machine, computers... > Since we know that there is no sharp division between living and > nonliving, we should be able to apply the measure to everything We should indeed be able to apply the concept conscious/nonconscious to everything, just as we can apply the concept living/nonliving. The question, however, remains: What is and what isn't conscious? And how are we to know it? Here are some commonsense things to keep in mind. I know of only one case of a conscious entity directly and with certainty: My own. I infer that other organisms that behave more or less the way I would are also conscious, although of course I can't be sure.
I also infer that a stone is not conscious, although of course I can't be sure about that either. The problem is finding a basis for making the inference in intermediate cases. Certainty will not be possible in any case but my own. I have argued that the Total Turing Test is a reasonable empirical criterion for cognitive science and a reasonable intuitive criterion for the rest of us. Moreover, it has the virtue of corresponding to the subjectively compelling criterion we're already using daily in the case of all other minds but our own. -- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 4 01:16:45 1986 Date: Thu, 4 Dec 86 01:16:32 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #275 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 275 Today's Topics: Philosophy - Searle, Turing, Symbols, Categories ---------------------------------------------------------------------- Date: 28 Nov 86 06:27:20 GMT From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes: > for me it is not the case that I perceive/experience/ > am-directly-aware-of my performance being caused by anything. > It just happens. Phenomenology is of course not something it's easy to settle disagreements about, but I think I can say with some confidence that most people experience their (voluntary) behavior as caused by THEM. My point about free will's being an illusion is a subtler one. I am not doubting that we all experience our voluntary actions as freely willed by ourselves. That EXPERIENCE is certainly real, and no illusion. What I am doubting is that our will is actually the cause of our actions, as it seems to be. 
I think our actions are caused by our brain activity (and its causes) BEFORE we are aware of having willed them, and that our experience of willing and causing them involves a temporal illusion (see S. Harnad [1982] "Consciousness: An afterthought," Cognition and Brain Theory 5: 29 - 47, and B. Libet [1986] "Unconscious cerebral initiative and the role of conscious will in voluntary action," Behavioral and Brain Sciences 8: 529 - 566.) Of course, my task of supporting this position would be much easier if the phenomenology you describe were more prevalent... > How do I know I have a mind?... The problem is that if you > look up "mind" in an English-Dutch dictionary, some eight > translations are suggested. The mind/body problem is not just a lexical one; nor can it be settled by definitions. The question "How do I know I have a mind?" is synonymous with the question "How do I know I am experiencing anything at all [now, rather than just going through the motions AS IF I were having experience, but in fact being only an insentient automaton]?" And the answer is: By direct, first-hand experience. > "Consciousness" is more like "appetite"... How can we know for > sure that other people have appetites as well?... "Can machines > have an appetite?" I quite agree that consciousness is like appetite. Or, to put it more specifically: If consciousness is the ability to have (or the actual having of) experience in general, appetite is a particular experience most conscious subjects have. And, yes, the same questions that apply to consciousness in general apply to appetite in particular. But I'm afraid that this conclusion was not your objective here... > Now why is consciousness "real", if free will is an illusion? > Or, rather, why should the thesis that consciousness is "real" > be more compelling than the analogous thesis for free will? > In either case, the essential argument is: "Because I [the > proponent of that thesis] have direct, immediate, evidence of it." 
The difference is that in the case of the (Cartesian) thesis of the reality of consciousness (or mind) the question is whether there is any qualitative, subjective experience going on AT ALL, whereas in the case of the thesis of the reality of free will the question is whether the dictates of a particular CONTENT of experience (namely, the causal impression it gives us) are true of the world. The latter, like the existence of the outside world itself, is amenable to doubt. But the former, namely, THAT we are experiencing anything at all, is not open to doubt, and is settled by the very act of experiencing something. That is the celebrated Cartesian Cogito.

> Sometimes we are conscious of certain sensations. Do these
> sensations disappear if we are not conscious of them? Or do they go
> on on a subconscious level? That is like the question "If a falling
> tree..."

The following point is crucial to a coherent discussion of the mind/body problem: The notion of an unconscious sensation (or, more generally, an unconscious experience) is a contradiction in terms! [Test it in the form: "unexperienced experience." Whatever might that mean? Don't answer. The Viennese delegation (as Nabokov used to call it) has already made almost a century's worth of hermeneutic hay with the myth of the "subconscious" -- a manifest nonsolution to the mind/body problem that simply consisted of multiplying the mystery by two. The problem isn't the unconscious causation of behavior: If we were all unconscious automata there would be no mind/body problem. The problem is conscious experience. And anthropomorphizing the sizeable portion of our behavior that we DON'T have the illusion of being the cause of is not only no solution to the mind/body problem but not even a contribution to the problem of finding the unconscious causes of behavior -- which calls for cognitive theory, not hermeneutics.]
It would be best to stay away from the usually misunderstood and misused problem of the "unheard sound of the falling tree." Typically used to deride philosophers, the unheard last laugh is usually on the derider. > Let us agree that the sensations continue at least if it can be > shown that the person involved keeps behaving as if the concomitant > sensations continued, even though professing in retrospection not > to have been aware of them. So people can be afraid without > realizing it, say, or drive a car without being conscious of the > traffic lights (and still halt for a red light). I'm afraid I can't agree with any of this. A sensation may be experienced and then forgotten, and then perhaps again remembered. That's unproblematic, but that's not the issue here, is it? The issue is either (1) unexperienced sensations (which I suggest is a completely incoherent notion) or (2) unconsciously caused or guided behavior. The latter is of course the category most behavior falls into. So unconscious stopping for a red light is okay; so is unconscious avoidance or even unconscious escape. But unconscious fear is another matter, because fear is an experience, not a behavior (and, as I've argued, the concept of an unconscious experience is self-contradictory). If I may anticipate what I will be saying below: You seem to have altogether too much intuitive confidence in the explanatory power of the concept and phenomenology of memory in your views on the mind/body problem. But the problem is that of immediate, ongoing qualitative experience. Anything else -- including the specifics of the immediate content of the experience (apart from the fact THAT it is an experience) and its relation to the future, the past or the outside world -- is open to doubt and is merely a matter of inference, rather than one of direct, immediate certainty in the way experiential matters are. 
Hence whereas veridical memories and continuities may indeed happen to be present in our immediate experiences, there is no direct way that we can know that they are in fact veridical. Directly, we know only that they APPEAR to be veridical. But that's how all phenomenological experience is: An experience of how things appear. Sorting out what's what is an indirect, inferential matter, and that includes sorting out the experiences that I experience correctly as remembered from those that are really only "deja vu." (This is what much of the writing on the problem of the continuity of personal identity is concerned with.)

> Maybe everything is conscious. Maybe stones are conscious...
> Their problem is, they can hardly tell us. The other problem is,
> they have no memory... They are like us with that traffic light...
> Even if we experience something consciously, if we lose all
> remembrance of it, there is no way in which we can tell for sure
> that there was a conscious experience. Maybe we can infer
> consciousness by an indirect argument, but that doesn't count.
> Indirect evidence can be pretty strong, but it can never give
> certainty. Barring false memories, we can only be sure if we
> remember the experience itself.

Stones have worse problems than not being able to tell us they're conscious and not being able to remember. And the mind/body problem is not solved by animism (attributing conscious experience to everything); it is merely compounded by it. The question is: Do stones have experiences? I rather doubt it, and feel that a good part of the M/B problem is sorting out the kinds of things that do have experiences from the kinds of things, like stones, that do not (and how, and why, functionally speaking). If we experience something, we experience it consciously. That's what "experience" means. Otherwise it just "happens" to us (e.g., when we're distracted, asleep, comatose or dead), and then we may indeed be like the stone (rather than vice versa).
And if we forget an experience, we forget it. So what? Being conscious of it does not consist in or depend on remembering it, but on actually experiencing it at the time. The same is true of remembering a previously forgotten experience: Maybe it was so, maybe it wasn't. The only thing we are directly conscious of is that we experience it AS something remembered. Inference may be involved in trying to determine whether or not a memory is veridical, but it is certainly not involved in determining THAT I am having any particular conscious experience. That fact is ascertained directly. Indeed it is the ONLY fact of consciousness, and it is immediate and incorrigible. The particulars of its content, on the other hand -- what an experience indicates about the outside world, the past, the future, etc. -- are indirect, inferential matters. (To put it another way, there is no way to "bar false memories." Experiences wear their experientiality on their ears, so to speak, but all of the rest of their apparel could be false, and requires inference for indirect confirmation.) > If some things we experience do not leave a recallable trace, then > why should we say that they were experienced consciously? Or, why > shouldn't we maintain the position that stones are conscious > as well?... More useful, then, to use "consciousness" only for > experiences that are, somehow, recallable. These stipulations would be arbitrary (and probably false). Moreover, they would simply fail to be faithful to our direct experience -- to "what it's like" to have an experience. The "recallability" criterion is a (weak) external one we apply to others, and to ourselves when we're wondering whether or not something really happened. But when we're judging whether we're consciously experiencing a tooth-ache NOW, recallability has nothing to do with it. 
And if we forget the experience (say, because of subsequent anesthesia) and never recall it again, that would not make the original experience any less conscious. > the things that go on in our heads are stored away: in order to use for > determining patterns, for better evaluation of the expected outcome of > alternatives, for collecting material that is useful for the > construction or refinement of the model we have of the outside world, > and so on. All these conjectures about the functions of memory and other cognitive processes are fine, but they do not provide (nor can they provide) the slightest hint as to why all these functional and behavioral objectives are not simply accomplished UNconsciously. This shows as graphically as anything how the mind/body problem is completely bypassed by such functional considerations. (This is also why I have been repeatedly recommending "methodological epiphenomenalism" as a research strategy in cognitive modeling.) > Imagine now a machine programmed to "eat" and also to keep up > some dinner conversation... IF hunger THEN eat... equipped with > a conflict-resolution module... dinner-conversation module... > Speaking anthropomorphically, we would say that the machine is > feeling uneasy... apology submodule... PROBABLE CAUSE OF eat > IS appetite... "<... >" > How different are we from that machine? On the information you give here, the difference is likely to be like night and day. What you have described is a standard anthropomorphic interpretation of simple symbol-manipulations. Overzealous AI workers do it all the time. What I believe is needed is not more over-interpretation of the pathetically simple toy tricks that current programs can perform, but an effort to model life-size performance capacity: The Total Turing Test. 
That will diminish the degrees of freedom of the model to the size of the normal underdetermination of scientific theories by their data, and it will augment the problem of machine minds to the size of the other-minds problem, with which we are already dealing daily by means of the TTT. In the process of pursuing that distant scientific goal, we may come to know certain constraints on the enterprise, such as: (1) Symbol-manipulation alone is not sufficient to pass the TTT. (2) The capacity to pass the TTT does not arise from a mere accretion of toy modules. (3) There is no autonomous symbolic macromodule or level: Symbolic representations must be grounded in nonsymbolic processes. And if methodological epiphenomenalism is faithfully adhered to, the only interpretative question we will ever need to ask about the mind of the candidate system will be precisely the same one we ask about one another's minds; and it will be answered on precisely the same basis as the one we use daily in dealing with the other-minds problem: the TTT. > if we ponder a question consciously... I think the outcome is not > the result of the conscious process, but, rather, that the > consciousness is a side-effect of the conflict-resolution > process going on. I think the same can be said about all "conscious" > processes. The process is there, anyway; it could (in principle) take > place without leaving a trace in memory, but for functional reasons > it does leave such a trace. And the word we use for these cognitive > processes that we can recall as having taken place is "conscious". Again, your account seems to be influenced by certain notions, such as memory and "conflict-resolution," that appear to be carrying more intuitive weight than they can bear. Not only is the issue not that of "leaving a trace" (as mentioned earlier), but there is no real functional argument here for why all this shouldn't or couldn't be accomplished unconsciously. 
[However, if you substitute for "side-effect" the word "epiphenomenon," you may be calling things by their proper name, and providing (inadvertently) a perfectly good rationale for ignoring them in trying to devise a model to pass the TTT.]

> it is functional that I can raise my arm by "willing" it to raise,
> although I can use that ability to raise it gratuitously. If the
> free will here is an illusion (which I think is primarily a matter
> of how you choose to define something as elusive as "free will"),
> then so is the free will to direct your attention now to this,
> then to that. Rather than to say that free will is an "illusion",
> we might say that it is something that features in the model
> people have about "themselves". Similarly, I think it is better to say
> that consciousness is not so much an illusion, but rather something to
> be found in that model. A relatively recent acquisition of that model
> is known as the "subconscious". A quite recent addition are "programs",
> "sub-programs", "wrong wiring", etc.

My arm seems able to rise in two important ways: voluntarily and involuntarily (I don't know what "gratuitously" means). It is not a matter of definition that we feel as if we are causing the motion in the voluntary case; it is a matter of immediate experience. Whether or not that experience is veridical depends on various other factors, such as the true order of the events in question (brain activity, conscious experience, movement) in real time, and the relation of the experiential to the physical (i.e., whether or not it can be causal). The same question does indeed apply to willed changes in the focus of attention. If free will "is something that features in the model people have of 'themselves'," then the question to ask is whether that model is illusory.
Consciousness itself cannot be something found in a model (although the concept of consciousness might be) because consciousness is simply the capacity to have (or the having of) experience. (My responses to the concept of the "subconscious" and the over-interpretation of programs and symbols are described earlier in this message.)

> A sufficiently "intelligent" machine, able to pass not only the
> dinner-conversation test but also a sophisticated Turing test,
> must have a model of itself. Using that model, and observing its
> own behaviour (including "internal" behaviour!), it will be led to
> conclude not only that it has an appetite, but also volition and
> awareness... Is it mistaken then? Is the machine taken in by an
> illusion? "Can machines have illusions?"

What a successful candidate for the TTT will have to have is not something we can decide by introspection. Doing hermeneutics on its putative inner life before we build it would seem to be putting the cart before the horse. The question whether machines can have illusions (or appetites, or fears, etc.) is simply a variant on the basic question of whether any organism or device other than oneself can have experiences.

-- Stevan Harnad
(609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Dec 4 01:17:19 1986
Date: Thu, 4 Dec 86 01:17:15 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #276
Status: R

AIList Digest            Tuesday, 2 Dec 1986       Volume 4 : Issue 276

Today's Topics:
  Administrivia - Proposed Split of This Group,
  Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 1 Dec 86 09:24:05 est
From: Walter Hamscher
Subject: Proposed: a split of this group

I empathize with the spirit of the motion. But is it really necessary to split the list?
I think Ken does a really good thing by putting warning labels on the philosophical discussions: they're easy to skip over if you're not interested. As long as he's willing to put the time into doing that, there's no need for a split. ------------------------------ Date: Mon 1 Dec 86 10:10:19-PST From: Stephen Barnard Subject: One vote against splitting the list I for one would not like to see the AI-list divided into two --- one for "philosophising about" AI and one for "doing" AI. Even those of us who do AI sometimes like to read and think about philosophical issues. The problem, if there is one, is that certain people have been abusing the free access to the list that Ken rightfully encourages. Let's please keep our postings to a reasonable volume (per contributor). The list is not supposed to be anyone's personal soapbox. ------------------------------ Date: 1 Dec 86 18:48:31 GMT From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT) Subject: Re: Proposed: a split of this group Just suggested by jbn@glacier.UUCP (John Nagle): > I would like to suggest that this group be split into two groups; >one about "doing AI" and one on "philosophising about AI", the latter >to contain the various discussions about Turing tests, sentient computers, >and suchlike. Good idea. I was beginning to think the discussions of "when is an artifice intelligent" might belong in "talk.ai." I was looking for articles about how to do AI, and not finding any. The trouble is, "comp.ai.how-to" might have no traffic at all. We seem to be trying to "create artificial intelligence," with the intent that we can finally achieve success at some point (if only we knew how to define success). Why don't we just try always to create something more intelligent than we created before? That way we can not only claim nearly instant success, but also continue to have further successes without end. Would the above question belong in "talk.ai" or "comp.ai.how-to"? Marty M. B. 
Brilliant (201)-949-1858 AT&T-BL HO 3D-520 houem!marty1

------------------------------

Date: Sun, 30 Nov 1986 22:27 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Searle, Turing, Symbols, Categories

Lambert Meertens asks: If some things we experience do not leave a recallable trace, then why should we say that they were experienced consciously? I absolutely agree. In my book, "The Society of Mind", which will be published in January, I argue, with Meertens, that the phenomena we call consciousness are involved with our short term memories. This explains why, as Meertens suggests, it makes little sense to attribute consciousness to rocks. It also means that there are limits to what consciousness can tell us about itself. In order to do perfect self-experiments upon ourselves, we would need perfect records of what happens inside our memory machinery. But any such machinery must get confused by self-experiments that try to find out how it works - since such experiments must change the very records that they're trying to inspect! This doesn't mean that consciousness cannot be understood, in principle. It only means that, to study it, we'll have to use the methods of science, because we can't rely on introspection. Below are a few more extracts from the book that bear on this issue. If you want to get the book itself, it is being published by Simon and Schuster; it will be printed around New Year but won't get to bookstores until mid-February. If you want it sooner, send me your address and I should be able to send copies early in January. (Price will be $18.95 or less.) Or send the name of your bookstore so I can get S&S to lobby the bookstore. They don't seem very experienced at books in the AI-Psychology-Philosophy area.
In Section 15.2 I argue that although people usually assume that consciousness is knowing what is happening in our minds, right at the present time, consciousness never is really concerned with the present, but with how we think about the records of our recent thoughts. This explains why our descriptions of consciousness are so queer: whatever people mean to say, they just can't seem to make it clear. We feel we know what's going on, but can't describe it properly. How could anything seem so close, yet always keep beyond our reach? I answer, simply because of how thinking about our short term memories changes them!

Still, there is a sense in which thinking about a thought is like thinking about an ordinary thing. Our brains have various agencies that learn to recognize - and even name - various patterns of external sensations. Similarly, there must be other agencies that learn to recognize events *inside* the brain - for example, the activities of the agencies that manage memories. And those, I claim, are the bases of the awarenesses we recognize as consciousness.

There is nothing peculiar about the idea of sensing events inside the brain; it is as easy for an agent (that is, a small portion of the brain) to be wired to detect a *brain-caused brain-event*, as to detect a world-caused brain-event. Indeed only a small minority of our agents are connected directly to sensors in the outer world, like those that sense the signals coming from the eye or skin; most of the agents in the brain detect events inside of the brain!

In particular, I claim that to understand what we call consciousness, we must understand the activities of the agents that are engaged in using and changing our most recent memories. Why, for example, do we become less conscious of some things when we become more conscious of others?
Surely this is because some resource is approaching some limitation - and I'll argue that it is our limited capacity to keep good records of our recent thoughts. Why, for example, do thoughts so often seem to flow in serial streams? It is because whenever we lack room for both, the records of our recent thoughts must then displace the older ones. And why are we so unaware of how we get our new ideas? Because whenever we solve hard problems, our short term memories become so involved with doing *that* that they have neither time nor space for keeping detailed records of what they, themselves, have done.

To think about our most recent thoughts, we must examine our recent memories. But these are exactly what we use for "thinking," in the first place - and any self-inspecting probe is prone to change just what it's looking at. Then the system is likely to break down. It is hard enough to describe something with a stable shape; it is even harder to describe something that changes its shape before your eyes; and it is virtually impossible to speak of the shapes of things that change into something else each time you try to think of them. And that's what happens when you try to think about your present thoughts - since each such thought must change your mental state! Would any process not become confused, which alters what it's looking at?

What do we mean by words like "sentience," "consciousness," or "self-awareness"? They all seem to refer to the sense of feeling one's mind at work. When you say something like "I am conscious of what I'm saying," your speaking agencies must use some records about the recent activity of other agencies. But what about all the other agents and activities involved in causing everything you say and do? If you were truly self-aware, why wouldn't you know those other things as well?
There is a common myth that what we view as consciousness is measurelessly deep and powerful - yet, actually, we scarcely know a thing about what happens in the great computers of our brains. Why is it so hard to describe your present state of mind? One reason is that the time-delays between the different parts of a mind mean that the concept of a "present state" is not a psychologically sound idea. Another reason is that each attempt to reflect upon your mental state will change that state, and this means that trying to know your state is like photographing something that is moving too fast: such pictures will be always blurred. And in any case, our brains did not evolve primarily to help us describe our mental states; we're more engaged with practical things, like making plans and carrying them out. When people ask, "Could a machine ever be conscious?" I'm often tempted to ask back, "Could a person ever be conscious?" I mean this as a serious reply, because we seem so ill equipped to understand ourselves. Long before we became concerned with understanding how we work, our evolution had already constrained the architecture of our brains. However we can design our new machines as we wish, and provide them with better ways to keep and examine records of their own activities - and this means that machines are potentially capable of far more consciousness than we are. To be sure, simply providing machines with such information would not automatically enable them to use it to promote their own development and until we can design more sensible machines, such knowledge might only help them find more ways to fail: the easier to change themselves, the easier to wreck themselves - until they learn to train themselves. Fortunately, we can leave this problem to the designers of the future, who surely would not build such things unless they found good reasons to. (Section 25.4) Why do we have the sense that things proceed in smooth, continuous ways? 
Is it because, as some mystics think, our minds are part of some flowing stream? I think it's just the opposite: our sense of constant steady change emerges from the parts of mind that manage to insulate themselves against the continuous flow of time! In other words, our sense of smooth progression from one mental state to another emerges, not from the nature of that progression itself, but from the descriptions we use to represent it. Nothing can *seem* jerky, except what is *represented* as jerky. Paradoxically, our sense of continuity comes not from any genuine perceptiveness, but from our marvelous insensitivity to most kinds of changes.

Existence seems continuous to us, not because we continually experience what is happening in the present, but because we hold to our memories of how things were in the recent past. Without those short-term memories, all would seem entirely new at every instant, and we would have no sense at all of continuity, or of existence.

One might suppose that it would be wonderful to possess a faculty of "continual awareness." But such an affliction would be worse than useless because, the more frequently your higher-level agencies change their representations of reality, the harder it is for them to find significance in what they sense. The power of consciousness comes not from ceaseless change of state, but from having enough stability to discern significant changes in your surroundings. To "notice" change requires the ability to resist it, in order to sense what persists through time, but one can do this only by being able to examine and compare descriptions from the recent past. We notice change in spite of change, and not because of it.

Our sense of constant contact with the world is not a genuine experience; instead, it is a form of what I call the "Immanence illusion". We have the sense of actuality when every question asked of our visual systems is answered so swiftly that it seems as though those answers were already there.
And that's what frame-arrays provide us with: once any frame fills its terminals, this also fills the terminals of the other frames in its array. When every change of view engages frames whose terminals are already filled, albeit only by default, then sight seems instantaneous.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sun Dec 7 01:08:34 1986
Date: Sun, 7 Dec 86 01:08:26 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #277
Status: R

AIList Digest           Thursday, 4 Dec 1986       Volume 4 : Issue 277

Today's Topics:
  Seminars - Formal Properties of Version Spaces (Rutgers) &
    Machine Learning and Discovery (UTexas) &
    Nonmonotonic Inheritance Systems (CMU) &
    Possible-World Semantics (CMU) &
    Proofs, Deductions, Chains of Reasoning (Buffalo) &
    A Higher-Order Logic for Programming (UPenn) &
    Non-Strict Class Hierarchies in Modeling Languages (UPenn)

----------------------------------------------------------------------

Date: 1 Dec 86 10:34:26 EST
From: Tom Fawcett
Subject: Seminar - Formal Properties of Version Spaces (Rutgers)

This Thursday, December 4th, at 10 AM in Hill-250, Tony Vandermude will present an ML talk entitled "Some Formal Properties of Version Spaces". The abstract follows.

Some Formal Properties of Version Spaces
Tony Vandermude (vandermu@topaz.rutgers.edu)

A general definition of problems and problem solving is presented and the technique of Version Spaces is formally defined. Learning using Version Spaces is compared to Identification in the Limit as found in the work on Inductive Inference, and some properties of Version Spaces are defined. The results given address the types of problems that Version Spaces are best equipped to solve, what characteristics make it possible to apply this technique and where problems may arise.
It is found that when the standard notion of a Version Space is considered, the learning process is reliable and consistent with the input, and new versions added to the space must have a superset-subset relationship to the previous models. It is shown that if the finite sets and their complements are included as models in the space, then a Version Space will learn any recursively enumerable class of recursive sets. However, if the complements of the finite sets are removed, then even simple classes cannot be learned reliably with a Version Space. Mention is also made of the effects of error in data presentation - if there is no a priori method of determining correctness of the data, convergence to a correct model cannot be guaranteed. ------------------------------ Date: Mon 1 Dec 86 14:20:07-CST From: Robert L. Causey Subject: Seminar - Machine Learning and Discovery (UTexas) Philosophy Colloquy University of Texas at Austin A COMPUTER SYSTEM FOR LEARNING AND DISCOVERY by Arthur W. Burks, Professor Emeritus University of Michigan, Ann Arbor Friday, December 5, 3 - 5 p.m. Philosophy Conference Room, WAG 316 This colloquy will discuss relationships between inductive reasoning, learning, evolution, and computer designs. Professor Burks will discuss recent work on classifier systems that he has done together with his colleague, John Holland. Copies of a background paper, "A Radically Non-Von Architecture for Learning and Discovery", are available in the Philosophy Department's Brogan Reading Room. ------------------------------ Date: 1 December 1986 1020-EST From: Elaine Atkinson@A.CS.CMU.EDU Subject: Seminar - Nonmonotonic Inheritance Systems (CMU) SPEAKER: Richmond Thomason, University of Pittsburgh TITLE: "Issues in the design of nonmonotonic inheritance systems" DATE: Thursday, December 4 TIME: 4:00 p.m. 
PLACE: Adamson Wing, Baker Hall ABSTRACT: Early attempts at combining multiple inheritance with exceptions were based on straightforward extensions to tree-structured inheritance systems, and were theoretically unsound. Two well-known examples are FRL and NETL. In The Mathematics of Inheritance Systems (TMOIS), Touretzky described two classes of problems that these systems cannot handle. One involves reasoning with true but redundant assertions; the other involves ambiguity. The substance of TMOIS was the definition and analysis of a theoretically sound multiple inheritance system, along with some inference algorithms based on parallel marker propagation. Now, however, we find that there appear to be other definitions for inheritance that are equally sound and intuitive, but which do not always agree with the system defined in TMOIS. In this presentation we lay out a partial design space for sound inheritance systems and describe some interesting properties that result from certain strategic choices of inheritance definitions. The best way to define inheritance -- if there is one best way -- may lie somewhere in this space, but we are not yet ready to say what it might be. ------------------------------ Date: 2 Dec 86 16:06:05 EST From: Daniel.Leivant@theory.cs.cmu.edu Subject: Seminar - Possible-World Semantics (CMU) Professor Robert Tennent of Queen's University (Ontario) will be visiting the Department from Wednesday (Dec 3rd) to Friday noon (Dec 5th). People interested in meeting with him should contact Theona Stefanis (@a, x3825). ====================================================================== LOGIC COLLOQUIUM (CMU/PITT) Speaker: Robert D.
Tennent (Queen's University) Topic: Possible-World Semantics of Programming Languages and Logics Time: Thursday, December 4, 3:30 Place: Wean 4605 A category-theoretic formulation of a form of possible-world semantics allows elegant solutions to some difficult problems in the modeling of (i) stack-oriented storage management; (ii) Reynolds's "specification logic" (a generalization of Hoare's logic for Algol 60-like languages with procedures); and (iii) side-effect-free block expressions. A recent development has been the realization that it is possible and desirable to use a kind of generalized domain theory in this framework. Some additional possible applications of the approach to modeling abstract interpretations and the polymorphic lambda calculus will also be sketched. ------------------------------ Date: 2 Dec 86 19:51:19 GMT From: rutgers!clyde!watmath!sunybcs!rapaport@think.com (William J. Rapaport) Subject: Seminar - Proofs, Deductions, Chains of Reasoning (Buffalo) State University of New York at Buffalo BUFFALO LOGIC COLLOQUIUM 1986-1987 Fifth Meeting Tuesday, Dec. 9 4:00 p.m. Baldy 684, Amherst Campus John Corcoran Department of Philosophy SUNY Buffalo "Proofs, Deductions, Chains of Reasoning" This talk begins with a brief review of the deductive and hypothetico-deductive methods and then introduces the distinction between proofs and deductions. The core of the paper is a discussion of the logical, historical, epistemic, pragmatic, and heuristic ramifications of the distinction between proofs and deductions. References: J. Corcoran, "Conceptual Structure of Classical Logic," _Phil. & Phen. Res_ 33 (1972) 25-47. A. Tarski, _Intro. to Logic_, Ch. 6 (1941). For more information, contact John Corcoran, (716) 636-2438. William J. Rapaport Assistant Professor Dept.
of Computer Science, SUNY Buffalo, Buffalo, NY 14260 (716) 636-3193, 3180 uucp: .!{allegra,boulder,decvax,mit-ems,nike,rocksanne,sbcs,watmath}!sunybcs!rapaport csnet: rapaport@buffalo.csnet bitnet: rapaport@sunybcs.bitnet ------------------------------ Date: Wed, 3 Dec 86 13:13 EST From: Tim Finin Subject: Seminar - A Higher-Order Logic for Programming (UPenn) Dissertation Defense Computer and Information Science University of Pennsylvania A HIGHER-ORDER LOGIC AS THE BASIS FOR LOGIC PROGRAMMING GOPALAN NADATHUR (gopalan@cis.upenn.edu) The objective of this thesis is to provide a formal basis for higher-order features in the paradigm of logic programming. Towards this end, a non-extensional form of higher-order logic that is based on Church's simple theory of types is used to provide a generalisation to the definite clauses of first-order logic. Specifically, a class of formulas that are called higher-order definite sentences is described. These formulas extend definite clauses by replacing first-order terms by the terms of a typed lambda calculus and by providing for quantification over predicate and function variables. It is shown that these formulas together with the notion of a proof in the higher-order logic provide an abstract description of computation that is akin to the one in the first-order case. While the construction of a proof in a higher-order logic is often complicated by the task of finding appropriate substitutions for predicate variables, it is shown that the necessary substitutions for predicate variables can be tightly constrained in the context of higher-order definite sentences. This observation enables the description of a complete theorem-proving procedure for these formulas. The procedure constructs proofs essentially by interweaving higher-order unification with backchaining on implication, and constitutes a generalisation to the higher-order context of the well-known SLD-resolution procedure for definite clauses. 
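The unification step that the procedure interweaves with backchaining can be sketched for the first-order case only; higher-order unification over typed lambda terms, which the thesis actually requires, is substantially harder. The Python rendering and term encoding below are purely illustrative, and the occurs check is omitted:

```python
# First-order unification, the core operation that SLD-resolution (and, in
# generalized form, the procedure sketched above) interweaves with
# backchaining.  Variables are strings beginning with an uppercase letter;
# compound terms are (functor, arg, ...) tuples.  No occurs check.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None          # clash: distinct constants or functors

print(unify(('parent', 'X', 'bob'), ('parent', 'alice', 'Y')))
# {'X': 'alice', 'Y': 'bob'}
print(unify(('f', 'a'), ('f', 'b')))   # None
```

The thesis's contribution, on this view, is showing that the predicate-variable substitutions arising in higher-order definite sentences can be constrained tightly enough that an analogue of this interleaved search remains complete.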
The results of these investigations are used to describe a logic programming language called lambda Prolog. This language contains all the features of a language such as Prolog, and, in addition, possesses certain higher-order features. The uses of these additional features are illustrated, and it is shown how the use of the terms of a (typed) lambda calculus as data structures provides a source of richness to the logic programming paradigm. 2:30 pm December 5, 1986 Room 23, Moore School University of Pennsylvania Thesis Supervisor: Dale Miller Committee: Tim Finin, Jean Gallier (Chairman), Andre Scedrov, Richard Statman ------------------------------ Date: Wed, 3 Dec 86 23:29 EST From: Tim Finin Subject: Seminar - Non-Strict Class Hierarchies in Modeling Languages (UPenn) DBIG Meeting Computer and Information Science University of Pennsylvania 10:30am; 12-5-86; 555 Moore ON NON-STRICT CLASS HIERARCHIES IN CONCEPTUAL MODELING LANGUAGES. Alexander Borgida Rutgers University One of the cornerstones of the conceptual modeling languages devised for the specification and implementation of Information Systems is the idea of objects grouped into classes. I begin by reviewing the various roles played by this concept: specification of type constraints, repository of logical constraints to be verified, and maintenance of an associated set of objects (the "extent"). I then consider a second feature of these languages -- the notion of class hierarchies -- and after outlining its benefits, present arguments against a strict interpretation of class specialization and the notion of inheritance. Additional consideration of the concept of "default inheritance" leads to a list of desirable features for a language mechanism supporting non-strict taxonomies of classes: ones in which some class definitions may contradict portions of their superclass definitions, albeit in a controlled way.
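Such a non-strict taxonomy, in which a subclass default may contradict a superclass default, can be sketched in a few lines; the frame, class, and slot names below are invented for illustration and are not from the talk:

```python
# Toy frame hierarchy with default inheritance: the nearest definition of
# a slot along the parent chain wins, so a subclass may override
# (contradict) a superclass default -- a "non-strict" taxonomy.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        f = self
        while f is not None:
            if slot in f.slots:          # nearest definition wins
                return f.slots[slot]
            f = f.parent
        raise KeyError(slot)

bird    = Frame('bird', flies=True, legs=2)
penguin = Frame('penguin', parent=bird, flies=False)  # contradicts default
tweety  = Frame('tweety', parent=bird)
opus    = Frame('opus', parent=penguin)

print(tweety.get('flies'))   # True  (inherited default)
print(opus.get('flies'))     # False (overridden by penguin)
print(opus.get('legs'))      # 2     (inherited from bird)
```

The hard part, which the talk addresses and this sketch does not, is a type system under which programs written against such exception-ridden hierarchies can still be checked.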
I conclude by presenting some preliminary thoughts on a type system and type verification mechanism which would allow one to check that programs written in the presence of exceptional types will not go wrong. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sun Dec 7 01:08:50 1986 Date: Sun, 7 Dec 86 01:08:39 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #278 Status: R AIList Digest Thursday, 4 Dec 1986 Volume 4 : Issue 278 Today's Topics: Course - Parallel Architecture and AI (UPenn) ---------------------------------------------------------------------- Date: 1 Dec 86 19:31:49 EST From: BORGIDA@RED.RUTGERS.EDU Subject: Course - Parallel Architecture and AI (UPenn) Posted-Date: Mon, 17 Nov 86 09:51 EST From: Tim Finin Here is a description of a 1 and 1/2 day course we are putting on for the Army Research Office. We are opening it up to some people from other universities and nearby industry. We have set a modest fee of $200 for non-academic attendees; the course is free for academic colleagues. Please forward this to anyone who might be interested. SPECIAL ARO COURSE ANNOUNCEMENT COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING IN AI APPLICATIONS As a part of our collaboration with the Army Research Office, we are presenting a three-day course on computer architectures for parallel processing with an emphasis on their application to AI problems. Professor Insup Lee has organized the course, which will include lectures by professors Hossam El Gindi, Vipin Kumar (from the University of Texas at Austin), Insup Lee, Eva Ma, Michael Palis, and Lokendra Shastri. Although the course is being sponsored by the ARO for researchers from various Army research labs, we are making it available to colleagues from within the University of Pennsylvania as well as some nearby universities and research institutions.
If you are interested in attending this course, please contact Glenda Kent at 898-3538 or send electronic mail to GLENDA@CIS.UPENN.EDU and indicate your intention to attend. Attached is some additional information on the course. Tim Finin TITLE Computer Architectures for Parallel Processing in AI Applications WHEN December 10-12, 1986 (from 9:00 a.m. 12/10 to 12:00 p.m. 12/12) WHERE room 216, Moore School (33rd and Walnut), University of Pennsylvania, Philadelphia, PA. FEE $200 for non-academic attendees PRESENTERS Hossam El Gindi, Vipin Kumar, Insup Lee, Eva Ma, Michael Palis, Lokendra Shastri POC Glenda Kent, 215-898-3538, glenda@cis.upenn.edu Insup Lee, lee@cis.upenn.edu INTENDED FOR Research and application programmers, technically oriented managers. DESCRIPTION This course will provide a tutorial on parallel architectures, algorithms and programming languages, and their applications to Artificial Intelligence problems. PREREQUISITES Familiarity with basic computer architectures, high-level programming languages, and symbolic logic; knowledge of LISP and analysis of algorithms desirable. COURSE CONTENTS This three-day tutorial seminar will present an overview of parallel computer architectures with an emphasis on their applications to AI problems. It will also supply the necessary background in parallel algorithms, complexity analysis and programming languages. A tentative list of topics is as follows: - Introduction to Parallel Architectures - parallel computer architectures such as SIMD, MIMD, and pipeline; interconnection networks including ring, mesh, tree, multi-stage, and cross-bar. - Parallel Architectures for Logic Programming - parallelism in logic programs; parallel execution models; mapping of execution models to architectures. - Parallel Architectures for High Speed Symbolic Processing - production system machines (e.g., DADO); tree machines (e.g., NON-VON); massively parallel machines (e.g., Connection Machine, FAIM).
- Massive Parallelism in AI - applications of the connectionist model in the areas of computer vision, knowledge representation, inference, and natural language understanding. - Introduction to Parallel Computational Complexity - formal parallel computation models such as Boolean circuits, alternating Turing machines, parallel random-access machines; relations between sequential and parallel models of computation; parallel computational complexity of AI problems such as tree, graph searches, unification and natural language parsing. - Parallel Algorithms and VLSI - interconnection networks for VLSI layout; systolic algorithms and their hardware implementations. - Parallel Programming Languages - language constructs for expressing parallelism and synchronization; implementation issues. COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING IN AI APPLICATIONS COURSE OUTLINE The course will consist of seven lectures, each lasting two to three hours. The first lecture introduces the basic concepts of parallel computer architectures. It explains the organization and applications of different classes of parallel computer architectures such as SIMD, MIMD, and pipeline. It then discusses the properties and design tradeoffs of various types of interconnection networks for parallel computer architectures. In particular, the ring, mesh, tree, multi-stage, and cross-bar will be evaluated and compared. The second and third lectures concentrate on parallel architectures for AI applications. The second lecture overviews current research efforts to develop parallel architectures for executing logic programs. Topics covered will include the potential for exploiting parallelism in logic programs, parallel execution models, and mapping of execution models to architectures. Progress made so far and problems yet to be solved in developing such architectures will be discussed.
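One of the simplest forms of the parallelism in logic programs mentioned above is OR-parallelism: alternative ways of satisfying a goal are tried concurrently rather than by backtracking. A toy rendering (the goal, the candidate bindings, and the use of threads are all invented for illustration):

```python
# OR-parallelism caricature: alternative candidate bindings for a goal
# are tested concurrently instead of sequentially with backtracking.
# The "goal" here -- X divisible by both 5 and 7 -- stands in for trying
# alternative clauses of a logic program.

from concurrent.futures import ThreadPoolExecutor

def try_alternative(binding):
    # A (possibly slow) attempt to prove the goal under one binding.
    x = binding['X']
    return binding if x % 7 == 0 and x % 5 == 0 else None

alternatives = [{'X': n} for n in range(1, 100)]

with ThreadPoolExecutor(max_workers=8) as pool:
    # Executor.map preserves the order of the inputs, so solutions come
    # back in the same order a sequential search would find them.
    solutions = [b for b in pool.map(try_alternative, alternatives) if b]

print(solutions)   # [{'X': 35}, {'X': 70}]
```

The real design problems the lecture addresses -- shared binding environments, combining AND- and OR-parallelism, and mapping the execution model onto hardware -- are exactly what this flat, side-effect-free caricature avoids.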
The third lecture overviews the state of the art in architectures for performing high-speed symbolic processing. In particular, we will describe parallel architectures for executing production systems such as DADO, tree machines (e.g., NON-VON), and massively parallel machines (e.g., Connection Machine, FAIM). The fourth lecture explains why the von Neumann architecture is inappropriate for AI applications and motivates the need for pursuing the connectionist approach. To justify the thesis, some specific applications of the connectionist model in the areas of computer vision, knowledge representation, inference, and natural language understanding will be discussed. Although the discussion will vary in level of detail, we plan to examine at least one effort in detail, namely the applicability and usefulness of adopting a connectionist approach to knowledge representation and limited inference. The fifth lecture introduces the basic notions of parallel computational complexity. Specifically, the notion of ``how hard a problem is to solve in parallel'' is formalized. To formulate this notion precisely, we will define various formal models of parallel computation such as Boolean circuits, alternating Turing machines, and parallel random-access machines. Then, the computational complexity of a problem is defined in terms of the amount of resources such as parallel time and number of processors needed to solve it. The relations between sequential and parallel models of computation, as well as characterizations of ``efficiently parallelizable'' and ``inherently sequential'' problems are also given. Finally, the parallel computational complexity of problems in AI (e.g., tree and graph searches, unification and natural language parsing) is discussed. The sixth lecture discusses how to bridge the gap between the design of parallel algorithms and their hardware implementations using the present VLSI technology.
This lecture will overview interconnection networks suitable for VLSI layout. Then, different systolic algorithms and their hardware implementations will be discussed. To evaluate their effectiveness, we compare how important data storage schemes, like queue (FIFO), dictionary, and matrix manipulation, can be implemented on various systolic architectures. The seventh lecture surveys various parallel programming languages. In particular, the lecture will describe extensions made to sequential procedural, functional, and logic programming languages for parallel programming. Language constructs for expressing parallelism and synchronization, either explicitly or implicitly, will be overviewed and their implementation issues will be discussed. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sun Dec 7 01:09:25 1986 Date: Sun, 7 Dec 86 01:08:54 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #279 Status: RO AIList Digest Thursday, 4 Dec 1986 Volume 4 : Issue 279 Today's Topics: Policy - AI Bibliographic Format & Splitting the List, Psychology - Subconscious, Philosophy - Searle, Turing, Nagel ---------------------------------------------------------------------- Date: Wed, 3 Dec 86 16:08:08 est From: amsler@flash.bellcore.com (Robert Amsler) Subject: AI Bibliographic format Can something be done to minimize the idiosyncratic special character usage in the bibliographies? I don't mind a format with tagged fields, but one only designed to be read by one particular text formatting system is a bit much for human readership. Is it really necessary to encode font changes as something as odd as \s-1...\s0 and I don't even know how to read entries such as the last one for the chapters in the Grishman and Kittridge Sublanguage book. ... .TS tab(~); l l. N.Sager~T{ Sublanguage: Linguistic Phenomenon, Computational Tool T} J. Lehrberger~Sublanguage Analysis E. 
Fitzpatrick~T{ The Status of Telegraphic Sublanguages T} J. Bachenko D. Hindle J. R. Hobbs~Sublanguage and Knowledge D. E. Walker~T{ The Use of Machine Readable Dictionaries in Sublanguage Analysis T} R. A. Amsler C. Friedman~T{ Automatic Structuring of Sublanguage Information: Application to Medical Narrative T} E. Marsh~T{ General Semantic Patterns in Different Sublanguages T} C. A. Montgomery~T{ A Sublanguage for Reporting and Analysis of Space Events T} B. C. Glover T. W. Finin~T{ Constraining the Interpretation of Nominal Compounds in a Limited Context T} G. Dunham~T{ The Role of Syntax in the Sublanguage of Medical Diagnostic Statements T} J. Slocum~T{ How One Might Automatically Identify and Adapt to a Sublanguage T} L. Hirschman~T{ Discovering Sublanguage Structures T} .TE Huh!!! ------------------------------ Date: Wed, 03 Dec 86 09:02:05 -0500 From: dchandra@ATHENA.MIT.EDU Subject: Re: AIList Digest V4 #276 Hi, Pls do NOT split the group. Several reasons: * It is nice to know what is going on in all parts of AI * One can always skip over stuff one does not want to read * If I have something of interest to more than one group then I will have to send info to all the groups * MOST importantly, reading notesfiles takes time. If we introduce more notesfiles, one will have to wade through many more mailing lists. Thanks Navin CHandra IESL MIT ------------------------------ Date: 3 Dec 86 13:08 EST From: SHAFFER%SCOVCB.decnet@ge-crd.arpa Subject: the long debate of philosophical issues In response to the idea of splitting the group, I think that in the long run it would be a bad idea. But I do support the later suggestion that the length of these dialogs must be limited. As we at GE get things on a limited "bunch" basis, we look through the topics first before reading all of the bulletins. Recently it has become overloaded with long-winded, one-sided, very, very long speeches.
Besides the pure waste of computer time and disk space, the people arguing are not going to change their minds; they are just exercising their fingers. I am not fluent in the language of this "Turing, Searle" debate, but I can see that the points of interest are becoming a bit on the "off on a tangent" side. Let's all encourage discussion, but let's give everyone a chance to bring up interesting and beneficial topics. Let's not spend the board's entire volume on whether a computer "feels". Earl Shaffer, GE, Philadelphia ------------------------------ Date: Wed, 3 Dec 86 15:58:50 est From: amsler@flash.bellcore.com (Robert Amsler) Subject: Splitting the List I don't think the idea of splitting the list is practical. The real question is whether the philosophy discussion can sustain a whole mailing list on its own. I doubt it could. This is a topic which will eventually fade, and to split the list doubles the work for the moderator. Is someone offering to become the new moderator of the AI Philosophy list? [I should mention that there is a Metaphilosophers list at MIT-OZ@MC.LCS.MIT.EDU, as well as the Psychnet Newsletter from EPsynet%UHUMVM1.BITNET@WISCVM. The Phil-Sci list at MIT used to carry much more of such philosophical discussion than AIList has had recently. (Part of that was due to the quotations being nested four levels deep, which obviously multiplies the net traffic.) I am surprised -- but relieved -- that so few AIList readers have participated in these exchanges. Perhaps the philosophers dropped out long ago because AIList has had so little discussion of AI foundations. My own bias is toward computational techniques for coaxing more intelligent behavior from computers, regardless of theoretical adequacy. -- KIL] ------------------------------ Date: Tue, 2 Dec 86 23:44:14 EST From: "Keith F.
Lynch" Subject: Subconscious From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes: > Sometimes we are conscious of certain sensations. Do these > sensations disappear if we are not conscious of them? Or do they > go on on a subconscious level? ... The following point is crucial to a coherent discussion of the mind/body problem: The notion of an unconscious sensation (or, more generally, an unconscious experience) is a contradiction in terms! [Test it in the form: "unexperienced experience." Whatever might that mean? Don't answer. The Viennese delegation (as Nabokov used to call it) has already made almost a century's worth of hermeneutic hay with the myth of the "subconscious" -- a manifest nonsolution to the mind/body problem that simply consisted of multiplying the mystery by two. There is plenty of evidence for the subconscious, i.e., something that acts like a person but whose thoughts and actions one is not conscious of. One explanation is that the subconscious is a separate consciousness. Split-brain experiments give convincing evidence that there can be at least two separate consciousnesses in one individual. Does the brain-splitting operation create a new consciousness? Or were there always two? ...Keith ------------------------------ Date: 30 Nov 86 17:25:52 GMT From: mcvax!ukc!rjf@seismo.css.gov (R.J.Faichney) Subject: Re: Searle, Turing, Nagel In article <230@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes: >On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp> >Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made >nonspecific reference ... Sorry - the articles at issue were long gone, before I learned how to use this thing. >... I'm not altogether certain ... intended as a followup to ... >"Searle, Turing, Categories, Symbols," but ... >I am responding on the assumption that it was. It was not. See below. >...
Whether consciousness is a necessary >condition for intelligence is probably undecidable, and goes to the >heart of the mind/body problem and its attendant uncertainties. We have various ways of getting around the problems of inadequate definitions in these discussions, but I think we've run right up against it here. In psychological circles, as you know, intelligence is notorious for being difficult to define. >The converse proposition -- that intelligence is a necessary condition for >consciousness is synonymous with the proposition that consciousness is >a sufficient condition for intelligence, and this is indeed being >claimed (e.g., by me). The problem here is whether we define intelligence as implying consciousness. I am simply suggesting that if we (re)define intelligence as *not* implying consciousness, we will lose nothing in terms of the utility of the concept of intelligence, and may gain a great deal regarding our understanding of the possibilities of machine intelligence and/or consciousness. >If the word >"intelligence" has any meaning at all, over and above displaying ANY >arbitrary performance at all... I'm afraid that I don't think it has very much meaning, beyond the naive, relative usage of 'graduates tend to be more intelligent than non-graduates'. >...the Total Turing Test...amounts to equating >intelligence with total performance capacities ... >... also coincides with our only basis for inferring that >anyone else but ourselves has a mind (i.e., is conscious). >There is no contradiction between agreeing that intelligence admits >of degrees and that mind is all-or-none. But intelligence implies mind? Where do we draw the line? Should an IQ of >= 40 mean that something is conscious, while < 40 denotes a mindless automaton? You say your Test allows for cross-species and pathological variants, but surely this relative/absolute contradiction remains. >> Animals probably are conscious without being intelligent.
>> Machines may perhaps be intelligent without being conscious. >Not too good to be true: Too easy. Granted. I failed to make clear that I was proposing a (re)definition of intelligence, which would retain the naive usage - including that animals are (relatively) unintelligent - while dispensing with the theoretical problems. >...the empirical question of what intelligence is cannot be settled by a >definition... Indeed, it cannot begin to be tackled without a definition, which is what I am trying to provide. My proposition does not settle the empirical question - it just makes it manageable. >Nagel's point is that there is >something it's "like" to have experience, i.e., to be conscious, and >that it's only open to the 1st person point of view. It's hence radically >unlike all other "objective" or "intersubjective" phenomena in science >(e.g., meter-readings)... Surely intersubjectivity is at least as close to subjectivity as to objectivity. Instead of meter readings, take as an example the mother-child relationship. Like any other, it requires responsive feedback, in terms in this case of cuddling, cooing, crying, smiling, and it is where the baby learns to relate and communicate with others. I say that its one *essential* characteristic is intersubjectivity. Though the child does not consciously identify with the adult, there is nevertheless an intrinsic tendency to copy gestures, etc., which will be complemented and completed at maturity by a (relatively) unselfish appreciation of the other person's point of view. This tendency is so profound, and so bound to our origins, both ontogenetic and phylogenetic, that to ascribe consciousness to something man-made, no matter how perfect its performance, will always require an effort of will. Nor could it ever be intellectually justified. The ascription of consciousness says infinitely more about the ascriptor than the ascriptee.
It means 'I am willing and able to identify with this thing - I really believe that it is like something to be this thing.' It is inevitably, intrinsically spontaneous and subjective. You may be willing to identify with something which can do anything you can. I am not. And, though this is obviously sheer guesswork, I'm willing to bet a lot of money that the vast majority of people (*not* of AIers) would be with me. And, if you agree that it's subjective, why should anyone know better than the man in the street? (I'm speaking here, of course, about what people would do, not what they think they might do - I'm not suggesting that the problem could be solved by an opinion poll!) >> So what, really, is consciousness? According to Nagel... >> This accords with Minsky (via Col. Sicherman): >> 'consciousness is an illusion to itself but a genuine and observable >> phenomenon to an outside observer...' >The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's >point. The only aspect of conscious experience that involves direct >observability is the subjective, 1st-person aspect... >Let's call this private terrain Nagel-land. >The part others "can identify" is Turing-land: Objective, observable >performance (and its structural and functional substrates). Nagel's point >is that Nagel-land is not reducible to Turing-land. The part others "can identify with" is Nagel-land. People don't identify structural and functional substrates, they just know what it's like to be people. This fact does not belong to purely subjective Nagel-land or to perfectly objective Turing-land. It has some features of each, and transcends both. Consciousness as a fact is not directly observable - it is direct observation. Consciousness as a concept is not directly observable either, but it is observable in a very special way, which for *practical* purposes is incorrigible, to the extent that it is not testable, but our intuitions seem perfectly workable. 
It cannot examine itself ('...is an illusion to itself...') but may quite validly be seen in others ('...a genuine and observable fact to an outside observer...'). >... hardly amounts to an objective contribution to cognitive science. I'm not interested in the Turing Test (see above) but surely to clarify the limits of objectivity is an objective contribution. >> It may perhaps be supposed that the concept of consciousness evolved >> as part of a social adaptation... >Except that Nagel would no doubt suggest (and I would agree) that >there's no reason to believe that the asocial or minimally social >animals are not conscious too. I said the *concept* of consciousness... >> ...When I suppose myself to be conscious, I am imagining myself >> outside myself... >When I feel a pain -- when I am in the qualitative state of >knowing what it's like to be feeling a pain -- I am not "supposing" >anything at all. When I feel a pain I'm being conscious. When I suppose etc., I'm thinking about being conscious. I'm talking here about thinking about it, because in order to ascribe consciousness to a machine, we first have to think about it, unlike our ascription of consciousness to each other. Unfortunately, such intrinsically subjective ascriptions are much more easily made via spontaneity than via rationalisation. I would say, in fact, that they may only be spontaneous. >Some crucial corrections that may set the whole matter in a rather different >light: Subjectively (and I would say objectively too), we all know that >OUR OWN consciousness is real. Agreed. >Objectively, we have no way of knowing >that anyone else's consciousness is real. Agreed. >Because of the relationship >between subjectivity and objectivity, direct knowledge of the kind we >have in our own case is impossible in any other. Agreed. >The pragmatic >compromise we practice every day with one another is called the Total >Turing Test: I call it natural, naive intersubjectivity.
>Ascertaining that others behave indistinguishably from our >paradigmatic model for a creature with consciousness: ourselves. They may behave indistinguishably from ourselves, but it's not only snobs who ask 'What do we know about their background?'. That sort of information is perfectly relevant. Why disallow it? And why believe that a laboratory-constructed creature feels like I do, no matter how perfect its social behaviour? Where subjectivity is all, prejudice can be valid, even necessary. What else do we have? >...a predictive and explanatory causal theory of mind. It is not something that we can't get by without. >...if we follow Nagel, our inferences are not meaningless, but in some >respects incomplete and undecidable. I may be showing my ignorance, but to me if something is (inevitably?) 'incomplete and undecidable', it's pretty nearly meaningless for most purposes. To sum up: there is actually quite a substantial area of agreement between us, but I don't think that you go quite far enough. While I cannot deny that much may be learned from attempting computer and/or robot simulation of human performance, there remains the fact that similar ends may be achieved by different means; that a perfectly convincing robot might differ radically from us in software as well as hardware. In short, I think that the computer scientists have much more to gain from this than the psychologists. As a former member of the latter category, and a present member of the former (though not an AIer!), I am not complaining. -- Robin Faichney UUCP: ...mcvax!ukc!rjf Post: RJ Faichney, Computing Laboratory, JANET: rjf@uk.ac.ukc The University, Canterbury, Phone: 0227 66822 Ext 7681 Kent.
CT2 7NF ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sun Dec 14 00:55:44 1986 Date: Sun, 14 Dec 86 00:55:31 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #280 Status: R AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 280 Today's Topics: Correction - Parallel Architectures Course, Seminars - Interval Temporal Logic for Parallel Programming (SRI) & An EBG system which Learns from Failures (Rutgers) & Parallelization of Alpha-Beta Search (CMU), Conferences - Logical Solutions to the Frame Problem & AI in Engineering ---------------------------------------------------------------------- Date: Thu, 4 Dec 86 15:27 EST From: Tim Finin Subject: Correction - Parallel Architectures Course ***** IMPORTANT CORRECTION ***** Alex Borgida forwarded a message from me to AILIST that needs correcting. This message concerns a short course on "Computer Architectures For Parallel Processing In AI Applications" that we are giving here at Penn next week for the Army Research Office. In this message I announced that the course would be open to colleagues from nearby institutions. Unfortunately, since I sent Alex the message, the status of the course has changed and it will no longer be open to outside people other than those whom ARO is sponsoring. We're sorry for this confusion. Tim [Actually, I was the one who forwarded the message from a local bboard. I should have added a note to that effect. -- KIL] ------------------------------ Date: Fri 5 Dec 86 11:44:40-PST From: Amy Lansky Subject: Seminar - Interval Temporal Logic for Parallel Programming (SRI) USING INTERVAL TEMPORAL LOGIC FOR PARALLEL PROGRAMMING Roger Hale Computer Laboratory Cambridge University, England 4:15 PM, WEDNESDAY, December 10 SRI International, Building E, Room EJ228 Interval Temporal Logic (ITL) was originally proposed by Moszkowski for reasoning about the behaviour of hardware devices.
Since then it has shown itself to have a much wider field of application, and has been used to specify a variety of concurrent and time-dependent systems at different levels of abstraction. Moreover, it has been found that a useful subset of ITL specifications are executable in the programming language TEMPURA. Experience gained from prototyping temporal logic specifications in Tempura leads us to believe that this is a practical (and enjoyable) way to produce formal specifications. In the talk I will present some temporal logic specifications which are also Tempura programs, and will indicate how these programs are executed by the Tempura interpreter. I will give examples of both high- and low-level specifications, and will describe a way to relate different levels of abstraction. In conclusion, I will outline some future plans, which include the provision of a decision support system for Tempura. VISITORS: Please arrive 5 minutes early so that you can be escorted up from the E-building receptionist's desk. Thanks! NOTICE CHANGE IN USUAL DAY AND TIME!! (Wednesday, 4:15) ------------------------------ Date: 8 Dec 86 13:11:42 EST From: Tom Fawcett Subject: Seminar - An EBG system which Learns from Failures (Rutgers) On Thursday, December 11th in Hill-250 at 10 AM, Neeraj Bhatnagar will present a talk on learning from failures. The abstract follows. PLEASE BE PROMPT; we only have the room until 11:10. AN EBG SYSTEM THAT LEARNS FROM ITS FAILURES I shall discuss my implementation of a design system that learns from its failures. The learning technique used is the explanation-based generalization widely reported in the literature, with the modification that our system tries to explain the failures that it encounters in its search for a solution. These explanations give necessary conditions for success which are used for pruning out the unacceptable solutions.
The implemented system reported here acts as a Generate and Test (GT) problem solver in its general problem solver mode. In its learning mode it tries to explain the reason why a generated solution turned out to be unacceptable and generalizes this explanation to prune out the failure paths in future. The test bed for experimenting with the suggested technique is a restricted version of the floor planning domain. Due to the restrictions we impose on the operators used for planning, the failures that can occur while planning are monotonic in nature, which facilitates their detection and explanation, and recovery from them. Time permitting, I shall also discuss some of the future directions of my research, which include detection, proof and recovery from non-monotonic failures, defining new terms and new operators in the context of explanation based learning, and a suggested method for making more effective use of the knowledge learned by explanation based generalization. ------------------------------ Date: 5 Dec 86 16:29:55 EST From: Feng-Hsiung.Hsu@unh.cs.cmu.edu Subject: Seminar - Parallelization of Alpha-Beta Search (CMU) Large Scale Parallelization of Alpha-beta Search: An Algorithmic and Architectural Study Feng-hsiung Hsu Time: Thursday, 6:00 pm, Dec. 11 Place: WH 4605 Abstract This proposal presents a class of new parallel alpha-beta algorithms that gives speedup arbitrarily close to linear when the game tree is best-first ordered and sufficiently deep. It will also be shown that the parallel algorithms strictly dominate the weaker form of alpha-beta algorithm that does not use deep cutoff; that is, they never search a node that is not explored by the weaker form of alpha-beta algorithm, and usually search fewer nodes. Preliminary simulation results indicate that the parallel algorithms are actually much better than the weak alpha-beta in terms of the number of nodes searched.
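[For readers unfamiliar with the baseline the abstract compares against, here is a minimal sequential alpha-beta sketch in Python -- an illustrative reconstruction by the editor, not code from the talk. Standard alpha-beta passes the (alpha, beta) window all the way down the tree, obtaining both shallow and deep cutoffs; the "weaker form" mentioned above is the variant that forgoes the deep ones.]

```python
# Minimal sequential alpha-beta over an explicit game tree.
# Leaves are ints (static evaluations); internal nodes are lists of children.
# This is the standard full-window form, which gets both shallow and deep
# cutoffs by threading (alpha, beta) through every level of the recursion.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):          # leaf: return its static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # cutoff: remaining siblings pruned
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A best-first-ordered tree (best move searched first at each node) lets
# alpha-beta prune maximally -- the regime in which the proposed parallel
# algorithms approach linear speedup.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))                 # -> 3
```

Sequential alpha-beta serializes on the alpha and beta bounds established by earlier siblings, which is exactly why naive parallelizations degrade past a handful of processors: subtrees searched concurrently start without the bounds that would have pruned them.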
Moreover, unlike previous parallel algorithms, the new parallel algorithms do not degrade drastically when the number of processors exceeds a certain small number, typically around 6 to 8. In fact, based on simulation data, it appears that no serious degradation of speedup would occur before technological considerations such as system reliability limit the maximum speedup. As an example of the applications of the parallel algorithms, the possibility and complications of applying the algorithms to computer chess will be examined. A new design for special purpose chess processors that is orders of magnitude smaller than existing designs is presented as the basis for a proposed multi-processor chess machine. Based on the measured data from a single chip chess move generator that has already been fabricated, it is estimated that with a 3-micron CMOS process a 3-chip chess processor (two custom chips and one commercial SRAM) searching about one million positions per second can be built. Some architectural considerations on how to coordinate a vast number of such processors will be presented here. In the case that the proposed multi-processor machine cannot be completed in time, a small scale system will be built using off-the-shelf components and the move generators. ------------------------------ Date: Wed, 3 Dec 86 13:20:02 CST From: Glenn Veach Subject: Conference - Logical Solutions to the Frame Problem FINAL CALL FOR PARTICIPATION WORKSHOP ON LOGICAL SOLUTIONS TO THE FRAME PROBLEM The American Association for Artificial Intelligence (AAAI) is sponsoring this workshop in Lawrence, Kansas, 13, 14, 15 April 1987. The frame problem is one of the most fundamental problems in Artificial Intelligence and essentially is the problem of describing in a computationally reasonable manner what properties persist and what properties change as actions are performed.
The intrinsic problem lies in the fact that we cannot expect to be able to exhaustively list for every possible action (or combination of concurrent actions) and for every possible state of the world how that action (or concurrent actions) change the truth or falsity of each individual fact. We can only list the obvious results of the action and hope that our basic inferential system will be able to deduce the truth or falsity of the other less obvious facts. In recent years there have been a number of approaches to constructing new kinds of logical systems such as non-monotonic logics, default logics, circumscription logics, modal reflexive logics, and persistence logics which hopefully can be applied to solving the frame problem by allowing the missing facts to be deduced. This workshop will attempt to bring together the proponents of these various approaches. Papers on logics applicable to the problem of reasoning about such unintended consequences of actions are invited for consideration. Two copies of a full length paper should be sent to the workshop chairman before Dec. 19, 1986. Acceptance notices will be mailed by December 26, 1986 along with instructions for preparing the final versions of accepted papers. The final versions are due February 1, 1987. In order to encourage vigorous interaction and exchange of ideas the workshop will be kept small -- about 25 participants. There will be individual presentations and ample time for technical discussions. An attempt will be made to define the current state of the art and future research needs. Partial financial support for participants is available. Workshop Chairman: Dr. Frank M. Brown Dept. 
Computer Science 110 Strong Hall The University of Kansas Lawrence, Kansas (913) 864-4482 mail net inquiries to: veach%ukans@csnet-relay.csnet ------------------------------ Date: Fri, 05 Dec 86 08:55:46 -0500 From: sriram@ATHENA.MIT.EDU Subject: Conference - AI in Engineering The call for papers for the SECOND INTERNATIONAL CONFERENCE ON APPLICATIONS OF ARTIFICIAL INTELLIGENCE IN ENGINEERING appeared a little late in the SIGART newsletter. A number of people requested that we extend the deadline. In response to their request the last date for submission of a 1000 word abstract is extended to Dec. 15th. For more information on this conference contact Bob Adey at 617-933-7374 (The call for papers appeared in a previous issue of the AILIST). Sriram ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sun Dec 14 00:55:06 1986 Date: Sun, 14 Dec 86 00:54:57 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #281 Status: R AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 281 Today's Topics: Administrivia - BITNET Distribution, Queries - Lisp for the Mac & Lisp Lore & Little Lisper & Lisp Performance Benchmarks & Bibliographic Formatter, AI Tools - Object-Oriented Programming in AI, Ethics - AI and the Arms Race, Policy - Proposed Split ---------------------------------------------------------------------- Date: Mon 8 Dec 86 21:52:59-PST From: Ken Laws Subject: Administrivia - BITNET Distribution I have been told that some of the current Arpanet congestion could be cleared up if AIList (and the other lists) find BITNET moderators willing to maintain a mailing list and forward the digest to all interested BITNET sites. I could provide the initial list and could continue to send the welcome message to new participants, so the effort in maintaining the address list would be minimal. Is there someone willing to perform this service?
-- Ken Laws ------------------------------ Date: 5 Dec 86 20:33:33 GMT From: rutgers!princeton!puvax2!6111231%PUCC.BITNET@lll-crg.arpa (Peter Wisnovsky) Subject: Lisp for the Mac Can anyone recommend a good Lisp for the Macintosh? Also, if someone has a working version of Mac XLisp (1.4 or higher) I would appreciate it if they would mail it to me: the copy on MacServe is defective and the author has not answered the mail I sent him. Peter Wisnovsky Virtual Address: UUCP: ...ihnp4!psuvax1!6111231@pucc.bitnet Physical Address: 179 Prospect Avenue Princeton, New Jersey 08540 (609)-734-7852 ------------------------------ Date: 1 Dec 86 21:50:55 GMT From: ubc-vision!razzell@beaver.cs.washington.edu (Dan Razzell) Subject: Book enquiry Has anybody read: Hank Bromley, "Lisp Lore: A Guide to Programming the Lisp Machine", 1986, Kluwer Academic Publishers, ISBN 0-89838-220-3 This purports to be a tutorial introduction to building programs on the Symbolics Lisp machine. Unfortunately, it is said to focus on details of Zetalisp, and seems a bit lightweight, judging by the table of contents that the publisher puts out in its brochure. -- ______________________________________________________ .^.^. Dan Razzell . o o . Laboratory for Computational Vision . >v< . University of British Columbia ______mm.mm___________________________________________ ------------------------------ Date: Sat, 6 Dec 86 22:56:03 PST From: Thomas Eric Brunner Subject: little (*fun*) lisper, title/author? When I worked in Bracknell, someone there was kind enough to let me read a little booklet called (I think) "THE LITTLE LISPER". I haven't found it in my post-move-to-sunny-California boxes...Does this ring a bell to anyone? I'd like to buy a copy - it was a "nice", and illustrated, text on lisp. Thanks for the pointers!
Eric ------------------------------ Date: Mon, 8 Dec 86 22:19 EST From: Bill Pase Subject: Lisp Performance Benchmarks Does anyone know if the Lisp performance benchmarks used in the book by Gabriel are available on the net somewhere?? /bill ------------------------------ Date: Thu, 4 Dec 86 08:39 ??? From: "William E. Hamilton, Jr." Subject: AI Bibliographic format I emphatically agree with the following comment from Robert Amsler: >Date: Wed, 3 Dec 86 16:08:08 est >From: amsler@flash.bellcore.com (Robert Amsler) >Subject: AI Bibliographic format >Can something be done to minimize the idiosyncratic special character >usage in the bibliographies? I don't mind a format with tagged fields, >but one only designed to be read by one particular text formatting system >is a bit much for human readership. Is it really necessary to encode >font changes as something as odd as \s-1...\s0 and I don't even know >how to read entries such as the last one for the chapters in the >Grishman and Kittredge Sublanguage book. For those of us who don't know how to interpret the bibliographic entries, why not circulate a specification for interpreting them, or tell us where we can get the text formatting software Amsler mentions. If this formatter is another piece of unix esoterica, are there versions which work under vms? Bill Hamilton GM Research Labs Computer Science Dept 313 986 1474 hamilton@gmr.com ------------------------------ Date: 4 Dec 86 10:19 PST From: Stern.pasa@Xerox.COM Subject: OOP in AI The responses to my question of a few weeks ago, regarding publications discussing OOP and AI: Xerox PARC work in OOP has a long history, with a flurry of publishing recently including AI Mag Winter 1986 for "Object-oriented Programming: Themes and Variations" and Science, 28 Feb, Vol 231 "Perspectives on AI Programming", both by D. Bobrow and M. Stefik. Other responses referred me to our (note - I work for Xerox) LOOPS knowledge programming system.
Some responses referred me to KEE, ART (?) and Flavors. A couple of people described their own work in progress on OOP languages or systems. I had completely missed the SIGPlan issue (V21, #10, Oct 86) on the OOP Workshop at IBM Watson Research Center organized by them and Peter Wegner of Brown University, who includes his own excellent paper in the proceedings. Josh ------------------------------ Date: 4 Dec 86 00:16:44 GMT From: sdcrdcf!burdvax!blenko@OBERON.USC.EDU (Tom Blenko) Subject: Re: AI and the Arms Race In article <863@tekchips.UUCP> willc@tekchips.UUCP (Will Clinger) writes: |In article <2862@burdvax.UUCP> blenko@burdvax.UUCP (Tom Blenko) writes: |>If Weizenbaum or anyone else thinks he or she can succeed in weighing |>possible good and bad applications, I think he is mistaken. Wildly |>mistaken. |> |>Why does Weizenbaum think technologists are, even within the bounds of |>conventional wisdom, competent to make such judgements in the first |>place? | |Is this supposed to mean that professors of moral philosophy are the only |people who should make moral judgments? Or is it supposed to mean that |we should trust the theologians to choose for us? Or that we should leave |all such matters to the politicians? Not at all. You and I apparently agree that everyone does, willingly or not, decide what they will do (not everyone would agree with even that). I claim that they are simply unable to decide on the basis of knowing what the good and bad consequences of introducing a technology will be. And I am claiming that technologists, by and large, are less competent than they might be by virtue of their ignorance of the criteria professors of moral philosophy, theologians, nuclear plant designers, and politicians bring to bear on such decisions.
I propose that most technologists decide, explicitly or implicitly, that they will ride with the status quo, believing that 1) there are processes by which errant behavior on the part of political or military leaders is corrected; 2) they may subsequently have the option of taking a different role in deciding how the technology will be used; 3) the status quo is what they are most knowledgeable about, and other options are difficult to evaluate; 4) there is always a finite likelihood that a decision may, in retrospect, prove wrong, even though it was the best choice available to them as decision-maker. Such a decision is not that some set of consequences is, on balance, good or bad, but that there is a process by which one may hope to minimize catastrophic consequences of an imperfect, forced-choice decision-making process. |Representative democracy imposes upon citizens a responsibility for |judging moral choices made by the leaders they elect. It seems to me |that anyone presumed to be capable of judging others' moral choices |should be presumed capable of making their own. | |It also seems to me that responsibility for judging the likely outcome |of one's actions is not a thing that humans can evade, and I applaud |Weizenbaum for pointing out that scientists and engineers bear this |responsibility as much as anyone else. I think the exhortations attributed to Weizenbaum are shallow and simplistic. If one persuades oneself that one is doing what Weizenbaum proposes, one simply defers the more difficult task of modifying one's decision-making as further information/experience becomes available (e.g., by revising a belief set such as that above). 
Tom ------------------------------ Date: 4 Dec 86 19:05:55 GMT From: adobe!greid@decwrl.dec.com (Glenn Reid) Subject: Re: Proposed: a split of this group >> I would like to suggest that this group be split into two groups; >>one about "doing AI" and one on "philosophising about AI", the latter >>to contain the various discussions about Turing tests, sentient computers, >>and suchlike. > >Good idea. I was beginning to think the discussions of "when is an >artifice intelligent" might belong in "talk.ai." I was looking for >articles about how to do AI, and not finding any. The trouble is, >"comp.ai.how-to" might have no traffic at all. How do you "do" AI without talking about what it is that you are trying to do? Seems to me that discussions about cognitive modeling and Turing tests and whatever else are perfectly acceptable here, if not needed. But I could live without the "sentient computers" book lists. But you're right. Maybe we should post data structures or something. Doesn't it always come down to data structures? ------------------------------ Date: Tue, 2 Dec 86 21:00:34 cst From: Girish Kumthekar Subject: Proposed Split I do support the idea of splitting the group (especially till people stop abusing it by sending volumes on Searle & ugh .........). However I think it may put more workload on Ken, and also may sometimes put him in a quandary as to which group a message might belong. Hope we can come up with a decent solution. Girish Kumthekar kumthek%lsu@CSNET-RELAY.csnet Tel # (504)-388-1495 [Actually, routing messages to appropriate lists has seldom been a problem -- but thanks for the thought. As a theoretical issue, I agree with those who like to keep the digest flexible so that we can be stimulated by ideas outside our own subfields. In practice, though, the digest has gotten a bit large for a volunteer moderator to handle (in addition to professional and familial duties).
I am worried that the Arpanet side of the list may collapse if I have to give up this hobby. Perhaps the rate of mailer problems and other administrative matters will decrease as the network adjusts to all the new conventions and hosts that have been added lately. -- KIL] ------------------------------ Date: 5 Dec 86 16:55:54 GMT From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT) Subject: Re: Proposed: a split of this group In <1991@adobe.UUCP>, greid@adobe.UUCP (Glenn Reid) replies to a suggestion by jbn@glacier.UUCP (John Nagle)... "that this group be split into two groups; one about 'doing AI' and one on 'philosophising about AI', the latter to contain the various discussions about Turing tests, sentient computers, and suchlike." ... with the question: "How do you 'do' AI without talking about what it is that you are trying to do?" Maybe we ought to split on the basis of what we are trying to do. I suggested in my own response <720@houem.UUCP> that "we just try always to create something more intelligent than we created before... That way we can not only claim nearly instant success, but also continue to have further successes without end." That joke has a serious component. What some of us are trying to do is imitate known intelligence, and particularly human intelligence. Others (including myself) are just trying to do artificially as much as possible of the work for which we now depend on human intelligence. Actually, I am looking at an application, not inventing methods. Those of us who are not trying to imitate human intelligence may ultimately surpass human intelligence. But we can pursue our goal without knowing how to measure or test artificial intelligence. My main problem is that I don't know how the people who do it think about their methods, so I want to hear about methods. Marty M. B.
Brilliant (201)-949-1858 AT&T-BL HO 3D-520 houem!marty1 ------------------------------ Date: 5 Dec 86 16:16:00 GMT From: bsmith@p.cs.uiuc.edu Subject: Re: Proposed: a split of this group There is a serious problem with having any notesfile with "philosophy" in its name--just look at talk.philosophy.misc. There, an endless number of people who think philosophy consists of no more than just spewing forth unsubstantiated opinions conduct what are laughably called discussions but are really nothing other than name-calling sessions (interlaced with ample supplies of vulgarities). Steven Harnad has inspired discussions on this net which, perhaps, ought to be in a separate notesfile, but I shudder to think what such a notesfile would be like. One suggestion--given the ugliness of talk.philosophy.misc, I think this new notesfile ought to be moderated. ------------------------------ Date: 8 Dec 86 23:16:25 GMT From: ladkin@kestrel.arpa (Peter Ladkin) Subject: Re: Proposed: a split of this group In article <603@ubc-cs.UUCP>, andrews@ubc-cs.UUCP (Jamie Andrews) writes: > I should note at this point that, theoretically at least, > there is already a newsgroup that is perfect for the > philosophy of mind/intelligence/AI discussion. It's called > talk.philosophy.tech, and has been talked about as an official > newsgroup for some time. I am the `moderator' of this group, which is dormant pending submissions. There was some trouble starting it up, and so I maintained a mailing list for a while. I no longer do so. If there is interest, we can try to start it up again. The interested parties just went back to their old groups when we had so much trouble propagating it. 
peter ladkin ladkin@kestrel.arpa ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sun Dec 14 00:55:29 1986 Date: Sun, 14 Dec 86 00:55:12 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #282 Status: R AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 282 Today's Topics: Philosophy - Conscious Computers & Dijkstra Quote & Brains vs. TTT as Criteria for Mind/Consciousness ---------------------------------------------------------------------- Date: Thu, 4 Dec 86 21:48:55 EST From: "Keith F. Lynch" Subject: Conscious computers From: mcvax!ukc!rjf@seismo.css.gov (R.J.Faichney) ... to ascribe consciousness to something man-made, no matter how perfect its performance, will always require an effort of will. Nor could it ever be intellectually justified. ... You may be willing to identify with something which can do anything you can. I am not. And, though this is obviously sheer guesswork, I'm willing to bet a lot of money that the vast majority of people (*not* of AIers) would be with me. Don't forget that "performance" doesn't just mean that it can play chess or build a radio as well as you can. It also means it could write one of these net messages, claiming that it is conscious but that it has no way to be sure that anyone else is, etc. The net is an excellent medium for Turing tests. Other than our knowledge of the current state of the art, we have no evidence that any given contributor is human rather than a machine. Let me play the Turing game in reverse for a moment, and ask if you would bet a lot of money that nobody would regard a computer as conscious if it were to have written this message? ...Keith ------------------------------ Date: 4 Dec 86 15:16:24 EST From: David.Harel@theory.cs.cmu.edu Subject: another dijkstra quote [Forwarded from the CMU bboard by Laws@SRI-STRIPE.]
I need a reference to another dijkstra quote: "The question of whether computers can think is just like the question of whether submarines can swim." (this is a really nice one, I think...) Thanks in advance David Harel x3742, harel@theory ------------------------------ Date: 4 Dec 86 07:55:00 EST From: "CUGINI, JOHN" Reply-to: "CUGINI, JOHN" Subject: brains vs. TTT as criteria for mind/consciousness *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** *** *** Philosophobes (Sophophobes?) beware, industrial-strength *** metaphysics dead ahead. The faint of heart should skip *** forward about 350 lines... *** ************************************************************************* Recall that the main issue here is how important a criterion brainedness (as opposed to performance/the TTT) is for mindedness. My main reason for asserting its importance is that I take "mind" to mean, roughly, "conscious intelligence", where consciousness is epitomized by such things as seeing colors, feeling pain, and intelligence by playing chess, catching mice. No one has objected strenuously to this definition, so I'll assume we kind of agree. While performance/TTT can be decisive evidence for intelligence, it doesn't seem to me to be nearly as strong evidence for consciousness out of context, ie when applied to non-brained entities. So in the following I will try to assess in exactly what manner brains and/or performance provide evidence for consciousness. I had earlier written that one naively knows that his mind causes his performance and scientifically knows that his brain causes his mind, and that *both* of these provide justifiable bases for induction to other entities. S. Harnad, in reply, writes: > Now on to the substance of your criticism. I think the crucial points > will turn on the difference between what you call "naively know" and > "scientifically know."
It will also involve (like it or not) the issue > of radical scepticism, uncertainty and the intersubjectivity and validity of > inferences and correlations. ... > > Scientific knowing is indirect and inferential. It is based on > inference to the best explanation, the weight of the evidence, probability, > Popperian (testability, falsifiability) considerations, etc. It is the > paradigm for all empirical inquiry, and it is open to a kind of > radical scepticism (scepticism about induction) that we all reasonably > agree not to worry about... > > What you call "naive knowing," on the other hand (and about which you > ask "*how* do I know this?") is the special preserve of 1st-hand, > 1st-person subjective experience. It is "privileged" (no one has > access to it but me), direct (I do not INFER from evidence that I am > in pain, I know it directly), and it has been described as > "incorrigible" (can I be wrong that I am feeling pain?). .. > > You say that I "naively know" that my performance > is caused by my mind and I "scientifically know" that my mind is caused > by my brain. ...Let me translate that: I know directly that my > performance is caused by my mind, and I infer that my > mind is caused by my brain. I'll go even further (now that we're > steeped in phenomenology): It is part of my EXPERIENCE of my behavior > that it is caused by my mind. [I happen to believe (inferentially) that > "free will" is an illusion, but I admit it's a phenomenological fact > that free will sure doesn't FEEL like an illusion.] We do not experience our > performance in the passive way that we experience sensory input. We > experience it AS something we (our minds) are CAUSING. (In fact, that's > probably the source of our intuitions about what causation IS. I'll > return to this later.) > > So there is a very big difference between my direct knowledge that my > mind causes my behavior and my inference (say, in the dentist's chair) > that my brain causes my mind.
...So, to put it briefly, > what I've called the "informal component" of the Total Turing Test -- > does the candidate act as if it had a mind (i.e., roughly as I would)? -- > appeals to precisely those intuitions, and not the inferential kind, about > brains, etc. > > In summary: There is a vast difference between knowing causes > directly and inferring them; subjective phenomena are unique and > radically different from other phenomena in that they confer this > direct certainty; and inferences about other minds (i.e., about > subjective phenomena in others) are parasitic on these direct > experiences of causation, rather than on ordinary causal inference, > which carries little or no intuitive force in the case of mental > phenomena, in ourselves or others. And rightly not, because mind is a > private, direct, subjective matter, not something that can be > ascertained -- even in the normal inductive sense -- by public, > indirect, objective correlations. Completely agreed that one's knowledge about one's own consciousness is attained in a very different way than is "ordinary" knowledge. The issue is how the provenance of this knowledge bears upon its application to the inductive process for deciding who else has a mind. Rather than answer point-by-point, here is a scenario which I think illustrates the issues: Assume the following sequence of events:

A1. a rock falls on your foot (public external event)
B1. certain neural events occur within you (public internal event)
C1. you experience a pain "in your foot" (private)
D1. you get angry (private)
E1. some more neural events occur (public internal)
F1. you emit a stream of particularly evocative profanity (public external)

(a more AI-oriented account would be:
A1'. someone asks you what 57+62 is
B1'. neural events
C1'. you "mentally" add the 7 and 2, etc..
D1'. you decide to respond
E1'. neural events
F1'. you emit "119" )

Now, how much do you know, and how do you know it?
Regarding the mere existence and, to some level of detail, the quality, of these events (ignoring any causal connections for the moment): You know about A1 and F1 through "normal sensory means" of finding out about the world. You know about C1 and D1 through "direct incorrigible(?) awareness" of your own consciousness (if you're not aware of your own consciousness, who is?) You know about B1 and E1 (upon reflection) only inferentially/scientifically, via textbooks, microscopes, undergraduate courses... Now, even though we know about these things in different ways, they are all perfectly respectable cases of knowledge (not necessarily certain, of course). It's not clear why we should be shy about extrapolating *any* of these chunks of knowledge in other cases...but let's go on. What do we know about the causal connections among these events? Well, if you're an epiphenomenalist, you probably believe something like:

         C1,D1
        /
A1 -> B1 -> E1 -> F1

the point being that mental events may be effects, but not causes, especially of non-mental events. If you're an interactionist:

A1 -> B1 -> C1 -> D1 -> E1 -> F1

(Identity theorists believe B1=C1, E1=D1. Let's ignore them for now. Although, for what it's worth, since they *identify* neural and mental events, I assume that for them brainedness would be, literally, the definitive criterion for mentality.) Now, in either case, what is the basis for our belief in causation, especially causation of and by C1 and D1? This raises tricky questions - what, in general, is the rational basis for belief in causation? Does it always involve an implicit appeal to a kind of "scientific method" of experimentation, etc.? Can we ever detect causation in a single instance, without any knowledge of similar types of events? Does our feeling that we are causing some external event have any value as evidence?
Fortunately, I think that we need to determine *neither* just what are the rational grounds for belief in causation, *nor* whether the epiphenomenal or interactionist picture is true. It's enough just to agree (don't we?) that B1 is a proximate (more than A1, anyway) cause of C1, and that we know this. Of course A1 is also a cause of C1, via B1. Now the only "fishy" thing about one's knowledge that B1 causes C1 is that C1 is a private event. But again, so what? If you're lying on the operating table, and every time the neurosurgeon pokes you at site X, you see a yellow patch, your inference about causal connections is just as sound as if you walked in a room and repeatedly flicked a switch to make the lights go on and off. It's too bad that in the first case the "lights" are private, but that in no way disbars the causation knowledge from being used freely. The main point here is that our knowledge that Bx's cause Cx's is entirely untainted and projectible. The mere fact that it is ultimately grounded in our direct knowledge of our own experience in no way disqualifies it (after all, isn't *all* knowledge ultimately so grounded?). [more below on this] Now then, suppose you see Mr. X undergoing a similar ordeal - A2, B2, ??, ??, E2, F2. You can see, with normal sensory means, that A2 is like A1, and that F2 is like F1 (perhaps somewhat less evocative, but similar). You can find out, with some trouble, that B2 is like B1 and E2 is like E1. On the basis of these observations, you fearlessly induce that Mr. X probably had a C2 and D2 similar to your C1 and D1, ie that he too is conscious, even though you can never observe C2 and D2, either through the normal means you used for A2, B2.. or the "privileged" means you used for C1 and D1. Absent any one of these visible similarities, the induction is weakened. Suppose, for instance he had B2 but not A2 - well OK, he was hallucinating a pain, maybe, but we're not as sure. 
Suppose he had A2, but not B2 - gee, the thing dropped on his foot and he yelled, but we didn't see the characteristic nerve firings.. hmmm (but at least he has a brain). But now suppose we observe an AI-system:

A3.  a rock falls on its foot
BB3. certain electronic events occur within it
C3.  ??
D3.  ??
EE3. some more electronic events occur
F3.  it emits a stream of particularly evocative profanity

Granted A3 and F3 are similar to A1 and F1 - but you know that BB3 is, in many ways, not similar to B1, nor EE3 to E1. Of course, in some structural ways, they may be similar/isomorphic/whatever to B1 and E1, but not nearly as similar as B2 and E2 are (Mr. X's neural events). Surely your reasons for believing that C3, D3 exist/are similar to C1 and D1 are much weaker than for C2, D2, especially given that we agree at least that B1 *caused* C1, and that causation operates among relevantly similar events. Surely it's a much safer bet that B2 is relevantly similar to B1 than is BB3, no? (Even given the decidedly imperfect state of current brain science. We needn't know exactly WHAT our brain events are like before we rationally conclude THAT they are similar. Eg, in 1700, people, if you asked them, probably believed that stars were somewhat similar in their internal structure, the way they worked, even though they didn't have any idea what that structure was.)

The point being that brainedness supplies strong additional support to the hypothesis of consciousness. In fact, I'd be inclined to argue that brainedness is probably stronger evidence (for a conscious entity who knows himself to be brained) for consciousness than performance:

1. Proximate causation is more impressive than mediated causation.
Consider briefly what we would say about someone (a brained someone) who lacked A and F, but had B and E, ie no outward stimulus or response, but in whom we observed neural patterns very similar to those normally characteristic of people feeling a sharp pain in their foot (never mind the grammar). If I were told that he or the AI-system (however sophisticated its performance) was in pain, and I had to bet which one, I'd bet on him, because of the *proximate causation* presumed to hold between B's and C's, but not established at all between BB's and C's.

2. Causation between B's and C's is more firmly established than between D's and F's.

No one seriously doubts that brain events affect one's state of consciousness. Whether one's consciousness counts as a cause of performance is an open question. It certainly feels as if it's true, but I know of no knock-down refutation of epiphenomenalism. You seem to equivocate, sometimes simply saying we KNOW that our intentions cause performance, other times doubting. But the TTT criterion depends by analogy on questionable D-F causation; the brain criterion depends on the less problematic B-C causation.

3. Induction is more firmly based on analogy from causes than effects.

If you believe in the scientific method, you believe "same cause ergo same effect". The same effect *suggests* the same cause, but doesn't strictly imply it, especially when the effect is not proximate. But the TTT criterion is based on the latter (weaker) kind of induction, the brain criterion on the former.

> Consider ordinary scientific knowledge about "unobservables," say,
> about quarks ...Were you to subtract this inferred entity from the
> (complete) theory, the theory would lose its capacity to account for
> all the (objective) data. That's the only reason we infer
> unobservables in the first place, in ordinary science: to help
> predict and causally explain all the observables.
> A complete, utopian
> scientific theory of the "mind," in radical contrast with this, will
> always be just as capable of accounting for all the (objective) data
> (i.e., all the observable data on what organisms and brains do) WITH
> or WITHOUT positing the existence of mind(s)!

Well, not so fast... I agree that others' minds are unobservable in a way rather different from quarks - more on this below. The utopian theory explains all the objective data, as you say, but of course this is NOT all the data. Quite right, if I discount my own consciousness, I have no reason whatever to believe in that of others, but I decline the antecedent, thank you. All *my* data includes subjective data, and I feel perfectly serene concocting a belief system which takes my own consciousness into account. If the objective-utopian theory does not, then I simply conclude that it is incomplete wrt reality, even if not wrt, say, physics.

> In other words, the complete explanatory/predictive theory of organisms
> (and devices) WITH minds will be turing-indistinguishable from the
> complete explanatory/predictive theory of organisms (and devices)
> WITHOUT minds, that simply behave in every observable way AS IF they
> had minds.

So the TTT is in principle incapable of distinguishing between minded and unminded entities? Even I didn't accuse it of that. If this theory does not explain the contents of my own consciousness, it does not completely explain to me everything observable to me. Look, you agree, I believe, that "events in the world" include a large set S, publicly observable, and a lot of little sets P1, P2, ... each of which is observable only by one individual. An epistemological pain in the neck, I agree, but there it is. If utopian theory explains S, but not P1, P2, why shouldn't I hazard a slightly more ambitious formulation (eg, whenever you poke an x-like site in someone's brain, they will experience a yellow patch...)?
Don't we, in fact, all justly believe statements exactly like this??

> That kind of inferential indeterminacy is a lot more serious than the
> underdetermination of ordinary scientific inferences about
> unobservables like quarks, gravitons or strings. And I believe that this
> amounts to a demonstration that all ordinary inferential bets (about
> brain-correlates, etc.) are off when it comes to the mind.

I don't get this at all ...

> The mind (subjectivity, consciousness, the capacity to have
> qualitative experience) is NEITHER an ordinary, intersubjectively
> verifiable objectively observable datum, as in normal science, NOR is
> it an ordinary unobservable inferred entity, forced upon us so that
> we can give a successful explanatory/predictive account of the
> objective data. Yet the mind is undoubtedly real. We know that,
> noninferentially, for one case: our own.

I couldn't agree more.

> Perhaps I should emphasize that in the two "correlations" we are
> talking about -- performance/mind and brain/mind -- the basis for the
> causal inference is radically different. The causal connection between
> my mind and my performance is something I know directly from being the
> performer. There is no corresponding intuition about causation from
> being the possessor of my brain. That's just a correlation, depending
> for its causal interpretation (if any), on what theory or metatheory I
> happen to subscribe to. That's why nothing compelling follows from
> being told what my insides are made of.

Addressing the latter point first, I think there's nothing wrong with pre-theoretic beliefs about causation. If, every time I flip the switch on the wall, the lights come on, I will develop a true justified belief (=knowledge) about the causal links between the switch and the light, even in the absence of any knowledge on my part (or anyone else's for that matter) of how the thing works. But the main issue here is the difference in the way we know about the correlations.
I think this difference is just incidental. We are familiar with A and F type events, not so much with B and E types, and so we develop intuitions regarding the former and not the latter. If you had your brain poked by a neurosurgeon every day, you'd quickly develop intuitions about brain-pokes and yellow patches. Conversely, if you were strapped down or paralyzed from birth, you would not develop intuitions about your mind's causal powers. Further, one may *scientifically* investigate the causal connections among B1, C1, D1, and E1, and among A1 and F1 as well, as long as you're willing to take people's word for it that they're in pain, etc (and why not?). Just because we usually find out about some correlations in certain ways doesn't mean we can't find out about them in others as well. And even if the difference weren't incidental, it is unclear why mysterious Cartesian-type intuitions about causation between Ds and Fs are to be preferred to scientific inferential knowledge about Bs and Cs as a basis for induction.

"It may be nonsense, but at least it's clever nonsense" - Tom Stoppard

John Cugini

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Dec 18 01:39:39 1986
Date: Thu, 18 Dec 86 01:39:33 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #283
Status: R

AIList Digest            Monday, 15 Dec 1986      Volume 4 : Issue 283

Today's Topics:
  Administrivia - Psychnet Correction & European BITNET Server,
  Queries - AI for Photolithography & Real-Time Expert Systems,
  Literature - AI Bibliographic Format & Parallel Alpha-Beta & Little LISPer,
  AI Tools - Lisp Benchmarks & Lisp for Mac

----------------------------------------------------------------------

Date: Mon 15 Dec 86 00:13:22-PST
From: Ken Laws
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Correction -- Psychnet

I gave the wrong address for the editor of the Psychnet Bulletin.
It should have been EPsynet%UHUPVM1.BITNET@WISCVM.WISC.EDU or @WISCVM.ARPA, or whatever syntax your host needs to reach BITNET.

                                        -- Ken Laws

------------------------------

Date: Tue, 9 Dec 86 15:12 N
From: LILIUS%FINFUN.BITNET@WISCVM.WISC.EDU
Subject: Gateway problems ARPA <-> Bitnet

Hello!! With the help of POSTMASTER@FINHUTC (Harri Salminen, Helsinki University of Technology, Finland) I have set up a redistribution list of your digest at LISTSERV@FINHUTC.EARN. The server is mainly used to redistribute the digest through the Finnish networks, but I suppose at least European net-users could subscribe from it. [...] I will be the owner of the list, so please contact me in case of trouble.

Cheers,
Johan Lilius
LILIUS@FINFUN.EARN

------------------------------

Date: Fri, 12 Dec 86 16:03:12 PDT
From: Wendy Fong
Subject: Query - AI for Photolithography

I have been working on a frame- and rule-based diagnosis advisor for photolithography (the patterning of integrated circuits). A description of our system appeared in "An Expert Advisor for Photolithography," IJCAI-85, pp. 411-413. I'd like to hear from others who are applying AI techniques, especially expert systems, to the domain of photolithography. Please reply to me directly at the address below. Thanks in advance.

Wendy Fong                      fong@hplabs.HP.COM (ARPA)
Hewlett-Packard Laboratories    hplabs!fong (UUCP)
1501 Page Mill Road, Palo Alto, CA 94304
(415) 857-5425

------------------------------

Date: Wed, 10 Dec 86 16:57 EST
From: SUNDSTRO%FINABO.BITNET@WISCVM.WISC.EDU
Subject: Real-Time Expert Systems

I would like to be added to the AIList. Areas of interest are expert systems and languages for "on-line" use for evaluating process conditions and reporting them to the operator. I'm searching for a language for programming expert systems that supports subroutines in Fortran or other languages and can be used in a real-time environment. Can you recommend any (we are using a microVax II with VMS)?
Sincerely,
Hans Sundstroem

E-mail address:  SUNDSTRO@FINABO.BITNET
Location:        Swedish University of Turku
                 Heat Engineering Laboratory
                 Piispankatu 8
                 20500 Turku
                 FINLAND

------------------------------

Date: Tue, 9 Dec 86 08:58:55 CST
From: preece%mycroft@gswd-vms.ARPA (Scott E. Preece)
Subject: Re: AI Bibliographic format

The chapter entries referred to were apparently set up for the Unix tbl pre-processor, which supports the description of tables. While it would be nice to have all the bibliographies in a single format, my first concern is that they be adequately tagged, so that the fields can be pulled out and identified, and that the tagging scheme be defined somewhere. Postings as untagged text should be discouraged (well, if that's the way the text is available, I suppose it's preferable to not posting); those of us who would like to do things with the bibliographies other than printing and reading them really need the tagging. Chapter-level tagging is great, but I don't think tbl is the appropriate method.

--
scott preece
gould/csd - urbana
uucp:  ihnp4!uiucdcs!ccvaxa!preece
arpa:  preece@gswd-vms

------------------------------

Date: Wed, 10 Dec 86 10:53:31 MST
From: crs%lambda@LANL.ARPA (Charlie Sorsby)
Subject: Improving usability of bibliographies

In article <8612090611.AA11211@ucbvax.Berkeley.EDU>, HAMILTON%RCSMPA@gmr.com ("William E. Hamilton, Jr.") writes:
> I emphatically agree with the following comment from Robert Amsler:
>
> >Can something be done to minimize the idiosyncratic special character
> >usage in the bibliographies?

First, I would like to mention that I appreciate the fact that those who are kind enough to share their bibliographies with us may not be eager to go to a lot of additional trouble to put them into some standard form. Yet, there is a problem that should be addressed if the posted bibliographies are to provide the greatest possible benefit to net readers.
One thing that I would find helpful is a note (perhaps a single line) at the beginning of the article specifying the program for which the bibliography was formatted. Realize that a "common" system, that you may recognize instantly, may be totally foreign to someone else. A few simple lines such as:

    This bibliography is formatted for the "refer" program
    (see Unix User's Manual Reference Guide and
    Unix User's Manual Supplementary Documents)

may make the article immensely more useful to someone. If the bibliography is written for a particular formatting program and macro package, say so in a similar line, again pointing the reader to a source of additional information, if possible. Remember that we haven't all seen every possible text formatter and bibliographic program.

For moderated news groups, how about collecting bibliographic/formatting programs (where possible) or pointers to where they can be obtained (where not) and archiving them in some way that makes them accessible to the readership of the group? Perhaps this information should be archived in, say, mod.sources instead, with only a pointer in the group that archives the bibliography. Perhaps a "new-user's" article can be routinely, or even automatically, posted, say once a month, to tell new users about this information and where it can be found.

Finally, how about some of you out there, who may have the time and expertise, writing some translators that will allow bibliographies in one format to be translated to another? Then, if one collects a bibliography in a format that doesn't match the program available, it can be translated. These filters can also be archived in mod.sources with pointers in the various groups to which bibliographies are posted.

--
Charlie Sorsby
crs%lambda@lanl.arpa
crs%lambda@lanl.uucp

[I'm not sure I could legally distribute the bib/refer sources, even if they were of use to anyone who didn't have Unix. I'll leave distribution to AT&T, mod.sources, or the program author.
I have answered a few AIList-Request queries about the format, but mostly I have forwarded queries to Lawrence Leff, the compiler of most of the bibliographies. The topic codes, MAG abbreviations, etc., are all under his control. Lawrence has agreed that the tbl formatting in the last installment was not successful. I'm sure he will consider other suggestions for making the material more useful. Personally, I find it amazing that the bibliography format problem still exists. I propose that it should be considered one of the great unsolved problems of AI. Computational AI has sprung from a clique of hackers who develop symbol-processing hardware and languages for the sole purpose of making it easier to get their own work done (developing symbol-processing ...). They and others in CS have applied incredible talent to the parsing of programs, natural language, and other strings. They have given us a few good spelling checkers and perhaps three reasonably good text formatters, and now we are even getting elegant fonts and professional text layout. All this, and yet I have never heard of a program for parsing citations. The best we can do is to use macro expansions for different journals after we have keyed each field by hand. People can generally parse any citation without even knowing the order of the fields or which other syntactic conventions are used. Boldface helps, but even if it were not perceived by the scanner the computer should still be able to figure out volume number, pages, etc. Yet I doubt that this problem has been solved even for the case of citations in a known format. Once someone develops the algorithm (including rules for Col. T.-W. Alphonse de Leon III, Ph.D., with or without misspellings) we can all communicate our citations in human-readable form and let the machines decode them for database use. 
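As a small illustrative sketch of the easier half of the problem -- pulling fields out of already-tagged refer-style entries like those posted in these bibliographies (%A author, %T title, %D date, and so on) -- a few lines of Python suffice. The parser and the sample data below are hypothetical, not any poster's actual tooling; free-text citation parsing, as noted above, remains the hard part.

```python
# Sketch of a parser for refer-style bibliography records.
# Records are separated by blank lines; each field line starts with
# a percent sign and a one-letter code, e.g. "%A Ralph Grishman".
# Hypothetical example code, not an existing tool.

def parse_refer(text):
    """Return a list of records; each record maps a field code
    (e.g. 'A', 'T', 'D') to a list of values, since fields such
    as %A (author) may repeat."""
    records = []
    for chunk in text.strip().split("\n\n"):
        record = {}
        for line in chunk.splitlines():
            if line.startswith("%") and len(line) > 2:
                code, value = line[1], line[3:].strip()
                record.setdefault(code, []).append(value)
        if record:
            records.append(record)
    return records

sample = """\
%A Ralph Grishman
%A Richard Kittredge
%T Analyzing Language in Restricted Domains
%D 1986

%A Janet L. Kolodner
%A Christopher K. Riesbeck
%T Experience, Memory and Reasoning
%D 1986
"""

recs = parse_refer(sample)
print(len(recs))         # 2
print(recs[0]["A"])      # ['Ralph Grishman', 'Richard Kittredge']
print(recs[1]["T"][0])   # Experience, Memory and Reasoning
```

With fields extracted this way, reformatting for a different bibliography program reduces to printing each record back out in the target syntax, which is the translator idea suggested earlier in this digest.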
-- KIL]

------------------------------

Date: Tuesday, 9 December 1986 15:32:49 EST
From: Feng-Hsiung.Hsu@vlsi.cs.cmu.edu
Subject: Clarification

For those who were asking for papers about my work on parallel alpha-beta: sorry, there is no paper available. The talk mentioned on this list is to be my thesis proposal and was posted to the list without my knowledge. The proposal is supposed to be for the CMU community and I have not written anything for general release yet. Cheers.

--Hsu

------------------------------

Date: 9 Dec 86 19:21:08 est
From: Walter Hamscher
Subject: little (*fun*) lisper, title/author?

    Date: Sat, 6 Dec 86 22:56:03 PST
    From: Thomas Eric Brunner

    When I worked in Bracknell, someone there was kind enough to let me
    read a little booklet called (I think) "THE LITTLE LISPER". I haven't
    found it in my post-move-to-sunny-California boxes... Does this ring
    a bell to anyone? I'd like to buy a copy - it was a "nice", and
    illustrated, text on lisp.

``The Little LISPer,'' by Daniel P. Friedman. Published by Science Research Associates (SRA), copyright 1974. ISBN 0-574-19165-8. Haven't seen the book sold in years, though. I bought mine in 1980.

------------------------------

Date: Thu 11 Dec 86 12:46:28-PST
From: Rich Alderson
Subject: Re: AIList Digest V4 #281--Little LISPer

Eric Brunner asked in V4, #281, about the book "The Little LISPer."

"The Little LISPer," by Friedman, is now in its second edition. The first edition was based on a generic Lisp 1.5 (as I recall); the second is based on Scheme. For those locals reading this list, the book is usually available at the Stanford Book Store.

Rich Alderson
Alderson@Score.Stanford.EDU

------------------------------

Date: Wed, 10 Dec 86 10:05:50 EST
From: "William J. Rapaport"
Subject: little (*fun*) lisper, title/author

In reply to brunner@spam.istc.sri.com query about LISP:

Daniel P. Friedman & Matthias Felleisen, _The Little LISPer_, 2nd edition (Chicago: Science Research Associates, 1986).
Another LISP text, along similar lines, is:

Stuart C. Shapiro, _LISP: An Interactive Approach_ (Rockville, MD: Computer Science Press, 1986).

------------------------------

Date: Thu, 11 Dec 86 18:48 EDT
From: DAVEB%UMass.BITNET@WISCVM.WISC.EDU
Subject: RE: Little LISPer, a reply

This is a response to the question about what happened to the Little LISPer. Well, it is still around; in fact it is in a second edition (1986). It is published by Science Research Associates (a subsidiary of IBM). The authors are Daniel P. Friedman and Matthias Felleisen. I picked it up for about $15.

I'm brand new to artificial intelligence, a first-year college student. This is my first reply to the net; I hope it helps. One simple request: would it be possible to have little beginners' discussions or something on the net?

Thanx,
Dave Bayendor, Hampshire College, Amherst, MA

------------------------------

Date: Wed, 10 Dec 86 17:28 N
From: EDH%HNYKUN52.BITNET@WISCVM.WISC.EDU
Subject: RE: little (*fun*) lisper, title/author?

I recently found the Little Lisper's 2nd edition. It has been extended and still is a delight. See the foreword by Jerry Sussman and blurbs by Hofstadter and the like. The working Lisper (non-math) may want to take a glance at the derivation of the Y-operator. Here's the reference:

Friedman, D. & Felleisen, M. (1986). The Little LISPer, Second Edition. Science Research Associates, Chicago.

I paid $14.-

Edward Hoenkamp.

------------------------------

Date: Thu 11 Dec 86 12:52:26-PST
From: Rich Alderson
Subject: Re: AIList Digest V4 #281--Lisp Benchmarks

In V4, #281, Bill Pase asked about the Lisp benchmarks used in Gabriel's book. I believe that they are kept on Sail.Stanford.EDU, the Stanford Artificial Intelligence Laboratory's PDP-10, in the same directory as the Common-Lisp mailing-list archives. A note to Common-Lisp-Request should clarify their availability.
Rich Alderson
Alderson@Score.Stanford.EDU

------------------------------

Date: Wed, 10 Dec 86 08:09:36 PST
From: Stephen E. Miner
Subject: Re: Lisp for Mac

This message is in response to Peter Wisnovsky's request for information about a Lisp for the Macintosh. I couldn't send mail directly to him, so I'm reposting a message I recently sent to the INFO-MAC list. -- Steve

[from INFO-MAC]

I just received the latest ExperNewsletter in the mail. The big news is that ExperTelligence has announced ExperCommon Lisp. (This is the new name for the long-awaited ExperLisp 2.0.) They plan to begin shipping in about a week to people who ordered upgrades to the original ExperLisp. Actually, they want registered owners to re-register by filling out a new address card that comes with the newsletter. (Maybe they're trying to stall for a little extra time?) They told me over the phone that they just want to make sure that they have the correct addresses. OK, so I'll have to wait a couple more weeks.

Here are some of the promises that have me interested:

  * Common Lisp compatibility
  * an extensible class system (for object-oriented programming)
  * Toolbox support through predefined classes
  * on-line symbolic debugger
  * support for "stand alone" applications
  * not copy-protected (since version 1.5)

I have to admit that I was quite disappointed by all of the bugs I found in the old versions of ExperLisp, but I'm still hoping that this new version succeeds. If anyone has any experience with ExperCommon Lisp, I'd like to hear from you. I have a hard time figuring out their marketing and pricing strategies, so please call them directly if you're interested. Basically, it's pretty expensive even after you talk them into giving you a discount.
ExperTelligence can be reached by phone at:

    (800) 828-0113  USA
    (800) 826-6144  CA

-- Steve Miner
miner@spam.istc.sri.com

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Thu Dec 18 01:42:10 1986
Date: Thu, 18 Dec 86 01:42:03 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #284
Status: R

AIList Digest            Tuesday, 16 Dec 1986     Volume 4 : Issue 284

Today's Topics:
  Queries - Salary Survey & Dynamic Properties in Lisp,
  Fiction - Sentient-Computer Novels,
  Conference - CS and Statistics,
  Book - The T Programming Language,
  Philosophy - Consciousness,
  Policy - Debate of Proposed Split

----------------------------------------------------------------------

Date: Mon 15 Dec 86 10:39:42-CST
From: David Throop
Subject: Q: Salary Survey

Are there any salary surveys for workers in AI? I'd be interested in things covering both industrial and academic jobs, and for both MS and PhD.

David R Throop

------------------------------

Date: 15 Dec 86 16:22:03 GMT
From: techunix.BITNET!ephraim@ucbvax.Berkeley.EDU (Ephraim Silverberg)
Subject: Dynamic Properties

I am looking for papers/projects concerning the implementation (on non-Lisp machines, in particular) of dynamic properties in Lisp and other languages. Please reply by e-mail.

-------------------------------------------------------------------------------
Ephraim Silverberg,
Faculty of Electrical Engineering,
Israel Institute of Technology, Haifa, Israel.

BITNET  : ephraim@techunix
ARPANET : ephraim%techunix.bitnet@wiscvm.arpa
CSNET   : ephraim%techunix.bitnet@csnet-relay
UUCP    : {almost anywhere}!ucbvax!ephraim@techunix.bitnet

------------------------------

Date: 15 Dec 86 22:54:21 GMT
From: gknight@ngp.utexas.edu (Gary Knight)
Subject: Report -- Canonical List of Sentient Computer Novels

Hi gang, It seems that some of you are getting impatient waiting for my canonical list of sentient computer novels.
I could post a simple list of everything submitted to me in a couple of days, but I wanted to do more than that. In the first place, some of the titles weren't about sentient computers at all, but just had a computer somewhere in the plot. Secondly, I was after novels that dealt with sentient computers as *principal characters*. I wanted stories about their genesis, growth, development, capabilities, etc. But a lot of the titles I received had sentient computers that were just props in the plot and were not themselves the object of investigation. So, what I planned to do was read all of them (at least far enough to know if they qualify or not) and then list each with a non-spoiler paragraph descriptor. But I'll do both -- in a few days I'll just barf back the list (without repeats, of course!). Then later in the holidays, when I finish my reading, I'll post the list (with commentary) that I originally intended. Okay?

In the meantime, for the *really* impatient who are looking for some good holiday reads, the following meet my "principal character" criterion and are definitely worth perusing (asterisks are *highly* recommended):

  THE ADOLESCENCE OF P-1 (*)
  VALENTINA: SOUL IN SAPPHIRE
  MICHAELMAS
  THE TWO FACES OF TOMORROW (*)
  WHEN HARLIE WAS ONE
  CYBERNETIC SAMURAI (*)
  COLOSSUS (trilogy)

That oughta hold you 'till Christmas, anyway.

--------------------------------------------------------------------
Gary Knight, The University of Texas at Austin.
"All these things will be lost in time, like tears in the rain." -- Roy Baty

------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: AI at Upcoming Conference

Computer Science and Statistics: 19th Symposium on the Interface
March 8-11, 1987
Hershey Philadelphia Hotel, Philadelphia, Pennsylvania

An Expert System for Analysis of Clinical Trial Data
  Madeline Bauer, American College of Radiology; J. M. Weiner, R. Horowitz, University of Southern California

Expert Systems for Problem Formulation in Operations Research
  Mark Gershon, Jyoti Paul, Temple University

Experiences with an Automatic Transfer Function Algorithm
  David P. Reilly, Automatic Forecasting Systems Inc.

Computational Geometry and Graphics for Modern Morphometric Analysis
  Fred L. Bookstein, University of Michigan

Procrustes Techniques for the Analysis of Shape and Shape Change
  Colin R. Goodal, Princeton University; Anjana Bose, Columbia University

Multivariate Analysis of Size and Shape
  James E. Mosimann, National Institutes of Health

------------------------------

Date: Mon, 15 Dec 86 14:47:26 est
From: slade%yale-ring@YALE.ARPA
Subject: New Book

"The T Programming Language: A Dialect of LISP"
Stephen Slade. Prentice-Hall, Inc., 1987, 448 pp., $19.95.
Telephone orders via (201) 767-5049.

From the back cover:

The T programming language is a version of the SCHEME dialect of LISP. The T language was designed and implemented at the Yale Computer Science Department. T offers a combination of procedural and object-oriented programming styles. ... The T language has been used in a range of college courses including artificial intelligence, data structures, computer systems, and compiler design. ... The T language is available on a variety of machines, and Appendix A explains how to get a copy of T. For readers who have another version of LISP, Appendix A discusses the adaptation of other dialects of LISP to the style of programming afforded by T. Many examples, exercises, and sample programs demonstrate fundamental programming concepts in familiar domains.

------------------------------

Date: Wed, 10 Dec 86 20:11:01 GMT
From: mcvax!ukc!rjf@seismo.CSS.GOV
Subject: Consciousness

In <960671.861204.KFL@MX.LCS.MIT.EDU> KFL%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU ("Keith F. Lynch") writes:

> From: mcvax!ukc!rjf@seismo.css.gov (R.J.Faichney)
>
> ...
to ascribe consciousness to something man-made, no matter how perfect
> its performance, will always require an effort of will...
>
> The net is an excellent medium for Turing tests...

> Let me play the Turing game in reverse for a moment, and ask if you
> would bet a lot of money that nobody would regard a computer as
> conscious if it were to have written this message?
> ...Keith

I would certainly bet that *most* people would not regard a computer as conscious if it had written your message, or even mine. If someone had lived for several years with a supposed person who turned out to be a robot, they would be severely shocked when they discovered that fact, and would *not* say 'Well, you certainly had me fooled. I guess you robots must be conscious after all.'

I explained in an earlier posting why I believe the naive reaction to be important. The problem is not just about what would deserve the attribution of consciousness, but about what we feel about making that attribution. And such feelings go much deeper than mere prejudice. I think they go as deep as love and sex, and are equally valid and valuable. I often turn machines on, but they don't do the same for me - they're not good enough, because they're not folks. And never will be.

(A promise for all you hard AIers - no more on this, from me, at least. Well - no more in mod.ai, anyway.)

Robin Faichney  ("My employers don't know anything about this.")
UUCP:  ...mcvax!ukc!rjf       Post:  RJ Faichney, Computing Laboratory,
JANET: rjf@uk.ac.ukc                 The University, Canterbury,
Phone: 0227 66822 Ext 7681           Kent. CT2 7NF

------------------------------

Date: Tue 9 Dec 86 10:36-EST
From: Randall Davis
Subject: splitting the list

There is no loss in splitting the list and a clear net gain.

Issue: What You Will Receive

Those who are concerned about seeing both kinds of material have a trivial solution available: ensure that you are on both mailing lists. Those who wish to be spared the philosophical discussions can escape.
Issue: 400 Line Diatribes are OK Because You Can Skip Them

Not if, as is the case for some people, you have a slow terminal and a less sophisticated mailer (one that reads the message as a whole, rather than splitting it into individual contributions).

Issue: Where Can the Discussion Happen

talk.philosophy.tech

Issue: How to Classify Messages

It is almost always completely obvious how to classify a given message. In the event the decision is at all debatable the moderator should flip a coin. The consequences are sufficiently minor that we can all live with it.

Issue: Can You Do AI Without Philosophizing About Mind, Consciousness, Etc.

Yes.

Issue: SHOULD You Do AI Without Philosophizing About Mind, Consciousness, Etc.

First topic of discussion for talk.philosophy.tech.

------------------------------

Date: 11 Dec 1986 0109-EST
From: Bruce Krulwich
Subject: splitting the AIList

the whole question of what people want to appear on AIList is a question of degree. i enjoyed all of the philo-type talk when it started, but it has gotten to the point where entire digests are devoted to one person's posts on the subject. this, i think, is too much. the discussion should move to e-mail. however, i am against splitting the list in two assuming that discussions are kept within reason.

Bruce Krulwich
arpa: krulwich@c.cs.cmu.edu
bitnet: bk0a%tc.cc.cmu.edu@cmuccvma.bitnet

"if you're right 95% of the time, why worry about the other 3% ??"

**any other former B-CC'ers out there??**

------------------------------

Date: 12 Dec 86 12:05:03 GMT
From: mcvax!ukc!cheviot!rosa@seismo.css.gov (Rosa Michaelson)
Subject: Re: Proposed: a split of this group

Please split the group. I have an aversion to the words turing, cognitive, brain, mind, identity, intelligence, etc (no not etc) which has been learnt through reading the various ai newsgroups. Do you realise that we get at least three copies of each news item (through various digests) as well?????????????
------------------------------ Date: 12 Dec 86 16:15:16 GMT From: uh2%psuvm.bitnet@ucbvax.Berkeley.EDU Subject: Re: Proposed: a split of this group I vote NO. Don't split this group. If you want to see nuts-and-bolts questions in this group, then POST THEM!! Suitable discussion will follow, I am sure. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 18 01:42:21 1986 Date: Thu, 18 Dec 86 01:42:14 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #285 Status: R AIList Digest Tuesday, 16 Dec 1986 Volume 4 : Issue 285 Today's Topics: Seminars - Theory of Imperative Lisp (SRI) & Massively Concurrent Knowledge Representation (MIT) & Logic of Knowledge, Action, and Communication (BBN) & Concepts Defined via Approximate Theories (SU) & Commonsense Reasoning about Solid Objects (MIT) & Classification of States and Events (MIT), Course - Advanced Topics in Databases ---------------------------------------------------------------------- Date: Tue 9 Dec 86 17:35:12-PST From: Amy Lansky Subject: Seminar - Theory of Imperative Lisp (SRI) THEORY OF IMPERATIVE LISP Richard Waldinger (WALDINGER@SRI-AI) Artificial Intelligence Center, SRI International 11:00 AM, MONDAY, December 15 SRI International, Building E, Room EJ228 Imperative LISP is LISP with destructive operations, such as rplaca and setq, which can alter data structures. We present a theory, based on situational logic, intended for the specification and automatic synthesis of imperative LISP programs. Hand derivations of programs for destructive reverse and append have been conducted within this theory. ------------------------------ Date: Sat, 13 Dec 86 14:06:52 EST From: "Steven A. Swernofsky" Subject: Seminar - Massively Concurrent Knowledge Representation (MIT) Date: Mon, 8 Dec 1986 17:34 EST From: JHC%OZ.AI.MIT.EDU at XX.LCS.MIT.EDU MASSIVELY CONCURRENT SYSTEMS FOR KNOWLEDGE REPRESENTATION AND REASONING Gul A. 
Agha, MIT AI Lab

The problem of reasoning is central to Artificial Intelligence systems. The effectiveness of a "reasoning method" is intimately tied to the "knowledge representation" scheme on which it operates. The seminar will discuss some recent theoretical work in inheritance-based models for knowledge representation. Problems germane to inheritance-based models include exception handling, multiple inheritance, and viewpoints. The talk will outline some mechanisms that have been proposed to address these issues. Methods of reasoning such as first-order logic, nonmonotonic logic and due process reasoning will be related to the knowledge representation schemes.

Thursday, December 11, 4pm
NE43 8th floor playroom

------------------------------

Date: Sat, 13 Dec 86 15:13:05 EST
From: "Steven A. Swernofsky"
Subject: Seminar - Logic of Knowledge, Action, and Communication (BBN)

Date: Thu 11 Dec 86 17:00:25-EST
From: AHAAS at G.BBN.COM

Another BBN AI Seminar: Leora Morgenstern of New York University will speak on "Foundations of a Logic of Knowledge, Action and Communication" at 10:30 on Thursday December 18 in the 2nd floor large conference room at 10 Moulton St. Her abstract:

Most AI planners work on the assumption that they have complete knowledge of their problem domain, so that formulating a plan consists of searching through some pre-packaged list of action operators for an action sequence that achieves some desired goal. Real life planning rarely works this way because we usually don't have enough information to map out a detailed plan of action when we start out. Instead, we initially draw up a sketchy plan and fill in details as we proceed and gain more exact information about the world. This talk will present a formalism that is expressive enough to describe this flexible planning process. We begin by discussing various requirements that such a formalism must meet, and present a syntactic theory of knowledge that meets these requirements.
We discuss the paradoxes, such as the Knower Paradox, that arise from syntactic treatments of knowledge, and propose a solution based on Kripke's solution of the Liar Paradox. Next, we present a theory of action that is powerful enough to describe partial plans and joint-effort plans. We demonstrate how we can integrate this theory with an Austinian theory of communicative acts. Finally, we give solutions to the Knowledge Preconditions and Ignorant Agent Problems as part of our integrated theory of planning. This talk will include comparisons of our theory with other syntactic and modal theories such as Konolige's and Moore's. We will demonstrate that our theory is powerful enough to solve classes of problems that these theories cannot handle. ------------------------------ Date: 10 Dec 86 1242 PST From: Vladimir Lifschitz Subject: Seminar - Concepts Defined via Approximate Theories (SU) Commonsense and Non-Monotonic Reasoning Seminar CONCEPTS DEFINED VIA APPROXIMATE THEORIES John McCarthy Thursday, December 11, 4pm Jordan 050 Some important concepts for AI including "it can", "it believes" and counterfactuals may be precisely definable in theories that approximate reality in a generalized sense. Useful approximate theories of action are typically non-deterministic even when they approximate deterministic systems. The concepts are useful to the extent that the approximate theory answers questions about the real world, but they often become imprecise when attempts are made to define them directly in real world terms. The lecture will discuss the sense of approximation, give some examples, and make connections with the previous discussion of contexts. Some of the material is discussed in my paper "Ascribing Mental Qualities to Machines". ------------------------------ Date: Sat, 13 Dec 86 23:08:36 EST From: "Steven A. Swernofsky" Subject: Seminar - Commonsense Reasoning about Solid Objects (MIT) Date: 1 Dec 1986 13:57 EST (Mon) From: Daniel S. 
Weld ERNEST DAVIS A Logical Framework for Commonsense Reasoning about Solid Objects When a small die is dropped inside a large funnel, it comes out the bottom. How do you know that? I will discuss why this problem is harder than it looks; what kinds of knowledge could be used to solve it; and how this knowledge can be expressed formally. Friday, December 5; 1:00pm; 8th Floor Playroom ------------------------------ Date: Sun, 14 Dec 86 00:28:51 EST From: "Steven A. Swernofsky" Subject: Seminar - Classification of States and Events (MIT) Date: Fri 5 Dec 86 18:50:06-EST From: LPOLANYI at G.BBN.COM LINGUISTICS AND COGNITION SEMINAR SERIES SCIENCE DEVELOPMENT PROGRAM - BBN LABS TOPIC: On the Classification of States and Events SPEAKER: Professor Henk Verkuyl University of Utrecht/ UMASS Amherst WHEN & WHERE: Thursday December 11, 1986 2nd Floor Large Conference Room BBN Labs 10 Moulton Street Cambridge, MA ABSTRACT: In this talk, I shall argue that Zeno Vendler in his original classification of aspectual categories into STATES, ACTIVITIES, ACCOMPLISHMENTS and ACHIEVEMENTS basically proposed a two parameter cross classification (PROCESS and DEFINITENESS/COUNT) but that he redundantly introduced a third parameter, INTERVAL. This INTERVAL parameter based on the length of a temporal unit has led, over the years, to many problems and misunderstandings. In the talk, I shall argue that a re-analysis of aspectual categories based on partial orderings provides a more satisfying treatment of natural language aspectual phenomena. 
------------------------------

Date: Thu, 11 Dec 86 10:37 EST
From: Tim Finin
Subject: Course - Advanced Topics in Databases

Forwarded From: Peter Buneman on Thu 11 Dec 1986 at 9:37
Subj: Course announcement - CIS 684, Advanced Topics in Databases

Advanced Topics in Databases
Instructors: Peter Buneman and Susan Davidson

The topics covered next semester will include heterogeneous and distributed databases, deductive databases and (time permitting) database theory. The course will be taught from a collection of papers which will be made available as a bulk pack from the copy center. Students will be expected to participate in the last third of the semester with presentations.

Heterogeneous Databases

We will examine the various data models and the integration of programming languages and databases. Particular attention will be given to the representation of databases as data types and strategies for the treatment of persistent data. There are some interesting recent programming languages that exploit type inheritance or an ``object-oriented'' approach to databases.

Distributed Databases

Topics will include the design of a distributed system, the translation of global to fragment queries, query amelioration, concurrency control, recovery, and an overview of sample systems.

Deductive Databases

There is a close connection between logic programming and relational query languages. We will examine the representation of database queries as logic programs, implementation problems and some extensions to the relational model that fit better with logic programs. If time permits we shall examine some of the underlying theory.
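[As a small taste of the logic-programming/relational connection described above, here is a sketch of the classic recursive `ancestor` query evaluated bottom-up to a fixpoint, in the manner of a deductive database. The facts and the code are an invented illustration, not course material.]

```python
# A minimal deductive-database illustration (invented example): the
# recursive `ancestor` query, which basic relational algebra cannot
# express, evaluated bottom-up to a fixpoint.
parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dana")}

def ancestors(parent_facts):
    """ancestor(X,Y) :- parent(X,Y).
       ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y)."""
    anc = set(parent_facts)                 # base rule
    while True:
        new = {(x, y)
               for (x, z) in parent_facts   # recursive rule, one round
               for (z2, y) in anc
               if z == z2} - anc
        if not new:                         # fixpoint reached
            return anc
        anc |= new

result = ancestors(parent)
assert ("ann", "dana") in result            # derived by two recursive steps
```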
Recommended text: Distributed Databases: Systems and Principles, Ceri and Pelagatti, McGraw Hill (1985)

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Dec 20 03:32:29 1986
Date: Sat, 20 Dec 86 03:32:23 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #286
Status: R

AIList Digest Wednesday, 17 Dec 1986 Volume 4 : Issue 286

Today's Topics:
Conferences - Knowledge Engineering Using Forth &
  Philadelphia Regional AI Symposium &
  Directions and Implications of Advanced Computing &
  Foundations of Computer Science &
  ACM Symp. on Principles of Database Systems

----------------------------------------------------------------------

Date: 9 Dec 86 06:37:08 GMT
From: decvax!linus!philabs!nyit!aca@ucbvax.Berkeley.EDU (Al Arthur)
Subject: Conference - Knowledge Engineering Using Forth

*** Announcement ****

1 9 8 7   C o m p u t e r   T e c h n o l o g y   S y m p o s i u m

Knowledge Engineering Using Forth

Date: 23 January, 1987
Place: C.W. Post Campus of Long Island University (Long Island, New York)

About The Symposium: The Computer Technology Symposium on Knowledge Engineering Using Forth is being held at Long Island University's C.W. Post Campus and is being sponsored in collaboration with the Institute for Applied Forth Research. The symposium is an outgrowth of the increased use of Forth for real-time artificial intelligence applications. Forth's simplicity allows it to be used in high-performance RISC-like computers and its extensibility has given it a richness comparable to LISP. Together, these two aspects promise a future of real-time knowledge engineering. The Symposium presentations will include invited lecturers, tutorials, panel discussions, and exhibits on Forth technology. The symposium will benefit technical personnel involved in real-time applications and/or knowledge engineering as well as managers of such projects.

About the C.W.
Post Campus is located on the scenic North Shore of Long Island and is conveniently located near major highways as well as the Long Island Railroad.

S p e a k e r   S e s s i o n s

"Pulse Code Neural Networks"
    Dr. William Dress (Oak Ridge National Laboratory)

"An Extensible Language for Autonomous Robots"
    Mr. Lawrence Forsley (Laboratory for Laser Energetics)

"List Processing and Object-Oriented Programming Using Forth"
    Mr. Dennis Feucht (Innovatia, Inc.)

"The Application of Artificial Intelligence Technology to Process Control"
    Major Steven LeClair (Wright Patterson Air Force Base)

R e g i s t r a t i o n

The symposium registration fee includes all lectures, tutorials, panel discussions, and exhibits. Also included in the fee is a continental breakfast, refreshments, a luncheon, and a wine and cheese reception to close out the day. Each registrant will receive symposium proceedings.

*PLEASE MAIL YOUR REGISTRATION BY JANUARY 7th to:*
Knowledge Engineering Symposium
Department of Computer Science
C.W. Post Campus
Long Island University
Brookville, N.Y. 11548

*PLEASE INCLUDE IN YOUR REGISTRATION*
1) Your Name
2) Your Address
3) Your Telephone Number
4) Check for $150 payable to: "Knowledge Engineering Symposium"
5) Indicate if you would prefer the Vegetarian option for Luncheon

Need more information, or directions? Call Ms. Rita Moore at: (516) 299-2293 or send electronic mail to the following USENET address...

--
UUCP address: ...{allegra,decvax,seismo,vax135,ihnp4,mcvax}!philabs!nyit!aca
CSNET address: ...nyit!aca%suny-sb.CSnet@CSnet-Relay.ARPA
US Mail: Alex Arthur / Systems Programmer
New York Institute of Technology
Computer Graphics Laboratory
Old Westbury, New York 11568
Phone: (516) 686-7644

------------------------------

Date: Fri, 12 Dec 86 13:19 EST
From: Tim Finin
Subject: Conference - Philadelphia Regional AI Symposium

Villanova University and Symbolics are sponsoring a Philadelphia regional AI symposium on January 13th and 14th.
The meeting will be held at the Connelly Center at Villanova. It will feature several tutorial talks and a number of presentations on current research by people from Penn, Princeton, Delaware, Temple, Lehigh, Drexel, Villanova, Swarthmore and Dickinson. There is a small fee for attending ($5.00 for one day and $8.00 for both days). For more information, contact Alain Phares of Villanova (215-645-4861) or John Currie of Symbolics (215-828-8011).

------------------------------

Date: Sun, 14 Dec 86 17:31:49 PST
From: jon@june.cs.washington.edu (Jon Jacky)
Subject: Conference - Directions and Implications of Advanced Computing

Call for Papers

DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING
Seattle, Washington
July 12, 1987

The adoption of current computing technology, and of technologies that seem likely to emerge in the near future, will have a significant impact on the military, on financial affairs, on privacy and civil liberty, on the medical and educational professions, and on commerce and business. The aim of the symposium is to consider these influences in a social and political context as well as a technical one. The social implications of current computing technology, particularly in artificial intelligence, are such that attempts to separate science and policy are unrealistic. We therefore solicit papers that directly address the wide range of ethical and moral questions that lie at the junction of science and policy. Within this broad context, we request papers that address the following particular topics. The scope of the topics includes, but is not limited to, the sub-topics listed.

RESEARCH FUNDING
- Sources of Research Funding
- Effects of Research Funding
- Funding Alternatives

DEFENSE APPLICATIONS
- Machine Autonomy and the Conduct of War
- Practical Limits to the Automation of War
- Can An Automated Defense System Make War Obsolete?
COMPUTING IN A DEMOCRATIC SOCIETY
- Community Access
- Computerized Voting
- Civil Liberties
- Risks of the New Technology
- Computing and the Future of Work

COMPUTERS IN THE PUBLIC INTEREST
- Computing Access for Handicapped People
- Resource Modeling
- Arbitration and Conflict Resolution
- Educational, Medical and Legal Software

Submissions will be read by members of the program committee, with the assistance of outside referees. Tentative program committee includes Andrew Black (U. WA), Alan Borning (U. WA), Jonathan Jacky (U. WA), Nancy Leveson (UCI), Abbe Mowshowitz (CCNY), Herb Simon (CMU) and Terry Winograd (Stanford). Complete papers, not exceeding 6000 words, should include an abstract and a heading indicating to which topic they relate. Papers related to AI and/or in-progress work will be favored. Submissions will be judged on clarity, insight, significance, and originality. Papers (3 copies) are due by April 1, 1987. Notices of acceptance or rejection will be mailed by May 1, 1987. Camera ready copy will be due by June 1, 1987. Proceedings will be distributed at the Symposium, and will be on sale during the 1987 AAAI conference. For further information contact Jonathan Jacky (206-548-4117) or Doug Schuler (206-783-0145).

Sponsored by Computer Professionals for Social Responsibility, P.O. Box 85481, Seattle, WA 98105

------------------------------

Date: Mon, 15 Dec 86 12:36:00 est
From: Dave Bray
Subject: Conference - Foundations of Computer Science

CALL FOR PAPERS
28th FOCS Symposium

The 28th Annual IEEE Symposium on Foundations of Computer Science will be held at the Marina Beach Hotel in Los Angeles, California on October 12--14, 1987. The Symposium is sponsored by the IEEE Computer Society's Technical Committee on Mathematical Foundations of Computing in cooperation with the University of Southern California. Papers presenting original research on theoretical aspects of computer science are being sought.
Suggested topic areas include:
- Algorithms and Data Structures
- Computability and Complexity Theory
- Cryptography
- Data Bases
- Formal Languages and Automata
- Logic of Programs
- Parallel and Distributed Computing
- Robotics and Machine Learning
- Semantics of Programming Languages
- VLSI Computation and Design

Persons wishing to submit a paper should send 15 copies of a detailed abstract by APRIL 6, 1987 to the Program Committee Chair:

Tom Leighton
Room 2-377
Department of Mathematics
Massachusetts Institute of Technology
Cambridge, MA 02139

Authors will be notified of acceptance or rejection by June 8, 1987. A final copy of each accepted paper, typed on special forms for inclusion in the Symposium Proceedings, will be due by July 27, 1987.

IMPORTANT. Because of the large number of submissions anticipated, authors are advised to prepare their abstracts carefully and to submit them on time. In order to be considered, an abstract must be airmail postmarked by April 6, 1987 or be received by April 13, 1987. THESE DEADLINES WILL BE STRICTLY ENFORCED. Additions and/or revised abstracts received after these deadlines will not be considered.

Submission Format. To facilitate reading by the program committee, it is strongly recommended that each submission begin with a succinct statement of the problems that are considered in the paper, the main results that are achieved, and an explanation of the significance of the work as well as its relevance to past research. This material should be readily understandable by non-specialists. Technical development of the work, directed to the specialist, should follow as appropriate. The entire extended abstract should not exceed 2,500 words (10 double-spaced pages). NOTE: Papers that deviate significantly from these guidelines risk rejection without consideration of their merits.

Meeting Format. Authors of accepted papers will be expected to present their work at the Symposium.
The format of the meeting, including time allocations for presentations and scheduling of sessions, will be determined by the Program Committee. If submissions warrant, the committee will compose a program of parallel sessions.

Machtey Award for Best Student Paper. This award of up to $400, to help defray expenses for attending the Symposium, will be given for that paper which the Program Committee judges to be the most outstanding paper written solely by a student or students. To be considered for the award, an abstract must be accompanied by a letter identifying all authors as full-time students at the time of submission. At its discretion, the Committee may decline to make the award or may split the award among two or more papers.

Program Committee Chair:
Tom Leighton
Rm. 2-377
Department of Mathematics
Massachusetts Institute of Technology
Cambridge, MA 02139

Program Committee: Laszlo Babai, Michael Ben-Or, Michael Fischer, Shafi Goldwasser, Leo Guibas, Joseph Halpern, Paris Kanellakis, Rao Kosaraju, Michael Paterson, Robert Tarjan, Uzi Vishkin

Conference Chair:
Ashok Chandra
IBM T. J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598

Local Arrangements Chairs:
Seymour Ginsburg and Ming-Deh Huang
Computer Science Department
University of Southern California
Los Angeles, CA 90089

------------------------------

Date: Tue, 16 Dec 86 22:09:47 PST
From: Moshe Vardi
Subject: Conference - ACM Symp. on Principles of Database Systems

Sixth ACM SIGACT-SIGMOD-SIGART Symposium on PRINCIPLES OF DATABASE SYSTEMS
March 22-25, 1987
San Diego, California

INFORMATION

[Despite the SIGART tie-in, I judged this lengthy list of sessions (and location/climate/etc.) to be of insufficient AIList interest. Contact the author if you need a copy.
-- KIL]

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Dec 20 03:32:48 1986
Date: Sat, 20 Dec 86 03:32:42 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #287
Status: R

AIList Digest Thursday, 18 Dec 1986 Volume 4 : Issue 287

Today's Topics:
Queries - Chess Program Needed & Text-to-Speech Conversion,
Administrivia - Usenet Mod.ai/Comp.ai Distribution,
Natural Language - Books on Parsing,
Philosophy - Multidimensional Nature of Intelligence,
Ethics - AI and Consequences

----------------------------------------------------------------------

Date: 15 Dec 86 18:08:20 GMT
From: "S. Sridhar"
Subject: Chess Program Needed

I'm looking for sources of a "reasonably" well designed Chess program. I'd prefer to get a program written in Lisp, but would gladly settle for one written in any other "reasonable" programming language. My only requirement is that the said program should be "reasonably" (for the third time!) non-trivial and should employ "adequately reasonable" (yet again!!) game strategies. Does anyone have such a program whose source s/he would be willing to send me? Can you give me pointers?

Thanks for any help,
-- S. Sridhar
sridhar@tekecs.tektronix.COM

------------------------------

Date: Tue, 16 Dec 86 17:14:44 pst
From: Umesh Joglekar
Subject: Text-to-speech conversion

I am a graduate student at UC Santa Barbara. At present I am writing a dissertation on text-to-speech conversion using a neural network model (similar to the NETtalk experiment conducted by Sejnowski & Rosenberg.) I would like to get information about people working on text-to-speech conversion projects using different approaches. Thanks.

- Umesh D. Joglekar
e-mail: joglekar@riacs.arpa
USnail: Umesh D.
Joglekar, Mail Stop 230-5, NASA Ames Research Center, Moffett Field, Ca 94035, (415) 694-6921

------------------------------

From: SCHNEIDER Daniel
Subject: Re: Administrivia - BITNET Distribution
Newsgroups: mod.ai
Organization: University of Geneva, Switzerland

HI,

Lots of people in Switzerland (and elsewhere too I guess) have access to both BITNET and usenet, sometimes on two different machines, e.g. there is very often a VMS/BITNET and a UNIX/usenet VAX around. ... so maybe you could ask all these people to use rn (i.e. the newsgroup system) on unix and tell them that this way they save a lot of space for *themselves*. A lot of people (even on unix) just don't know about mod.ai!

-Daniel

Daniel K.Schneider
ISSCO, University of Geneva, 54 route des Acacias, 1227 Carouge (Switzerland)
Tel. (..41) (22) 20 93 33 ext. 2114

to VMS/BITNET:
  BITNET: SCHNEIDER@CGEUGE51
  ARPA: SCHNEIDER%CGEUGE51.BITNET@WISCVM
to UNIX/EAN (preferable):
  shneider%cui.unige.chunet@CERNVAX
  shneider@cui.unige.CHUNET
  or: shneider%cui.unige.chunet@ubc.csnet
  uucp: mcvax!cernvax!cui!shneider

------------------------------

Date: 17 Dec 86 20:07:08 GMT
From: rutgers!atux01!jlc@lll-crg.arpa (J. Collymore)
Subject: Responses to Query on Books on Parsing

A few weeks ago I posted a request for information on books that talked about parsing principles in some detail. I got more requests to post my responses than I actually got in responses. However, I am hereby posting those responses that I did receive. Thanks again to those of you who responded with information.

Jim Collymore

===============================================================================

From seismo!unido!ecrcvax!tomasic
Date: Thu, 4 Dec 86 10:18:33 -0100
From: "Anthony Tomasic"
Subject: natural language

A good book which addresses natural language understanding in a PROLOG context is:

Natural Language Access to Databases
by Mark Wallace
Ellis Horwood Publishing (1982?)
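[Since this thread is about parsing principles, a concrete miniature may help: below is a recursive-descent parser for an invented toy context-free grammar over a tiny English-like fragment. It is only a sketch of the technique the recommended books treat at length, not code from any of them.]

```python
# Toy CFG and recursive-descent parser -- an invented illustration of
# the parsing principles discussed in this thread.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat"},
    "V":   {"sees", "sleeps"},
}

def parse(symbol, words, i):
    """Try to derive `symbol` from words[i:]; return (tree, next_i) or None."""
    if symbol in LEXICON:                      # preterminal: match one word
        if i < len(words) and words[i] in LEXICON[symbol]:
            return (symbol, words[i]), i + 1
        return None
    for expansion in GRAMMAR[symbol]:          # nonterminal: try each rule
        children, j, ok = [], i, True
        for child in expansion:
            result = parse(child, words, j)
            if result is None:
                ok = False
                break
            tree, j = result
            children.append(tree)
        if ok:
            return (symbol, children), j
    return None

def accepts(sentence):
    """True iff the whole sentence derives from S."""
    words = sentence.split()
    result = parse("S", words, 0)
    return result is not None and result[1] == len(words)

print(accepts("the dog sees a cat"))   # True
print(accepts("dog the sees"))         # False
```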
From akgua!bullwinkle!cornell!belmonte@svax.cs.cornell.edu
Date: Sun, 7 Dec 86 00:55:17 EST
From: ihnp4!bullwinkle!cornell!belmonte@svax.cs.cornell.edu (Matthew Belmonte)
Subject: Re: References needed for learning parsing principles
Organization: Cornell Univ. CS Dept.

_Compiler_Construction:__Theory_and_Practice_
William A. Barrett, Rodney M. Bates, David A. Gustafson, John D. Couch
Science Research Associates, 1986 (2nd edition)

This is what gave me a lot of the background I needed for an internship I had, working on a natural language parsing project that used a CFG to represent a very restricted, but English-sounding, subset of English. It does not address any implementations in terms of languages and operating systems, but it does, I feel, present parsing concepts in a clear way. It is not, however, specific to the parsing of natural languages.

--
"The spirit is willing but the flesh is under court injunction."
Matthew Belmonte
ARPA: BITNET: UUCP: ..!decvax!duke!duknbsr!mkb

------------------------------

Date: Wed, 17 Dec 86 00:14:24 PST
From: larry@Jpl-VLSI.ARPA
Subject: Multidimensional nature of intelligence

I don't think we need a practitioners' and a philosophers' AI discussion list, but more effort to bring the two types of discussion together. This is such an effort.

There seems to me to be little gain from giving a Turing Test, which measures intelligence on a single dimension with a binary scale. Further, it's only useful after one's work has produced a (hopefully) intelligent machine, giving little help in the creation of the machine. More useful would be a test that treated intelligence as a multi-dimensional activity, somewhat like the various clinical IQ tests but considerably expanded, perhaps with social or emotional dimensions. I'd also like to see more microscopic measures, based on my belief that "higher" intellectual capabilities are composites of essentially independent capacities.
Memory and emotion, for instance, seem to depend upon quite different mechanisms. (Though in an organism as complex as a human there may not be any measures that are completely orthogonal.) Consciousness might be one of those higher capacities, but my guess is that it is not essential for intelligence. On the other hand, I doubt that it is an epiphenomenon having no effect on intelligent systems. Perhaps it serves to integrate what would otherwise be disparate parts working against their individual and collective survival--in other words, consciousness insures that there are no orthogonal measures of intelligence!

Before we can investigate (and duplicate) consciousness we first must investigate the functions on which it depends. One of them is memory, which seems to come in many varieties. Perhaps the most crucial dimension of memory (for the study of consciousness) is its volatility. The most volatile is very short term (a half to one-and-a-half seconds) and seems to be primarily electrical in nature. Short term memory (15-30 minutes) may be primarily a chemical phenomenon. Longer term memory seems more related to biological mechanisms, and seems to come in two types, which I call mid-term (half-hour to about a day) and long-term. The transfer between mid- and long-term memory apparently occurs during sleep or sleep-like phenomena. To relate this to consciousness, I would guess that consciousness is primarily a function of very-short-term memory but depends in successively lesser ways on the other volatile memory banks. So to duplicate consciousness we might have to utilize some kind of multi-buffered pipeline memory.

Free will is another of those nebulous ideas that may seem not to relate to AI practice. I would first say that the connection between freedom and willing may be spurious. I see others, including machines, making decisions all the time, so will is obviously a real phenomenon and probably an indispensable one for intelligence (unlike consciousness).
But at least in machines most decisions are based on information and rules stored in some kind of memory (with the remaining decisions the result of error). I surmise that human decisions are similarly determined. Secondly, some psych research indicates that decisions are rarely (or never) consciously made. Instead we seem to subconsciously perform a very rapid vector summation of many conflicting motives (some "rational," some emotional). Then we decide on motion along the vector (either in a positive or negative direction), and then create a publicly acceptable reason for our decision which finally pops up into the conscious. (And most of us are so quick and skilled at subconscious rationalization that it seems to us as if the "reason" preceded the decision.) To duplicate/emulate this form of decision-making analog computation may be more efficient than symbolic computation.

Larry @ jpl-vlsi.arpa

------------------------------

Date: 16 Dec 86 16:21:33 GMT
From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col. G. L. Sicherman)
Subject: Re: AI and consequences

> And I am claiming that technologists, by and large, are less
> competent than they might be by virtue of their ignorance of the
> criteria professors of moral philosophy, theologians, nuclear plant
> designers, and politicians bring to bear on such decisions.

This seems too strong to me. Every specialist develops the habit of thinking within his specialized system. That's what being a specialist is about. I do not trust AI researchers to make wise decisions, nor do I trust moral philosophers or politicians. They're all slaves to special habits. There's a young meta-science called (confusingly) Cybernetics. It studies the outer meanings of working with and using computers. That is, it strives to identify and override the automatic assumptions and habits of computer people--to escape from within the "system." It is one way out.
A better one is for specialists from different fields to discuss what is being
done, at a volume several orders of magnitude beyond the present one.  And that,
perhaps, is where the Net comes in ...
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@sunyabvc

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sat Dec 20 03:33:43 1986
Date: Sat, 20 Dec 86 03:33:33 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #288
Status: R

AIList Digest            Friday, 19 Dec 1986       Volume 4 : Issue 288

Today's Topics:
  Seminars - Uncertainty in AI: Is Probability Adequate (MIT) &
    Learning by Understanding Analogies (MCC) &
    Replay of Design Derivations (Rutgers) &
    The ISIS Project (SRI),
  Conference - European Simulation Multiconference 1987, Vienna

----------------------------------------------------------------------

Date: Tue, 16 Dec 86 02:14:58 EST
From: "Steven A. Swernofsky"
Subject: Seminar - Uncertainty in AI: Is Probability Adequate (MIT)

From: Rosemary B. Hegg

UNCERTAINTY SEMINAR ON MONDAY

Date: Monday, November 10, 1986
Time: 3.45 pm...Refreshments
      4.00 pm...Lecture
Place: NE43-512A

UNCERTAINTY IN AI: IS PROBABILITY EPISTEMOLOGICALLY AND HEURISTICALLY ADEQUATE?

MAX HENRION
Carnegie Mellon

ABSTRACT

New schemes for representing uncertainty continue to proliferate, and the debate
about their relative merits seems to be heating up.  I shall examine several
criteria for comparing probabilistic representations to the alternatives.  I
shall argue that criticisms of the epistemological adequacy of probability have
been misplaced.  Indeed there are several important kinds of inference under
uncertainty which are produced naturally from coherent probabilistic schemes,
but are hard or impossible for alternatives.  These include combining dependent
evidence, integrating diagnostic and predictive reasoning, and "explaining away"
symptoms.
Encoding uncertain knowledge in predictive or causal form, as in Bayes'
Networks, has important advantages over the currently more popular diagnostic
rules, as used in Mycin-like systems, which confound knowledge about the domain
and about inference methods.  Suggestions that artificial systems should try to
simulate human inference strategies, with all their documented biases and
errors, seem ill-advised.  There is increasing evidence that popular
non-probabilistic schemes, including Mycin Certainty Factors and Fuzzy Set
Theory, perform quite poorly under some circumstances.  Even if one accepts the
superiority of probability on epistemological grounds, the question of its
heuristic adequacy remains.  Recent work by Judea Pearl and myself uses
stochastic simulation and probabilistic logic for propagating uncertainties
through multiply connected Bayes' networks.  This aims to produce probabilistic
schemes that are both general and computationally tractable.

HOST: PROF. PETER SZOLOVITS

------------------------------

Date: Mon 15 Dec 86 14:08:36-CST
From: Ellie Huck
Subject: Seminar - Learning by Understanding Analogies (MCC)

Please join the AI Group for the following speaker:

Russell Greiner
Department of Computer Science
University of Toronto

Tuesday, December 16
Balcones Facility
10:00am - Room 2.806

LEARNING BY UNDERSTANDING ANALOGIES

This presentation describes a method for learning by analogy -- i.e., for
proposing new conjectures about a target analogue based on facts known about a
source analogue.  After formally defining this process, we present heuristics
which efficiently guide it to the conjectures which can help solve a given
problem.  These rules are based on the view that a useful analogy is one which
provides the information needed to solve the problem, and no more.  Experimental
data confirm the effectiveness of this approach.
Tuesday, December 16
10:00am
Room 2.806

------------------------------

Date: Tue, 16 Dec 86 16:13:28 est
From: segall@caip.rutgers.edu (Ed Segall)
Subject: Seminar - Replay of Design Derivations (Rutgers)

There will be an III talk Thursday morning (12/18) at 10 am in Hill 250 (the
normal Machine Learning meeting room and time).  Mike Barley will be speaking.
His abstract follows:

In his chapter "Why are design derivations hard to replay" in MACHINE LEARNING:
A Guide to Current Research, Jack Mostow identified two classes of problems that
any intelligent replay mechanism will need to address: (1) missing
preconditions; and (2) the reference problem.  These problems deal with applying
a rule in the replay plan in the current environment.  During this past summer I
developed a simple replay facility, called Legal Replay, for the Vexed (VLSI
EXpert Editor) system which showed that the difficulty of these problems is
determined, to a large extent, by the architecture of the problem-solver in
which it is embedded.  The abstract refinement with constraint propagation
architecture of Vexed made certain aspects of these replay problems disappear.
In the course of implementing Legal Replay another replay problem became
apparent: the correspondence problem.  The correspondence problem deals with
controlling rule selection in the current environment based upon the replay
plan.  In this talk I will briefly describe the replay paradigm, the replay
problems that Jack identified, the Vexed architecture, and the correspondence
problem.  I will concentrate on describing the Legal Replay architecture, how it
solves some of the replay problems, how it handles the correspondence problem,
and my current research into a more intelligent handling of the correspondence
problem.

------------------------------

Date: Wed 17 Dec 86 11:56:13-PST
From: Amy Lansky
Subject: Seminar - The ISIS Project (SRI)

THE ISIS PROJECT: AN HISTORICAL PERSPECTIVE

Mark S.
Fox (FOX@CMUA)
Intelligent Systems Laboratory
Robotics Institute
Carnegie Mellon University

11:00 AM, FRIDAY, December 19
SRI International, Building E, Room EJ228

ISIS is a knowledge-based system designed to provide intelligent support in the
domain of job shop production management and control.  Job-shop scheduling is an
"uncooperative" multi-agent (i.e., each order is to be "optimized" separately)
planning problem in which activities must be selected, sequenced, and assigned
resources and time of execution.  Resource contention is high, closely coupling
decisions.  Search is combinatorially explosive; for example, 85 orders moving
through eight operations without alternatives, with a single machine
substitution for each and no machine idle time, has over 10^880 possible
schedules, many of which may be discarded given knowledge of shop constraints.
At the core of ISIS is an approach to automatic scheduling that provides a
framework for incorporating the full range of real world constraints that
typically influence the decisions made by human schedulers.  This results in an
ability to generate detailed schedules for production that accurately reflect
the current status of the shop floor, and distinguishes ISIS from traditional
scheduling systems based on more restrictive management science models.  ISIS is
capable of incrementally scheduling orders as they are received by the shop as
well as reactively rescheduling orders in response to unexpected events (e.g.
machine breakdowns) that might occur.  The construction of job shop schedules is
a complex constraint-directed activity influenced by such diverse factors as due
date requirements, cost restrictions, production levels, machine capabilities
and substitutability, alternative production processes, order characteristics,
resource requirements, and resource availability.
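[To see where numbers of this magnitude come from, here is a rough
back-of-the-envelope count in Python -- my own illustration, not from the ISIS
work.  It counts the ways to sequence 85 orders independently at each of eight
operations, ignoring the substitution and no-idle-time constraints mentioned in
the abstract. -- Ed.]

```python
import math

def log10_factorial(n: int) -> float:
    """log10(n!) via the log-gamma function: the order of magnitude of
    the number of ways to sequence n jobs on a single machine."""
    return math.lgamma(n + 1) / math.log(10)

ORDERS = 85
OPERATIONS = 8

per_machine = log10_factorial(ORDERS)    # 85! is about 10^128
all_machines = OPERATIONS * per_machine  # (85!)^8 is about 10^1028

print(f"One machine's job sequences: ~10^{per_machine:.0f}")
print(f"Eight machines combined:     ~10^{all_machines:.0f}")
```

[This raw product is even larger than the abstract's 10^880; the interacting
shop constraints presumably prune it.  Either way, the space is far too large to
enumerate, which is the abstract's point. -- Ed.]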
The problem is a prime candidate for application of AI technology, as human
schedulers are overburdened by its complexity and existing computer-based
approaches provide little more than a high level predictive capability.  It also
raises some interesting research issues.  Given the conflicting nature of the
domain's constraints, the problem differs from typical constraint satisfaction
problems.  One cannot rely solely on propagation techniques to arrive at an
acceptable solution.  Rather, constraints must be selectively relaxed, in which
case the problem-solving strategy becomes one of finding a solution that best
satisfies the constraints.  This implies that constraints must serve to
discriminate among alternative hypotheses as well as to restrict the number of
hypotheses generated.  Thus, the design of ISIS has focused on

o constructing a knowledge representation that captures the requisite knowledge
  of the job shop environment and its constraints to support constraint-directed
  search, and
o developing a search architecture capable of exploiting this constraint
  knowledge to effectively control the combinatorics of the underlying search
  space.

This presentation will provide an historical perspective on the development of
the ISIS family of systems.

VISITORS: Please arrive 5 minutes early so that you can be escorted up from the
E-building receptionist's desk.  Thanks!
------------------------------

Date: Thu, 18 Dec 86 16:46:21 WUT
From: ADELSBER%AWIWUW11.BITNET@WISCVM.WISC.EDU
Subject: Conference - European Simulation Multiconference 1987, Vienna

CALL FOR PAPERS
===============

EUROPEAN SIMULATION MULTICONFERENCE 1987
  AI and Simulation
  Discrete Event Simulation
  Simulation and Computer Integrated Manufacturing
  Simulation and Operational Research

VIENNA, AUSTRIA
July 8 - 10, 1987
Vienna University of Economics and Business Administration

Call for Papers, European Simulation Multiconference 1987

Organized by
o ASIM - FA 4.5 Simulation in der GI
o Vienna University of Economics and Business Administration, Department for
  Applied Statistics and Data Processing
o OCG - Oesterreichische Computer Gesellschaft

in cooperation with
o DBSS - Dutch Benelux Society
o ISCS - Italian Society for Computer Simulation
o SIMS - Scandinavian Simulation Society
o UKSC - United Kingdom Simulation Society
o JSST - The Japan Society for Simulation Technology

under the financial sponsorship of The Society for Computer Simulation.

Conference Location
Vienna University of Economics and Business Administration
Augasse 2, A-1090 Vienna, Austria

The Vienna University of Economics and Business Administration was founded in
1898 as the 'Export Academy'.  In 1919 it became the 'College of International
Commerce'.  In 1930 it was awarded the right to confer doctorates.  The present
name 'Wirtschaftsuniversitaet Wien' was laid down in 1975 in the University
Organization Act.

Congress Committee:
===================

Multiconference General Chairman: Heimo H. Adelsberger
Vienna University of Economics and Business Administration, Department for
Applied Statistics and Data Processing
Augasse 2, A-1090 Vienna, Austria
(222) 34-05-25 Ext. 758 or 757
Earn (Bitnet): ADELSBER + AWIWUW11

Local Arrangements Chairman: Wolfgang Kleinert
Hybridrechenzentrum der TU-Wien
Gusshausstr.
27-29, A-1040 Vienna, Austria
(222) 588 01 37 02

Conference Coordinator: Andy Symons
BSO/Aerospace BV
Beneluxlaan 39, P.O. Box 8112
3503 RC Utrecht, The Netherlands
Tel.: 0031-30-897774
TLX.: 40342

AI and Simulation
-----------------
General Chairman: Ivan Futo
Computer Research Institute
Donati u., H-1015 Budapest, Hungary
00361-350180

Program Chairman: Johannes Retti
Siemens Aktiengesellschaft Oesterreich
Goellergasse 15, A-1031 Vienna, Austria
(222) 72 93 - 50 30

Discrete Event Simulation
-------------------------
General Chairman: Bernd Schmidt
Institut fuer Mathematische Maschinen und Datenverarbeitung der Universitaet
Erlangen-Nuernberg
Martenstr. 3, D-8520 Erlangen, FRG

Program Chairman: Heimo H. Adelsberger
Vienna University of Economics and Business Administration, Department for
Applied Statistics and Data Processing
Augasse 2, A-1090 Vienna, Austria
(222) 34-05-25 758 or 757

Simulation and Computer Integrated Manufacturing
------------------------------------------------
General Chairman: Axel Kuhn
Fraunhofer-Institut fuer Transporttechnik und Warendistribution
Emil-Figge-Strasse 75, D-4600 Dortmund 50, FRG
(0231) 75 49 132

Program Chairman: Knud Erik Wichmann
Institute of Production Management and Industrial Engineering, The Technical
University of Denmark
Building 423, DK-2800 Lyngby, Denmark
+45-2-88 25 22 Ext. 3081

Simulation and Operational Research
-----------------------------------
General Chairman: Jack P.C. Kleijnen
Katholieke Universiteit Brabant
P.O. Box 90153, NL-5000 Le Tilburg, The Netherlands
0031 13662029

Program Chairman: Fernand Broeckx
Universiteit Antwerpen
Middelheimlaan 1, B-2020 Antwerp, Belgium
03-218.07.84

Deadlines and Requirements

Extended abstracts (two pages typewritten without drawings and tables) are due
to arrive before February 10, 1987.  Please indicate to which of the four
conferences the paper you would like to present belongs.  Only original papers
which have not been published elsewhere will be accepted.
Authors will be expected to register early (at a reduced fee) and to attend the
conference at their own expense to present accepted papers.  If you cannot
present your paper at the conference, it will not be published in the
Proceedings.  Notification of acceptance or rejection will be sent by March 10.
An author's kit with complete instructions on how to generate camera-ready copy
for the Proceedings will be sent to the authors of accepted abstracts.  If you
plan to give state-of-the-art reviews or to organize panel discussions, please
contact the appropriate program chairman.  Camera-ready copies of accepted
papers must be sent to the European Office by May 1, 1987.  A final review of
each paper will take place at that time.  For demonstrations, exhibitions or
video sessions please contact the European Simulation Office.

Language

The official conference language is English.

Registration fees:

The registration fee is DM 350,- for those pre-registered before July 1, 1987.
Registration at the conference itself will be DM 400,-.

Correspondence Address
======================
European Simulation Office
c/o Philippe Geril
University of Ghent
Coupure Links 653
B-9000 Ghent, Belgium
91-236961 Ext. 400
or Earn/Bitnet: ADELSBER + AWIWUW11

The 1987 European Simulation Multiconference will bring together four individual
conferences.  We invite papers for presentation at the conference and for
publication in each conference's Proceedings.

AI and Simulation
=================

It is becoming ever more evident that AI and simulation have the same aim:
learning about reality by mimicry.  This conference should serve both sides and
bring them together.
Papers on the following topics are especially invited:
- Intelligent Simulation Environments
- Qualitative reasoning
- Knowledge bases
- Knowledge engineering
- Expert systems
- Complex models
- Applications using AI techniques

Discrete Event Simulation
=========================

This conference will provide a forum for presenting new approaches to
developing, validating, and using discrete event simulation models.

Papers on the following topics are especially invited:
- Tools for discrete event simulation:
  o Computer languages
  o Environments
  o On-line and real time simulation
  o Graphical model description
- Modeling
- Validation
- Optimization
- Statistical methods
- Random number generation and testing
- Applications:
  o Computer systems
  o Communication systems
  o Transport and traffic systems
  o Environmental design
  o Education
This list is not exhaustive.

Simulation and Computer Integrated Manufacturing
================================================

The program will focus on the application of simulation in Computer Integrated
Manufacturing, CIM.  CIM covers a very broad spectrum of different systems,
techniques, tools, methods, and tasks aimed at achieving computer integration at
all levels of a business.  Simulation techniques have an important role to play
in the creation and operation of CIM.

Papers on the following topics are especially invited:
- Simulation in manufacturing, design and operation
- Simulation in manufacturing education for industry
- Simulation in research and development in computer integrated manufacturing
  and the CIM philosophy

Simulation and Operational Research
===================================

Simulation is, and has always been, an important tool in O.R., and really
deserves to be the subject of a special conference.
Papers on the following topics are especially invited:
- Simulation packages and tools, on main frames and micros
- Animation graphics in simulation
- Applications in queuing theory
- Educational applications: business games for teaching O.R.
- Simulation and networked microprocessor architecture
- Monte Carlo methods in mathematics
- Micro simulation in economics
- Statistical design and analysis of simulation
This list is not exhaustive.

*********************************************
* Contact: Earn/Bitnet: ADELSBER + AWIWUW11 *
*********************************************

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Fri Dec 26 00:56:23 1986
Date: Fri, 26 Dec 86 00:56:15 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #289
Status: R

AIList Digest            Monday, 22 Dec 1986       Volume 4 : Issue 289

Today's Topics:
  Queries - Common Lisp for Dec 8700 &
    Public-Domain PC Expert System &
    Knowledge-Based Systems Periodicals &
    Knowledge Engineering Bibliography,
  Humor - CAE Taken to New Heights,
  Cognitive Science - Lifelong Memory

----------------------------------------------------------------------

Date: 18 Dec 86 18:42:06 GMT
From: ames!rutgers!princeton!mind!bjr@cad.Berkeley.EDU  (Brian J. Reiser)
Subject: Common Lisp for Dec 8700

Has anyone compared Lucid's Common Lisp with Dec's Common Lisp for Ultrix on the
Dec Vax 8700?  Any comments regarding speed, completeness of implementation,
features, price, etc?

Thanks,
Brian Reiser
uucp: ...seismo!princeton!mind!bjr
arpanet: princeton!mind!bjr@seismo.css.gov

------------------------------

Date: Thu 18 Dec 86 18:50:33-EST
From: Mickey Kashtan
Subject: Query: reasonable public domain PC expert system

Is there a public domain expert system shell that runs on the PC?  Has anyone
out there actually used any?  Please send your comments, and the address to get
the software from.  If there is any interest, I will publish the results of
this.
Mickey Kashtan Kashtan@cs.columbia.edu ------------------------------ Date: Fri, 19 Dec 86 15:37:57 CST From: gknight@ngp.utexas.edu (Gary Knight) Subject: Knowledge-based ("expert") systems periodicals I am trying to compile a comprehensive list of periodicals (academic journals, trade journals, business newsletters, etc.) that deal *EXCLUSIVELY* (or almost exclusively) with knowledge-based ("expert") systems. To date I know only of: AI EXPERT (CL Publications, San Francisco) EXPERT SYSTEMS (U.K. publication) IEEE EXPERT EXPERT SYSTEMS STRATEGIES (Cutter Information Corp., San Francisco) I will appreciate receiving the titles, publishers (with addresses if known), and a brief description of contents from anyone who is familiar with others. And, yes, when the replies stop coming in I'll post the entire list to the net. Many thanks for your help -- and have a happy holiday season! ------------------------------ Date: 21 Dec 86 23:20:15 GMT From: gknight@ngp.utexas.edu (Gary Knight) Subject: Knowledge engineering bibliography I'm compiling a bibliography on NON-automated KNOWLEDGE ENGINEERING techniques for building expert systems, i.e., how the human knowledge engineer goes about (1) extracting knowledge from the domain expert and (2) writing it into the knowledge representation system. So far I have the following references: Hayes-Roth, Waterman, and Lenat, BUILDING EXPERT SYSTEMS, chapter 5; Waterman, A GUIDE TO EXPERT SYSTEMS, chapters 14 & 18; and Buchanan and Shortliffe, RULE-BASED EXPERT SYSTEMS, Part 3. Can anyone add any articles, books, or chapters from books to the list? I'm trying to focus on basics, so arcane theoretical pieces would not be as helpful as more practical "how to" pieces. All help will be appreciated, and I'll post a compilation of references when the e-mail stops coming in. Thanks . . . Gary Knight, The University of Texas at Austin. "All these moments will be lost in time, like tears in the rain." 
-- Roy Baty

------------------------------

Date: 18 Dec 86 20:58:12 GMT
From: sdcc6!sdacs!wade@sdcsvax.ucsd.edu  (Wade Blomgren)
Subject: CAE taken to new heights....

Apparently artificial intelligence/computer aided education software has reached
new heights in sophistication.  As evidence of this, I submit this direct quote
from comp.lang.c:

  Article 412 of 464, Mon 07:30.
  Subject: PDP 11/34 'C' Compiler Wanted (Non Unix)
  Summary: 'C' for RSTS/E ???
  Path: ..!chinet!megabyte (Dr. Megabyte @ chi-net, Public Access UN*X)
  (13 lines) More? [ynq]

  My local college was a small DEC PDP-11/34 computer running the RSTS/E
  operating system with RSX and RTS available as alternative..... ....

Please read the statement carefully.

Wade   ...sdcsvax!sdacs!wade

------------------------------

Date: 20 Dec 86 17:41:00 EST
From: "PSOTKA, JOSEPH"
Reply-to: "PSOTKA, JOSEPH"
Subject: BASIC BRAIN BYTES AND LIFELONG MEMORY

Landauer (Cognitive Science, 10, 1986, pp. 477-493) argues that there is a
calculable limit to the amount people remember.  Estimates based on input and
forgetting rates ranged around (EXPT 10 9), or 1,000,000,000 (one billion) bits.
This is vastly less than the figure (EXPT 10 20) quoted from Von Neumann.  On
the basis of this he argues that possibly "we should not be looking for models
and mechanisms that produce storage economies, but rather ones in which marvels
are produced by profligate use of capacity."

The key estimates for this figure are shown in the following table:

  Task            Input rate (bits/sec)   Lifetime total (bits)
  Reading                  1.2                   1.8E9
  Picture Recog.           2.3                   3.4E9

To me these figures are unbelievably low.  A gigabyte of facts on a CD-ROM
cannot possibly represent my memory system.  For the moment let us look at the
reasoning for reading and then take a brief look at pictorial memory.

READING: Landauer uses a relatively straightforward set of assumptions (not
without their perils!) to infer the rate of storage into long-term memory.  He
has some experiments that back them up.
Basically he says that given any text (on average) with words deleted at random,
people are able to predict about half the words (.48) from context and previous
knowledge, and this increases only slightly (e.g. to .63) after reading the text
quickly.  The net gain is log2(.63/.48) = .42.  From this he argues that the new
information available in the text is .42 BITS per word.  Over a lifetime of
reading 3 words per second, storage in memory would be roughly 1.8E9 bits.

It seems reasonable enough, but it is not very convincing.  For one thing,
surely people are reading the "context" too, and not just getting information
from the individual words: there are higher order chunks called sentences that
are very meaningful.  To eliminate the information value of the context so
abruptly is a disservice to our information gathering abilities.  Surely we are
processing this "context" too!  Another point is that there are aspects entering
memory not just connected to the words: the episode itself provides information;
the fact that this particular word is seen at this particular time is important;
the auditory and somesthetic context comes along too (e.g., the room was quiet,
the chair was soft, etc.).

BRAIN CYCLE TIME: Finally, the natural cycle of information entry seems much too
long: one to five seconds.  There is much perceptual and cognitive evidence that
suggests a basic cycle of 1/10th of a second (e.g., perceptual integration,
apparent movement, backward masking, sensory stores, etc.).

BASIC BRAIN BYTE: As a counterexample to this low estimate, consider the
following simple example: Two words are flashed on the screen for one-tenth of a
second.  Any person with eyes open reads and remembers them.  If the words were
chosen at random, the guessing rate would be very low (given approximately
1,000,000 words to choose from, the likelihood of getting both words right is
roughly 10^-12) but the hit rate would surely be in the 90s for percent correct.
Even after a few weeks it would be substantial.  The storage transfer rate is
now 17 bits in .1 sec.  Over a lifetime, this comes out to 1.7E11, a factor of
one hundred greater, without becoming too unreasonable in our assumptions.

But there is yet another perspective on the same phenomenon.  Much of the time,
when I read a text my most prominent reaction is "Ho-hum, nothing new here!"
Has no information been transferred?  Well, my text-prediction (cloze)
performance would probably be only as good as Landauer's claim, and even if it
were much better, the baseline of .5 mitigates any drastic change in the total
figures.  Clearly, a lot of information has been transferred that is not
measured by this technique: I know the author of the text has wasted my time; I
probably judged something about his writing and thinking abilities, his
vocabulary, and other characteristics; I may have changed my desire to see him
and any plans that went along with it; etc.  Surely a very large number of
consequences arose from this interaction, consequences whose information content
is surely constrained by the set size of potential reactions and current
memories.  In a sense this is the meaning and context of the reading task.

Let me suggest a recursive procedure for estimating our lifetime memory.  Given
Landauer's basic lifetime estimate of information extracted from text of 1.0E9
bits, let us take an individual who lives 70 years and hypothesize a memory of
9.0E8 bits.  Let us then suggest that any word he reads must be coded to be able
to make contact with one of these (potential) memories and is stored in (some
abstract) connection with that memory.  The information content of that word is
then 30 bits instead of .4, and total lifetime information (at 3 words per
second) is 1.3E11 (given 1.5E9 sec. in a lifetime).  Given this new measure of
information we can redo the cycle.  The next round is 1.7E11 bits.  This is
roughly stable, and it is about the same as our previous measure.
Here is the function:

  (SETQ BitRate (QUOTIENT (LOG TotalBits) (LOG 2)))
  (SETQ TotalBits (TIMES BitRate 4.5E9))

Both these procedures yield measures roughly 100 times higher than Landauer's.
But there is a suggestion that the true measure is still much higher: that in
fact we don't know how the brain codes information in all its many
relationships.  Really, we have very little information about the relative size
of pictorial and other abstract knowledge structures.

PICTORIAL REPRESENTATION: A series of experiments by careful and reputable
researchers (Nickerson, Standing, and Shepard) found very high recognition rates
for pictures shown very briefly (4 to 6 seconds), even when tested hours, days,
or weeks later.  The relation between size of the set of pictures and accuracy
is surprisingly flat:

  Number of Pictures Shown    Percent Correct
            20                      99
            40                      96
           100                      95
           200                      92
           400                      86
         1,000                      88
         4,000                      81
        10,000                      83

One wonders when this function would break down so that showing a picture would
result in no memory.  Of course, that seems clearly impossible.  At one second
per picture over 70 years, one could only look at 2.268E9 pictures (WITHOUT
SLEEPING), and these data show that at the very least one would remember 8,300
of them and probably a lot more.  Given the limited accuracy of these data it
seems unwarranted to fit a curve to the numbers, but a rough estimate would say
that recognition percentage becomes very small at about 1.0E9 pictures.  At this
point Landauer might say that the basic brain byte can no longer encode a new
picture.  The question then becomes "What is stored?"  Landauer makes the
parsimonious suggestion that all that is stored is the minimal code that would
separate one picture from any other.  Without any special coding procedures that
make use of internal redundancies, it would take a 36-bit code to store all the
pictures.  This is about twice the estimate Landauer makes on other grounds:
certainly within reasonable agreement for such rough estimates.
However, it seems most improbable that only some abstract code is stored.  Our
computers need to store much more to do anything with these pictures: a
50,000-bit bitmap is still a very rough representation of the real thing.  A
35-bit bitmap would not represent very much at all.  To say that stereoscopic
vision adds one bit to the representation is to misrepresent the obvious.
Naturally, the existence of veridical representations (e.g., eidetic images) is
difficult to verify; but fragmentary reports suggest the decomposability of the
memory code into usable fragments and features that are realistically detailed,
with very fine grain size.  Again, the estimate that Landauer suggests has to be
considered an absolute lower bound, with more realistic estimates surely orders
of magnitude larger.

The key to understanding memory size is understanding the transformations and
codes the mind applies.  Given this simple perspective, the conclusions that
Landauer draws need to be modified.  Given the many visual, auditory, and
sensory storage systems that are possible, and the existence of abstract
representation (ideas) in other forms, the memory in use does indeed begin to
approach the 10^12 figure that is a rough estimate for the number of synapses.
Profligacy of control structures is not quite in order: in fact there may be no
room for control structures; everything may be in the code.  None of this, to
emphasize, disagrees with Landauer's basic conclusion that there is no
one-to-one correspondence between functional memory and the component capacity
needed for its support; this could always be much, much larger.

What is so intriguing is that current computers are indeed beginning to approach
these estimates of physical capacities.  The brain's byte size and component
stores are beginning to be realizable in silicon form.  It is an audacious
person who is still unwilling to admit the possibility of silicon intelligence.
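[The arithmetic running through this note is easy to check.  A minimal Python
sketch -- my own restatement, not Landauer's or Psotka's code -- using the
figures quoted above: cloze rates of .48 and .63, 3 words per second, 1.5E9
seconds per lifetime, and a starting store of 9.0E8 bits for the recursive
version. -- Ed.]

```python
import math

WORDS_PER_SEC = 3
LIFETIME_SEC = 1.5e9                               # the text's lifetime figure
WORDS_PER_LIFETIME = WORDS_PER_SEC * LIFETIME_SEC  # 4.5E9, as in the Lisp above

# Landauer-style estimate: bits gained per word is the log-ratio of cloze
# prediction rates after (.63) vs. before (.48) reading.
gain_per_word = math.log2(0.63 / 0.48)   # ~0.39 bits (the text rounds to .42)
landauer_total = gain_per_word * WORDS_PER_LIFETIME
print(f"Landauer-style lifetime total: {landauer_total:.1e} bits")  # ~1.8e9

# Recursive counter-estimate: each word is coded against the whole store, so it
# carries log2(TotalBits) bits; iterating the rule converges to a fixed point.
total_bits = 9.0e8
for _ in range(10):
    bit_rate = math.log2(total_bits)     # ~30 bits on the first round
    total_bits = bit_rate * WORDS_PER_LIFETIME
print(f"Recursive fixed point: {total_bits:.1e} bits")              # ~1.7e11
```

[The recursion stabilizes after a couple of rounds at roughly 1.7E11 bits --
about a hundred times Landauer's figure, as claimed above. -- Ed.]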
------------------------------ End of AIList Digest ********************