From in%@vtcs1 Thu Dec 4 01:16:11 1986 Date: Thu, 4 Dec 86 01:16:04 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #273 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 273 Today's Topics: Bibliography - ai.bib42AB ---------------------------------------------------------------------- Date: WED, 10 oct 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: ai.bib42AB %A Ralph Grishman %A Richard Kittredge %T Analyzing Language in Restricted Domains %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %K AI02 AA01 %D 1986 %X 0-89859-620-3 1986 264 pages $29.95 .TS tab(~); l l. N.Sager~T{ Sublanguage: Linguistic Phenomenon, Computational Tool T} J. Lehrberger~Sublanguage Analysis E. Fitzpatrick~T{ The Status of Telegraphic Sublanguages T} J. Bachenko D. Hindle J. R. Hobbs~Sublanguage and Knowledge D. E. Walker~T{ The Use of Machine Readable Dictionaries in Sublanguage Analysis T} R. A. Amsler C. Friedman~T{ Automatic Structuring of Sublanguage Information: Application to Medical Narrative T} E. Marsh~T{ General Semantic Patterns in Different Sublanguages T} C. A. Montgomery~T{ A Sublanguage for Reporting and Analysis of Space Events T} B. C. Glover T. W. Finin~T{ Constraining the Interpretation of Nominal Compounds in a Limited Context T} G. Dunham~T{ The Role of Syntax in the Sublanguage of Medical Diagnostic Statements T} J. Slocum~T{ How One Might Automatically Identify and Adapt to a Sublanguage T} L. Hirschman~T{ Discovering Sublanguage Structures T} .TE %A Janet L. Kolodner %A Christopher K. Riesbeck %T Experience, Memory and Reasoning %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %D 1986 %K AT15 %X 0-89859-664-0 1986 272 pages $29.95 .TS tab(~); l l. R.Wilensky~T{ Knowledge Representation - A critique and a Proposal T} T{ R. H. Granger .br D. M. McNulty T}~T{ Learning and Memory in Machines and Animals that Accounts for Some Neurobiological Data T} T{ V. Sembugamoorthy .br B. Chandrasekaran T}~T{ Functional Representation of Devices and Compilation of Diagnostic Problem Solving Systems T} .TE %T Recovering from Execution Errors in \s-1SIPE\s0 %A David E. Wilkins %J Computational Intelligence %V 1 %D 1985 %K AI07 AI09 %X In real-world domains (a mobile robot is used as a motivating example), things do not always proceed as planned. Therefore it is important to develop better execution-monitoring techniques and replanning capabilities. This paper describes the execution-monitoring and replanning capabilities of the \s-1SIPE\s0 planning system. (\s-1SIPE\s0 assumes that new information to the execution monitor is in the form of predicates, thus avoiding the difficult problem of how to generate these predicates from information provided by sensors.) The execution-monitoring module takes advantage of the rich structure of \s-1SIPE\s0 plans (including a description of the plan rationale), and is intimately connected with the planner, which can be called as a subroutine. The major advantages of embedding the replanner within the planning system itself are: .IP 1. The replanning module can take advantage of the efficient frame reasoning mechanisms in \s-1SIPE\s0 to quickly discover problems and potential fixes. .IP 2. The deductive capabilities of \s-1SIPE\s0 are used to provide a reasonable solution to the truth maintenance problem. .IP 3. The planner can be called as a subroutine to solve problems after the replanning module has inserted new goals in the plan. 
.LP Another important contribution is the development of a general set of replanning actions that will form the basis for a language capable of specifying error-recovery operators, and a general replanning capability that has been implemented using these actions. %T Plan Parsing for Intended Response Recognition in Discourse %A Candace L. Sidner %J Computational Intelligence %V 1 %D 1985 %K Discourse task-oriented dialogues intended meaning AI02 speaker's plans discourse understanding plan parsing discourse markers %X In a discourse, the hearer must recognize the response intended by the speaker. To perform this recognition, the hearer must ascertain what plans the speaker is undertaking and how the utterances in the discourse further that plan. To do so, the hearer can parse the initial intentions (recoverable from the utterance) and recognize the plans the speaker has in mind and intends the hearer to know about. This paper reports on a theory of parsing the intentions in discourse. It also discusses the role of another aspect of discourse, discourse markers, that are valuable to intended response recognition. %T Knowledge Organization and its Role in Representation and Interpretation for Time-Varying Data: The \s-1ALVEN\s0 System %A John K. Tsotsos %J Computational Intelligence %V 1 %D 1985 %K Knowledge Representation, Expert Systems, Medical Consultation Systems, Time-Varying Interpretation, Knowledge-Based Vision. AI01 AI06 AA01 %X The so-called ``first generation'' expert systems were rule-based and offered a successful framework for building applications systems for certain kinds of tasks. Spatial, temporal and causal reasoning, knowledge abstractions, and structuring are among topics of research for ``second generation'' expert systems. .sp It is proposed that one of the keys for such research is \fIknowledge organization\fP. Knowledge organization determines control structure design, explanation and evaluation capabilities for the resultant knowledge base, and has strong influence on system performance. We are exploring a framework for expert system design that focuses on knowledge organization for a specific class of input data, namely, continuous, time-varying data (image sequences or other signal forms). Such data is rich in temporal relationships as well as temporal changes of spatial relations and is thus a very appropriate testbed for studies involving spatio-temporal reasoning. In particular, the representation facilitates and enforces the semantics of the organization of knowledge classes along the relationships of generalization / specialization, decomposition / aggregation, temporal precedence, instantiation, and expectation-activated similarity. .sp A hypothesize-and-test control structure is driven by the class organizational principles, and includes several interacting dimensions of search (data-driven, model-driven, goal-driven, temporal, and failure-driven search). The hypothesis ranking scheme is based on temporal cooperative computation with hypothesis ``fields of influence'' being defined by the hypotheses' organizational relationships. This control structure has proven to be robust enough to handle a variety of interpretation tasks for continuous temporal data. .sp A particular incarnation, the \s-1ALVEN\s0 system, for left ventricular performance assessment from X-ray image sequences, will be highlighted in this paper. %T On the Adequacy of Predicate Circumscription for Closed-World Reasoning %A David W. Etherington %A Robert E.
Mercer %A Raymond Reiter %J Computational Intelligence %V 1 %D 1985 %K AI15 AI16 %X We focus on McCarthy's method of predicate circumscription in order to establish various results about its consistency, and about its ability to conjecture new information. A basic result is that predicate circumscription cannot account for the standard kinds of default reasoning. Another is that predicate circumscription yields no new information about the equality predicate. This has important consequences for the unique names and domain closure assumptions. %T What is a Heuristic? %A Je\(ffry Francis Pelletier and Marc H.J. Romanycia %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI16 %X From the mid-1950's to the present, the notion of a heuristic has played a crucial role in AI researchers' descriptions of their work. What has not been generally noticed is that different researchers have often applied the term to rather different aspects of their programs. Things that would be called a heuristic by one researcher would not be so called by others. This is because many heuristics embody a variety of different features, and the various researchers have emphasized different ones of these features as being essential to being a heuristic. This paper steps back from any particular research programme and investigates the question of what things, historically, have been thought to be central to the notion of a heuristic, and which ones conflict with others. After analyzing the previous definitions and examining current usage of the term, a synthesizing definition is provided. The hope is that with this broader account of `heuristic' in hand, researchers can benefit more fully from the insights of others, even if those insights are couched in a somewhat alien vocabulary. %T Analysis by Synthesis in Computational Vision with Application to Remote Sensing %A Robert Woodham %A E. Catanzariti %A Alan Mackworth %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI06 %X The problem in vision is to determine surface properties from image properties. This is difficult because the problem, formally posed, is underconstrained. Methods that infer scene properties from image properties make assumptions about how the world determines what we see. In this paper, some of these assumptions are dealt with explicitly, using examples from remote sensing. Ancillary knowledge of the scene domain, in the form of a digital terrain model and a ground cover map, is used to synthesize an image for a given date and time. The synthesis process assumes that surface material is lambertian and is based on simple models of direct sun illumination, diffuse sky illumination and atmospheric path radiance. Parameters of the model are estimated from the real image. A statistical comparison of the real image and the synthetic image is used to judge how well the model represents the mapping from scene domain to image domain. .sp 1 The methods presented for image synthesis are similar to those used in computer graphics. The motivation, however is different. In graphics, the goal is to produce an effective rendering of the scene domain. Here, the goal is to predict properties of real images. In vision, one must deal with a confounding of effects due to surface shape, surface material, illumination, shadows and atmosphere. These effects often detract from, rather than enhance, the determination of invariant scene characteristics. 
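[The image-synthesis step described in the preceding Woodham, Catanzariti and Mackworth entry rests on a standard simplified radiance model for a lambertian surface under direct sun illumination, diffuse sky illumination and atmospheric path radiance. The following short Python sketch only illustrates that kind of model; it is not the authors' code, and the function name and the numbers in the example are invented.]

import math

def synthesize_radiance(albedo, sun_incidence_deg, e_sun, e_sky, l_path):
    # Predict image radiance for a lambertian surface element.
    # albedo            -- surface reflectance (0..1), taken from a ground-cover map
    # sun_incidence_deg -- angle between the surface normal and the sun direction,
    #                      derived from a digital terrain model and the sun position
    # e_sun, e_sky      -- direct solar and diffuse sky irradiance
    # l_path            -- additive atmospheric path radiance
    cos_i = max(0.0, math.cos(math.radians(sun_incidence_deg)))  # zero if self-shadowed
    # Lambertian reflection of the direct and diffuse terms, plus the path term.
    return (albedo / math.pi) * (e_sun * cos_i + e_sky) + l_path

# Example: a moderately reflective slope tilted 30 degrees away from the sun.
print(synthesize_radiance(albedo=0.25, sun_incidence_deg=30.0,
                          e_sun=900.0, e_sky=120.0, l_path=15.0))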
%T A Functional Approach to Non-Monotonic Logic %A Erik Sandewall %J Computational Intelligence %V 1 %N 2 %D MAY 1985 %K AI15 AI16 %X Axiom sets and their extensions are viewed as functions from the set of formulas in the language, to a set of four truth-values \fIt\fP, \fIf\fP, \fIu\fP for undefined, and \fIk\fP for contradiction. Such functions form a lattice with `contains less information' as the partial order \(ib, and `combination of several sources of knowledge' as the least-upper-bound operation \(IP. We demonstrate the relevance of this approach by giving concise proofs for some previously known results about normal default rules. For non-monotonic rules in general (not only normal default rules), we define a stronger version of the minimality requirement on consistent fixpoints, and prove that it is sufficient for the existence of a derivation of the fixpoint. %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %T Generating paraphrases from meaning-text semantic networks %A Michel Boyer %A Guy Lapalme %K T02 %X This paper describes a first attempt to base a paraphrase generation system upon Mel'cuk and Zolkovskij's linguistic Meaning-Text (\s-1MT\s0) model whose purpose is to establish correspondences between meanings, represented by networks, and (ideally) all synonymous texts having this meaning. The system described in the paper contains a Prolog implementation of a small explanatory and combinatorial dictionary (the \s-1MT\s0 lexicon) and, using unification and backtracking, generates from a given network the sentences allowed by the dictionary and the lexical transformations of the model. The passage from the net to the final texts is done through a series of transformations of intermediary structures that closely correspond to \s-1MT\s0 utterance representations (semantic, deep-syntax, surface-syntax and morphological representations). These are graphs and trees with labeled arcs. The Prolog unification (equality predicate) was extended to extract information from these representations and build new ones. The notion of utterance path, used by many authors, is replaced by that of ``covering by defining subnetworks''. %T Spatiotemporal inseparability in early vision: Centre-surround models and velocity selectivity %A David J. Fleet %A Allan D. Jepson %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI08 AI06 %X Several computational theories of early visual processing, such as Marr's zero-crossing theory, are biologically motivated and based largely on the well-known difference of Gaussians (\s-1DOG\s0) receptive field model of early retinal processing. We examine the physiological relevance of the \s-1DOG\s0, particularly in the light of evidence indicating significant spatiotemporal inseparability in the behaviour of retinal cell types. .LP From the form of the inseparability we find that commonly accepted functional interpretations of retinal processing based on the \s-1DOG\s0, such as the Laplacian of a Gaussian and zero-crossings, are not valid for time-varying images. In contrast to current machine-vision approaches, which attempt to separate form and motion information at an early stage, it appears that this is not the case in biological systems. It is further shown that the qualitative form of this inseparability provides a convenient precursor to the extraction of both form and motion information.
We show the construction of efficient mechanisms for the extraction of orientation and 2-D normal velocity through the use of a hierarchical computational framework. The resultant mechanisms are well localized in space-time, and can be easily tuned to various degrees of orientation and speed specificity. %T A theory of schema labelling %A William Havens %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI16 AI06 AA04 %X Schema labelling is a representation theory that focuses on composition and specialization as two major aspects of machine perception. Previous research in computer vision and knowledge representation has identified computational mechanisms for these tasks. We show that the representational adequacy of schema knowledge structures can be combined advantageously with the constraint propagation capabilities of network consistency techniques. In particular, composition and specialization can be realized as mutually interdependent cooperative processes which operate on the same underlying knowledge representation. In this theory, a schema is a generative representation for a class of semantically related objects. Composition builds a structural description of the scene from rules defined in each schema. The scene description is represented as a network consistency graph which makes explicit the objects found in the scene and their semantic relationships. The graph is hierarchical and describes the input scene at varying levels of detail. Specialization applies network consistency techniques to refine the graph towards a global scene description. Schema labelling is being used for interpreting hand-printed Chinese characters, and for recognizing \s-1VLSI\s0 circuit designs from their mask layouts. %T Hierarchical arc consistency: Exploring structured domains in constraint satisfaction problems %A Alan K. Mackworth %A Jan A. Mulder %A William S. Havens %J Computational Intelligence %V 1 %N 3-4 %D August 1985 %K AI03 AI16 AI06 %X Constraint satisfaction problems can be solved by network consistency algorithms that eliminate local inconsistencies before constructing global solutions. We describe a new algorithm that is useful when the variable domains can be structured hierarchically into recursive subsets with common properties and common relationships to subsets of the domain values for related variables. The algorithm, \s-1HAC\s0, uses a technique known as hierarchical arc consistency. Its performance is analyzed theoretically and the conditions under which it is an improvement are outlined. The use of \s-1HAC\s0 in a program for understanding sketch maps, Mapsee3, is briefly discussed and experimental results consistent with the theory are reported. %T Expression of Syntactic and Semantic Features in Logic-Based Grammars %A Patrick Saint-Dizier %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X In this paper we introduce and motivate a formalism to represent syntactic and semantic features in logic-based grammars. We also introduce technical devices to express relations between features and inheritance mechanisms. This leads us to propose some extensions to the basic unification mechanism of Prolog. Finally, we consider the problem of long-distance dependency relations between constituents in Gapping Grammar rules from the point of view of morphosyntactic features that may change depending on the position occupied by the ``moved'' constituents.
What we propose is not a new linguistic theory about features, but rather a formalism and a set of tools that we think to be useful to grammar writers to describe features and their relations in grammar rules. %T Natural Language Understanding and Theories of Natural Language Semantics %A Per-Kristian Halvorsen %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X In these short remarks, I examine the connection between Montague grammar, one of the most influential theories of natural language semantics during the past decade, and natural language understanding, one of the most recalcitrant problems in \s-1AI\s0 and computational linguistics for more than the last decade. When we view Montague grammar in light of the requirements of a theory of natural language understanding, new traits become prominent, and highly touted advantages of the approach become less significant. What emerges is a new set of criteria to apply to theories of natural language understanding. Once one has this measuring stick in hand, it is impossible to withstand the temptation of also applying it to the emerging contender to Montague grammar as a semantic theory, namely situation semantics. %T Unrestricted Gapping Grammars %A Fred Popowich %J Computational Intelligence %V 2 %N 1 %D February 1986 %K AI02 %X Since Colmerauer's introduction of metamorphosis grammars (MGs), with their associated type \fI0\fP\(milike grammar rules, there has been a desire to allow more general rule formats in logic grammars. Gap symbols were added to the MG rule by Pereira, resulting in extraposition grammars (XGs). Gaps, which are referenced by gap symbols, are sequences of zero or more unspecified symbols which may be present anywhere in a sentence or in a sentential form. However, XGs imposed restrictions on the position of gap symbols and on the contents of gaps. With the introduction of gapping grammars (GGs) by Dahl, these restrictions were removed, but the rule was still required to possess a nonterminal symbol as the first symbol on the left-hand side. This restriction is removed with the introduction of unrestricted gapping grammars. FIGG, a Flexible Implementation of Gapping Grammars, possesses a bottom-up parser which can process a large subset of unrestricted GGs for describing phenomena of natural languages such as free word order, and partially free word or constituent order. It can also be used as a programming language to implement natural language systems which are based on grammars (or metagrammars) that use the gap concept, such as Gazdar's generalized phrase structure grammars. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 4 01:16:23 1986 Date: Thu, 4 Dec 86 01:16:15 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #274 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 274 Today's Topics: Philosophy - Searle, Turing, Symbols, Categories & Turing Tests and Chinese Rooms ---------------------------------------------------------------------- Date: 26 Nov 86 12:41:50 GMT From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis) Subject: Re: Searle, Turing, Symbols, Categories > Steve Harnad >> Keith Dancey >> [The turing test] should be timed as well as checked for accuracy... >> Turing would want a degree of humor... >> check for `personal values,' `compassion,'... >> should have a degree of dynamic problem solving... >> a whole body of psychometric literature which Turing did not consult.
> >I think that these details are premature and arbitrary. We all know >(well enough) what people can DO: They can discriminate, categorize, >manipulate, identify and describe objects and events in the world, and >they can respond appropriately to such descriptions. Just who is being arbitrary here? Qualities like humor, compassion, artistic creativity and the like are precisely those which many of us consider to be those most characteristic of mind! As to the "prematurity" of all this, you seem to have suddenly and most conveniently forgotten that you were speaking of a "total turing test" -- I presume an ultimate test that would encompass all that we mean when we speak of something as having a "mind", a test that is actually a generations-long research program. As to whether or not "we all know what people do", I'm sure our cognitive science people are just *aching* to have you come and tell them that us humans "discriminate, categorize, manipulate, identify, and describe". Just attach those pretty labels and the enormous preverbal substratum of our consciousness just vanishes! Right? Oh yeah, I suppose you provide rigorous definitions for these terms -- in your as yet unpublished paper... >Now let's get devices to (1) do it all (formal component) and then >let's see whether (2) there's anything that we can detect informally >that distinguishes these devices from other people we judge to have >minds BY EXACTLY THE SAME CRITERIA (namely, total performance >capacity). If not, they are turing-indistinguishable and we have no >non-arbitrary basis for singling them out as not having minds. You have an awfully peculiar notion of what "total" and "arbitrary" mean, Steve: it's not "arbitrary" to exclude those traits that most of us regard highly in other beings whom we presume to have minds. Nor is it "arbitrary" to exclude the future findings of brain research concerning the nature of our so-called "minds". Yet you presume to be describing a "total turing test". May I suggest that what you are describing is not a "test for mind", but rather a "test for simulated intelligence", and the reason you will not or cannot distinguish between the two is that you would elevate today's primitive state of technology to a fixed methodological standard for future generations. If we cannot cope with the problem, why, we'll just define it away! Right? Is this not, to paraphrase Paul Feyerabend, incompetence upheld as a standard of excellence? -michael Blessed be you, mighty matter, irresistible march of evolution, reality ever new born; you who by constantly shattering our mental categories force us to go further and further in our pursuit of the truth. -Pierre Teilhard de Chardin "Hymn of the Universe" ------------------------------ Date: 27 Nov 86 12:02:50 GMT From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis) Subject: Re: Turing Tests and Chinese Rooms > Ray Trent > 1) I've always been somewhat suspicious about the Turing Test. (1/2 :-) > > a) does anyone out there have any good references regarding > its shortcomings. :-| John Searle's notorious "Chinese Room" argument has probably drawn out more discussion on this topic in recent times than anything else I can think of. As far as I can tell, there seems to be no consensus of opinion on this issue, only a broad spectrum of philosophical stances, some of them apparently quite angry (Hofstadter, for example).
The most complete presentation I have yet encountered is in the journal Behavioral and Brain Sciences, 1980, with a complete statement of Searle's original argument, responses by folks like Fodor, Rorty, McCarthy, Dennett, Hofstadter, Eccles, etc., and Searle's counterresponse. People frequently have misconceptions of just what Searle is arguing, the most common of these being: Machines cannot have minds. What Searle really argues is that: The relation (mind:brain :: software:hardware) is fallacious. Computers cannot have minds solely by virtue of their running the correct program. His position seems to derive from his thoughts in the philosophy of language, and in particular his notion of Intentionality. Familiarity with the work of Frege, Russell, Wittgenstein, Quine, Austin, Putnam, and Kripke would really be helpful if you are interested in the motivation behind this concept, but Searle maintains that his Chinese room argument makes sense without any of that background. -michael ------------------------------ Date: 29 Nov 86 06:52:21 GMT From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories Peter O. Mikes at S-1 Project, LLNL wrote: > An example of ["unexperienced experience"] is subliminal perception. > Similar case is perception of outside world during > dream, which can be recalled under hypnosis. Perception > is not same as experience, and sensation is an ambiguous word. Subliminal perception can hardly serve as a clarifying example since its own existence and nature are anything but clearly established. (See D. Holender (1986) "Semantic activation without conscious identification," Behavioral and Brain Sciences 9: 1 - 66.) If subliminal perception exists, the question is whether it is just a case of dim or weak awareness, quickly forgotten, or the unconscious registration of information. If it is the former, then it is merely a case of a weak and subsequently forgotten conscious experience. If it is the latter, then it is a case of unconscious processing -- one of many, for most processes are unconscious (and studying them is the theoretical burden of cognitive science). Dreaming is a similar case. It is generally agreed (from studies in which subjects are awakened during dreams) that subjects are conscious during their dreams, although they remain asleep. This state is called "paradoxical sleep," because the EEG shows signs of active, waking activity even though the subject's eyes are closed and he continues to sleep. Easily awakened in that stage of sleep, the subject can report the contents of his dream, and indicates that he has been consciously undergoing the experience, like a vivid day-dream or a hallucination. If the subject is not awakened, however, the dream is usually forgotten, and difficult if not impossible to recall. (As usual, recognition memory is stronger than recall, so sometimes cues will be recognized as having occurred in a forgotten dream.) None of this bears on the issue of consciousness, since the consciousness during dreams is relatively unproblematic, and the only other phenomenon involved is simply the forgetting of an experience. A third hypothetical possibility is slightly more interesting, but, unfortunately, virtually untestable: Can there be unconscious registration of information at time T, and then, at a later time, T1, conscious recall of that information AS IF it had been experienced consciously at T? This is a theoretical possibility.
It would still not make the event at T a conscious experience, but it would mean that input information can be put on "hold" in such a way as to be retrospectively experienced at a later time. The later experience would still be a kind of illusion, in that the original event was NOT actually experienced at T, as it appears to have been upon reflection. The nervous system is probably playing many temporal (and causal) tricks like that within very short time intervals; the question only becomes dramatic when longer intervals (minutes, hours, days) are interposed between T and T1. None of these issues are merely definitional ones. It is true that "perception" and "sensation" are ambiguous, but, fortunately, "experience" seems to be less so. So one may want to separate sensations and perceptions into the conscious and unconscious ones. The conscious ones are the ones that we were consciously aware of -- i.e., that we experienced -- when they occurred in real time. The unconscious ones simply registered information in our brains at their moment of real-time occurrence (without being experienced), and the awareness, if any, came only later. > suggest that we follow the example of acoustics, which solved the > 'riddle' of falling tree by defining 'sound' as physical effect > (density wave) and noise as 'unwanted sound' - so that The tree > which falls in deserted place makes sound but does not make noise. > Accordingly, perception can be unconcious but experience can't. Based on the account you give, acoustics solved no problem. It merely missed the point. Again, the issue is not a definitional one. When a tree falls, all you have is acoustic events. If an organism is nearby, you have acoustic events and auditory events (i.e., physiological events in its nervous system). If the organism is conscious, it hears a sound. But, unless you are that organism, you can't know for sure about that. This is called the mind/body problem. "Noise" and "unwanted sound" has absolutely nothing to do with it. > mind and consciousness (or something like that) should be a universal > quantity, which could be applied to machine, computers... > Since we know that there is no sharp division between living and > nonliving, we should be able to apply the measure to everything We should indeed be able to apply the concept conscious/nonconscious to everything, just as we can apply the concept living/nonliving. The question, however, remains: What is and what isn't conscious? And how are we to know it? Here are some commonsense things to keep in mind. I know of only one case of a conscious entity directly and with certainty: My own. I infer that other organisms that behave more or less the way I would are also conscious, although of course I can't be sure. I also infer that a stone is not conscious, although of course I can't be sure about that either. The problem is finding a basis for making the inference in intermediate cases. Certainty will not be possible in any case but my own. I have argued that the Total Turing Test is a reasonable empirical criterion for cognitive science and a reasonable intuitive criterion for the rest of us. Moreover, it has the virtue of corresponding to the subjectively compelling criterion we're already using daily in the case of all other minds but our own. 
-- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 4 01:16:45 1986 Date: Thu, 4 Dec 86 01:16:32 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #275 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 275 Today's Topics: Philosophy - Searle, Turing, Symbols, Categories ---------------------------------------------------------------------- Date: 28 Nov 86 06:27:20 GMT From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes: > for me it is not the case that I perceive/experience/ > am-directly-aware-of my performance being caused by anything. > It just happens. Phenomenology is of course not something it's easy to settle disagreements about, but I think I can say with some confidence that most people experience their (voluntary) behavior as caused by THEM. My point about free will's being an illusion is a subtler one. I am not doubting that we all experience our voluntary actions as freely willed by ourselves. That EXPERIENCE is certainly real, and no illusion. What I am doubting is that our will is actually the cause of our actions, as it seems to be. I think our actions are caused by our brain activity (and its causes) BEFORE we are aware of having willed them, and that our experience of willing and causing them involves a temporal illusion (see S. Harnad [1982] "Consciousness: An afterthought," Cognition and Brain Theory 5: 29 - 47, and B. Libet [1986] "Unconscious cerebral initiative and the role of conscious will in voluntary action," Behavioral and Brain Sciences 8: 529 - 566.) Of course, my task of supporting this position would be much easier if the phenomenology you describe were more prevalent... > How do I know I have a mind?... The problem is that if you > look up "mind" in an English-Dutch dictionary, some eight > translations are suggested. The mind/body problem is not just a lexical one; nor can it be settled by definitions. The question "How do I know I have a mind?" is synonymous with the question "How do I know I am experiencing anything at all [now, rather than just going through the motions AS IF I were having experience, but in fact being only an insentient automaton]?" And the answer is: By direct, first-hand experience. > "Consciousness" is more like "appetite"... How can we know for > sure that other people have appetites as well?... "Can machines > have an appetite?" I quite agree that consciousness is like appetite. Or, to put it more specifically: If consciousness is the ability to have (or the actual having of) experience in general, appetite is a particular experience most conscious subjects have. And, yes, the same questions that apply to consciousness in general apply to appetite in particular. But I'm afraid that this conclusion was not your objective here... > Now why is consciousness "real", if free will is an illusion? > Or, rather, why should the thesis that consciousness is "real" > be more compelling than the analogous thesis for free will? > In either case, the essential argument is: "Because I [the > proponent of that thesis] have direct, immediate, evidence of it." 
The difference is that in the case of the (Cartesian) thesis of the reality of consciousness (or mind) the question is whether there is any qualitative, subjective experience going on AT ALL, whereas in the case of the thesis of the reality of free will the question is whether the dictates of a particular CONTENT of experience (namely, the causal impression it gives us) is true of the world. The latter, like the existence of the outside world itself, is amenable to doubt. But the former, namely, THAT we are experiencing anything at all, is not open to doubt, and is settled by the very act of experiencing something. That is the celebrated Cartesian Cogito. > Sometimes we are conscious of certain sensations. Do these > sensations disappear if we are not conscious of them? Or do they go > on on a subconscious level? That is like the question "If a falling > tree..." The following point is crucial to a coherent discussion of the mind/body problem: The notion of an unconscious sensation (or, more generally, an unconscious experience) is a contradiction in terms! [Test it in the form: "unexperienced experience." Whatever might that mean? Don't answer. The Viennese delegation (as Nabokov used to call it) has already made almost a century's worth of hermeneutic hay with the myth of the "subconscious" -- a manifest nonsolution to the mind/body problem that simply consisted of multiplying the mystery by two. The problem isn't the unconscious causation of behavior: If we were all unconscious automata there would be no mind/body problem. The problem is conscious experience. And anthropomorphizing the sizeable portion of our behavior that we DON'T have the illusion of being the cause of is not only no solution to the mind/body problem but not even a contribution to the problem of finding the unconscious causes of behavior -- which calls for cognitive theory, not hermeneutics.] It would be best to stay away from the usually misunderstood and misused problem of the "unheard sound of the falling tree." Typically used to deride philosophers, the unheard last laugh is usually on the derider. > Let us agree that the sensations continue at least if it can be > shown that the person involved keeps behaving as if the concomitant > sensations continued, even though professing in retrospection not > to have been aware of them. So people can be afraid without > realizing it, say, or drive a car without being conscious of the > traffic lights (and still halt for a red light). I'm afraid I can't agree with any of this. A sensation may be experienced and then forgotten, and then perhaps again remembered. That's unproblematic, but that's not the issue here, is it? The issue is either (1) unexperienced sensations (which I suggest is a completely incoherent notion) or (2) unconsciously caused or guided behavior. The latter is of course the category most behavior falls into. So unconscious stopping for a red light is okay; so is unconscious avoidance or even unconscious escape. But unconscious fear is another matter, because fear is an experience, not a behavior (and, as I've argued, the concept of an unconscious experience is self-contradictory). If I may anticipate what I will be saying below: You seem to have altogether too much intuitive confidence in the explanatory power of the concept and phenomenology of memory in your views on the mind/body problem. But the problem is that of immediate, ongoing qualitative experience. 
Anything else -- including the specifics of the immediate content of the experience (apart from the fact THAT it is an experience) and its relation to the future, the past or the outside world -- is open to doubt and is merely a matter of inference, rather than one of direct, immediate certainty in the way experiential matters are. Hence whereas veridical memories and continuities may indeed happen to be present in our immediate experiences, there is no direct way that we can know that they are in fact veridical. Directly, we know only that they APPEAR to be veridical. But that's how all phenomenological experience is: An experience of how things appear. Sorting out what's what is an indirect, inferential matter, and that includes sorting out the experiences that I experience correctly as remembered from those that are really only "deja vu." (This is what much of the writing on the problem of the continuity of personal identity is concerned with.) > Maybe everything is conscious. Maybe stones are conscious... > Their problem is, they can hardly tell us. The other problem is, > they have no memory... They are like us with that traffic light... > Even if we experience something consciously, if we lose all > remembrance of it, there is no way in which we can tell for sure > that there was a conscious experience. Maybe we can infer > consciousness by an indirect argument, but that doesn't count. > Indirect evidence can be pretty strong, but it can never give > certainty. Barring false memories, we can only be sure if we > remember the experience itself. Stones have worse problems than not being able to tell us they're conscious and not being able to remember. And the mind/body problem is not solved by animism (attributing conscious experience to everything); it is merely compounded by it. The question is: Do stones have experiences? I rather doubt it, and feel that a good part of the M/B problem is sorting out the kinds of things that do have experiences from the kinds of things, like stones, that do not (and how, and why, functionally speaking). If we experience something, we experience it consciously. That's what "experience" means. Otherwise it just "happens" to us (e.g., when we're distracted, asleep, comatose or dead), and then we may indeed be like the stone (rather than vice versa). And if we forget an experience, we forget it. So what? Being conscious of it does not consist in or depend on remembering it, but on actually experiencing it at the time. The same is true of remembering a previously forgotten experience: Maybe it was so, maybe it wasn't. The only thing we are directly conscious of is that we experience it AS something remembered. Inference may be involved in trying to determine whether or not a memory is veridical, but it is certainly not involved in determining THAT I am having any particular conscious experience. That fact is ascertained directly. Indeed it is the ONLY fact of consciousness, and it is immediate and incorrigible. The particulars of its content, on the other hand -- what an experience indicates about the outside world, the past, the future, etc. -- are indirect, inferential matters. (To put it another way, there is no way to "bar false memories." Experiences wear their experientiality on their sleeves, so to speak, but all of the rest of their apparel could be false, and requires inference for indirect confirmation.)
Or, why > shouldn't we maintain the position that stones are conscious > as well?... More useful, then, to use "consciousness" only for > experiences that are, somehow, recallable. These stipulations would be arbitrary (and probably false). Moreover, they would simply fail to be faithful to our direct experience -- to "what it's like" to have an experience. The "recallability" criterion is a (weak) external one we apply to others, and to ourselves when we're wondering whether or not something really happened. But when we're judging whether we're consciously experiencing a tooth-ache NOW, recallability has nothing to do with it. And if we forget the experience (say, because of subsequent anesthesia) and never recall it again, that would not make the original experience any less conscious. > the things that go on in our heads are stored away: in order to use for > determining patterns, for better evaluation of the expected outcome of > alternatives, for collecting material that is useful for the > construction or refinement of the model we have of the outside world, > and so on. All these conjectures about the functions of memory and other cognitive processes are fine, but they do not provide (nor can they provide) the slightest hint as to why all these functional and behavioral objectives are not simply accomplished UNconsciously. This shows as graphically as anything how the mind/body problem is completely bypassed by such functional considerations. (This is also why I have been repeatedly recommending "methodological epiphenomenalism" as a research strategy in cognitive modeling.) > Imagine now a machine programmed to "eat" and also to keep up > some dinner conversation... IF hunger THEN eat... equipped with > a conflict-resolution module... dinner-conversation module... > Speaking anthropomorphically, we would say that the machine is > feeling uneasy... apology submodule... PROBABLE CAUSE OF eat > IS appetite... "<... >" > How different are we from that machine? On the information you give here, the difference is likely to be like night and day. What you have described is a standard anthropomorphic interpretation of simple symbol-manipulations. Overzealous AI workers do it all the time. What I believe is needed is not more over-interpretation of the pathetically simple toy tricks that current programs can perform, but an effort to model life-size performance capacity: The Total Turing Test. That will diminish the degrees of freedom of the model to the size of the normal underdetermination of scientific theories by their data, and it will augment the problem of machine minds to the size of the other-minds problem, with which we are already dealing daily by means of the TTT. In the process of pursuing that distant scientific goal, we may come to know certain constraints on the enterprise, such as: (1) Symbol-manipulation alone is not sufficient to pass the TTT. (2) The capacity to pass the TTT does not arise from a mere accretion of toy modules. (3) There is no autonomous symbolic macromodule or level: Symbolic representations must be grounded in nonsymbolic processes. And if methodological epiphenomenalism is faithfully adhered to, the only interpretative question we will ever need to ask about the mind of the candidate system will be precisely the same one we ask about one another's minds; and it will be answered on precisely the same basis as the one we use daily in dealing with the other-minds problem: the TTT. > if we ponder a question consciously... 
I think the outcome is not > the result of the conscious process, but, rather, that the > consciousness is a side-effect of the conflict-resolution > process going on. I think the same can be said about all "conscious" > processes. The process is there, anyway; it could (in principle) take > place without leaving a trace in memory, but for functional reasons > it does leave such a trace. And the word we use for these cognitive > processes that we can recall as having taken place is "conscious". Again, your account seems to be influenced by certain notions, such as memory and "conflict-resolution," that appear to be carrying more intuitive weight than they can bear. Not only is the issue not that of "leaving a trace" (as mentioned earlier), but there is no real functional argument here for why all this shouldn't or couldn't be accomplished unconsciously. [However, if you substitute for "side-effect" the word "epiphenomenon," you may be calling things by their proper name, and providing (inadvertently) a perfectly good rationale for ignoring them in trying to devise a model to pass the TTT.] > it is functional that I can raise my arm by "willing" it to raise, > although I can use that ability to raise it gratuitously. If the > free will here is an illusion (which I think is primarily a matter > of how you choose to define something as elusive as "free will"), > then so is the free will to direct your attention now to this, > then to that. Rather than to say that free will is an "illusion", > we might say that it is something that features in the model > people have about "themselves". Similarly, I think it is better to say > that consciousness is not so much an illusion, but rather something to > be found in that model. A relatively recent acquisition of that model is > known as the "subconscious". A quite recent addition are "programs", > "sub-programs", "wrong wiring", etc. My arm seems able to rise in two important ways: voluntarily and involuntarily (I don't know what "gratuitously" means). It is not a matter of definition that we feel as if we are causing the motion in the voluntary case; it is a matter of immediate experience. Whether or not that experience is veridical depends on various other factors, such as the true order of the events in question (brain activity, conscious experience, movement) in real time, and the relation of the experiential to the physical (i.e., whether or not it can be causal). The same question does indeed apply to willed changes in the focus of attention. If free will "is something that features in the model people have of 'themselves'," then the question to ask is whether that model is illusory. Consciousness itself cannot be something found in a model (although the concept of consciousness might be) because consciousness is simply the capacity to have (or the having of) experience. (My responses to the concept of the "subconscious" and the over-interpretation of programs and symbols are described earlier in this module.) > A sufficiently "intelligent" machine, able to pass not only the > dinner-conversation test but also a sophisticated Turing test, > must have a model of itself. Using that model, and observing its > own behaviour (including "internal" behaviour!), it will be led to > conclude not only that it has an appetite, but also volition and > awareness...Is it mistaken then? Is the machine taken in by an illusion? > "Can machines have illusions?"
What a successful candidate for the TTT will have to have is not something we can decide by introspection. Doing hermeneutics on its putative inner life before we build it would seem to be putting the cart before the horse. The question whether machines can have illusions (or appetites, or fears, etc.) is simply a variant on the basic question of whether any organism or device other than oneself can have experiences. -- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Dec 4 01:17:19 1986 Date: Thu, 4 Dec 86 01:17:15 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #276 Status: R AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 276 Today's Topics: Administrivia - Proposed Split of This Group, Philosophy - Searle, Turing, Symbols, Categories ---------------------------------------------------------------------- Date: 1 Dec 86 09:24:05 est From: Walter Hamscher Subject: Proposed: a split of this group I empathize with the spirit of the motion. But is it really necessary to split the list? I think Ken does a really good thing by putting warning labels on the philosophical discussions: they're easy to skip over if you're not interested. As long as he's willing to put the time into doing that, there's no need for a split. ------------------------------ Date: Mon 1 Dec 86 10:10:19-PST From: Stephen Barnard Subject: One vote against splitting the list I for one would not like to see the AI-list divided into two --- one for "philosophising about" AI and one for "doing" AI. Even those of us who do AI sometimes like to read and think about philosophical issues. The problem, if there is one, is that certain people have been abusing the free access to the list that Ken rightfully encourages. Let's please keep our postings to a reasonable volume (per contributor). The list is not supposed to be anyone's personal soapbox. ------------------------------ Date: 1 Dec 86 18:48:31 GMT From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT) Subject: Re: Proposed: a split of this group Just suggested by jbn@glacier.UUCP (John Nagle): > I would like to suggest that this group be split into two groups; >one about "doing AI" and one on "philosophising about AI", the latter >to contain the various discussions about Turing tests, sentient computers, >and suchlike. Good idea. I was beginning to think the discussions of "when is an artifice intelligent" might belong in "talk.ai." I was looking for articles about how to do AI, and not finding any. The trouble is, "comp.ai.how-to" might have no traffic at all. We seem to be trying to "create artificial intelligence," with the intent that we can finally achieve success at some point (if only we knew how to define success). Why don't we just try always to create something more intelligent than we created before? That way we can not only claim nearly instant success, but also continue to have further successes without end. Would the above question belong in "talk.ai" or "comp.ai.how-to"? Marty M. B. Brilliant (201)-949-1858 AT&T-BL HO 3D-520 houem!marty1 ------------------------------ Date: Sun, 30 Nov 1986 22:27 EST From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU Subject: Searle, Turing, Symbols, Categories Lambert Meertens asks: If some things we experience do not leave a recallable trace, then why should we say that they were experienced consciously? I absolutely agree.
In my book, "The Society of Mind", which will be published in January, I argue, with Meertens, that the phenomena we call consciousness are involved with our short term memories. This explains why, as Meertens suggests, it makes little sense to attribute consciousness to rocks. It also means that there are limits to what consciousness can tell us about itself. In order to do perfect self-experiments upon ourselves, we would need perfect records of what happens inside our memory machinery. But any such machinery must get confused by self-experiments that try to find out how it works - since such experiments must change the very records that they're trying to inspect! This doesn't mean that consciousness cannot be understood, in principle. It only means that, to study it, we'll have to use the methods of science, because we can't rely on introspection. Below are a few more extracts from the book that bear on this issue. If you want to get the book itself, it is being published by Simon and Schuster; it will be printed around New Year but won't get to bookstores until mid-February. If you want it sooner, send me your address and I should be able to send copies early in January. (Price will be 18.95 or less.) Or send name of your bookstore so I can get S&S to lobby the bookstore. They don't seem very experienced at books in the AI-Psychology-Philosophy area. In Section 15.2 I argue that although people usually assume that consciousness is knowing what is happening in the minds, right at the present time, consciousness never is really concerned with the present, but with how we think about the records of our recent thoughts. This explains why our descriptions of consciousness are so queer: whatever people mean to say, they just can't seem to make it clear. We feel we know what's going on, but can't describe it properly. How could anything seem so close, yet always keep beyond our reach? I answer, simply because of how thinking about our short term memories changes them! Still, there is a sense in which thinking about a thought is like thinking about an ordinary thing. Our brains have various agencies that learn to recognize - and even name - various patterns of external sensations. Similarly, there must be other agencies that learn to recognize events *inside* the brain - for example, the activities of the agencies that manage memories. And those, I claim, are the bases of the awarenesses we recognize as consciousness. There is nothing peculiar about the idea of sensing events inside the brain; it is as easy for an agent (that is, a small portion of the brain) to be wired to detect a *brain-caused brain-event*, as to detect a world-caused brain-event. Indeed only a small minority of our agents are connected directly to sensors in the outer world, like those that sense the signals coming from the eye or skin; most of the agents in the brain detect events inside of the brain! In particular, I claim that to understand what we call consciousness, we must understand the activities of the agents that are engaged in using and changing our most recent memories. Why, for example, do we become less conscious of some things when we become more conscious of others? Surely this is because some resource is approaching some limitation - and I'll argue that it is our limited capacity to keep good records of our recent thoughts. Why, for example, do thoughts so often seem to flow in serial streams?
It is because whenever we lack room for both, the records of our recent thoughts must then displace the older ones. And why are we so unaware of how we get our new ideas? Because whenever we solve hard problems, our short term memories become so involved with doing *that* that they have neither time nor space for keeping detailed records of what they, themselves, have done. To think about our most recent thoughts, we must examine our recent memories. But these are exactly what we use for "thinking," in the first place - and any self-inspecting probe is prone to change just what it's looking at. Then the system is likely to break down. It is hard enough to describe something with a stable shape; it is even harder to describe something that changes its shape before your eyes; and it is virtually impossible to speak of the shapes of things that change into something else each time you try to think of them. And that's what happens when you try to think about your present thoughts - since each such thought must change your mental state! Would any process not become confused, which alters what it's looking at? What do we mean by words like "sentience," "consciousness," or "self-awareness"? They all seem to refer to the sense of feeling one's mind at work. When you say something like "I am conscious of what I'm saying," your speaking agencies must use some records about the recent activity of other agencies. But, what about all the other agents and activities involved in causing everything you say and do? If you were truly self-aware, why wouldn't you know those other things as well? There is a common myth that what we view as consciousness is measurelessly deep and powerful - yet, actually, we scarcely know a thing about what happens in the great computers of our brains. Why is it so hard to describe your present state of mind? One reason is that the time-delays between the different parts of a mind mean that the concept of a "present state" is not a psychologically sound idea. Another reason is that each attempt to reflect upon your mental state will change that state, and this means that trying to know your state is like photographing something that is moving too fast: such pictures will be always blurred. And in any case, our brains did not evolve primarily to help us describe our mental states; we're more engaged with practical things, like making plans and carrying them out. When people ask, "Could a machine ever be conscious?" I'm often tempted to ask back, "Could a person ever be conscious?" I mean this as a serious reply, because we seem so ill equipped to understand ourselves. Long before we became concerned with understanding how we work, our evolution had already constrained the architecture of our brains. However, we can design our new machines as we wish, and provide them with better ways to keep and examine records of their own activities - and this means that machines are potentially capable of far more consciousness than we are. To be sure, simply providing machines with such information would not automatically enable them to use it to promote their own development, and until we can design more sensible machines, such knowledge might only help them find more ways to fail: the easier to change themselves, the easier to wreck themselves - until they learn to train themselves. Fortunately, we can leave this problem to the designers of the future, who surely would not build such things unless they found good reasons to.
(Section 25.4) Why do we have the sense that things proceed in smooth, continuous ways? Is it because, as some mystics think, our minds are part of some flowing stream? I think it's just the opposite: our sense of constant steady change emerges from the parts of mind that manage to insulate themselves against the continuous flow of time! In other words, our sense of smooth progression from one mental state to another emerges, not from the nature of that progression itself, but from the descriptions we use to represent it. Nothing can *seem* jerky, except what is *represented* as jerky. Paradoxically, our sense of continuity comes not from any genuine perceptiveness, but from our marvelous insensitivity to most kinds of changes. Existence seems continuous to us, not because we continually experience what is happening in the present, but because we hold to our memories of how things were in the recent past. Without those short-term memories, all would seem entirely new at every instant, and we would have no sense at all of continuity, or of existence. One might suppose that it would be wonderful to possess a faculty of "continual awareness." But such an affliction would be worse than useless because, the more frequently your higher-level agencies change their representations of reality, the harder for them to find significance in what they sense. The power of consciousness comes not from ceaseless change of state, but from having enough stability to discern significant changes in your surroundings. To "notice" change requires the ability to resist it, in order to sense what persists through time, but one can do this only by being able to examine and compare descriptions from the recent past. We notice change in spite of change, and not because of it. Our sense of constant contact with the world is not a genuine experience; instead, it is a form of what I call the "Immanence illusion". We have the sense of actuality when every question asked of our visual systems is answered so swiftly that it seems as though those answers were already there. And that's what frame-arrays provide us with: once any frame fills its terminals, this also fills the terminals of the other frames in its array. When every change of view engages frames whose terminals are already filled, albeit only by default, then sight seems instantaneous.
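[A toy Python sketch of the frame-array mechanism described above, added only to make the idea concrete; it is not Minsky's own formalism, and the class name and terminal labels are invented. Frames in an array share one set of terminals, so a value established from one viewpoint -- or supplied by default -- is already in place when another viewpoint is engaged.]

class FrameArray:
    # A group of view-frames that share a single set of terminal slots.
    def __init__(self, terminal_names, defaults=None):
        self.defaults = dict(defaults or {})
        self.terminals = {name: None for name in terminal_names}  # shared by all frames

    def fill(self, name, value):
        # Filling a terminal from any one viewpoint fills it for the whole array.
        self.terminals[name] = value

    def view(self, name):
        # A new viewpoint finds most terminals already filled, if only by
        # default -- hence the impression that sight is instantaneous.
        value = self.terminals.get(name)
        return value if value is not None else self.defaults.get(name)

# Example: two viewpoints on the same room share terminal values.
room = FrameArray(["door", "window", "floor"], defaults={"floor": "wooden"})
room.fill("door", "open")     # established from the first viewpoint
print(room.view("door"))      # 'open'   -- already filled when the view changes
print(room.view("floor"))     # 'wooden' -- filled only by default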