Date: Wed 12 Oct 1988 10:59-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest V8 #105
To: AIList@AI.AI.MIT.EDU
Status: R

AIList Digest            Wednesday, 12 Oct 1988     Volume 8 : Issue 105

Philosophy -- Consciousness (4 messages)

----------------------------------------------------------------------

Date: Mon, 10 Oct 88 18:48:27 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Consciousness

>Self-awareness may, in turn, be defined as the capacity of a sentient
>system to monitor itself.

Over the years I've heard many people object to self-referential systems
for a variety of reasons (in AIList, for instance, in the recent
discussion of linguistic paradoxes).  Some of these objections seem to be
based on emotional grounds, others on the fact that we have no analytical
theory to handle self-reference.  Yet self-reference seems to be at the
core of much human thought, certainly of consciousness, so we must
develop such a theory.

------------

>Julian Jaynes (The Origin of Consciousness in the Breakdown of the
>Bicameral Mind) is very persuasive when he argues that consciousness is
>not required for the use of human language or every-day human activities.

Human thought seems to be a hierarchy of cooperating (and sometimes
competing) processes.  Consciousness seems to have a (the?) major role of
integrating these processes.  So even though the components of
human-language use might not require consciousness, the fullest use of
human language would.

------------

>[W]ould such a machine have to be "creative"?  And if so, how would
>we measure the machine's creativity?

(Apologies to any who've heard this before.)  As an artist in several art
forms (though expert in only a couple), I use creativity as routinely and
reflexively as I walk.  It's no more (and no less) mysterious than
walking.
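The definition quoted at the top of this message -- self-awareness as the
capacity of a system to monitor itself -- can be caricatured in a few lines
of code.  This is only an illustrative sketch, not anything from the
discussion; all class and method names are invented.

```python
# Toy sketch of "a system that monitors itself": the object records its
# own operations and can report on that record.  Invented for
# illustration only.
class SelfMonitor:
    def __init__(self):
        self.log = []            # the system's record of its own activity

    def act(self, action):
        self.log.append(action)  # self-monitoring: it observes its own acts
        return f"did {action}"

    def introspect(self):
        # self-reference: the system describes its own history
        return f"I have performed {len(self.log)} actions: {self.log}"

m = SelfMonitor()
m.act("sense")
m.act("plan")
print(m.introspect())
```

Of course, nothing here suggests sentience; the point is only that
self-reference by itself poses no mechanical difficulty -- the hard part
is the missing analytical theory.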
Basically, creativity is the combining of memes (which I define as MEMory
Elements) to form more complex memes.  This combining has a random
element but is guided to some extent.  One form of guidance involves a
goal-seeking mechanism that provides a mask against which new memes are
compared.  Parts of the mask have don't-care attributes that can be
turned on or off to make the search for a solution more or less open.
Memes that filter through the mask become part of the meme pool and can
be used as components of other memes, or mutated to form yet newer memes.

A fair amount of skill is involved in selecting the right amount of
filtering.  Too little and one is overwhelmed by wild ideas; too much and
you may filter out the odd but elegant solution--or the ridiculous
solution that forms the root of a search that does find the solution.

Skill is also involved in setting up the creative search.  Most of the
search is done subconsciously, but it is launched by a conscious
decision.  Before this is done, you must stock up on memes relevant to
the problem, which includes ingesting them (via reading, talking with
co-workers, watching videos or experiments, etc.) and learning them (by
playing with them and, through repetition, making them part of long-term
memory).

And skill is involved in ensuring that conscious activity does not
interfere with the subconscious search.  Part of this involves staying
away from the particular problem or similar problems, and not launching a
second creative episode before receiving the results of the first.

I see no reason, however, why the mechanisms that drive creativity should
have to be conscious.  This is not to say that creativity doesn't enrich
or support the mechanisms of consciousness.

Larry @ vlsi.jpl.nasa.gov

------------------------------

Date: 11 Oct 88 13:20:01 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Intelligence / Consciousness Test for Machines (Neural-Nets)???
In article <263@balder.esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson)
writes:

>Consciousness is a *subjective* phenomenon.
>It is truly not even possible to determine if your neighbor is conscious.

I think the best way to determine whether someone is conscious is to
carry on a conversation with them.  (The interaction need not be verbal.
One can use visual or tactile channels, or non-verbal auditory channels.)
There are interesting anecdotes about autistic children who were coaxed
into normal modes of communication by starting with primitive
stimulus-response modes.  The Helen Keller story also dramatizes such a
breakthrough.

One of the frontiers is the creation of a common language between humans
and other intelligent mammals such as chimps and dolphins.

--Barry Kort

------------------------------

Date: 12 Oct 88 00:17:53 GMT
From: clyde!watmath!watdcsu!smann@bellcore.bellcore.com (Shannon Mann - I.S.er)
Subject: Re: Intelligence / Consciousness Test for Machines (Neural-Nets)???

In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu.UUCP (Rudy Mician)
writes:

>When can a machine be considered a conscious entity?
>
>For instance, if a massive neural-net were to start from a stochastic
>state and learn to interact with its environment in the same way that
>people do (interact, not think), how could one tell that such a machine
>thinks or exists (in the same context as Descartes' "COGITO ERGO
>SUM"/"DUBITO ERGO SUM" argument)?  That is, how could one tell whether or
>not an "I" exists for the machine?

Only the _machine_ can adequately answer the question.  If the _machine_
asks 'What/Who am I?', then by any reasonable definition of self-awareness
I can think of, the machine is self-aware.  If the _machine_ can sense and
react to the environment, it is (on some primitive level) aware.  Science
has already provided us with machines that are far more _aware_ than the
common amoeba.
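Mann's minimal criterion -- a machine that can sense and react to its
environment is aware "on some primitive level" -- describes nothing more
than a stimulus-response loop.  A hedged sketch, with all names and the
stimulus/response table invented for illustration:

```python
# A minimal sense-and-react loop, the kind of "primitive awareness" a
# thermostat or an amoeba exhibits.  Purely illustrative.
def react(stimulus):
    # fixed stimulus -> response table; anything unrecognized is ignored
    responses = {"light": "move_toward", "heat": "move_away"}
    return responses.get(stimulus, "ignore")

environment = ["light", "heat", "noise"]
for s in environment:
    print(s, "->", react(s))
```

The gap between this loop and asking 'What/Who am I?' is exactly the gap
between awareness and self-awareness that the message goes on to discuss.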
Until the scientific community refines its ideas of what awareness and
self-awareness entail, the above question cannot be answered with any
accuracy.  Is it possible?  Certainly!  Consciousness occurs within
biological systems, so why not within mechanical systems of sufficient
complexity?  If we consider the vastness of space and time, and that any
event which has occurred once can occur again, it is reasonable to
conclude that _self-awareness_ will arise out there again and that, more
than likely, it will take a different form than ours.  Knowing this, is
it so difficult to accept the possibility of creating the same?

>Furthermore, would such a machine have to be "creative"?  And if so, how
>would we measure the machine's creativity?

This question could, and should, be asked about humans.  When is a human
creative?  When we invent something, is it not the re-application of some
known idea?  Or an accidental discovery?  In my mind, creativity is the
ability to synthesize _something_ from a group of _something_different_.
My definition does not include the concept of self-direction, and so
should be modified.  Regardless, it does touch upon the basic idea that
_to_create_ means to take _what_is_ and make _something_new_.  By this
definition, _life_ is creative :-)

>I suspect that the Turing Test is no longer an adequate means of judging
>whether or not a machine is intelligent.

Here we go upon a different tack.  Intelligence is quite different from
self-awareness.  I do not want to define intelligence, as the term is
used and misused in so many ways that coherent dialogue about it is of
suspect worth.  My definition certainly would not clear up any ambiguity,
and would probably start a flame war of criticism.  Self-awareness is
exactly that: to be aware of oneself, separate from the environment you
exist in.  Intelligence... well, you go figure.  However, there is a
difference.
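The view of creativity sketched here -- synthesizing _something_new_ from
a group of _something_different_ -- and Larry's earlier description of
combining memes against a goal mask with don't-care attributes can be
caricatured together in a few lines.  This is a toy under invented names
and data, not a claim about how either poster would implement it:

```python
import random

# Creativity as guided recombination: splice existing "memes" at random,
# keep only candidates that pass a goal mask.  None entries in the mask
# are "don't-care" attributes; more of them makes the search more open.
def matches(candidate, mask):
    # pass if the candidate agrees with the mask at every specified slot
    return all(m is None or c == m for c, m in zip(candidate, mask))

def recombine(pool, rng):
    a, b = rng.sample(pool, 2)
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]          # splice two memes into a new one

rng = random.Random(0)
pool = [("red", "round", "small"), ("blue", "flat", "large"),
        ("green", "round", "large")]
goal = (None, "round", None)          # any colour, must be round, any size

for _ in range(20):
    new = recombine(pool, rng)
    if matches(new, goal) and new not in pool:
        pool.append(new)              # survivors join the meme pool

print(pool)
```

Turning don't-care slots on or off adjusts the filtering, which is exactly
the skill Larry described: too open and wild ideas flood in, too strict
and the odd but elegant solution never survives the mask.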
>If anyone has any ideas, comments, or insights into the above questions
>or any questions that might be raised by them, please don't hesitate to
>reply.

Well, you asked....

I know about much of the research that has been done on the topic of
self-learning systems.  The idea is that, if a machine can learn like
humans, then it must be like humans.  However, humans do not learn in the
simplified manner that these systems employ.  Humans learn how a
particular system or process works and can then re-apply that heuristic
(am I using this term correctly?) under different circumstances.  Has
this heuristic approach been attempted in machine-learning systems?  I
don't believe so, and would appreciate any response.

>Rudy Mician   mician@usfvax2.usf.edu
>Usenet: ...!{ihnp4, cbatt}!codas!usfvax2!mician

-=- -=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca -=-

P.S.  Please do not respond with any egocentric views about what it is to
be human, etc.  I see humanity as different from the rest of the animal
kingdom, but in no way superior.  Having the power to damage our planet
the way we do does not mean we are superior.  Possessing and using that
power only shows our foolishness.

------------------------------

Date: 12 Oct 88 00:33:52 GMT
From: clyde!watmath!watdcsu!smann@bellcore.bellcore.com (Shannon Mann - I.S.er)
Subject: Re: Here's one ...

In article <409@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>
>Have you ever thought about what the brain is doing between thoughts?

Thinking, what else?  We are aware of one thought/idea/concept, etc., at
a time.  Evidently, the mind does not cease functioning when we choose
not to focus upon its internal workings.  There is a continuous cosmic
soup of thought circulating through your brain at any one time, operating
at many different levels.  Our awareness is of only a small segment of
the total whole.  For example, the mind is constantly deleting stimulus
that doesn't change (stimulus adaptation).
We are not conscious of the process, yet it is continuous.  Next
question...  Is this thought?

-=- -=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca -=-

------------------------------

End of AIList Digest
********************