Date: Tue 21 Jun 1988 17:57-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V7 #42
To: AIList@AI.AI.MIT.EDU
Status: RO


AIList Digest            Wednesday, 22 Jun 1988      Volume 7 : Issue 42

Today's Topics:

  Free Will:
    Free Will & Self Awareness
    Disposing of the free will issue
    on the concept of will

----------------------------------------------------------------------

Date: 14 Jun 88 14:48:51 GMT
From: geb@cadre.dsl.pittsburgh.edu  (Gordon E. Banks)
Subject: Re: Free Will & Self Awareness

In article <2436@uvacs.CS.VIRGINIA.EDU> Carl F. Huber writes:
>In article <5323@xanth.cs.odu.edu> Warren E. Taylor writes:
>>In article <1176@cadre.dsl.PITTSBURGH.EDU>, Gordon E. Banks writes:
>>
>>"Spanking" IS, I repeat, IS a form of redesigning the behavior of a child.
>>Many children listen to you only when they are feeling pain or are
>>anticipating the feeling of pain if they do not listen.
>
Whoa!  This is not a quote from me!  Myself, I would prefer non-violent
forms of punishment, since I think kids learn the legitimacy of violence
from being spanked.  But, I should mention, I don't have kids, so I may
not be the one to ask about it.

------------------------------

Date: Sun, 19 Jun 88 10:16:38 BST
From: Aaron Sloman
Subject: Disposing of the free will issue

(I wasn't going to contribute to this discussion, but a colleague
encouraged me.  I haven't read all the discussion, so I apologise if
there's some repetition of points already made.)

Philosophy done well can contribute to technical problems (as shown by
the influence of philosophy on logic, mathematics, and computing, e.g.
via Aristotle, Leibniz, Frege, Russell).  Technical developments can
also help to solve or dissolve old philosophical problems.  I think we
are now in a position to dissolve the problems of free will as normally
conceived, and in doing so we can make a contribution to AI as well as
philosophy.

The basic assumption behind much of the discussion of free will is

  (A) there is a well-defined distinction between systems whose
      choices are free and those whose choices are not.

However, if you start examining possible designs for intelligent systems
IN GREAT DETAIL you find that there is no one such distinction.  Instead
there are many "lesser" distinctions corresponding to design decisions
that a robot engineer might or might not take -- and in many cases it is
likely that biological evolution tried both (or several) alternatives.

There are interesting, indeed fascinating, technical problems about the
implications of these design distinctions.  Exploring them shows that the
question whether we have free will loses its interest, because among the
REAL distinctions between possible designs there is no one distinction
that fits the presuppositions of the philosophical uses of the term
"free will".  It does not map directly onto any one of the many
different interesting design distinctions.  (A) is false.

"Free will" has plenty of ordinary uses to which most of the
philosophical discussion is irrelevant.  E.g. "Did you go of your own
free will or did she make you go?"  That question presupposes a
well-understood distinction between two possible explanations for
someone's action.  But the answer "I went of my own free will" does not
express a belief in any metaphysical truth about human freedom.  It is
merely a denial that certain sorts of influences operated.  There is no
implication that NO causes, or no mechanisms, were involved.
This is a frequently made common-sense distinction between the existence
or non-existence of particular sorts of influences on a particular
individual's action.  However, there are other, deeper distinctions that
relate to different sorts of designs for behaving systems.

The deep technical question that I think lurks behind much of the
discussion is: "What kinds of designs are possible for agents, and what
are the implications of different designs as regards the determinants of
their actions?"  I'll use "agent" as short for "behaving system with
something like motives".  What that means is a topic for another day.

Instead of one big division between things (agents) with and things
(agents) without free will, we'll then come up with a host of more or
less significant divisions, each expressing some aspect of the
pre-theoretical free/unfree distinction.  Here are some examples of
design distinctions (some of which would subdivide into smaller
sub-distinctions on closer analysis):

- Compare (a) agents that are able simultaneously to store and compare
  different motives with (b) agents that have no mechanisms enabling
  this: i.e. they can have only one motive at a time.

- Compare (a) agents all of whose motives are generated by a single
  top-level goal (e.g. "win this game") with (b) agents with several
  independent sources of motivation (motive generators -- hardware or
  software), e.g. thirst, sex, curiosity, political ambition, aesthetic
  preferences, etc.

- Contrast (a) an agent whose development includes modification of its
  motive generators and motive comparators in the light of experience
  with (b) an agent whose generators and comparators are fixed for life
  (presumably the case for many animals).

- Contrast (a) an agent whose motive generators and comparators change
  partly under the influence of genetically determined factors (e.g.
  puberty) with (b) an agent for whom they can change only in the light
  of interactions with the environment and inferences drawn therefrom.

- Contrast (a) an agent whose motive generators and comparators (and
  higher-order motivators) are themselves accessible to explicit
  internal scrutiny, analysis and change with (b) an agent for which all
  the changes in motive generators and comparators are merely
  uncontrolled side effects of other processes (as in addictions,
  habituation, etc.).  [A similar distinction can be made as regards
  motives themselves.]

- Contrast (a) an agent pre-programmed to have its motive generators and
  comparators change under the influence of the likes and dislikes, or
  approval and disapproval, of other agents with (b) an agent that is
  influenced only by how things affect it.

- Compare (a) agents that are able to extend the formalisms they use for
  thinking about the environment and their methods of dealing with it
  (like human beings) with (b) agents that are not (most other animals?).

- Compare (a) agents that are able to assess the merits of different
  inconsistent motives (desires, wishes, ideals, etc.) and then decide
  which (if any) to act on with (b) agents that are always controlled by
  the most recently generated motive (like very young children?  some
  animals?).

- Compare (a) agents with a monolithic hierarchical computational
  architecture, where sub-processes cannot acquire any motives (goals)
  except via their "superiors", with only one top-level executive
  process generating all the goals driving lower-level systems, with
  (b) agents where individual sub-systems can generate independent
  goals.  In case (b) we can distinguish many sub-cases, e.g.
  (b1) the system is hierarchical, and sub-systems can pursue their
       independent goals if they don't conflict with the goals of their
       superiors;
  (b2) there are procedures whereby sub-systems can (sometimes?)
       override their superiors.  [e.g. reflexes?]

- Compare (a) a system in which all the decisions among competing goals
  and sub-goals are taken on some kind of "democratic" voting basis, or
  by a numerical summation or comparison of some kind (a kind of vector
  addition perhaps), with (b) a system in which conflicts are resolved
  on the basis of qualitative rules, which are themselves partly there
  from birth and partly the product of a complex high-level learning
  system.

- Compare (a) a system designed entirely to take decisions that are
  optimal for its own well-being and long-term survival with (b) a
  system that has built-in mechanisms to ensure that the well-being of
  others is also taken into account.  (Human beings and many other
  animals seem to have some biologically determined mechanisms of the
  second sort -- e.g. maternal/paternal reactions to offspring,
  sympathy, etc.)

- There are many distinctions that can be made between systems according
  to how much knowledge they have about their own states, and how much
  they can or cannot change because they do or do not have appropriate
  mechanisms.  (As usual there are many different sub-cases.  Having
  something in a write-protected area is different from not having any
  mechanism for changing stored information at all.)

There are some overlaps between these distinctions, and many of them are
relatively imprecise, but all are capable of refinement and can be
mapped onto real design decisions for a robot designer (or evolution).
They are just some of the many interesting design distinctions whose
implications can be explored both theoretically and experimentally,
though building models illustrating most of the alternatives will
require significant advances in AI, e.g. in perception, memory,
learning, reasoning, motor control, etc.
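To make a couple of these distinctions concrete, here is a minimal
program sketch (in Python).  It is only an illustration: the names --
Motive, DeliberativeAgent, ReactiveAgent, and the generator functions --
are invented purely for this example and are not taken from any existing
system.  It contrasts an agent that can store several motives at once
and resolve conflicts with an explicit, replaceable comparator against
an agent that is simply driven by whichever motive was generated last.

    # Illustrative sketch only: invented names, not from any existing system.
    # Contrasts two of the design distinctions above:
    #   (1) an agent that can hold several motives at once and resolve
    #       conflicts with an explicit comparator, vs.
    #   (2) an agent always driven by the most recently generated motive.
    from dataclasses import dataclass

    @dataclass
    class Motive:
        name: str
        insistence: float   # how strongly the motive presses for attention

    def thirst_generator(state):
        # One independent source of motivation.
        if state.get("water_level", 1.0) < 0.3:
            return Motive("drink", insistence=0.9)
        return None

    def curiosity_generator(state):
        # Another independent source of motivation.
        if state.get("novel_object_seen", False):
            return Motive("explore", insistence=0.4)
        return None

    class DeliberativeAgent:
        """Several independent motive generators; conflicts are resolved
        by a comparator which is itself a replaceable component (so it
        could, in principle, be modified in the light of experience)."""
        def __init__(self, generators, comparator):
            self.generators = generators
            self.comparator = comparator
            self.motives = []          # several motives stored at once

        def step(self, state):
            for generate in self.generators:
                motive = generate(state)
                if motive is not None:
                    self.motives.append(motive)
            if not self.motives:
                return "idle"
            chosen = self.comparator(self.motives)
            self.motives.remove(chosen)
            return chosen.name

    class ReactiveAgent:
        """Only one motive at a time: whichever was generated last wins."""
        def __init__(self, generators):
            self.generators = generators

        def step(self, state):
            current = None
            for generate in self.generators:
                motive = generate(state)
                if motive is not None:
                    current = motive   # later motives overwrite earlier ones
            return current.name if current else "idle"

    if __name__ == "__main__":
        state = {"water_level": 0.1, "novel_object_seen": True}
        a = DeliberativeAgent(
            [thirst_generator, curiosity_generator],
            comparator=lambda ms: max(ms, key=lambda m: m.insistence))
        b = ReactiveAgent([thirst_generator, curiosity_generator])
        print(a.step(state))   # "drink"   -- chosen by comparing insistence
        print(b.step(state))   # "explore" -- merely the last motive generated

Run as written, the first agent chooses "drink" because its comparator
weighs the competing motives, while the second ends up pursuing
"explore" merely because curiosity happened to be generated last.
Swapping in a different comparator (qualitative rules rather than a
numerical comparison, say) is the kind of design change several of the
distinctions above turn on.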
When we explore the fascinating space of possible designs for agents,
the question which of the various systems has free will loses interest:
the pre-theoretic free/unfree contrast totally fails to produce any one
interesting demarcation among the many possible designs -- it can be
loosely mapped onto several of them.  So the design distinctions define
different notions of freedom: free(1), free(2), free(3), ....  However,
if an object is free(i) but not free(j) (for i /= j), then the question
"But is it really FREE?" has no answer.

It's like asking: what's the difference between things that have life
and things that don't?  The question is (perhaps) OK if you are
contrasting trees, mice and people with stones, rivers and clouds.  But
when you start looking at a larger class of cases, including viruses,
complex molecules of various kinds, and other theoretically possible
cases, the question loses its point, because it uses a pre-theoretic
concept ("life") that doesn't have a sufficiently rich and precise
meaning to distinguish all the cases that can occur.  (Which need not
stop biologists introducing a new, precise, technical concept and using
the word "life" for it.  But that doesn't answer the unanswerable
pre-theoretical question about precisely where the boundary lies.)

Similarly with "what's the difference between things with and things
without free will?"  This question rests on the false assumption (A).
So, to ask whether we are free is to ask which side of a boundary we are
on when there is no particular boundary in question.  (Which is one
reason why so many people are tempted to say "What I mean by free
is...", and then produce different, incompatible definitions.)

I.e. it's a non-issue.  So let's examine the more interesting detailed
technical questions in depth.

(For more on motive generators, motive comparators, etc. see my (joint)
article in IJCAI-81 on robots and emotions, or the sequel "Motives,
Mechanisms and Emotions" in the journal Cognition and Emotion, Vol. 1,
No. 3, 1987.)

Apologies for length.  Now, shall I or shan't I post this.........????

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England

ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
          aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
JANET   : aarons@cvaxa.sussex.ac.uk
BITNET  : aarons%uk.ac.sussex.cvaxa@uk.ac
     or   aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu
As a last resort (it costs us more...)
UUCP    : ...mcvax!ukc!cvaxa!aarons  or  aarons@cvaxa.uucp

------------------------------

Date: Mon, 20 Jun 88 12:56 O
From:
Subject: on the concept of will
Distribution-File: AILIST@AI.AI.MIT.EDU

This is an attempt by me to do some research into the concept of free
will.

First, I would recommend to everyone Carlos Castaneda's books.  They
approach the concept of will from the point of view of Yaqui Indian
knowledge.  The Yaqui have their own scientific tradition,
anthropologically studied by Castaneda.  Their science is very different
from Western science, but non-trivial and honorable.

Secondly, we might have a look at life itself and study what people
actually do will in real life.  Examples:

  * marry a lovely spouse and raise smart children
  * exceed one's sales quota at IBM
  * beat the competition in Silicon Valley
  * travel to Israel and learn Hebrew
  * kill that enemy soldier with one's bayonet
  * find out what life, the universe, and everything are
  * explain it to others
  * relax with a good book and California wine

Andy Ylikoski

------------------------------

Date: 21 Jun 88 15:28:15 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: Free Will & Self-Awareness

In article <306@proxftl.UUCP> T. William Wells writes:
>Let's consider a relatively uncontroversial example.  Say I have
>a hot stove and a pan over it.  At the entity level, the stove
>heats the pan.  At the process level, the molecules in the stove
>transfer energy to the molecules in the pan.
> ...
>Now, I can actually try to answer your question.  At the entity
>level, the question "how do I cause it" does not really have an
>answer; like the hot stove, it just does it.  However, at the
>process level, one can look at the mechanisms of consciousness;
>these constitute the answer to "how".

I do not yet see your distinction in this example.  What is the
difference between saying the stove _heats_ the pan and saying the
molecules _transfer_energy_?  The distinction must be made in the way we
describe what's happening.  In each case above, you seem to be giving
the pan and the molecules volition.  The stove does not heat the pan.
The stove is hot.  The pan later becomes hot.  Molecules do not transfer
energy.  The molecules in the stove have energy s+e.  Then the molecules
in the pan have energy p+e and the molecules in the stove have energy s.
So it seems that both cases here are at the entity level, since the
answer to "how do I cause it" is the same.

If I have totally missed the point, could you please try again?

-carl

------------------------------

End of AIList Digest
********************