Date: Sun 18 Sep 1988 15:25-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest V8 #87
To: AIList@AI.AI.MIT.EDU
Status: R


AIList Digest            Monday, 19 Sep 1988      Volume 8 : Issue 87

Philosophy:

  The Uncertainty Principle
  State and change/continuous actions (2 messages)
  Why?

----------------------------------------------------------------------

Date: Thu, 15 Sep 88 14:59:38 edt
From: bph%buengc.bu.edu@bu-it.BU.EDU (Blair P. Houghton)
Subject: Re: The Uncertainty Principle.

>In Vol 8 # 78 Blair Houghton cries out:-

>> I do wish people would keep *recursion* and *perturbation* straight
>> and different from the Uncertainty Principle.

And Gordon Joly Whines Back:

>Perhaps... But what is the *perturbation* in question? "Observation"?

By "recursion" I actually meant feedback, which was the process to
which Heisenberg-o-morphic uncertainty was being applied in order to
invoke chaos in artificially intelligent systems.

Lessee if I can verbosify the intuitions: uncertainty exists because
one cannot determine the state of a particle system unless one has

	a. infinite time to make the measurement with zero energy; or,
	b. infinite energy to make the measurement in zero time.

(It is usually described, equivalently, like this: determining the
momentum requires a long distance over which to observe, hence the
particle's position, which can be anywhere along that distance, is not
known; and determining the position requires a very short distance of
observation, which increases the error in the momentum measurement.)
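For reference, the standard quantitative form of the tradeoffs sketched
in (a) and (b) above, with hbar the reduced Planck constant (h/2 pi);
the notation below is a gloss added for clarity, not part of the quoted
exchange:

    % position--momentum and energy--time uncertainty relations
    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
    \qquad
    \Delta E \, \Delta t \;\ge\; \frac{\hbar}{2}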
This is manifest in the fact that adding energy to the system in order
to make an understandable observation will necessarily change the state
of the system.

This DOES NOT mean that observing the system creates uncertainty.  Such
a claim is equivalent to saying that observing the perfectly flat
surface of the ocean causes waves to form, when in fact it is the
observer's boat bobbing in the water that causes those waves.

THIS is the "perturbation in question."

>Blair also observes
>
>> Electrons "know" where they are and where they are going.
>
>And I know where I'm coming from too, Man!
>
>On page 55 (of the American edition) of "A Brief History of Time",
>Professor Stephen Hawking says

And I'm s'posed to argue?  No way.

>``The uncertainty principle had profound implications for the way in
>which we view the world... The uncertainty principle signaled an
>end to Laplace's dream of a theory of science, a model of the
>universe that could be completely deterministic: one certainly
>cannot predict future events exactly if one cannot even measure
>the present state of the universe precisely!''
>
>And what of "chaos"?

Practically, it means we have to keep our error-bars polished and
ready.  I wasn't ready for infinite-precision laboratory equipment,
anyway.

Theoretically, it means our theory has to be treated the same way we
treat experimental data; we could even begin to consider current theory
to be the data of logical-deduction experiments, which is, I believe, a
view consistent with Einstein's view of mathematics as a method for
describing nature that is imprecise from the outset.

				--Blair
				  "It's always a nice feeling
				   to be consistent with Einstein."

------------------------------

Date: 16 Sep 88 21:25:29 GMT
From: uflorida!fishwick@gatech.edu (Paul Fishwick)
Subject: state and change/continuous actions

An inquiry into concepts of "state" and "change":

In browsing through Genesereth and Nilsson's recent book "Logical
Foundations of Artificial Intelligence," I find it interesting to
compare and contrast the concepts described in Chapter 11, "State and
Change," with the state/change concepts defined within systems theory
and simulation modeling.  The authors make the following statement:
"Insufficient attention has been paid to the problem of continuous
actions."

Now, a question that immediately comes to mind is "What problem?"
Perhaps they are referring to the problem of defining semantics for
"how humans think about continuous actions."  This leads to some
interesting questions:

1) Clearly, the vast literature on math modeling is indicative of "how
humans think about continuous actions."  This knowledge is in a
compiled form, and use of this knowledge has served science in an
untold number of circumstances.

2) If commonsense knowledge representation is the issue, then we might
want to ask a fundamental question: "Why do we care about representing
commonsense knowledge about continuous actions?"

I can see 2 possible goals.  One goal is to validate some given theory
of commonsense "continuous action" knowledge against actual
psychological data.  Then we could say, for instance, that Theory XYZ
reflects human thought and is therefore useful.  I don't think it would
be useful for increasing our knowledge of mechanics or fluidics, for
instance, but perhaps a psychotherapist might find this knowledge
useful.  A second goal is to obtain a better model of the continuous
action (this reflects the "AI is an approach to problem solving"
method, where one can study "how Johnny reasons when balls are bounced"
and obtain a scientifically superior model regardless of its actual
psychological validity).

Has anyone seen a commonsense model of continuous action that is an
improvement over systems of differential equations, graph-based
queueing models (and the other assorted formal languages for systems
and simulation)?
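For concreteness, here is a minimal sketch in Python of the kind of
differential-equation model of a continuous action meant above: a
bouncing ball integrated with a fixed-step Euler scheme.  The constants
and the function name are invented purely for illustration and are not
drawn from any of the systems or books mentioned.

# Minimal sketch: the continuous action "a ball bounces" modeled as a
# system of ordinary differential equations, dh/dt = v and dv/dt = -g,
# integrated with a fixed-step Euler scheme.  The restitution factor
# and all other constants are illustrative only.

def simulate_bounce(h0=10.0, v0=0.0, g=9.81, restitution=0.8,
                    dt=0.001, t_end=5.0):
    h, v = h0, v0
    trajectory = [(0.0, h)]
    steps = int(t_end / dt)
    for i in range(1, steps + 1):
        h += v * dt                  # dh/dt = v
        v -= g * dt                  # dv/dt = -g
        if h <= 0.0:                 # discrete "bounce" event in the flow
            h, v = 0.0, -restitution * v
        trajectory.append((i * dt, h))
    return trajectory

if __name__ == "__main__":
    for t, h in simulate_bounce()[::500]:    # print a sample every 0.5 s
        print("t = %4.1f s   h = %5.2f m" % (t, h))

The question above is whether a commonsense representation of the same
action buys anything that this sort of model does not.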
" I can see 2 possible goals: One goal is to validate some given " ... To reason about continuous actions where the physics hasn't been worked out or is computationally infeasible. How about that as a third goal? " Obviously, I'm trying to spark some inter-group discussion and so I hope " that any responses will post to both the AI group (comp.ai) AND " the SIMULATION group (comp.simulation). In addition (sci.math) and " (comp.theory.dynamic-sys) may be appropriate. Tsk, tsk. Left out sci.lang. The way people think about these things is reflected in the tense/aspect systems of natural languages. " I believe that Genesereth and Nilsson are quite correct that "reasoning " about time and continous actions" is an important issue. However, an " even more important issue revolves around people discussing " concepts about "state," "time," and "change" by crossing disciplines. " Any thoughts? In English, predicates which can occur with Agent subjects, those capable of deliberate action, can also occur in the progressive aspect, expressing continuous action. This suggests some connection between intent and continuity whose nature is not obvious, to me anyway. Greg, lee@uhccux.uhcc.hawaii.edu ------------------------------ Date: 17 Sep 88 23:40:47 GMT From: markh@csd4.milw.wisc.edu (Mark William Hopkins) Subject: Why? Any time that one sets out to deal with a major problem, there is usually some kind of end-state that is desired, an IDEAL if you will. It's a necessary component of the problem solving task; so much so that if you were to lack the goals and direction you would just end up floundering and meandering -- and that's what is often (wrongly) perceived as doing philosophy. So this brings up the question on my mind: Why does anyone want artificial intelligence? What is it that you're seeking to gain by it? What is it that you would have an intelligent machine do? And when you answer these questions then answer how and why considering AI seems more urgent today than ever before. Link what I've just said in the first two paragraphs. You'll see that it is a recursive problem. It applies both to AI and to you in the quest of seeking AI. If you want to successfully deal with the problem of AI, then you are going to have to know just what it is that you are trying to do. Human curiosity (about the nature of our mind) is one thing, but even that has to be directed toward a pressing need -- so the question remains just what the pressing need is. To say that we merely desire to understand the mind is just a way of rephrasing the question -- it is not an answer. I asked the question and raised the issue, so probably I should try to answer it too. The first thing that comes to mind is our current situation as regards science -- its increasing specialization. Most people will agree that this is a trend that has gone way too far ... to the extent that we may have sacrificed global perspective and competence in our specialists; and further that it is a trend that needs to be reversed. Yet fewer would dare to suggest that we can overcome the problem. I dare. One of the most important functions of AI will be to amplify our own intelligence. In fact, I believe that time is upon us that this symbiotic relation between human and potentially intelligent machine is triggering an evolutionary change in our species as far as its cognitive abilities are concerned. Seen this way, we'll realise that the axiom still holds that: THE COMPUTER IS A TOOL. It's an Intelligent Tool -- but a tool nevertheless. 
Nowadays, for instance, we credit ourselves with the ability to travel
at high speed (60 mph in a car) even though it is really the machine
that is doing it for us.  So it is going to be with intelligent tools,
and in this way the problem of the information explosion is going to be
solved.

Slowly, it is dawning on us that the very need for specialization is
becoming obsolete.  A major determinant of how fragmented science is,
is how much communication takes place.  I submit that the information
explosion is, for the most part, an explosion in redundancy brought
about by a communication bottleneck.  Our goal, then, is to find a way
to open up this bottleneck.  It is here, again, that AI (especially in
relation to intelligent data bases) may come to the rescue.

That is what I see as the Why's.

------------------------------

End of AIList Digest
********************