Date: Sun 19 Jun 1988 21:18-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V7 #39
To: AIList@AI.AI.MIT.EDU
Status: RO

AIList Digest            Monday, 20 Jun 1988       Volume 7 : Issue 39

Today's Topics:

  Fuzzy systems theory
  Human-Human Communication

  Philosophy:
    Biological relevance and AI
    determinism a dead issue?
    Cognitive AI vs Expert Systems
    Biological relevance and AI
    Consensual realities are structurally unstable

----------------------------------------------------------------------

Date: 19 Jun 88 11:57:24 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu (T. William Wells)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU>, pollock@usfvax2.EDU (Wayne Pollock) writes:
> On the other hand, set theory, which underlies much of current theory, is
> also based on fallacies; (given the basic premises of set theory one can
> easily derive their negation).

Just where DID you get that idea?  While it was true of the set theory of
around a century ago, it is NOT true of set theory today.

> As long as fuzzy logic provides a framework
> for discussing various concepts and mathematical ideas, which would be hard
> to describe in traditional terms, the theory serves a purpose.

You seemed to miss my point: fuzzy systems theory MIGHT be an interesting
form of mathematics (but ask a mathematician, don't ask me); BUT in its
current form it is not valid as a means of representing the real world.

------------------------------

Date: Wed, 15 Jun 88 13:14:29 EDT
From: "William J. Joel"
Subject: Human-Human Communication

It seems to me that recent discussion on this topic has been running around
in circles.  First off, all communication is coded.  The types that humans
use are merely ways to encapsulate thought so that another human might
attempt to understand what the first human meant.  In order to truly
'understand' each other we would first have to understand exactly how the
brain works ... exactly.  Since that's far off, anything we do is but an
approximation.

------------------------------

Date: 17 Jun 88 20:45:01 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu (Tom Holroyd)
Subject: Re: Human-human communication

In article <33343@linus.UUCP>, bwk@mitre-bedford.ARPA (Barry W. Kort) writes:
> How can we talk about that which cannot be encoded in language?
[stuff deleted]
> I know how to walk, how to balance on a bicycle, and how to reduce
> my pulse.  But I can't readily transmit that knowledge in English.
> In fact, I don't even know how I know these things.

You ride a bicycle by transforming input signals from your sensory system
into output signals for your muscles.  On the way, these signals are
modified by a large number of factors, including some conscious ones which
we will ignore.  The input/output signals can be represented as vectors,
and the transformation is a mapping from one vector space to another.

If you train a neural net to learn the mapping from sense data to leg
movement (and I'm only talking about simple motion here), the connections
of the network encode the knowledge of how to ride a bicycle.  That is
enough to build a robot that can ride a bike.  Maybe not cross an
intersection safely.. :-)

Or, I could list a bunch of differential equations that describe the
dynamics of riding a bike.
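To make the vector-space picture concrete, here is a toy sketch of a small
network learning a fixed vector-to-vector mapping (a stand-in for "sense
data -> leg movement") by gradient descent.  The library (NumPy), the
vector sizes, and the synthetic data are illustrative assumptions only,
not a real sensorimotor controller.

    # Toy sketch only: dimensions, data, and training details are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    n_sense, n_hidden, n_motor = 8, 16, 4           # assumed vector sizes
    X = rng.normal(size=(200, n_sense))             # fake "sense" vectors
    true_map = rng.normal(size=(n_sense, n_motor))  # mapping to be learned
    Y = np.tanh(X @ true_map)                       # fake "motor" targets

    W1 = rng.normal(scale=0.1, size=(n_sense, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_motor))

    lr = 0.05
    for step in range(2000):
        H = np.tanh(X @ W1)                 # hidden layer
        Y_hat = H @ W2                      # predicted motor outputs
        err = Y_hat - Y
        dW2 = H.T @ err / len(X)            # backprop through output layer
        dH = (err @ W2.T) * (1.0 - H**2)    # through the tanh nonlinearity
        dW1 = X.T @ dH / len(X)
        W1 -= lr * dW1
        W2 -= lr * dW2

    print("final mean-squared error:", float((err**2).mean()))

After training, the weights W1 and W2 are the "connections of the network"
in the sense above: they encode the mapping, but nothing in them reads as
an English description of how to ride.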
Neither of these descriptions is complete, and the connectionist form would
include a lot of floating-point data, so they don't really count as
describing anything in English.  However, by analyzing the forms of the
equations, it is often possible to develop an understanding of what is
going on.  Does reducing the problem to a mathematical description count?

The next step would be to develop a jargon to cover the dynamics of the
situation.  Maybe we just don't have terms for many of the actions required
for bike riding.

Summary: everything can be described mathematically, and the mathematics
can be described in English.  Caveat: we haven't figured out how to
describe everything using mathematics yet.  To me, this is the real
problem.  Some subjective phenomena may well prove to be irreducible in the
sense that in order to understand why a person thinks something is
beautiful (say), we'll need to have a large part of that person's brain
state, and no amount of mathematical gymnastics will make the data any less
complex.  (For example, a list of numbers describing a stone falling can be
reduced to a simple quadratic equation.  Brain states don't seem to be this
simple.)

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh
The white knight is talking backwards.

------------------------------

Date: 15 Jun 88 16:34:15 GMT
From: wlieberm@teknowledge-vaxc.arpa (William Lieberman)
Subject: Re: Biological relevance and AI (was Re: Who else isn't a science?)

Just to add slightly to Ben and Mike's discussion: Ben's naturally good
question about why anyone should assume that we humans on earth uniquely
possess capabilities such as intelligence (i.e. the biological system that
makes us up), and Mike's reply that such an assumption is not really made,
remind me of the question asked in a not-too-distant earlier age, when
scientists wondered, 'How likely is it that the chemistry of the world, as
we know it, exists in the same state outside the earth?'  A reasonable
question.  Then, when helium was demonstrated to exist on the sun (through
spectrographic analysis around the 1860's??) and around the same time the
table of the elements was being built up empirically and intuitively, the
evidence favored the idea that our local chemical and physical laws were
probably universal.

As a youngster I used to wonder why chemists, etc. kept saying there are
only around 100 or so elements in the universe.  Why couldn't there be
millions?  But the data do suggest the chemists are correct - with
relatively few elements, such is the matter of the universe existing.
What I'm saying here is that it may be prudent to expect not too many
diverse 'forms' of intelligence around.  Rough analogy, I agree; but
sometimes the history of science can provide useful guideposts.  Right now
we have some sensible ideas about what it takes to do certain kinds of
analyses; but no one really knows what it takes to enable a state of
consciousness to exist, for example.  One answer surely lies in research in
biophysics (and probably CS-AI).

Bill Lieberman

------------------------------

Date: Fri, 17 Jun 88 08:42:08 EDT
From: "Bruce E. Nevin"
Subject: determinism a dead issue?

Is the notion of determinism not deeply undercut by developments in the
study of nonlinearity and chaos?  There is sufficient nonlinearity in the
workings of brains, bodies, and interacting agents in the world to ensure
that simple billiard-ball, click-click-click-in-the-pocket determinism is
not even an approximation.
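As a toy illustration of that point (the example and its parameters are
assumptions chosen for illustration, not part of the original argument):
even a one-line deterministic recurrence, once it is nonlinear, defeats
billiard-ball prediction in practice, because trajectories that start
almost identically soon diverge completely.

    # Toy sketch: the logistic map x -> r*x*(1-x) is fully deterministic,
    # yet at r = 4.0 two trajectories that start 1e-9 apart become
    # unrelated within a few dozen steps (sensitive dependence on
    # initial conditions).  The starting values are arbitrary choices.
    def logistic_trajectory(x0, r=4.0, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.300000000)
    b = logistic_trajectory(0.300000001)   # differs from a by 1e-9

    for t in (0, 10, 20, 30, 40, 50):
        print("t=%2d  a=%.6f  b=%.6f  |a-b|=%.2e"
              % (t, a[t], b[t], abs(a[t] - b[t])))

Both runs obey exactly the same rule, so this is not indeterminism; it is
the practical impossibility of click-click-click prediction once
nonlinearity is present.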
There seems to me to be a parallel to Bateson's discussion of creatura vs.
pleroma, terms borrowed from Jung.  If I remember correctly which is which,
pleroma (the term derives from a root having to do with "fullness", as in
"plenary session") is the deterministic cause-effect realm amenable to
description in simple, linear, Newtonian terms; creatura involves
metabolism, where outputs are not directly predictable from inputs in terms
of forces and impacts, and where what Bateson elaborates as "cybernetic
explanation" applies.  He argued that imagery of forces and impacts was
inappropriate for most of what is important to us.  He was not aware of, or
at any rate did not write about, the relationship of this to nonlinearity
and chaos before his death.

What is the relationship between the two?  Is it the case that systems
involving nonlinearity always involve feedback or feedforward loops?  My
impression from reading is yes.  (Isn't it the mutual effect of the values
of two or more variables on one another that makes an equation nonlinear,
and isn't that a way of expressing feedback or feedforward?  The effect of
friction in a physical system varies according to velocity, even as it
affects velocity.)  Is it the (stronger) case that systems with such
cybernetic loop structure always involve nonlinearity?  No, computers are
generally advertised as deterministic.  Is it that nonlinear systems are
not error correcting?  Or perhaps that they are analog rather than digital
systems?  Are massively parallel systems nonlinear, or do they tend to be?
Does the distinction apply to the now familiar characterizations of brain
hemisphere specialization?

This has relevance to how an AI based on deterministic, linear systems can
do what nonlinear organisms do.

Bruce Nevin
bn@cch.bbn.com

------------------------------

Date: Fri, 17 Jun 88 09:15:29 -0400 (EDT)
From: David Greene
Subject: Re: Cognitive AI vs Expert Systems (was Re: Me, Karl, Stephen, Gilbert)

In article krulwich-bruce@yale-zoo.arpa (Bruce Krulwich) writes:
>This says something about expert systems papers, not about papers
>discussing serious attempts at modelling intelligence.  It is wrong to
>assume (as both you and Mr. Cockton are) that the expert system
>work typical of the business world (in other words, applications
>programs) is at all similar to work done by researchers investigating
>serious intelligence.  (See work on case based reasoning,
>explanation based learning, expectation based processing, plan
>transformation, and constraint based reasoning, to name a few areas.)

Since my research concerns developing knowledge acquisition approaches (via
machine learning) to address real-world environments, I'm well acquainted
not only with the above literature, but with psych, cog psych, JDM
(judgement and decision making), and BDT (behavioral decision theory).
While I suspect AI researchers who work in expert systems might resent
being excluded from work in "serious intelligence", I think my point is
that, for a given phenomenon, multiple viewpoints from different
disciplines (literatures) can provide important breadth and insights.  Not
an earth-shattering assumption, I admit, but then again, if you examine
work in the fields you suggested, you'll frequently find a very narrow
scope of references.  Many of the papers I was describing come from various
learning approaches to knowledge acquisition (e.g. the Workshop on
Knowledge Acquisition for Knowledge-Based Systems).
@admittedsarcasm(Perhaps this was an unfortunate example, since these
individuals don't qualify as representative AI researchers.)

Actually, I think the proposition is that it would be encouraging to see
more AI lit reviews which offered some viewpoints from different fields...
not only might they suggest new issues to address, but they might also
identify usable solutions to be transferred.

- David Greene
  dg1v@andrew.cmu.edu

------------------------------

Date: 17 Jun 88 17:28:36 GMT
From: uhccux!lee@humu.nosc.mil (Greg Lee)
Subject: Re: Biological relevance and AI (was Re: Who else isn't a science?)

From article <23201@teknowledge-vaxc.ARPA>, by William Lieberman:
" ...
" A reasonable question.  Then when helium was demonstrated to exist
" on the sun (through spectrographic analysis around the 1860's??) and around
" the same time when the table of the elements was being built up empirically
"...
" the chemists are correct - with relatively few elements, such is the matter
" of the universe existing.  What I'm saying here is that it may be prudent
" to expect not too many diverse 'forms' of intelligence around.  Rough
" analogy, I agree; but sometimes the history of science can provide useful
" ...

It's not even analogous unless you have a table of intelligence.  Maybe you
do.  If so, how many entries does it have room for?

Greg Lee, uhccux.uhcc.hawaii.edu

------------------------------

Date: Sat, 18 Jun 88 13:27:34 EDT
From: George McKee
Subject: Consensual realities are structurally unstable

(another comment, better late than never, I hope.)

As Pat Hayes points out, the right way to interpret the phrase "consensual
reality" is as a belief system held by some group of participants about the
nature of the universe.  However, given a universe that contains more than
one group and group-reality, it's reasonable to look at the origin, scope,
and structure of the different systems and evaluate them with respect to
each other.  Now it's conceivable that you may find two or more systems
with equivalent descriptive and predictive power, and with equally compact
representations in the minds of the participants, and in this situation you
might be justified in saying that there is more than one fundamental
reality.  But this doesn't seem to be the case, and there is in fact one
description of the collective experience of humanity, namely science, that
clearly outranks all the alternatives in just about any respect in which
you may wish to examine it, except perhaps promises of present or future
happiness.  This is not to say that the scientific description of reality
is complete or without weak spots, just that it's so much better than the
others that it surprises me that people can argue against the primacy of
scientific, physical reality and use a computer at the same time.

But even leaving the content of a description of reality aside, I think
it's provable that a constructive, exterior description of the universe
(one that posits a single fundamental reality that generates the thoughts
and perceptions of each observer) is more *stable* than an interior one
that assumes the primacy of mental activity, does not assume a physical
origin of thought, and consequently permits the observer to accept the
validity of multiple descriptions.
That is, as long as both the interior and exterior viewpoints are sensitive
to new data, many if not all of the potential realities consistent with the
interior view are susceptible to catastrophic reorganizations triggered by
a single new datum, while the single reality assumed by the exterior view
can only be incrementally modified by any single fact.

The proof of this is, as they say, "too long for this page", but one part
of it rests on the observation, implicit in Turing's proof of universal
computability, that a computer can't determine its microcode by executing
instructions.  That is, a mind can't determine its fundamental principles
of operation by thinking.  You have to look at the implementation: the
hardware and microcode.  For computational minds we'll be sure to know the
details of the implementation, because we did the design.  For the human
mind, designed as it is by the random processes of genetic variation and
historical accident, it's very hard to know which aspects of its structure
and organization are essential or important, and which ones aren't.  But
it's clear that we now have tools that are only a quantitative step away
from telling us what we need to know about how the brain implements the
mind.  Those people who say "we have no idea about how the brain works" are
just announcing their own ignorance.

The best that a mind can do by thought alone is to determine an infinite
equivalence class of possible implementations of itself.  This is
apparently one of the major conclusions of Hilary Putnam's soon-to-be-released
book "Representation and Reality."  It'll be interesting to read it to find
out whether he's able to take the next step and show the determination of a
unique implementation of each human mind in the brain of each individual
member of H. sapiens.  I sure hope so...

- George McKee
  NU Computer Science

p.s. And you thought I was going to write about Catastrophe Theory.
Not today...

------------------------------

End of AIList Digest
********************