Date: Thu 9 Jun 1988 00:38-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V7 #24
To: AIList@AI.AI.MIT.EDU
Status: R

AIList Digest            Thursday, 9 Jun 1988      Volume 7 : Issue 24

Today's Topics:

  Philosophy:
    Understanding, utility, rigour
    Human-human communication
    Definition of 'intelligent'
  "TV GENIE" transmitter used in robotics - WARNING
  consensual reality

----------------------------------------------------------------------

Date: 6 Jun 88 10:35:14 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Understanding, utility, rigour

In article <3c671fbe.44e6@apollo.uucp> nelson_p@apollo.uucp writes:
>
>  Because AI would like to make some progress (for a change!).

Does it really think that ignoring the failures of others GUARANTEES it success, rather than even more dismal failure? Is there an argument behind this?

> With the exception of some areas in physiological psychology,
> the field is not a science.

What do we mean when we say this? What do you mean by 'scientific'? I ask because there are many definitions, many approaches, mostly polemic.

> Its models and definitions are simply not rigorous enough to be useful.

Lack of rigour does not imply lack of utility. Having applied many of the models and definitions I encountered in psychology as a teacher, I can say that I certainly found my psychology useful, even the behaviourism (it forced me to distinguish between things which could and could not be learned by rote; the former are good CAI/CAL fodder).

Understanding and rigour are not the same thing. Nor is 'rigour' one thing. The difference between humans and computers is what can inspire them. Computers are inspired by mechanistic programs, humans by ideas, models, new understandings, alternative views.
Not all of them are, of course, and too much science in human areas is directed towards the creation of cast-iron understanding for the uninspired dullard.

> When you talk about an 'understanding of humanity' you clearly
> have a different use of the term 'understanding' in mind than I do.

Good, perhaps you might appreciate that it is not all without value. In fact, it is the understanding you must use daily in all the circumstances where science has not come to guide your actions.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs        !ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines

------------------------------

Date: 7 Jun 88 22:32:56 GMT
From: esosun!jackson@seismo.css.gov (Jerry Jackson)
Subject: Re: Human-human communication

Some obvious examples of things inexpressible in language are:

How to recognize the color red on sight (or any other color)..

How to tell which of two sounds has a higher pitch by listening..

And so on...

--Jerry Jackson

------------------------------

Date: Wed, 8 Jun 88 07:54:18 EDT
From: "Bruce E. Nevin"
Subject: re: Definition of 'intelligent'

In AIList Digest 7.21, Kurt Godden writes:

KG> The current amusement here at work is found in >Webster's New Collegiate
KG> Dictionary< published by Merriam:
KG> intelligent: [...] 3: able to perform some of the functions of a computer

When you quote the entire definition of "intelligent" (sense 3) from Webster's 9th New Collegiate Dictionary, you find quite a sensible definition of a certain CS usage of the term:

  3 : able to perform computer functions; also : able to convert digital
  information to hard copy

Bruce Nevin
bn@cch.bbn.com

------------------------------

Date: Wed, 8 Jun 88 09:10:23 PDT
From: John B. Nagle
Subject: "TV GENIE" transmitter used in robotics - WARNING

As I've seen the TV transmitter called the "TV Genie" in several robotics labs around the country, it seems appropriate to post this here.
I was considering using one myself, but after consultation with some hams and the local FCC office, discovered that this is a bad idea. See below.

                                        John Nagle

--

The ARRL Letter, Volume 6, No. 22, November 4, 1987

Published by: The American Radio Relay League, Inc.
              225 Main St., Newington, CT 06111
Editor: Phil Sager, WB4FDT

Material from The ARRL Letter may be reproduced in whole or in part, in any form, including photoreproduction and electronic databanks, provided that credit is given to The ARRL Letter and to the American Radio Relay League, Inc.

ORION INDUSTRIES FINED $940,000

Orion Industries, Inc., of Las Vegas, Nevada, and its owners have been fined over $940,000 for importing and marketing illegal radio-frequency devices. In addition, one owner was sentenced to two years' imprisonment. The illegal device the firm imported and marketed is a low-powered video transmitter called the "TV Genie", designed to retransmit video signals from cameras and VCRs over the air to nearby television receivers. The FCC said that recent complaints of interference to air flight communications in Tennessee and to a rural Illinois ambulance service were traced to the device.

According to the FCC public notice, warnings were issued to Orion Industries after the FCC received reports that these devices were being sold. Follow-up investigations revealed that Orion sold over 27,000 of the devices after receiving the warnings. The penalties assessed were based upon federal law, which allows maximum fines of twice the gross gain from sales of the device.

------------------------------

Date: 8 Jun 88 12:57 PDT
From: hayes.pa@Xerox.COM
Subject: consensual reality

I can't help responding to Simon Brooke's acidic comments on William Wells' rather brusquely expressed response to Cockton's social-science screaming. Simon still has a three-hundred-year-old DOUBT about the world, and how we know it's there.
Most English-speaking analytical philosophy got over that around a century ago, but it seems to have lived on in German philosophy and become built into the foundations of a certain branch of social-science theory.

Look: of course we can only perceive the world through our perceptions; this is almost a tautology. So what? This is only a problem if one is anxious to obtain a different kind of knowledge, something which is ABSOLUTELY certain. The need for this came largely from religious thinking, and is now a matter of history. I'm not certain of anything more than that there is a CRT screen in front of me right now. Unlike Descartes, I'm not interested in nailing down truth more firmly than by empirical test. Once one takes this sort of an attitude, science becomes possible, and all these terrible feelings of alienation, doubt and the mysteriousness of communication simply evaporate. Of course, perception and communication are amazing phenomena, and we don't understand them; but they aren't isolated in some sort of philosophical cloud, they are just damned complicated and subtle.

A question: if one doubts the existence of the physical world in which we live, what gives one such confidence in the existence of the other people who comprise the society which is supposed to determine our construction of reality? You deny us "any verifyable access to a real world", yet later in the same sentence refer to "normal conversation", which seems to me like a remarkable shift of level of ontological cynicism.

Pat

------------------------------

Date: 8 Jun 88 20:56:21 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com (Hunter Barr)
Subject: Re: Human-human communication

In article <198@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>
>Some obvious examples of things inexpressible in language are:
>
>How to recognize the color red on sight (or any other color)..
>
>How to tell which of two sounds has a higher pitch by listening..
>
>And so on...
>
>--Jerry Jackson

All communication is based on common ground between the communicator and the audience. Symbols are established for colors and sounds as for anything else: by common experience, i.e., experience common to both communicator and audience. Often the *easiest* way to establish this common ground is to attach a symbol to something physical. For instance, to put a young gymnast's body in the "memory position" and say, "There. That is called 'arching your back.'" Or to point to a red object and say, "That object is red." Or to play two notes on a piano and say, "The second one is higher."

While it is true that most of what happens in our minds (all our acts of physical perception, emotion, and some of our goal resolution) is non-linguistic, there is nothing to stop us from attaching linguistic symbols to any part of it and expressing it in language. Thus AIers find the acts of the mind equivalent to (and indistinguishable from) the manipulation of symbols. You are mistaken in thinking that language is unable to deal with non-linguistic phenomena.

I will now express in language "how to recognize the color red on sight (or any other color)": Find a person who knows the meaning of the word "red." Ask her to point out objects which are red, and objects which are not, distinguishing between them as she goes along. If you are physically able to distinguish colors, you will soon get the hang of it. This is no different from having an English teacher write sentences on the blackboard, distinguishing between those words which are verbs and those which are not. That is probably how you learned the meaning of the English word "verb." What is the difference between learning the word "red" and learning the word "verb"? Surely the latter scenario shows that the concept "verb" is expressible in language.
It seems to me that we commonly make use of the word "red" when nothing red is in sight, leading me to think that language expresses both concepts quite reliably, without regard to their tangible or otherwise physical existence.

I am experiencing something like this scenario myself these days. I just started to study Japanese, and I have yet to pin down *aoi*; as time goes on I will ask an expert Japanese speaker to point out things that fall under that category, and I will eventually get a very good idea of what is meant by *aoi*. (My current understanding is that it covers virtually everything which English calls "blue", and possibly many shades which English calls "green".)

Someone once theorized that over the centuries our understanding of the Latin color words may have shifted slightly. The problem is that we have no one whose native language is Latin, who can point to ruddy objects and say, "Well, this one is not quite *ruber*, but that other one surely is." When we translate a piece of Latin text as "He wore a red cloak," who is to say that an English-speaking eyewitness would not have called it "orange" or "brown"? I cannot even agree with my girlfriend about which things are purple and which are blue. This could never happen with terms like *maior* and *minor*, because there are so many common objects to keep the distinction clear.

If you feel that I am cheating, try to express something in language which does *not* fall back on some experience like these. And please don't forget to express it where I can read it: either into my mailbox, or into this newsgroup. Thanks for reading and responding; I love the attention.

                                        ______
                                        HUNTER

------------------------------

Date: 08 Jun 88 1619 PDT
From: John McCarthy
Subject: consensual reality

The trouble with a consensual or any other subjective concept of reality is that it is scientifically implausible.
That the world evolved, that life evolved, that humanity evolved, that civilization evolved and that science evolved is rather well accepted, and the advocates of subjective concepts of reality don't usually challenge it. However, if the evolution of all these things is a fact, then it would be an additional fact about evolution if humans and human society had evolved any privileged access to facts about the world. There isn't even any guarantee from evolution that all the facts of the world, or even of mathematics, are in any way conclusively decidable by such creatures as may evolve intelligence.

What we can observe directly is an accident of the sense organs we happen to have evolved. Had we evolved as good echolocation as bats, we might be able to observe each other's innards directly. Likewise there is no mathematical theorem that the truth about any mathematical question fits within axiomatic systems with nice properties.

Indeed science is a social activity, and all information comes in through the senses. A cautious view of what we can learn would like to keep science close to observation and would pay attention to the consensus aspects of what we believe. However, our world is not constructed in a way that co-operates with such desires. Its basic aspects are far from observation, the truth about it is often hard to formulate in our languages, and some aspects of the truth may even be impossible to formulate. The consensus is often muddled or wrong.

To deal with this matter I advocate a new branch of philosophy I call metaepistemology. It studies abstractly the relation between the structure of a world and what an intelligent system within the world can learn about it. This will depend on how the system is connected to the rest of the world, what the system regards as meaningful propositions about the world, and what it accepts as evidence for these propositions.

Curiously, there is a relevant paper: "Gedanken Experiments with Sequential Machines" by E. Moore.
The paper is in "Automata Studies", edited by C. E. Shannon and J. McCarthy, Princeton University Press, 1956. Moore deals only with finite automata observed from the outside and doesn't deal with criteria for meaningfulness, but it's a start.

The issue is relevant for AI. Machines programmed to find out about the environment we put them in won't work very well if we provide them with only the ability to formulate hypotheses about the relations among their inputs and outputs. They also need to be able to hypothesize theoretical entities and conjecture about their existence and properties. It will be even worse if we try to program them to regard reality as consensual, since such a view is worse than false; it's incoherent.

------------------------------

End of AIList Digest
********************