n this area (net addresses, phone numbers, references, etc). Thanks.

dsw, fferd
Fred S. Brundick
USABRL, APG, MD.

[Bill Gayle has been developing an expert system interface to the Bell Labs S statistical package. I believe it is based on the Stanford Centaur production/reasoning system and that it uses "pipes" to invoke S for analysis and display services. Gayle's system currently has little expertise in analyzing residuals, but it does know what types of transformations might be applied to different data types. It is basically a helpful user interface rather than an automated analysis system.

Rich Becker, one of the developers of S, has informed me that source code for S is available. Call 800-828-UNIX for information, or write to

    AT&T Technologies Software Sales
    PO Box 25000
    Greensboro, NC 27420

For a description of the S package philosophy see Communications of the ACM, May 1984, Vol. 27, No. 5, pp. 486-495.

Another automated data analysis system is the RADIX (formerly RX) system being developed at Stanford by Dr. Robert Blum and his students. It has knowledge about drug interactions, symptom onset times, and other special considerations for medical database analysis. It is designed to romp through a database looking for interesting correlations, then to design and run more (statistically) controlled analyses to attempt confirmation of the discovered effects.

-- Ken Laws ]

------------------------------

Date: Tue 19 Jun 84 12:44:50-EDT
From: Michael Rubin
Subject: Re: Q'NIAL

According to an advertisement I got, NIAL is "nested interactive array language" and Q'NIAL is a Unix implementation from Queen's University at Kingston, Ontario. It claims to be a sort of cross between LISP and APL, with "nested arrays" instead of APL flat arrays or LISP nested lists, "structured control constructs... and a substantial functional programming subset." The address is Nial Systems Ltd., 20 Hatter St., Kingston, Ontario K7M 2L5 (no phone # or net address listed).
I don't know anything about it other than what the ad says.

------------------------------

Date: Sun 17 Jun 84 16:28:44-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Pandora Project

In the July 1984 issue of Esquire appears an article by Frank Rose entitled "The Pandora Project." Rose provides some glimpses into work at Berkeley by Robert Wilensky and Joe Faletti on the commonsense reasoning programs PAMELA and PANDORA.

--Wayne McGuire

------------------------------

Date: 17 June 1984 0019-EDT
From: Dave Touretzky at CMU-CS-A
Subject: AAAI-84 Program Now Available

[Forwarded from the CMU bboard by Laws@SRI-AI.]

The program for AAAI-84, which lists papers, tutorials, panel discussions, etc., is now available on-line in the following files:

    TEMP:AAAI84.SCH[C410DT50] on CMUA
    AAAI84.SCH on CMUC
    [g]/usr/dst/aaai84.sch on the GP-Vax

The program is 36 pages long if you print it on the dover in Sail10 font.

------------------------------

Date: Tue 19 Jun 84 18:26:11-CDT
From: Gordon Novak Jr.
Subject: Army A.I. Grant to Texas

[Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

The U.S. Army Research Office, headquartered in Research Triangle Park, North Carolina, has announced the award of a contract to the University of Texas at Austin for research and education in Artificial Intelligence. The award is for approximately $6.5 million over a period of five years. The University of Texas has established an Artificial Intelligence Laboratory as an organized research unit. Dr. Gordon S. Novak Jr. is principal investigator of the project and has been named Director of the Laboratory. Dr. Robert L. Causey is Associate Director. Other faculty whose research is funded by the contract, and who will be members of the Laboratory, include professors Robert F. Simmons, Vipin Kumar, and Elaine Rich. All are members of the Department of Computer Sciences except Dr. Causey, who is Chairman of the Philosophy Department.
The contract is from the Electronics Division of the Army Research Office, under the direction of Dr. Jimmie Suttle. The contract will provide fellowships and research assistantships for graduate students, faculty research funding, research computer equipment, and staff support. The research areas covered by the contract include automatic programming and the solving of physics problems by computer (Novak), computer understanding of mechanical devices described by English text and diagrams (Simmons), parallel programs and computer architectures for solving problems involving searching (Kumar), and reasoning under conditions of uncertainty and intelligent interfaces to computer programs (Rich).

------------------------------

Date: Tuesday, 19-Jun-84 12:19:22-BST
From: BUNDY HPS (on ERCC DEC-10)
Subject: Maintaining High Quality in AI Products

Credibility has always been a precious asset for AI, but never more so than now. We are being given the chance to prove ourselves. If the range of AI products now coming onto the market is shown to provide genuine solutions to hard problems, then we have a rosy future. A few such products have been produced, but our future could still be jeopardized by a few well-publicized failures. Genuine failures - where there was a determined, but ultimately unsuccessful, effort to solve a problem - are regrettable, but not fatal. Every technology has its limitations. What we have to worry about are charlatans and incompetents taking advantage of the current fashion and selling products which are overrated or useless. AI might then be stigmatized as a giant con-trick, and the current tide of enthusiasm would ebb as fast as it flowed. (Remember Machine Translation - it could still happen.)

The academic field guards itself against charlatans and incompetents by the peer review of research papers, grants, PhDs, etc. There is no equivalent in the commercial AI field.
Faced with this problem, other fields have set up professional associations and codes of practice. We need a similar set-up, and we needed it yesterday. The 'blue chip' AI companies should get together now to found such an association. Membership should depend on a continuing high standard of AI products and in-house expertise. Members would be able to advertise their membership, and customers would have some assurance of quality. Charlatans and incompetents would be excluded or ejected, so that the failure of their products would not be seen to reflect on the field as a whole.

A mechanism needs to be devised to prevent a few companies annexing the association to themselves and excluding worthy competition. But this is not a big worry. Firstly, in the current state of the field, AI companies have a lot to gain by encouraging quality in other companies. Every success increases the market for everyone, whereas failure decreases it. Until the size of the market has been established and the capacity of the companies has risen to meet it, they have more to gain than to lose by mutual support. Secondly, excluded companies can always set up a rival association.

This association needs a code of practice, which members would agree to adhere to and which would serve as a basis for refusing membership. What form should such a code take, i.e. what counts as malpractice in AI? I suspect malpractice may be a lot harder to define in AI than in insurance, or medicine, or travel agency. Given the state of the art, AI products cannot be perfect. No one expects 100% accurate diagnosis of all known diseases. On the other hand, a program which only works for slight variations of the standard demo is clearly a con. Where is the threshold to be drawn, and how can it be defined? What constitutes an extravagant claim? Any product which claims to understand any natural language input, or to make programming redundant, or to allow the user to volunteer any information, sounds decidedly smelly to me.
Where do we draw the line? I would welcome suggestions and comments.

        Alan Bundy

------------------------------

Date: 22 Jun 84 6:44:56-EDT (Fri)
From: hplabs!tektronix!uw-beaver!cornell!vax135!ukc!west44!greenw @ Ucb-Vax.arpa
Subject: Human models
Article-I.D.: west44.243

[The time has come, the Walrus said, to talk of many things...]

Consider... With present computer technology, it is possible to build (simple) molecular models and get the machine to emulate exactly what the atoms in the `real' molecule will do in any situation.

Consider also... Software and hardware are getting more powerful; larger models can be built all the time.

[...Of Shoes and Ships...]

One day someone may be able to build a model that will be an exact duplicate of a human brain. Since it will be perfect down to the last atom, it will also be able to act just like a human brain, i.e. it will be capable of thought.

[...And Sealing Wax...]

Would such an entity be considered `human'? For, though it would not be `alive' in the biological sense, someone talking on the telephone to its very sophisticated speech synthesiser, or reading a letter typed by it, would consider it to be a perfectly normal, if rather intelligent, person. Hmmmmmm.

One last thought... Even if all the correct education could be given it, might it still suffer from the HAL9000 syndrome [2001]: fear of being turned off if it did something wrong?

[...of Cabbages and Kings.]

Jules Greenwall, Westfield College, London, England.

from...
    vax135            greenw (UNIX)
           \         /
    mcvax-  !ukc!west44!
           /         \
    hou3b             westf!greenw (PR1ME)

The MCP is watching you... End of Line.
------------------------------

Date: 18 Jun 84 13:27:47-PDT (Mon)
From: hplabs!hpda!fortune!crane @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: fortune.3615

Up to this point the ongoing discussion has neglected to take two things into account:

(1) Subconscious memory - a person can be enabled (through hypnosis or by asking him the right way) to remember infinite details of any experience of this or prior life times. Does the mind selectively block out trivia in order to focus on what's important currently?

(2) Intuition - by this I mean huge leaps into discovery that have nothing to do with the application of logical association or sensory observation. This kind of stuff happens to all of us and cannot easily be explained by the physical/mechanical model of the human mind.

I agree that if you could build a computer big enough and fast enough and taught it all the "right stuff", you could duplicate the human brain, but not the human mind. I don't intend to start a metaphysical discussion, but the above needs to be pointed out once in a while.

John Crane

------------------------------

Date: Wed 20 Jun 84 10:01:39-PDT
From: WYLAND@SRI-KL.ARPA
Subject: The Turing Test - machines vs people

Tony Robison (AIList V2 #74) and his comments about machine "soul" bring up an unsettling point: what happens when we make a machine that passes the Turing test? For:

o One of the goals of AI (or at least of some workers in the field - hedge, hedge) is to make a machine that will pass the Turing test.

o Passing the Turing test means that you cannot distinguish between man and machine by their written responses to written questions (i.e., over a teletype). Today, we could extend the definition to include oral questions (i.e., over the telephone) by adding speech synthesis and recognition.
o If you cannot tell the difference between person and machine by the formal social interaction of conversation, *how will the legal and social systems differentiate between them?!*

Our culture(s) is set up to judge people using conversation, written or oral: the legal arguments of courts, all of the testing through schools, psychological examination, etc. We have chosen the capability for rational conversation (including the potential capability for it in infants, etc.) as the test for membership in human society, rejecting membership based on physical characteristics such as body shape (men/women, "foreigners") and skin color, or on the content of the conversations, such as cultural/religious/political beliefs. If we really do make machines that are *conversationally indistinguishable* from humans, we are going to have some interesting social problems, whether or not machines have "souls". Will we have to reject rational conversation as the test of membership in society? If so, what do we fall back on? (The term "meathead" may become socially significant!) And what sort of interesting things are going to happen to the social/legal/religious systems in the meantime?

Dave Wyland
WYLAND@SRI

P.S. Asimov addressed these problems nicely in his renowned "I, Robot" series of stories.

------------------------------

Date: 18 Jun 1984 14:21 EDT (Mon)
From: Peter Andreae
Subject: Seminar - Precondition Analysis

[Forwarded from the MIT bboard by SASW@MIT-MC.]

PRECONDITION ANALYSIS - LEARNING CONTROL INFORMATION

Bernard Silver
Dept. of AI, University of Edinburgh
2pm Wednesday, June 20
8th Floor Playroom

I will describe LP, a program that learns equation-solving strategies from worked examples. LP uses a new learning technique called Precondition Analysis. Precondition Analysis learns the control information that is needed for efficient problem solving in domains with large search spaces.
Precondition Analysis is similar in spirit to the recent work of Winston, Mitchell, and DeJong. It is an analytic learning technique, and is capable of learning from a single example. LP has successfully learned many new equation-solving strategies.

------------------------------

End of AIList Digest
********************

21-Jun-84 22:18:40-PDT,13115;000000000001
Mail-From: LAWS created at 21-Jun-84 22:12:56
Date: Thu 21 Jun 1984 22:03-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #76
To: AIList@SRI-AI

AIList Digest            Friday, 22 Jun 1984      Volume 2 : Issue 76

Today's Topics:
  VLSI - Panel on Chips for AI & Trilogy CPU Failure,
  Databases - Oxford English Dictionary goes On-Line,
  Logic - Common Sense Summer,
  Mind & Brain - Artificial People & Neural Connections & Recall,
  Seminar - Natural Language Parsing

----------------------------------------------------------------------

Date: 20 June 1984 0512-EDT
From: Dave Touretzky at CMU-CS-A
Subject: panel on chips for AI

[Forwarded from the CMU bboard by Laws@SRI-AI.]

Dana Seccombe is looking for people to participate in a panel discussion at ISSCC (International Solid State Circuits Conference), to be held in February '85 in New York City. The topic of the panel is issues in the realization of AI systems using VLSI technology, e.g. AI inference engines, 5th generation architectures, or Lisp processors that are or could be implemented using VLSI.

If you would be interested in participating in this panel, please contact Mr. Seccombe at (408) 257-7000 x4854. DON'T contact me, because I don't know any more about it than what you've just read.

------------------------------

Date: 19 Jun 1984 11:07:46-EDT
From: Doug.Jensen at CMU-CS-G
Subject: Trilogy CPU design fails

[Forwarded from the CMU bboard by Laws@SRI-AI.]

After 4 years and $220 million, Gene Amdahl's Trilogy Corp.
has declared their attempt to build a computer from 2.5" diameter whole-wafer VLSI a failure. They never got even one wafer functioning correctly, much less ever powered up a machine. Trilogy thus follows in the path of TI and many other whole-wafer failures before them over the past decade; the others were less known because they were military projects.

Trilogy was one of, and probably THE, most publicized and heavily funded new startups in the history of the computer business. They were spending $7 million/month and estimated that they would need at least another $100 million just to get them to mid-85, while their first machine was still two years beyond that (more than two years later than they estimated when they started in 1980). Each 2.5" wafer was to contain about 60K ECL gates, with four layers of metalization, and dissipate about 1000 watts. The CPU was to have nine wafers and execute 32 MIPS. Trilogy was even further behind on the other computer subsystems.

They now say they may try a smaller machine, or just subsystems (e.g., memories), or just wafers and related technology. DEC, Sperry, and CII-HB were among the investors in Trilogy.

------------------------------

Date: 13-Jun-84 02:30 PDT
From: William Daul  Augmentation Systems Division / MDC
Subject: Oxford English Dictionary goes On-Line

[Forwarded from the Human-Nets Digest by Laws@SRI-AI.]

LONDON -- ...the Oxford University Press has announced plans to publish a computerized version of the venerable Oxford English Dictionary. With the help of a $1.4 million donation from IBM United Kingdom Ltd., the British publisher will produce the first fully integrated edition of the 13-volume dictionary since the original work was begun in 1884. That first edition took 44 years to complete; the publisher said it will be able to complete the second edition in a fraction of that time. ...
The New Oxford English Dictionary, as the new version has been named, will constitute the largest electronic dictionary data base in the world. The present multi-volume version consists of more than 20,000 printed pages. Computerization of the dictionary is a massive undertaking that will involve the data entry of about 60 million words used to record, describe, and illustrate 500,000 words and phrases. The Oxford University Press has hired International Computaprint Corp. of Fort Washington, Pa., to do the data entry. A staff of 120 people has been assigned the task of completing the data entry by this September. ...

Additionally, the company (IBM) is providing two data processing specialists who will work on the dictionary project for two years. Once the electronic dictionary is finished, it could be made available on-line, on magnetic tape, on laser/video disk, or possibly on a single integrated circuit... The publisher estimated the project will cost $10 million. The British government awarded the company a 3-year grant of roughly $420,000 -- or 25% of the development cost -- for the dictionary.

The University of Waterloo in Ontario will conduct a survey for the publisher of the potential users of an electronic dictionary. The university will also help develop software that would be needed to take advantage of an electronic dictionary.

------------------------------

Date: Wed 20 Jun 84 22:06:12-PDT
From: Dikran Karagueuzian
Subject: Newsletter, June 21, No. 37

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

COMMON SENSE SUMMER

CSLI is sponsoring a summer-long workshop called "Common Sense Summer." It has long been agreed that language use and intelligent behavior in general require a great deal of knowledge about the commonsense world. But heretofore no one has embarked on a large-scale effort to encode this knowledge. The aim of Common Sense Summer is to make the first three months of such an effort.
We are attempting to axiomatize in formal logic significant amounts of commonsense knowledge about the physical, psychological, and social worlds. We are concentrating on eight domains: shape and texture, spatial relationships, lexical semantics of cause and possession, properties of materials, certain mental phenomena, communication, relations between textual entities and entities in the world, and responsibility. We are attempting to make these axiomatizations mutually consistent and mutually supportive. We realize, of course, that all that can be accomplished during the summer is tracing the broad outlines of each of the domains and, perhaps, discovering several hard problems.

Nine graduate students from several universities are participating in the workshop full-time. In addition, a number of other active researchers in the fields of knowledge representation, natural language, and vision are participating in meetings of various sizes and purposes. There will be two or three presentations during the summer, giving progress reports for the general public. The workshop is being coordinated by the writer.

--Jerry Hobbs

[...]

------------------------------

Date: Wed, 20 Jun 84 17:25:03 PDT
From: Charlie Crummer
Subject: Human Models

The foundation of the reasoning constructed by Jules Greenwall in his note depends on being able to specify exactly the behavior of atoms in molecules. The precise description required depends on molecular physics, an area in which study is unfortunately still going on. The study of the molecule is a many-body problem for which there is no closed-form solution. Another fly in the ointment is the fact that the behavior of atoms in molecules depends, albeit in second order, on the nature of the nucleus. This is another branch of physics that is very active, i.e. much is not known. What one would get for a model built on such a fuzzy foundation is of dubious value.
--Charlie

------------------------------

Date: 18 Jun 84 10:07:07-PDT (Mon)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!lwt1 @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: aplvax.663

The other thing to note is that while each 'memory cell' in a computer has ~2 connections, each 'memory cell' in the brain has ~100. Since processing power is related to (cells * connections), a measure of relative cell capacities alone is not sufficient for comparison between the brain and the CRAY.

-Lloyd W. Taylor
 ... seismo!umcp-cs!aplvax!lwt1
---I will have had been there before, soon---

------------------------------

Date: Thu, 21 Jun 84 06:39 EDT
From: dmrussell.pa@XEROX.ARPA
Subject: Objection to Crane: A Quick Question - Mind and Brain -- V2

Sorry, but I must make a serious objection to your claim that "... a person can be enabled (through hypnosis or by asking him the right way) to remember infinite details of any experience of this or prior life times ..."

I object to the use of the term "infinite" in describing memory. That simply isn't true. If you just mean "large number", then say so. The infinite memory capacity problem was addressed once (in either AIDigest or HumanNets, I've forgotten which) and found indefensible.

The phrase "prior life times" assumes reincarnation, a completely unsupported assumption. "Of any experience" demands that all experiences can be recalled - not just *recognized* or *restored*, but recalled! Do you really want the references to show that this isn't true? Memory recall under hypnosis has been found to be just as reconstructive (perhaps more so) as normal memory. Hypnotic states buy you some recall, but not that much!

We haven't taken these things into account because they simply aren't true, or at the very least, can't be supported by anything other than religious belief.

-- D.M. Russell.
------------------------------

Date: 18 Jun 84 15:08:10-PDT (Mon)
From: ihnp4!ihldt!stewart @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: ihldt.2382

> (1) Subconscious memory - a person can be enabled (through
> hypnosis or by asking him the right way) to remember
> infinite details of any experience of this or prior life
> times.

I don't know where the "prior life" part came from, but this claim is usually an incorrect extrapolation of studies that indicate no such thing. What has been established is that people can be induced to remember things that they considered forgotten. This isn't by a long shot the same thing as saying that we remember everything that's ever happened to us.

If you have evidence to support this claim, by all means present it. If not, please spare us.

Bob Stewart
ihldt!stewart

------------------------------

Date: Thu, 21 Jun 84 08:23 EDT
From: Dehn@MIT-MULTICS.ARPA (Joseph W. Dehn III)
Subject: Turing test - legal implications

...computers someday might act like people... ...legal system is based on capability for rational conversation... ...what will we do????... ...will we have to reject rational conversation as the test of membership in society?...

Sorry, I must have forgotten, but why exactly do we WANT to distinguish between humans and machines?

-jwd3

------------------------------

Date: Thu, 21 Jun 84 14:14 EST
From: Huhns
Subject: Seminar - Natural Language Parsing

CONSTRAINT PROPAGATION SENTENCE PARSING

Somnuek Anakwat
Center for Machine Intelligence
College of Engineering
University of South Carolina
2pm Thursday, June 21, Room 230

An algorithm for parsing English sentences by the method of constraint propagation is presented. This method can be used to recognize English sentences and indicate whether those sentences are syntactically correct or incorrect according to grammar rules.
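[The constraint-propagation idea described in this abstract can be illustrated with a toy sketch. The lexicon and pair grammar below are invented for illustration only; this is not Anakwat's actual algorithm. Each word starts with a set of candidate parts of speech, and allowed adjacent-pair constraints prune the sets, forward and backward, until nothing changes. -- KIL]

```python
# Toy constraint-propagation tagger: prune each word's candidate parts
# of speech against allowed adjacent pairs until a fixed point.
# ALLOWED and LEXICON are invented examples, not from the seminar.

ALLOWED = {            # (left POS, right POS) pairs the toy grammar permits
    ("det", "noun"), ("det", "adj"), ("adj", "noun"),
    ("noun", "verb"), ("verb", "det"), ("verb", "noun"),
}

LEXICON = {
    "the": {"det"},
    "old": {"adj", "noun", "verb"},
    "man": {"noun", "verb"},
    "boats": {"noun", "verb"},
}

def parse_tags(words):
    cand = [set(LEXICON[w]) for w in words]
    changed = True
    while changed:                       # propagate until a fixed point
        changed = False
        for i in range(len(cand) - 1):
            left, right = cand[i], cand[i + 1]
            # keep a left tag only if some right tag is compatible (forward),
            # and a right tag only if some left tag is compatible (backward)
            new_left = {a for a in left if any((a, b) in ALLOWED for b in right)}
            new_right = {b for b in right if any((a, b) in ALLOWED for a in left)}
            if new_left != left or new_right != right:
                cand[i], cand[i + 1] = new_left, new_right
                changed = True
    return cand

print(parse_tags(["the", "old", "man", "boats"]))
```

Note that propagation narrows the candidate sets (e.g. "old" loses its verb reading after "the") but need not fully disambiguate a garden-path sentence; the remaining ambiguity is exactly the case the abstract calls "words which have multiple properties."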
The central idea of constraint propagation in sentence analysis is to form all possible combinations of the parts of speech of adjacent words in the input sentence, and then compare those combinations with English grammar rules for allowable combinations. The parts of speech for each word may be modified, left alone, or eliminated according to these rules. The analysis of these combinations of the parts of speech normally proceeds from left to right. The most significant feature of the algorithm presented is that grammar constraints propagate backward when possible. The algorithm is very useful when the given sentence contains words which have multiple properties. The algorithm also has an efficient parallel implementation.

Results of applying the algorithm to several English sentences are included. An interpretation of the algorithm's performance and some topics for future research are discussed as well.

------------------------------

End of AIList Digest
********************

22-Jun-84 05:39:05-PDT,15944;000000000001
Mail-From: LAWS created at 22-Jun-84 05:34:49
Date: Fri 22 Jun 1984 05:12-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #77
To: AIList@SRI-AI

AIList Digest            Friday, 22 Jun 1984      Volume 2 : Issue 77

Today's Topics:
  AI Tools - Q'NIAL,
  Cognition - Mathematical Methods & Commonsense Reasoning,
  Books - Softwar, A New Weapon to Deal with the Soviets

----------------------------------------------------------------------

Date: 19 Jun 84 14:59:27-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!janney @ Ucb-Vax.arpa
Subject: Re: Q'NIAL
Article-I.D.: unm-cvax.962

The April 1984 issue of Computer Design has an article on Nial (Nested Interactive Array Language).
------------------------------

Date: 18 Jun 84 15:10:07-PDT (Mon)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!jcz @ Ucb-Vax.arpa
Subject: Re: Mathematical Methods
Article-I.D.: ncsu.2622

It is not surprising that mathematicians cannot remember what they do when they first construct proofs, especially 'difficult' proofs. Difficult proofs probably take quite a bit of processing power, with none left over for observing and recording what was done. In order to get a record of what exactly occurs (a 'protocol') when a proof is being constructed, we would have to interrupt the subject and get him to tell us what he is doing - interfering with the precise things we want to measure!

There is much the same problem with studying how programmers write programs. We can approximate a recording by saving every scrap of paper and recording every keystroke, but that is not such a great clue to mental processes. It would be nice if some mathematician would save EVERY single scrap of paper (timestamped, please!) involved in a proof, from start to finish. Maybe we would find some insight in that...

John Carl Zeigler
North Carolina State University

------------------------------

Date: Wed, 20 Jun 84 20:39:01 edt
From: Roger L. Hale
Subject: Re: Commonsense Reasoning?

I get 4 quite a different way: if 3 (2) were half of 5 (4), what would a third of 10 (9) be? 4 (3). This way twice 5 (4) is 9 (8), [rather than twice 6 (5) is 12 (10) the way you describe]. The transformation I have in mind is "say 3 and mean 2", which is simply difference-of-1. The numbers I *mean* stand in the stated relations (half, a third, twice), but they are renamed by a distorting filter, a homomorphism. "If arithmetic were shifted right one, what would half of 5, a third of 10, twice 5 be?
(Answer: 3, 4 and 9.)" Partly it is a different choice of whom to believe, the numbers or the relations; but I find this form most compelling because the components are so fundamental. The extended proportion [for your method in parallel form] would be 3 : 5/2 :: 4 : 10/3 :: 12 : 2*5, the 12 (v. 9) serving to show that our two methods differ concretely.

    I think that the critical point for AI is that we make sense of a
    nonsense problem by postulating an unmentioned linear transformation,
    since only a linear transformation permits a unique solution. [...]
    -- KIL

In the first place, any critically constrained transformation has discrete (locally unique) solutions, barring singularities; and it is false that solutions are only unique for linear transformations: it takes a fairly special domain, like the complex analytic, to make it true. In the second place, what confidence should one gain in a theory on fixing a free parameter against one datum? Surely one should aim to constrain the theory as well as the parameter, and you have used up all your constraints. Where would we be if twice 5 were neither 12 nor 9? ?8-[ Back to square one.

Yours in inquiry,
Roger Hale
rlh%mit-eddie@mit-mc

------------------------------

Date: 19 Jun 84 14:00:58-PDT (Tue)
From: hplabs!tektronix!orca!shark!brianp @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: shark.836

About "if 3 is half of 5, what is a third of 10?": it is interesting to note the assumptions that might be made here. One could assume that all numbers retain their good old standard meanings, except 3 when compared to 5. Then the chain of relationships (3:5/2, 6:5, 12:10, 4:10/3) can be made. What I first thought was "so what's a '10'?" I.e., let's toss out all the definitions of the numbers along with 3. 'Half' could be redefined, but that says nothing about what to do with 'third'. One could redefine 'is', in effect making it mean the ':' relation of the previous article.
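[The two readings at issue in this thread - the proportional chain (3:5/2, 6:5, 12:10, 4:10/3) and Hale's difference-of-1 renaming - can be checked mechanically. A minimal sketch; the function names are mine, not from either posting. -- KIL]

```python
from fractions import Fraction

# Reading 1 (proportional): "3 is half of 5" fixes a scale factor
# k = 3 / (5/2) = 6/5; every stated answer is the true one times k.
def proportional(true_value):
    k = Fraction(3) / (Fraction(5) / 2)   # 6/5
    return true_value * k

# Reading 2 (Hale's shift): a stated number names the true number one
# lower, so "a third of 10" means a third of 9, with the result
# renamed one higher on the way back out.
def shifted(op, stated):
    return op(stated - 1) + 1

half = lambda n: Fraction(n) / 2
third = lambda n: Fraction(n) / 3
twice = lambda n: 2 * n

# "a third of 10": the two readings happen to agree on 4.
assert proportional(third(10)) == 4
assert shifted(third, 10) == 4

# "twice 5" is what tells them apart: 12 (proportional) vs. 9 (shift).
assert proportional(twice(5)) == 12
assert shifted(twice, 5) == 9
```

As Hale notes, the "twice 5" case is the concrete point where the two methods diverge.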
Anybody have hypotheses on which assumptions or definitions one would tend to drop first when solving a puzzle of this sort?

Brian Peterson
...!ucbvax!tektronix!shark!brianp

------------------------------

Date: 19 Jun 84 18:34:05-PDT (Tue)
From: hplabs!tektronix!orca!tekecs!davep @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: tekecs.3861

> From: brianp@shark.UUCP (Brian Peterson)
>
> It is interesting to note the assumptions that might be made here.
> One could assume that all numbers retain their good-old standard meaning,
> except 3, when compared to 5. Then the chain of relationships
> (3:5/2, 6:5, 12:10, 4:10/3) can be made.

If one redefines "3, when compared to 5", shouldn't the 3 be redefined in all instances of the "chain of relationships"? If so, one could conclude that one-"third" of 10 is 24/5 via 3:5/2, 6:5, 12:10, 12/(5/2):10/3, 24/5:10/3.

David Patterson
Tektronix, Inc.
Wilsonville Industrial Park
P.O. Box 1000
Wilsonville, Oregon 97070
(503) 685-2568

{ucb or dec}vax!tektronix!tekecs!davep    uucp address
davep@tektronix                           csnet address
davep.tektronix@rand-relay                arpa address

------------------------------

Date: Wed 20 Jun 84 18:37:45-PDT
From: Jean-Luc Bonnetain
Subject: softwar, a new weapon to deal with the Soviets?

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

This is my translation of an article published in a French news magazine, "Le Point"; I have done my best to translate it, but I am sure there are some inadequacies. I just hope they don't occur in important places. I am just wondering if anyone has heard about this, and whether it is real, pure computer fiction, or so well known that it's not worth flaming about.

"Between the atomic bomb and conventional weapons, there was nothing in the American warfare equipment to use against the USSR. Now the time has come for "soft bombs", to launch a destructive war without any bloodshed.
This is the topic of "Softwar", a forthcoming book written by a French computer scientist working in New York. The idea is as simple as it is Machiavellian: in the programs that the Soviets get from Western countries are placed what amount to "time bombs", devices that can be triggered from afar to hamper the functioning of Russian computers and paralyze the economy. With "Softwar", nuclear blackmail becomes obsolete. Le Point asked the author, Thierry Breton, how his contacts with highly skilled American engineers have convinced him of the existence of this new type of weapon. LePoint: Is "Softwar" just a computer thriller, or do "soft bombs" really exist? ThierryBreton: I have never used any, but they have been used for a few years already in our trade. Some countries in Africa and South America, customers of big American software companies, have booby-trapped programs running in their administrations. The aim of the software providers is to be protected against customers who won't pay. These soft bombs are set in vital areas, like payroll routines, which are then paralyzed. The customer has to call the company, and won't get any help until the debts are cleared. In such cases people talk about technical problems in the computer, but obviously never say that the program contained a bomb. Until now, these techniques had never been used for aggressive purposes. But there is absolutely no technical difficulty in doing so, so we are led to believe that this new weapon could be used through non-strategic networks giving access to databases. For example, the Stockex network, which gives information on stock exchange values, or the WMO network, which carries worldwide meteorological information. LePoint: Has softwar begun yet? ThierryBreton: For me, there is no doubt about it. The Soviets use 80% of the American databases. It is this dependency on communication between computers which is new, and which allows one to enter a territory.
Until now, the "bombs" had to be triggered on the spot by someone inside the place. The bombs were there, but could not be triggered remotely. Today, thanks to data transfer, they can be reached from thousands of kilometers away. In the book, I imagine that one bomb is controlled, through Stockex, by the exchange rate of a particular company specified in the software, and the Pentagon, as long as it does not want to detonate the "bomb", avoids the critical value by buying or selling shares. LePoint: You give the names of American organizations working for the Pentagon whose job is to plant bombs in programs and to activate them. Is this real? ThierryBreton: The names quoted have been slightly modified from the real ones. I took my data from a group founded in 1982 by the American Army, called NSI (National Software Institute). This institute works on all programs which have military applications. In 1983, the Army spent 500 million dollars to debug its programs. Written in different languages, they have now been unified under the Ada language. This is the official objective of NSI. But for these military computer scientists, there is not much difference between finding involuntary errors and adding voluntary ones... LePoint: What is the Trojan horse used to send those soft bombs to the USSR? ThierryBreton: The USSR lags about 10 to 15 years behind in computer science, which is the equivalent of 2 or 3 generations of computers. This lag in hardware causes an even more important lag in artificial intelligence, which is the type of software running on the machines the Soviets have to buy from Western countries. They are very eager to get those programs, and some estimate that 60% of the software running there comes from the USA. The most important source is India, which has very good computer scientists. Overnight, IBM was kicked out, to be replaced by Soviet Elorg computers ES 10-20 and ES 10-60, which are copied from IBM.
The Indians buy software from Western countries, port it to the Elorgs, and then this software goes to the USSR. LePoint: Can a trap be invisible, like a buried mole? ThierryBreton: Today, people know how to make bombs completely invisible. The first generation consisted of fixed bombs: lines of code never activated unless a special signal was sent. Then came the Polaris-type traps: like the missiles, the programs contain decoys to fool the enemy, multiple traps of which only one is active. Then the stochastic bomb, the most dangerous one, which moves within the program each time it is loaded. These bombs are all the more discreet in that they can be stopped from a distance, the failures then disappearing in an inexplicable way. LePoint: Have there been cases in the USSR of problems that could be explained by a soft bomb? ThierryBreton: Some unexplained cases, yes. In November 1982, the unit for international phone calls was down for 48 hours. Officially, the Soviets said it was a failure of the main computer. We still do not know what caused it. Every day in the Soviet papers one can read that such and such a factory had to stop its production because of a shortage of some item. When the Gosplan computers break down, there are direct consequences for the production and functioning of factories. LePoint: By talking about softwar, aren't you helping the Soviets? ThierryBreton: No. For 30 years, we have seen obvious attempts by the Soviets to destabilize Western countries by infiltrating trade unions and pacifist movements. The Eastern bloc can remotely cause strikes. But until now, there was no way to retaliate with precise, targeted disruption. In the context of the ideological war, softwar gives us another way to strike back. The book also shows that the Soviets have no choice. They know that by buying this software, or getting it by other means, they are taking a big risk.
But if they stop getting this software, the time it will take them to develop it by themselves will increase the gap. This is a fact. So soft bombs, like atomic bombs, can be a means of deterrence. For politicians who are just discovering this new strategy, the book is that of a new generation showing the old one that what was a tool has become a weapon." [This reminds me of an anecdote I heard Captain (now Cmdr) Grace Hopper tell. It seems some company began to pass off a Navy-developed COBOL compiler verifier as their own, removing the print statement that gave credit to the Navy. When the Navy came out with an improved version, the company had the gall to ask for a copy. Her development group complied, but embedded concealed checks in the code so that it would fail to work if the credit printout were ever altered. -- KIL] ------------------------------ Date: Wed 20 Jun 84 20:07:35-PDT From: Richard Treitel Subject: softwar @= [Forwarded from the Stanford bboard by Laws@SRI-AI.] The article Jean-Luc (or whoever) translates sounds like a typical piece of National Enquirer-style "reporting": namely, it describes something that is *just* feasible theoretically but against which countermeasures exist, and which has wider ramifications than are mentioned. I'm sure the Russians are too paranoid to allow network access to important computers in such a way as to trigger these "bombs". But: it is widely rumoured that IBM puts time-delayed self-destruct operations into some of its programs so as to force you to buy the new release when it comes out (and heaven help you if it's late?). And in John Brunner's book "The Shockwave Rider", one of America's defence systems is a program that would bring down the entire national network, thus making it impossible for an invader to control the country. I love science fiction discussions, but I love them even more when they're not on BBoard.
- Richard [Another SF analogy: there is a story about the consequences of developing some type of "ray" or nondirectional energy field capable of igniting all unstable compounds within a large radius, notably ammunition, propellants, and fuels. This didn't stop the outbreak of global war, but did reduce it to the stone age. All that has nothing to do with AI, of course, except that computers may yet be the only intelligent beings on the planet. -- KIL] ------------------------------ End of AIList Digest ******************** 24-Jun-84 10:36:46-PDT,17098;000000000001 Mail-From: LAWS created at 24-Jun-84 10:34:07 Date: Sun 24 Jun 1984 10:19-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #78 To: AIList@SRI-AI AIList Digest Sunday, 24 Jun 1984 Volume 2 : Issue 78 Today's Topics: AI Programming - Characteristics, Commonsense Reasoning - Hypothetical Math, Cognition - Humor & Memory & Intuition, Seminar - Full Abstraction and Semantic Equivalence ---------------------------------------------------------------------- Date: 20 Jun 84 12:14:49-PDT (Wed) From: hplabs!hpda!fortune!amd70!intelca!glen @ Ucb-Vax.arpa Subject: Re: Definition of an AI program Article-I.D.: intelca.317 As a half-serious/half-humorous suggestion: Consider the fact that most of man's machines are built to do the same thing over and over and do it very well. Some random examples: - washing machine - automobile hood fastener in production line - pacman video game AI programs (hopefully) don't fit the mold: they don't spend their lives performing the same routine but change as they go. ^ ^ Glen Shires, Intel, Santa Clara, Ca. O O Usenet: {ucbvax!amd70,pur-ee,hplabs}!intelca!glen > ARPA: "amd70!intelca!glen"@BERKELEY \-/ --- stay mellow ------------------------------ Date: Fri 22 Jun 84 11:28:46-PDT From: Richard Treitel Subject: a third of ten Please.
Everyone knows that 2*2=5 for sufficiently large values of 2. More to the point, if you take the square root of 5 and round to the nearest integer, you get 2. Again, if you take half of 5 and round to the nearest integer using the accepted method, you get 3. A third of ten now becomes 3 as well. How many AI people does it take to change a lightbulb? - Richard [One graduate student, but it takes eight years. -- KIL (from John Hartman, CS.Hartman@UTexas-20) ] ------------------------------ Date: 21 Jun 84 10:51:26-PDT (Thu) From: decvax!decwrl!dec-rhea!dec-rayna!swart @ Ucb-Vax.arpa Subject: Re: Commonsense Reasoning? Article-I.D.: decwrl.1845 I am reminded of an old children's riddle: Q. If you call a tail a leg, how many legs does a horse have? A. Four. Calling a tail a leg doesn't make it so. Mark Swartwout UUCP {allegra,decvax,ihnp4,ucbvax}!decwrl!rhea!rayna!swart ARPA MSWART@DEC-MARLBORO ------------------------------ Date: 21 Jun 84 22:07 PDT From: Shrager.pa@XEROX.ARPA Subject: Memory This might amuse. Authorship credit to Dave Touretzky@CMU. From: Dave Touretzky (DT50)@CMU-CS-A To: Jeff Shrager Subject: Q-registers in the brain ENGRAM (en'-gram) n. 1. The physical manifestation of human memory -- "the engram." 2. A particular memory in physical form. [Usage note: this term is no longer in common use. Prior to Wilson & Magruder's historic discovery, the nature of the engram was a topic of intense speculation among neuroscientists, psychologists, and even computer scientists. In 1994 Professors M. R. Wilson and W. V. Magruder, both of Mount St. Coax University in Palo Alto, proved conclusively that the mammalian brain is hardwired to interpret a set of thirty-seven genetically transmitted cooperating TECO macros. Human memory was shown to reside in 1 million Q-registers as Huffman-coded uppercase-only ASCII strings. Interest in the engram has declined substantially since that time.] --- from the New Century Unabridged English Dictionary, 3rd edition, A.D. 2007. David S.
Touretzky (Ed.) ------------------------------ Date: 19 Jun 84 16:02:49-PDT (Tue) From: ihnp4!houxm!mhuxl!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa Subject: Re: A Quick Question - Mind and Brain Article-I.D.: pyuxn.769 > (2) Intuition - by this I mean huge leaps into discovery > that have nothing to do with the application of logical > association or sensual observation. This kind of stuff > happens to all of us and cannot easily be explained by > the physical/mechanical model of the human mind. > > I agree that if you could build a computer big enough and fast > enough and taught it all the "right stuff", you could duplicate > the human brain, but not the human mind. Intuition is nothing more than one's subconscious employing logical thought faster than the conscious brain can understand or realize it. What's all the fuss about? And where's the difference between the "brain" and the "mind"? What can this "mind" do that the physical brain doesn't? A good dose of Hofstadterisms and Smullyanisms ("The Mind's I" provides good examples) puts to rest some of those notions of mind and brain. "I take your opinions and multiply them by -1." Rich Rosen pyuxn!rlr ------------------------------ Date: 19 Jun 84 13:55:43-PDT (Tue) From: hplabs!hao!seismo!ut-sally!utastro!bill @ Ucb-Vax.arpa Subject: Re: A Quick Question - Mind and Brain Article-I.D.: utastro.114 > (1) Subconscious memory - a person can be enabled (through > hypnosis or by asking him the right way) to remember > infinite details of any experience of this or prior life > times. Does the mind selectively block out trivia in order > to focus on what's important currently? One of the reasons that evidence obtained under hypnosis is inadmissible in many courts is that hypnotically induced memories are notoriously unreliable, and can often be completely false, even though they can seem extremely vivid.
In some states, the mere fact that a witness has been under hypnosis is enough to disqualify the individual's testimony in the case. I have personal, tragic experience with this phenomenon in my own family. I don't intend to burden the net with this, but if anyone doubts what I say, I will be glad to discuss it by E-mail. Bill Jefferys 8-% Astronomy Dept, University of Texas, Austin TX 78712 (USnail) {allegra,ihnp4}!{ut-sally,noao}!utastro!bill (uucp) utastro!bill@ut-ngp (ARPANET) ------------------------------ Date: 20 Jun 84 9:22:50-PDT (Wed) From: hplabs!hao!seismo!ut-sally!riddle @ Ucb-Vax.arpa Subject: Re: A Quick Question - Mind and Brain Article-I.D.: ut-sally.2301 Now that Chuqui's obligingly created net.sci, why don't we move this discussion there? Is there any reason for it to go on in five newsgroups simultaneously? If interest continues, perhaps this topic will form the basis for net.sci.psych. Followups to net.sci, please. --- Prentiss Riddle ("Aprendiz de todo, maestro de nada.") --- {ihnp4,harvard,seismo,gatech,ctvax}!ut-sally!riddle ------------------------------ Date: Thu, 21 Jun 84 15:47 CST From: Nichael Cramer Subject: Memory > >From: hplabs!hpda!fortune!crane @ Ucb-Vax.arpa > > (1) Subconscious memory - a person can be [...] But, brain is mind is brain is mind is brain is mind is brain... [what else have you got to work with?] So long and thanks for all the fish, NLC ------------------------------ Date: 22 Jun 1984 1825-PDT (Friday) From: gd@sri-spam (Greg DesBrisay) Subject: Re: A Quick Question - Mind and Brain Article-I.D.: aplvax.663 >The other thing to note is that while each 'memory cell' in a computer >has ~2 connections, each 'memory cell' in the brain has ~100. Since >processing power is relative to (cells * connections), a measure of >relative capacities is not sufficient for comparison between the brain >and the CRAY. -Lloyd W. 
Taylor In addition, many connections in the human brain are analog in character, so any comparison with a binary digital computer must multiply the number of connections by the number of bits necessary to digitize the analog range of each synapse. To do that, one would have to know what analog resolution is required to accurately model the behavior of a synapse. I'm not sure if anyone has figured that one out yet. Greg DesBrisay SRI ------------------------------ Date: 20 Jun 84 9:20:43-PDT (Wed) From: decvax!mcnc!unc!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax.arpa Subject: Re: Mind and Brain Article-I.D.: eosp1.954 I'm not comfortable with Rich Rosen's assertion that intuition is just the mind's unconscious LOGICAL reasoning that happens too fast for the conscious to track. If intuition is simply ordinary logical reasoning, we should be just as able to simulate it as we can other types of reasoning. In fact, attempts to simulate intuition account for some rather noteworthy successes and failures, and seem to require a number of discoveries before we can make much real progress. E.g.: I think it is fair to claim that chess players use intuition to evaluate chess positions. We acknowledge that computers have failed to be intuitive in playing chess in at least two ways that are easy for people: - knowing what kinds of tactical shots to look for in a position - knowing how to plan long-term strategy in a position In backgammon, Hans Berliner has a very successful program that seems to have overcome the comparable backgammon problem. His program has a way of deciding, in a smooth, continuous fashion, when to shift from one set of assumptions to another while analyzing. I am not aware whether other people have been able to extend his techniques to other kinds of analysis, or whether this is one flash of success. Berliner has not been comparably successful applying this idea to a chess program.
(The backgammon program defeated the then world champion in a short match, in which the doubling cube was used.) [There was general agreement that the program's play was inferior, however. Another point: while smooth transitioning between strategies is more "human" and easier to follow or explain (and thus to debug or improve), I can't see that it is inherently as powerful as switching to a new optimal strategy at each turn. -- KIL] Artists and composers use intuition as part of the process of creating art. It is likely that one of the benefits they gain from intuition is that a good work of art has many more internal relationships among its parts than the creator could have planned. It is hard to see how this result can be derived from "logical" reasoning of any ordinary deductive or inductive kind. It is easier to see how artists obtain this result by making various kinds of intuitive decisions to limit their scope of free choice in the creative process. Computer-generated art has come closest to emulating this process by using f-numbers rather than random numbers to generate artistic decisions. It is unlikely that the artist's intuition is working as "simply" as deriving decisions from f-numbers. It remains a likely possibility that a type of reasoning that we know little about is involved. We are still pretty bad at programming pattern recognition, which intuitive thinking does spectacularly well. If one wishes to assert that the pattern recognition is done by well-known logical processes, I would like to see some substantiation. - Toby Robison (not Robinson!) allegra!eosp1!robison decvax!ittvax!eosp1!robison princeton!eosp1!robison ------------------------------ Date: 20 Jun 84 18:14:17-PDT (Wed) From: decvax!linus!utzoo!henry @ Ucb-Vax.arpa Subject: Re: A Quick Question - Mind and Brain Article-I.D.: utzoo.3971 John Crane cites, as evidence for the human mind being impossible to duplicate by computer, two phenomena.
(1) Subconscious memory - a person can be enabled (through hypnosis or by asking him the right way) to remember infinite details of any experience of this or prior life times. Does the mind selectively block out trivia in order to focus on what's important currently? As far as I know, there's no evidence of this that will stand up to critical examination. Even disregarding the "prior life times" part, for which the reliable evidence is, roughly speaking, nonexistent, the accuracy of recall under hypnosis is very doubtful. True, the subject can describe things in great detail, but it's not at all proven that this detail represents *memory*, as opposed to imagination. In fact, although it's quite likely that hypnosis can help bring out things that have been mostly forgotten, there is serious doubt that the memories can be disentangled from the imagination well enough for, say, testimony in court to be reliable when hypnosis is used. (2) Intuition - by this I mean huge leaps into discovery that have nothing to do with the application of logical association or sensual observation. This kind of stuff happens to all of us and cannot easily be explained by the physical/mechanical model of the human mind. The trouble here is that "...have nothing to do with the application of logical association or sensual observation..." is an assumption, not a verified fact. There is (weak) evidence suggesting that intuition may be nothing more remarkable than reasoning and observation on a subconscious level. The human mind actually seems to be much more of a pattern-matching engine than a reasoning engine, and it's not really surprising if pattern-matching proceeds in a haphazard way that can sometimes produce unexpected leaps.
Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,linus,decvax}!utzoo!henry ------------------------------ Date: 20 Jun 84 17:14:58-PDT (Wed) From: ucbcad!tektronix!orca!shark!hutch @ Ucb-Vax.arpa Subject: Re: A Quick Question - Mind and Brain Article-I.D.: shark.838 | Intuition is nothing more than one's subconscious employing logical | thought faster than the conscious brain can understand or realize it. | What's all the fuss about? And where's the difference between the | "brain" and the "mind"? What can this "mind" do that the physical brain | doesn't? | Rich Rosen pyuxn!rlr Thank you, Rich, for so succinctly laying to rest all the questions mankind has ever had about self and mind and consciousness. Now, how about proving it? Oh, and by the way, what is a "subconscious" and how do you differentiate between a "conscious" brain and a "subconscious" in any meaningful way? And once you have told us exactly what a physical brain can do, then we can tell you what a mind could do that it doesn't. Hutch ------------------------------ Date: 21 June 1984 0802-EDT From: Lydia Defilippo at CMU-CS-A Subject: Seminar - Full Abstraction and Semantic Equivalence [Forwarded from the CMU bboard by Laws@SRI-AI.] Speaker: Ketan Mulmuley Date: Friday, June 22 Time: 11:00 Place: 5409 Title: Full Abstraction and Semantic Equivalence The Denotational Approach of Scott-Strachey in giving semantics to programming languages is well known. In this approach each construct of the programming language is given a meaning in a domain which has nice mathematical properties. Semantic equivalence is the problem of showing that this map -- the denotational semantics -- is faithful to the operational semantics. Because the known methods for showing such equivalences were too complicated, very few such proofs have been carried out. Many authors had expressed a need for mechanization of these proofs. But it remained unclear whether such proofs could be mechanized at all.
We shall give in this thesis a general theory for proving such equivalences which has the distinct advantage of being mechanizable. A mechanized tool was actually built on top of LCF to aid the proofs of semantic equivalence. The other central problem of denotational semantics is the problem of full abstraction, i.e., determining whether the meanings given to two different language constructs by the denotational semantics are equal whenever they are operationally equivalent. This has been known to be a hard problem, and the only known general method of constructing such models was the "syntactic" method of Milner. But whether such models could be constructed semantically remained an important open problem. In this thesis we show that this is indeed the case. ------------------------------ End of AIList Digest ******************** 24-Jun-84 23:19:28-PDT,21635;000000000000 Mail-From: LAWS created at 24-Jun-84 23:17:26 Date: Sun 24 Jun 1984 22:49-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #79 To: AIList@SRI-AI AIList Digest Monday, 25 Jun 1984 Volume 2 : Issue 79 Today's Topics: Combinatory Logic - Request, AI Tools - NIAL, AI and Society - Relevance of "souls" to AI, Problem Solving - Commonsense Reasoning, AI Programming - Spelling Correction, Cognition - Intuition & Mind vs. Brain ---------------------------------------------------------------------- Date: 28 Jun 84 6:56:08-EDT (Thu) From: hplabs!hao!seismo!cmcl2!floyd!vax135!ukc!srlm @ Ucb-Vax.arpa Subject: combinatory logic Article-I.D.: ukc.4280 [-: kipple :-] [I couldn't bear to delete this one. -- KIL] In the hope that many of you are also interested in combinatory logic... please have a look at this and mail me any suggestions, references, etc. ------------------ [by a. pettorossi in notre dame j. form.
logic 22 (4) 81] define: marking is a function that assigns, to each combinator in a term (tree), the number of left choices (of path) that one has to make to go from the root to that combinator. ex.: marking SII = {<S,2>, <I,1>, <I,0>} the set of right applied subterms of a combinator X is defined as: 1) if X is a basic combinator or a variable, ras(X) = {X} 2) if X is (YZ), then ras(YZ) = ras(Y) union {Z} a combinator X with reduction axiom X x1 x2 x3 ... xk -> Y has the non-ascending property iff for all i, 1<=i<=k, if <xi,p> occurs in marking (X x1...xk) and <xi,q> occurs in marking Y, then p >= q. a combinator (X x1 x2 ... xk -> Y) has compositive effect iff a right applied subterm of Y is not a variable. ------------------ Theorem: given a subbase B={X1,...,Xk} such that all Xi in B have the non-ascending property and no compositive effect, every reduction strategy applied to any Y in B+ leads to normal form. ------------------ Open Problem: does the theorem hold if the non-ascending property is the only condition? ------------------ My personal questions: if one specifies leftmost-outermost reduction only, would the Open Problem be any easier? how much of combinatory logic can we do with B? and with the non-ascending property only? silvio lemos meira UUCP: ...!{vax135,mcvax}!ukc!srlm Post: computing laboratory university of kent at canterbury canterbury ct2 7nf uk Phone: +44 227 66822 extension 568 ------------------------------ Date: 20 Jun 84 10:35:51-PDT (Wed) From: decvax!linus!utzoo!utcsrgv!qucis!carl @ Ucb-Vax.arpa Subject: what is NIAL? Article-I.D.: qucis.70 Nial is the "Nested Interactive Array Language." It is based on the nested, rectangular arrays of T. More, and has aspects of Lisp, APL, FP, and Pascal. Nial runs on lots of Unix(&etc) systems, VAX/VMS, PC-DOS, and VM/CMS (almost). Nial is being used primarily for prototyping and logic programming. Distribution is through Nial Systems Limited, PO Box 2128, Kingston, Ontario, Canada, K7L 5J8. (613) 549-1432. Here are some trivial s