Date: Mon 26 Sep 1988 23:22-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest V8 #93
To: AIList@AI.AI.MIT.EDU
Status: RO

AIList Digest            Tuesday, 27 Sep 1988      Volume 8 : Issue 93

Philosophy -- The Grand Challenge (4 messages)

----------------------------------------------------------------------

Date: 22 Sep 88 08:20:17 GMT
From: peregrine!zardoz!dhw68k!feedme!doug@jpl-elroy.arpa (Doug Salot)
Subject: Grand Challenges

In the 16 Sept. issue of Science, there's a blurb about the recently released report of the National Academy of Sciences' Computer Science and Technology Board ("The National Challenge in Computer Science and Technology," National Academy Press, Washington, DC, 1988).  Just when you thought you had the blocks world figured out, something like this comes along.  Their idea is to start a U.S. Big Science (computer science, that is) effort a la Japan.  In addition to the usual clamoring for software ICs, fault tolerance, parallel processing, and a million mips (ya, 10^12 ips), here's YOUR assignment:

1) A speaker-independent, continuous-speech, multilingual, real-time translation system.  Make sure you don't mess up when the speech is ambiguous, ungrammatical, or a phrase is incomplete.  Be sure to maintain speaker characteristics (what does Chinese sound like with a Texas accent?).  As you may know, Japan is funding a 7-year, $120 million effort to put a neural net in a telephone which accomplishes this feat for Japanese <-> English (it's a picture phone too, so part of the problem is to make lips sync with the speech, I guess).

2) Build a machine which can read a chapter of a physics text and then answer the questions at the end.  At least this one can be done by some humans!

While I'm sure some interesting results would come from attempting such projects, these sorts of things could probably be done sooner by tossing out ethical considerations and cloning humanoids.  If we were to accept the premise that Big Science is a Good Thing, what should our one big goal be?  I personally think an effort to develop a true man-machine interface (i.e., neural i/o) would be the most beneficial in terms of both applications and as a driving force for several disciplines.
--
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k,conexch}!feedme!doug
Raisin Deters - Breakfast never tasted so good.

------------------------------

Date: 23 Sep 88 13:39:57 GMT
From: ndcheg!uceng!dmocsny@iuvax.cs.indiana.edu (daniel mocsny)
Subject: Re: Grand Challenges

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
[ goals for computer science ]
> 2) Build a machine which can read a chapter of a physics text and
> then answer the questions at the end.  At least this one can be
> done by some humans!
>
> While I'm sure some interesting results would come from attempting
> such projects, these sorts of things could probably be done sooner
> by tossing out ethical considerations and cloning humanoids.

A machine that could digest a physics text and then answer questions about the material would be of astronomical value.
Sure, humanoids can do this after a fashion, but they have at least three drawbacks: (1) some are much better than others, and the really good ones are rare and thus expensive; (2) none are immortal or particularly speedy (which limits the amount of useful knowledge you can pack into one individual); (3) no matter how much the previous humanoids learn, the next one still has to start from scratch.

We spend billions of dollars piling up research results.  The result, which we call ``human knowledge,'' we inscribe on paper sheets and stack in libraries.  ``Human knowledge'' is hardly monolithic.  Instead we partition it arbitrarily and assign high-priced specialists to each piece.  As a result, ``human knowledge'' is hardly available in any general, meaningful sense.  Finding all the previous work relevant to a new problem is often an arduous task, especially when it spans several disciplines (as it does with increasing frequency).  I submit that our failure to provide ourselves with transparent, simple access to human knowledge stands as one of the leading impediments to human progress.  We can't provide such access with a system that dates back to the days of square-rigged ships.

In my own field (chemical process design) we had a problem (synthesizing heat recovery networks in process plants) that occupied scores of researchers from 1970 to 1985.  Lots of people tried all sorts of approaches, and eventually (after who knows how many grants, etc.) someone spotted some important analogies with problems from Operations Research work of the '50s.  We did have to develop some additional theory, but we could have saved a decade or so with a machine that ``knew'' the literature.

Another example of an industrially significant problem in my field is this: given a target molecule and a list of available precursors, along with whatever data you can scrape together on possible chemical reactions, find the best sequence of reactions to yield the target from the precursors.  Chemists call this the design of chemical syntheses, and chemical engineers call it the reaction path synthesis problem.  Since no general method exists to accurately predict the success of a chemical reaction, one must use experimental data.  And the chemical literature contains references to literally millions of compounds and reactions, with more appearing every day.
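To give a feel for the computational core of that problem, here is a minimal sketch in Python of the kind of backward search involved.  Everything in it is invented for illustration: the molecule names, the hand-coded REACTIONS table, and the simplification that it settles for a route rather than the best one.

    # Toy retrosynthesis-style search.  REACTIONS maps each product to the
    # precursor sets known (from experimental data) to yield it in one step.
    REACTIONS = {
        "D": [("B", "C")],      # B + C -> D
        "B": [("A",)],          # A     -> B
        "C": [("A", "E")],      # A + E -> C
    }

    def synthesize(target, available, seen=frozenset()):
        """Return a list of (precursors, product) reaction steps that build
        `target` from the stock molecules in `available`, or None if no
        route is known."""
        if target in available:
            return []                   # already on hand, no steps needed
        if target in seen:
            return None                 # avoid cyclic routes
        for precursors in REACTIONS.get(target, []):
            steps = []
            for p in precursors:        # every precursor must be obtainable
                sub = synthesize(p, available, seen | {target})
                if sub is None:
                    break
                steps.extend(sub)
            else:
                return steps + [(precursors, target)]
        return None

    # Make D starting from the stock chemicals A and E:
    print(synthesize("D", {"A", "E"}))
    # [(('A',), 'B'), (('A', 'E'), 'C'), (('B', 'C'), 'D')]

Even this toy makes the real bottleneck plain: the search itself is the easy part; filling in something like REACTIONS from the millions of published reactions is what nobody can yet do automatically.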
Researchers have constructed successful programs to solve these types of problems, but they suffer from a big drawback: no such program embodies enough knowledge of chemistry to be really useful.  The programs have some elaborate methods to represent reaction data, but these knowledge bases had to be hand-coded.  Due to the chaos in the literature, no general method of compiling reaction data automatically has worked yet.  Here we have an example of the literature containing information of enormous potential value, but it is effectively useless.

If someone handed me a machine that could digest all (or at least large subsets) of the technical literature and then answer any question that was answerable from the literature, I could become a wealthy man in short order.  I doubt that many of us can imagine how valuable such a device would be.  I hope to live to see such a thing.

Dan Mocsny

------------------------------

Date: 24 Sep 88 17:53:11 GMT
From: ncar!tank!arthur!daryl@gatech.edu (Daryl McLaurine)
Subject: Re: Grand Challenges

On "Human Knowledge"...

I am one of many people who make a living by generating solutions to complex problems or tasks in a specific field by understanding the relationships between my field and many 'unrelated' fields of study.  As the complexity of today's world increases, the realm of "Human Knowledge" cannot remain 'monolithic'; to solve many problems, _especially_ in AI, one must acquire a feel for the dynamic 'flow' of human experience and sense the connectives within.  Few people are adept at this, and the ones who are either become _the_ leading edge of their field or are called upon to consult for others, acting as that mythical construct that will 'understand' human experience on demand.

In my field, both academic and professional, I strive to make systems that will acquire knowledge and make, _AT BEST_, moderately simple correlations in data that may point to solutions to a specified task.  It is still the realm of the Human Investigator to take these suggestions and make a complete analysis of them, drawing on his/her(?) own heuristic capability to arrive at a solution.  To date, the most advanced construct I have seen only does a kind of informational investigative 'leg work', and rarely can it correlate facts that seem unrelated but may actually be ontologically connected.  (But I am working on it ;-} )

It is true that a computer model of what we do would make a research investigator more effective, but the point at which we can program 'intuitive knowledge' beyond simple relationships in pattern recognition is far off.  The human in this equation is still an unknown factor to itself (can YOU tell me how you think?  If you can, there are MANY cognitive science people [psychologists, AI researchers, etc.] who want to talk to you...), and until we solve the grand challenge of knowing ourselves, our creations are little more than idiot savants (and bloody expensive ones at that!).

-kill me, not my clients (Translated from the legalese...)
^
<{[-]}>-----------------------------------------------------------------------
V  Daryl McLaurine, Programmer/Analyst (Consultant)
|  Contact:
|    Home:   1-312-955-2803 (Voice M-F 7pm/1am)
|    Office: Computer Innovations 1-312-663-5930 (Voice M-F 9am/5pm)
|    daryl@arthur (or zaphod,daisy,neuro,zem,beeblebrox) .UChicago.edu
==\*/=========================================================================

------------------------------

Date: 26 Sep 88 05:33:07 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Grand Challenges

The lesson of the last five years seems to be that throwing money at AI is not enormously productive.  The promise of expert systems has not been fulfilled (I will refrain from quoting some of the promises today); the Japanese Fifth Generation effort has not resulted in any visible breakthroughs (although there are some who say that its real purpose was to divert American attention from the efforts of Hitachi and Fujitsu to move into the mainframe computer business); and the DARPA/Army Tank Command autonomous land vehicle effort has resulted in vehicles that are bigger, but just barely able to stay on a well-defined road on good days.

What real progress there is doesn't seem to be coming from the big-bucks projects.  People like Rod Brooks, Doug Lenat, and a few others seem to be making progress, but they're not part of the big-science system.  I will not comment on why this is so, but it does, indeed, seem to be so.
There are areas in which throwing money at the problem does work, but AI may not be one of them at this stage of our ignorance.

                                        John Nagle

------------------------------

End of AIList Digest
********************