%A Gordon S. Novak %A Robert L. Causey %A et al. %T Artificial intelligence project at the University of Texas at Austin %R Technical Report AI84-01 %I Artificial Intelligence Laboratory, University of Texas at Austin %D 1984 %X This report is the technical part of a proposal to the U.S. Army Research Office (Electronics Division, Dr. Jimmie R. Suttle, Director) in late 1983 in response to their call for proposals on "Artificial Intelligence Research and Education." The University of Texas at Austin (along with The University of Pennsylvania) was selected for substantial funding over a five-year period out of a total of thirty-five proposals submitted. This report also contains a subsequent proposal by Dr. Bruce Porter. The individual project proposals in this document illustrate the breadth of research in Artificial Intelligence at the University of Texas, though not all of the proposed projects were selected for funding. The research projects currently supported by the Army grant are those of Novak, Simmons, Kumar, and Porter. %A Ezat Karimi %T Computing discourse conceptual coherence: a means to contextual reference resolution %R Technical Report AI84-02 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1984 %O Master's thesis %X This thesis discusses a problem central to the interpretation of the discourse of a text: contextual reference resolution. The problem reduces to questions about the nature of the inferencing mechanism, knowledge base organization, and discourse representation. A framework for an inferencing mechanism based on a theory of discourse coherence and focusing is discussed. A framework is described for a knowledge base composed of knowledge about entities (through frame structures) and knowledge about the coherence relations between different events/states.
A model for discourse representation based on a small set of intersentential (coherence) relations and the relations between the conceptual structures for the entities discussed in the text is also presented. %A Yeong-Ho Yu %T Translating Horn clauses from English %R Technical Report AI84-03 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1984 %O Master's thesis %X This work introduces a small grammar which translates a subset of English into Horn Clauses. The parallel between the syntactic structures of sentences and the corresponding literals of their Procedural Logic representation facilitates the translation. An interpreter is also described which accepts English descriptions of problem solving methods and facts about an example domain, the Blocks World. The interpreter translates them into Horn Clauses by using the grammar, and interprets them as programs and data. It also carries out commands on data by using the programs, and answers queries about the data. In addition, it provides a mechanism called "Consistency Rules" which maintains the integrity of the database. This experiment shows that Procedural Logic provides a unified system for accomplishing a wide variety of computations, and that a small grammar without any semantic or domain-specific knowledge can translate a small subset of English sentences into Horn Clauses. Further research on several other domains is suggested to evaluate the usefulness of this translation. %A Robert F. Simmons %T From menus to intentions in man-machine dialogue %R Technical Report AI84-04 %I Artificial Intelligence Laboratory, University of Texas at Austin %D November 1984 %X Operating systems are designed to achieve goals, not to recognize user intentions, but the use of Help systems and menu selection makes them more useful and friendly. Natural language interfaces must go farther by guessing what the user wants and what is meant but not specified.
Natural language programming systems -- still in their infancy -- promise the capability for a user to define his/her intentions explicitly. Text knowledge systems -- at the research frontier -- bewilder us with the complexity of the intentions expressed and implied. Current techniques for recognizing intentions and computing appropriate representations and responses are discussed in this paper. %A Robert F. Simmons %T A text knowledge base for the AI handbook %R Technical Report AI84-05 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1983 %X This research aims at defining a consistent set of text representation conventions for organizing fifty pages of the AI handbook as an inferential knowledge base, founded on a procedural logic system of general inference schemas for answering questions from it. After a year of research on the AI handbook project, we have developed a prototype natural-language text knowledge system that includes a data base manager to compile the text knowledge and to make it available to navigational commands. The text is represented as logical propositions which form a set of text axioms to model its content. English questions and commands are translated into corresponding logical formulae and treated as theorems to be proved with respect to the text model. The logical form is that of semantic relations (SR) -- logical predicates with varying numbers and orders of arguments. To compute effectively with such a free form, a relaxed unification procedure was defined as the basis of the SR theorem prover. The use of procedural logic augmented with fast, compiled Lisp functions has shown that questions can be answered in times ranging from a few tenths of a second to minutes of CPU time on a DEC2060 system. The navigational capabilities of the data base manager make available larger contexts surrounding the text and offer the user complete freedom to explore the text and to extract any desired information from it.
%A Michael Kavanaugh Smith %T Knowledge based contextual reference resolution for text understanding %R Technical Report AI85-02 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1985 %O PhD dissertation %X This report extends the concept of reference resolution in a discourse context to cover a broad range of connective inference required for text understanding. Access to all conceptual relations is restricted or facilitated by the context established by preceding text. This contextual filter greatly simplifies the establishment of connections between the surface text and previously instantiated discourse representation. The reference procedure requires a taxonomically organized knowledge base of structured concepts, in the sense of frames and scripts. The procedure selects lexical senses and generates reference candidates, which may be either explicit or implicit in the discourse context. These are matched against constraints imposed by the surface text and a conceptual representation is constructed and integrated with the accumulated discourse structure. %A Bruce W. Porter %T Learning problem solving: a proposal for continued research %R Technical Report AI85-03 %I Artificial Intelligence Laboratory, University of Texas at Austin %D March 1985 %X Many of the tasks which people perform involve problem solving: applying a sequence of operators to solve a problem. This research explores how efficient problem solutions are discovered from the myriad of less efficient alternatives. Results in machine learning are applied both to explain findings from psychological experimentation and to expand the utility of computers. Learning to problem solve requires acquiring multiple forms of knowledge. Problem solving is viewed as a search of a state-space formulation of a problem. With this formalism, operators are applied to states to transit from the initial state to the goal state. 
The learning task is to acquire knowledge of the state-space to guide search. In particular, three forms of knowledge are required: why each operator is useful, when to apply each operator, and what each operator does. This research builds on an existing PROLOG system that learns problem solving in the domains of simultaneous linear equations and symbolic integration. First the current learning system is described. Then new research directions are proposed. These include: - critically comparing machine learning techniques demonstrated in a variety of problem solving domains. - using learned knowledge to guide the acquisition of further knowledge. - dynamically re-defining the concept description language by discovering useful descriptors from the training examples. %A Bruce W. Porter %T Using and revising learned concept models: a research proposal %R Technical Report AI85-04 %I Artificial Intelligence Laboratory, University of Texas at Austin %D May 1985 %X People improve their learning performance with experience, yet most machine learning systems do not. The premise of this research is that a learning system can use learned knowledge to guide the acquisition of further knowledge. Furthermore, learned knowledge can guide the interpretation of training examples that are not strictly prototypical of the (potentially fuzzy) concept being learned. The learning system can revise this learned bias when new training violates expectations. This research combines results from learning from examples, learning with background knowledge, and learning with a domain model to study the progression from knowledge-poor to knowledge-rich learning. This research is directed toward the development of a computational model of concept acquisition which learns, uses, and revises domain knowledge. %A Robert A.
Levinson %T A self organizing retrieval system for graphs %R Technical Report AI85-05 %I Artificial Intelligence Laboratory, University of Texas at Austin %D May 1985 %O PhD dissertation %X This report describes the theory, design, and implementation of a graph-based, self-organizing database retrieval system. The system is designed to support the expert problem solving tasks of recall, design, and recovery. The fundamental design principle is the production of a partial ordering by the relation subgraph-of. This relation is considered to be equivalent to more-general-than. This document discusses this design at three different levels: an abstract level in which the nodes in the partial ordering are concepts, the implementation level described above (in which the nodes are graphs), and an application level in which the nodes are domain-specific objects such as molecules or reactions. The primary problem domain explored is organic chemistry. A large database of organic reactions and starting materials can be queried to extract reactions or molecules that match, either exactly or approximately, desired structures. The system may also suggest precursors to a desired target molecule. The queries are answered by exploiting a set of concepts that are commonly subgraphs of molecule or reaction graphs. Concepts serve multiple purposes: they constrain the search involved in the matching process so that the time required to answer a query grows sub-linearly in the size of the database; they define the notion of "similarity" that is crucial if approximate matching is desired; and they may be useful generalizations of reactions or molecular structures. The concepts can be "discovered" (i.e., constructed) by the system itself using largely syntactic criteria based on the topology of the database. A variety of performance tests are reported, including a comparison of the system's precursor recommendation capability with that of graduate students in organic chemistry.
The system is also applied to the retrieval and generalization of chess positions. %A Gordon S. Novak %T Lisp programming lecture notes %R Technical Report AI85-06 %I Artificial Intelligence Laboratory, University of Texas at Austin %D July 1985 %X This tutorial is intended as an introduction to Lisp programming for persons who already have experience programming in some language, e.g. FORTRAN. This course presents a set of basic system functions that are frequently used and are present in virtually every Lisp implementation. The material follows the conventions of Common Lisp. Five programming assignments are included. %A William R. Murray %T Heuristic and formal methods in automatic program debugging %R Technical Report AI85-07 %I Artificial Intelligence Laboratory, University of Texas at Austin %D June 1985 %O Appears in IJCAI-85 proceedings. %X TALUS is an automatic program debugging system that both detects and corrects nonsyntactic bugs in student programs written to solve small but nontrivial tasks in pure Lisp. TALUS permits significant variability in student solutions by using heuristic methods to recognize different algorithms and formal methods to reason about the computational equivalence of program fragments. A theorem prover intensionally represents an infinite database of rewrite rules, thus allowing for unanticipated implementations. TALUS detects bugs using formal methods in both symbolic evaluation and case analysis. Heuristic methods conjecture the exact location of bugs and the alterations necessary to correct them. Finally, formal methods establish or disprove these heuristic conjectures, reflecting a generate-and-test methodology.
%A Vipin Kumar %T A general heuristic bottom-up procedure for searching AND/OR graphs %R Technical Report AI85-08 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %X This paper presents a general heuristic bottom-up procedure for finding a least-cost solution tree of an AND/OR graph when the cost functions associated with the arcs are monotone. Since monotone cost functions are very general, the procedure is applicable to a very large number of problems. The procedure works for both cyclic and acyclic AND/OR graphs, and subsumes most of the known bottom-up procedures for searching AND/OR graphs. Many state-space search procedures and dynamic programming procedures are also special cases of our procedure. %A Vipin Kumar %A Dana S. Nau %A Laveen N. Kanal %T A general paradigm for AND/OR graph and game tree search %R Technical Report AI85-09 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %X This paper presents a general procedure for finding an optimal solution tree of an acyclic AND/OR graph with monotone cost functions. Due to the relationship between AND/OR graphs and game trees, it can also be used as a game tree search procedure. Seemingly disparate procedures like AO*, SSS*, alpha-beta, and B* are instantiations of this general procedure. This sheds new light on their interrelationships and nature, and simplifies their correctness proofs. Furthermore, the procedure is applicable to a very large class of problems, and thus provides a way of synthesizing algorithms for new applications. The procedure searches an AND/OR graph in a top-down manner (by selectively developing various potential solutions) and can be viewed as a general branch-and-bound (B&B) procedure.
%A Vipin Kumar %T Parallel processing for artificial intelligence %R Technical Report AI85-10 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %A Vipin Kumar %T Branch-and-bound search %R Technical Report AI85-11 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %A Olivier Winghart %T Computational treatment of metaphor in text understanding: a first approach %R Technical Report AI85-12 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %O Master's thesis %X This thesis starts with a general presentation of the phenomenon of metaphor, necessary to define our object of study. It also contains an ordered presentation of current research approaches for dealing computationally with metaphors in text analysis. As an illustration, a case study system that analyses a mundane text (containing metaphors) is included, with a description of its functioning. Finally, more general issues are raised that require elucidation in order to enlarge the field of application of such systems. %A Robert F. Simmons %T Computer science and medical information retrieval %R Technical Report AI85-13 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %X Presented at the Conference on the Medical Information Sciences, University of Texas Health Science Center, San Antonio, Texas, July 1, 1985. %A Robert F. Simmons %T Technologies for machine translation %R Technical Report AI85-14 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %X Advances in hardware have made available micro-coded LISP and PROLOG workstations, supported by text editing and formatting software. Some of these have been augmented with linguistic technology including large bilingual dictionaries, parsers, generators, and translators to make them powerful tools for research and development of automated translation.
Some techniques of linguistic engineering for accomplishing translation are described, and it is suggested that the present barely satisfactory approach involving sentence-by-sentence translation will eventually be improved by incorporating the results of research on analyzing discourse. %A Nicholas Asher %A Hans Kamp %T The knower's paradox and the logics of attitudes %R Technical Report AI85-15 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1985 %X As Montague and Kaplan (Kaplan and Montague, 1960; Montague, 1963) pointed out long ago, a syntactic treatment of propositional attitudes has a fundamental weakness: virtually all of epistemic logic must be sacrificed if it, like ordinary logic, is to be applicable to arbitrary subject matter. Thomason (Thomason, 1980) extended these results to include a syntactic treatment of the logic of belief, and showed how they also apply to analyses of the attitudes that, while not strictly "syntactic", use representations or structured propositions as the objects of the attitudes. In what follows, we shall call all such treatments "representational". The Hangman's Paradox as represented by Montague and Kaplan is another example of how even a minimal amount of epistemic logic in conjunction with a self-referential attitude can lead to disaster. A more abstract formulation of the same self-referential attitude is exemplified in the "Knower's Paradox" and is perhaps best expressed in colloquial English by 'my negation is known' (we shall call this sentence the "Knower Sentence"). The usual moral drawn from these results is that a formally serious analysis of the attitudes should treat expressions of propositional attitudes, like modality, as predicates of propositions qua intensions, not as predicates of sentences or sentence-like entities. It is well known how to provide a coherent logic of propositional attitudes with this approach.
One of our main points, however, is that this "solution" is bought at far too dear a price: it leads neither to a satisfactory analysis of attitude reports nor to a general theory of attitudes. Since we think that these are desiderata of any theory of attitudes, we think that the standard "solution" is no solution at all, but rather a refusal to take a theory of the attitudes seriously. A real solution to the Knower's Paradox, one developed within a framework that meets these desiderata, demands a thorough reanalysis of the logic of attitudes. We have both been developing such frameworks in other papers using discourse representation theory (Asher, 1984a; Asher, 1984b; Kamp, 1985a; Kamp, 1985b). In this paper, however, we shall only make use of very general features of those frameworks that are relevant to the discussion of the Knower's Paradox. %A Rick Hill %T Negotiated interfaces for software reusability %R Technical Report AI85-16 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1985 %O Master's thesis %X This thesis presents an algorithm which constructs an interface between a user's data structure and a generic program, allowing the GLISP compiler to specialize the generic program to work with the user's data structure. The interface between the user's data and the data structure expected by the generic program is "negotiated" by the user through menu selection. %A Benjamin J. Kuipers %T The map-learning Critter %R Technical Report AI85-17 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1985 %X The Critter is an artificial creature which learns not only the structure of its (simulated) environment, but also the interpretation of the actions and senses that give it access to that environment. The Map-Learning Critter embodies a strong a priori hypothesis: that its environment is, at least in part, structured as a large-scale space consisting of places connected by paths.
The Critter's learning strategy begins by diagnosing its actions and senses. By performing experiments and examining the periodicity of its sense-impressions, it classifies actions as ``turn-like,'' ``travel-like,'' and ``others.'' After the actions are classified, it becomes possible to aggregate sets of sense-impressions to define places; places are then linked into paths. An exploration strategy assures that the entire environment will be explored and assimilated into this model. The Map-Learning Critter has been implemented and has experienced a variety of reasonable and unreasonable environments. Some implications of the results are discussed, and directions for future research are outlined. %A Man-Lee Wan %T Menu-based creation of procedures for display of data %R Technical Report AI85-18 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1985 %O Master's thesis %X We have investigated methods for creating interface procedures that extract the data needed by a library display procedure from user data of arbitrary structure and present it to the display procedure, which generates a display for presentation to the user. A procedure is created by the GLISP compiler from the following pieces of information: 1. knowledge about how to produce the display, which is stored as a library procedure; 2. a description of the user's data in the GLISP structure description language; 3. a selection of the data from the user object that are to be displayed; 4. a specification of the data needed by the display procedure. A system called LINK has been written that creates and runs interface procedures automatically using the information outlined above. The LINK system has been integrated with the GEV data inspector of the GLISP system.
%A Stuart Laughton %T Explanation of mechanical systems through qualitative simulation %R Technical Report AI85-19 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1985 %O Master's thesis %X This thesis documents an investigation into the feasibility of qualitative simulation as the basis of a computer program that can represent and explain the function and structure of mechanical systems. First, the general principles of qualitative reasoning and the theories of several researchers in this field are reviewed. Next, the process of modelling a specific simple mechanical device, a piston-driven pump, and the resulting model's simulation output are discussed in detail. Finally, some techniques are presented for generating explanatory animation and natural language from a qualitative simulation model and its output. %A Bruce W. Porter %A Dennis Kibler %T Experimental goal regression: a method for learning problem solving heuristics %R Technical Report AI86-20 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1986 %X This research examines the process of learning problem solving with minimal requirements for a priori knowledge and teacher involvement. Experience indicates that knowledge about the problem solving task can be used to improve problem solving performance. This research addresses the issues of what knowledge is useful, how it is applied during problem solving, and how it can be acquired. For each operator used in the problem solving domain, knowledge is incrementally learned concerning why it is useful, when it is applicable, and what transformation it performs. The method of experimental goal regression is introduced for improving the learning rate by approximating the results of analytic learning. The ideas are formalized in an algorithm for learning and problem solving and demonstrated with examples from the domains of simultaneous linear equations and symbolic integration.
%A Wing-Kwong Wong %T GT: a conjecture generator for graph theory %R Technical Report AI86-21 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1986 %O Master's thesis %X The process of knowledge acquisition can be automated with programs that learn from observation and discovery. A better understanding of this learning strategy would facilitate the construction of expert systems and the exploration of scientific domains. GT, a frame-based program written for this thesis, learns relationships among classes of graphs by observing examples and non-examples of these classes. The relationships considered are subclass, superclass, exclusion, and complement exclusion. GT's knowledge is represented partly as frames and partly as formulas of predicate calculus. The learned relationships are stored hierarchically. GT has found non-trivial, well-known theorems of graph theory. %A Yow-Jian Lin %A Vipin Kumar %A Clement Leung %T An intelligent backtracking algorithm for parallel execution of logic programs %R Technical Report AI86-22 %I Artificial Intelligence Laboratory, University of Texas at Austin %D March 1986 %X In this paper we present a simple but efficient backtracking scheme which works when AND-parallelism is exploited in a logic program. The scheme is well suited for implementation on parallel hardware. We show that the backtracking scheme presented by Conery and Kibler in the context of the AND/OR process model is incorrect, i.e., in some cases it may miss solutions while performing backtracking. Even if no AND-parallelism is exploited (i.e., all literals are solved sequentially), our scheme is more efficient than the "naive" depth-first backtracking strategy used by Prolog because our scheme makes use of the dependencies between literals in a clause. Chang and Despain have recently presented a backtracking scheme which also makes use of the dependencies between literals.
We show that our scheme is more efficient than theirs in the sense that it does less backtracking. %A Yow-Jian Lin %A Vipin Kumar %T A parallel execution scheme for exploiting AND-parallelism of logic programs %R Technical Report AI86-23 %I Artificial Intelligence Laboratory, University of Texas at Austin %D March 1986 %X In this paper we present a parallel execution scheme for exploiting the AND-parallelism of logic programs. This scheme follows the generator-consumer approach of the AND/OR process model. Using problem-specific knowledge, the literals of a clause are linearly ordered at compile time. This ordering and run-time binding conditions are then used to dynamically select generators and consumers for different variables at run time. The scheme can exploit all the AND-parallelism available in the framework of the generator-consumer approach. Compared with other execution schemes based on the same approach, our scheme is simpler and potentially more efficient. %A Benjamin J. Kuipers %T Qualitative simulation as causal explanation %R Technical Report AI86-24 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1986 %X "To give a causal explanation of an event means to deduce a statement which describes it, using as premises of the deduction one or more universal laws, together with certain singular statements, the initial conditions." (Karl Popper) Traditional diagnostic systems do not explain their findings in this sense. Using the QSIM qualitative simulation representation and algorithm, we demonstrate that we can construct models of physiological mechanisms capable of explaining the observed behavior of the healthy mechanism, of the disease state, and of the response to therapy. We also present a new type of abstraction relation for linking simple equilibrium mechanisms into a hierarchical description of a complex mechanism such as human physiology.
%A Ray Bareiss %A Adam Farquhar %T Fault diagnosis using qualitative simulation %R Technical Report AI86-25 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1986 %X This paper presents a new approach to using qualitative simulation to diagnose faults in mechanisms. A mechanism is described by three models. A component model associates a set of measurement parameters with aspects of its physical structure. A constraint model defines relationships which must hold between parameters during the mechanism's normal behavior. A fault model embodies relevant aspects of a domain theory for a class of mechanisms. The Diagnose algorithm evaluates an observed state of a mechanism in terms of its constraint model. If violated constraints exist, Findfaults uses the component model to localize the problem and then uses the fault model to interpret it. These algorithms have been implemented in Prolog and have successfully identified faults in a variety of simple mechanisms. %A Wanying Jin %A Robert F. Simmons %T Symmetric rules for translation of English and Chinese %R Technical Report AI86-26 %I Artificial Intelligence Laboratory, University of Texas at Austin %D May 1986 %X A system of grammars using symmetric phrase structure and translation rules in a Lisp version of Prolog is shown to provide symmetric bidirectional translation between English and Chinese for a fragment of the two languages. It is argued that symmetric grammars and translation rules significantly reduce the total grammar writing requirement for translation systems, and that symmetric translation systems deserve further study. %A William R.
Murray %T Automatic program debugging for intelligent tutoring systems %R Technical Report AI86-27 %I Artificial Intelligence Laboratory, University of Texas at Austin %D June 1986 %O PhD dissertation %X Program debugging is an important part of the domain expertise required for intelligent tutoring systems that teach programming languages. This dissertation explores the process by which student programs can be automatically debugged in order to increase the instructional capabilities of these systems. This research presents a methodology and implementation for the diagnosis and correction of nontrivial recursive programs. In this approach, recursive programs are debugged by repairing induction proofs in the Boyer-Moore logic. The potential of a program debugger to automatically debug widely varying novice programs in a nontrivial domain is proportional to its capability to reason about computational semantics. By increasing these reasoning capabilities, a more powerful and robust system can result. This thesis supports these claims by examining related work in automated program debugging and by discussing the design, implementation, and evaluation of Talus, an automatic debugger for LISP programs. Talus relies on its ability to reason about computational semantics to perform algorithm recognition, infer code teleology, and automatically detect and correct nonsyntactic errors in student programs written in a restricted, but nontrivial, subset of LISP. Solutions can vary significantly in algorithm, functional decomposition, role of variables, data flow, control flow, values returned by functions, LISP primitives used, and identifiers used. Solutions can consist of multiple functions, each containing multiple bugs. Empirical evaluation demonstrates that Talus achieves high performance in debugging widely varying student solutions to challenging tasks. %A Mark V.
Lapolla %T The role of inversion, clefting, and PP-fronting in relating discourse elements %R Technical Report AI86-28 %I Artificial Intelligence Laboratory, University of Texas at Austin %D July 1986 %X Languages use various types of syntactic constructions to convey more information than is expressed by the sum of the words in a sentence. This paper explores and discusses the less obvious ways syntactic structure is used to convey information, and how this information could be used by a natural language database system to organize and search a discourse space. %A Wing-Kwong C. Wong %T A theory of argument coherence %R Technical Report AI86-29 %I Artificial Intelligence Laboratory, University of Texas at Austin %D July 1986 %X Previous research has suggested different approaches to analyzing the coherence of discourse, but these approaches are not suitable for representing the structure of logical arguments. We develop a notion of reasoning relations and argue that it is more appropriate for characterizing argument coherence. Reasoning relations are naturally classified into two types, deductive and rhetorical, based on theories of classical logic and argumentation rhetoric. A set of commonly used relations is described. Ways of employing these relations to restructure and evaluate arguments are discussed. Possible connections between discourse coherence and Discourse Representation Theory are also explored. %A Phillipe M. Alcouffe %T Metaphorical shift and the induction of similarities %R Technical Report AI86-30 %I Artificial Intelligence Laboratory, University of Texas at Austin %D July 1986 %O Master's thesis %X This thesis presents a view of metaphor as the syntactic realization of an underlying cognitive process: analogical reasoning. The semantic changes resulting from the 'anomalous' juxtaposition of the metaphor's referents can be represented in a conceptual semantic space by rules of construal.
These rules describe the semantic changes induced by the comparison component of analogical reasoning. The meaning of metaphor is then a set of valid transferred similarities. Some similarities are pre-encoded as part of our general knowledge and are `discovered' through the transfer process, whereas other similarities are `created'. A taxonomy of conventional metaphorical themes is presented that supports the creativity of metaphor by means of similarities induced by metaphorical shifts and inherited through this taxonomy. %A Christopher A. Rath %T A rule language for the GLISP programming system %R Technical Report AI86-31 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1986 %O Master's thesis %X GLISP is a programming language written by Dr. Gordon Novak at the University of Texas at Austin. It is an object-oriented language based on LISP, a list processing language used in artificial intelligence programming. GLISP is often used in the university environment for the construction of expert system programs, yet it has no formal rule language. This thesis describes the design and implementation of a general rule language incorporated into GLISP. The project includes an investigation into currently implemented rule languages, an implementation of the rule language compiler in InterLisp, and sample applications. The resulting system is expected to be efficient, general purpose, and easy to learn for those programmers already familiar with GLISP. %A William R. Murray %T Talus: automatic program debugging for intelligent tutoring systems %R Technical Report AI86-32 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1986 %X This report summarizes the author's dissertation (AI86-27). %A Charles J.
Petrie %T New algorithms for dependency-directed backtracking %R Technical Report AI86-33 %I Artificial Intelligence Laboratory, University of Texas at Austin %D September 1986 %O Master's thesis %X The problem of providing dependency-directed backtracking in a Doyle-style Truth Maintenance System (TMS) is solved with three algorithms. The first modifies Doyle's basic algorithm by eliminating the necessity for conditional proof justifications. Unlike other attempts to do so, this method maintains the capability of the TMS to provide complete explanations from justifications. The second algorithm, which eliminates the generation of a maximal assumption set, provides a novel description of backtracking as a simple search. Finally, an extension of this algorithm allows contradiction resolution to be extended as an inference technique for expert systems. The dependency-directed backtracking algorithms presented are also the first to correctly ensure that an assertion is not justified unnecessarily and that unsatisfiable circularities are not created. %A Yow-Jian Lin %A Vipin Kumar %T An execution model for exploiting AND-Parallelism in logic programs %R Technical Report AI86-34 %I Artificial Intelligence Laboratory, University of Texas at Austin %D September 1986 %X This paper presents a parallel execution model for exploiting AND-parallelism in pure Horn Clause logic programs. The model is based upon the execution model of Conery and Kibler, and compares favorably with various related execution schemes (including Conery and Kibler's). We also present two implementation schemes to realize our model: in one, a coordinator controls the activities of processes solving different literals; in the other, processes achieve synchronization by passing messages to each other in a distributed fashion. Trade-offs between these two schemes are then discussed. %A Bruce W. Porter %A E.
Ray Bareiss %T PROTOS: an experiment in knowledge acquisition for heuristic classification tasks %R Technical Report AI86-35 %I Artificial Intelligence Laboratory, University of Texas at Austin %D August 1986 %X PROTOS is an experiment in acquiring, applying and revising expert knowledge for heuristic classification. Learned knowledge includes exemplar-based categories and domain knowledge to support explanation and pattern matching. Research on PROTOS has revealed fundamental issues in concept formation and classification concerning representation of ill-defined, ``fuzzy'' categories, multiple applications of learned knowledge, and the context in which learning takes place. This paper highlights these issues, describes the PROTOS approach to learning heuristic classification, and surveys related research. %A Vipin Kumar %A Yow-Jian Lin %T A framework for intelligent backtracking in logic programs %R Technical Report AI86-36 %I Artificial Intelligence Laboratory, University of Texas at Austin %D November 1986 %X This paper presents a scheme for intelligent backtracking in Horn-clause logic programs. The scheme is simple and yet effective for backtracking within a clause. We also present a framework for using extra analysis to make within-clause backtracking even more intelligent and also to perform across-clause backtracking intelligently. The primary strength of our scheme over other schemes is that it incurs very small overhead and yet can eliminate a great deal of redundant backtracking. Our backtracking scheme can also be used when AND-parallelism is exploited in logic programs (i.e., when multiple literals of a clause are executed simultaneously). %A Bruce W.
Porter %T A review of the first international meeting on advances in learning %R Technical Report AI86-37 %I Artificial Intelligence Laboratory, University of Texas at Austin %D November 1986 %O To appear in Machine Learning, Kluwer Academic Publishers, 1987 %X This report is a review of the First International Meeting on Advances in Learning (IMAL) held the week of July 28, 1986 in Les Arcs, France. %A Chin Yang Chee %T Intuitionistic modelling and simulation of mechanical devices %R Technical Report AI86-38 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1986 %O Master's thesis %X The objective of this thesis is to explore and develop representational schemes for both the textual and pictorial description of a mechanical device (e.g. an engine) in order to produce an explanation of the device's operation by generating an animation of the device while highlighting the part of the text being simulated. For this purpose we have built a prototype which includes a highly interactive and hierarchical 2-D graphics editor for constructing and editing devices, and a semantic parser which parses the text into events of the device operation, which are then used as a "script" for animating the device. %A Kim Korner %T An intelligent remote file server %R Technical Report AI86-39 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1986 %O PhD dissertation %X Limitations of current disk block caching strategies are discussed. A new model for providing remote file service using knowledge-based caching algorithms is proposed. The knowledge-based algorithms generate expectations of user process behavior which are used to provide hints to the file server. Surplus resources of the remote file server permit incorporation of these hints into caching algorithms. The research involves gathering trace data from a modified Unix kernel and trace-driven simulation of remote file server models.
Comparisons are made between conventional, knowledge-based and optimal models. Further applications of knowledge-based strategies in operating systems are discussed. %A Wanying Jin %A Robert F. Simmons %T Question answering with rhetorical relations %R Technical Report AI86-40 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1986 %X A method is presented for computing discourse relations among text sentences and using the resulting data to generate new sentences to enrich a text knowledge base for the purpose of improving question answering. The coherence relations defined by Hobbs and by Lockman and Klappholz, and the event coherence relations developed by Alterman, are employed as a theoretical basis. Rhetorical sentences, which enrich the knowledge base, are derived from the discourse relations. The resulting text knowledge base is able to answer questions that would otherwise fail to match the text. %A Vipin Kumar %A Yow-Jian Lin %T An intelligent backtracking scheme for Prolog %R Technical Report AI86-41 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1986 %X This paper presents a scheme for intelligent backtracking in Prolog programs. This scheme incurs a small overhead and yet can eliminate a great deal of redundant backtracking. The main features of our scheme are as follows. Overhead due to intelligent backtracking occurs only when a goal fails. Upon failure of a goal, our scheme invokes some extra work to select a backtrack point, but unlike the overhead in many other approaches, this overhead is not very large. To demonstrate the usefulness of our scheme, we have modified the PLM level I simulator to incorporate our intelligent backtracking scheme, and have investigated its performance on a number of problems. %A Hae-Chang Rim %A Robert F.
Simmons %T Extracting data base knowledge from medical text %R Technical Report AI86-42 %I Artificial Intelligence Laboratory, University of Texas at Austin %D December 1986 %X Relational database knowledge can be extracted automatically from appropriate sublanguage texts. This paper shows the feasibility of translating a particular sublanguage text concerning cold symptoms and treatments into semantically uniform propositional forms, and thus suggests a possible method for reducing the number of inference rules needed to relate a text to a question. %A Daniel P. Miranker %T A survey of specialized parallel architectures designed to support knowledge representation %R Technical Report AI87-43 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1987 %X While modern data processing systems have evolved from arithmetic calculating machines, knowledge representation schemes, developed for artificial intelligence programs, have evolved from cognitive models of intelligent behavior. Instead of the basic arithmetic operations used in data processing, knowledge representation schemes require extensive set manipulation, pattern matching and graph-theoretic abilities. Thus, the implementation of artificial intelligence programs on conventional computers has suffered from a mismatch of basic abilities. This paper surveys computer architectures that attempt to overcome this mismatch by developing computer organizations that intrinsically support the structure and basic operations required to manipulate artificial intelligence knowledge bases. Common to all these proposals is the introduction of large-scale parallelism in the form of many simple processing elements. %A W. W. Lee %A C. Chiu %A B. J.
Kuipers %T Developments towards constraining qualitative simulation %R Technical Report AI87-44 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1987 %X Qualitative simulation is a useful and powerful tool for causal reasoning about physical mechanisms. However, qualitative simulation algorithms can predict impossible behaviors. In working with Kuipers' QSIM algorithm, we developed an approach to identifying and eliminating spurious predictions. This approach addresses underlying shortcomings of qualitative simulation, which are due to a combination of locality and qualitative description. The differential equation describing a mechanism is analyzed to produce constraints which eliminate spurious predictions. This paper describes the application of this approach to the simulation of a damped spring. One local and two global constraints eliminated all spurious behaviors in simulating one period of oscillation. Some implications are discussed. %A V. Nageshwara Rao %A V. Kumar %A K. Ramesh %T Parallel heuristic search on a shared memory multiprocessor %R Technical Report AI87-45 %I Artificial Intelligence Laboratory, University of Texas at Austin %D February 1987 %X In this paper we discuss two different ways of parallelizing the A* state space search algorithm for shared memory multiprocessors. We implement these parallel algorithms to solve the 15-puzzle problem and the TSP problem on the Sequent Balance 8000, a shared memory multiprocessor, and present performance results. The results are very encouraging, as we are able to obtain speedups of as much as 7 on 8 processors. Since A* is a very commonly used heuristic search algorithm, we expect these parallel versions to be effective for a variety of problems. %A V. Nageshwara Rao %A Vipin Kumar %A K.
Ramesh %T A parallel implementation of Iterative-Deepening-A* %R Technical Report AI87-46 %I Artificial Intelligence Laboratory, University of Texas at Austin %D February 1987 %X This paper presents a parallel version of the Iterative-Deepening-A* (IDA*) algorithm. Iterative-Deepening-A* is an important admissible algorithm for state-space search which has been shown to be optimal both in time and space for a wide variety of state-space search problems. Our parallel version retains all the nice properties of the sequential IDA* and yet does not appear to be limited in the amount of parallelism. To test its effectiveness, we have implemented this algorithm on the Sequent Balance 8000 parallel processor to solve the 15-puzzle problem, and have been able to obtain almost linear speedups on the 9 processors that are available on our machine. On machines with a larger number of processors, we expect that the speedup will still grow linearly. The parallel version seems suitable even for loosely coupled architectures such as the Hypercube. %A A. Bhattacharjee %T Interpreting narratives with evaluative statements %R Technical Report AI87-47 %I Artificial Intelligence Laboratory, University of Texas at Austin %D January 1987 %O Master's thesis %X Discourse structure is one of the foremost areas of study in natural language. Discourse is concluded to be governed by structures that go beyond the sentence. %A A. Flatau %T A program for translating English sentences into Lisp programs %R Technical Report AI87-48 %I Artificial Intelligence Laboratory, University of Texas at Austin %D February 1987 %A L. K. Branting %T A prototype natural language understanding legal consultant %R Technical Report AI87-49 %I Artificial Intelligence Laboratory, University of Texas at Austin %D February 1987 %X Because of the intimate connection between legal reasoning and natural language understanding, a robust natural language interface is important for a useful automated legal reasoning system.
This project attempts to demonstrate how existing techniques of natural language understanding and discourse analysis could be used to construct a system able to interpret and resolve legal problems and express their solutions. The program uses frame instantiation both in text understanding and in legal analysis. %A E. R. Bareiss %A B. W. Porter %T A survey of psychological models of concept representation %R Technical Report AI87-50 %I Artificial Intelligence Laboratory, University of Texas at Austin %D March 1987 %X This paper surveys the psychological literature on human representation of object concepts. There are three major views of how people represent concepts: the Classical, Probabilistic, and Exemplar Views. The Classical View holds that a concept is defined by a collection of features which are singly necessary and jointly sufficient for classifying an object as an instance of the concept. The Probabilistic View relaxes the notion of a definition to make classification a probabilistic function of the subset of concept members' possible features which are present in an object. The Exemplar View holds that a concept is defined extensionally; past instances of a concept are retained for use in classification. Each of these views is discussed in turn; its strengths and weaknesses are pointed out, and major research in the spirit of that view is identified. The paper also provides an appendix which defines some of the terms used in this area of the psychological literature and discusses some important supporting ideas. %A V. Hwang %T The UTIPS image processing system %R Technical Report AI87-51 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1987 %X This document describes the UTIPS image processing software for the SYMBOLICS 3600 series Lisp machine, with an optional 8-bit color system, running the Genera 7.0 operating system.
UTIPS contains more than 40 image manipulation and image processing functions and is implemented in Common Lisp. %A G. Novak %T TMYCIN expert system tool %R Technical Report AI87-52 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1987 %X TMYCIN ("Tiny EMYCIN") is a simple expert system tool patterned after the EMYCIN tool developed at Stanford. TMYCIN does not attempt to provide all of the features of EMYCIN; it is intended to provide the most commonly used features in a package that is small and simple. The internal implementation of TMYCIN has been written from scratch and is therefore different from EMYCIN. %A E. R. Bareiss %A B. W. Porter %A C. C. Weir %T Protos: an exemplar-based learning apprentice %R Technical Report AI87-53 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1987 %X Building Protos, a learning apprentice system for heuristic classification, has forced us to scrutinize the usefulness of inductive learning and deductive problem solving. While these inference methods have been widely studied in machine learning, their seductive elegance in artificial domains (e.g., mathematics) does not carry over to natural domains (e.g., medicine). This paper briefly describes our rationale in the Protos system for relegating inductive learning and deductive problem solving to minor roles in support of retaining, indexing, and matching exemplars. The problems that arise from "lazy generalization" are described along with their solutions in Protos. Finally, an example of Protos in the domain of clinical audiology is discussed. (To appear in the Proceedings of the Fourth International Workshop on Machine Learning, University of California at Irvine, Morgan Kaufmann Publishers, 1987.) %A D.
Plummer %T Coda: an extended debugger for Prolog %R Technical Report AI87-54 %I Artificial Intelligence Laboratory, University of Texas at Austin %D April 1987 %X In this paper I describe Coda, an extension of the de facto standard debugger which presents more information about the execution of the program to the user as the program is debugged. Coda extends the standard debugger in a number of ways. First, Coda allows the user to interact with the pattern-matching computation step. Thus the reason for the failure of a particular goal may be more precisely determined by the programmer. Second, Coda displays the program trace in terms of the clauses of the program rather than the goals that are executed. Thus, the program trace is directly related to the program that was written, and is at a level more appropriate to the programmer than that of the standard debugger. Finally, Coda allows finer control over the information that is displayed by the debugger, through an extended command set and a more powerful language for describing ``spy points''. %A D. Dvorak %T Expert systems for monitoring and control %R Technical Report AI87-55 %I Artificial Intelligence Laboratory, University of Texas at Austin %D May 1987 %X Many large-scale industrial processes and services are centrally monitored and controlled under the supervision of trained operators. Common examples are electrical power plants, chemical refineries, air-traffic control, and telephone networks - all impressively complex systems that are challenging to understand and operate correctly. The task of the operator is one of continuous, real-time monitoring and control, with feedback. The job can be difficult when the physical system is complex (tight coupling and complex interactions). Also, there may be faults not only in the system but also in its sensors and controls. Deciding the correct control action during a crisis can be difficult; a bad decision can be disastrous.
This paper surveys existing work in the field of knowledge-based systems that assist plant/process operators in the task of monitoring and control. The goal here is to better define the information processing problems and identify key requirements for an automated operator assistant. A high-level design of an ``expert system for operators'' is presented. The design relies on a functional/causal model of the physical system as a source of deep knowledge (in addition to several sources of shallow knowledge). The major processes described in this design are focusing, model-building, tracking, envisioning and advising.