%A Don Libes %T A Guide to Applying to Graduate School %R Technical Memorandum DCS-TM-13 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X Introduction - As a senior at Rutgers, you should have some idea of what interests you. Start having serious conversations with yourself about what you want to do for the next few years of your life. If you liked school, liked the atmosphere, the people, the freedom to pick and choose projects to be involved with and the challenge of stimulating intellectual problems, graduate school should be heavily considered. In this sense, graduate school provides a similar environment to the one that you have just passed through though it is much more intense. You only take courses in your major. You are surrounded by people who think, talk and dream about Computer Science (and not just hacking, but research). .lp The advantages of a graduate degree in CS are almost guaranteed to include a better chance of choosing the job that you want or perhaps, to command a higher salary. "Satisfactory employment" of last year's Computer Science Ph.D. graduates in the United States is currently 100%. .lp Its drawbacks should be considered, too. Dedicating at least 4 years of your life to schooling is not necessary to your survival and it should be recognized that billions of people are alive today without the help of an advanced degree. If school life is not the life for you or you are sick and tired of studying, graduate school might not be the path to follow, at least in the immediate future. Putting off graduate school will also give you the opportunity to earn a nice pad of wealth that you probably wouldn't accumulate as a student. %A R. N. Goldberg %T Software Design Issues in the Architecture and Implementation of Distributed Text Editors %R Technical Memorandum DCS-TM-14 %I Rutgers University, Department of Computer Science %D July 1980 %D January 1982 %X Conventional text editors consist of a program which runs on a single host machine and utilizes the CPU, disk, and other resources of the host to process every character that the user types on his terminal. This arrangement has several serious drawbacks, namely the computational load on the host (CPU cycles and I/O interrupts), the response time seen by the user (delay in processing user keystrokes), and the communications cost. These undesirable aspects of editing files residing at a remote host have become important issues to users of text editors on large mainframe systems. The availability of cheap personal microprocessors suggests that they be used to help solve the problems. Thus the motivation for the distributed editor capabilities to be proposed in this work results from the insufficiency of existing editing facilities on timeshared computers as well as from the trend toward personal computers. %A R. M. Keller %A D. J. Nagel %T Some Experiments in Abstraction of Relational Characteristics %R Technical Memorandum DCS-TM-15 %I Rutgers University, Department of Computer Science %D March 1981 %D January 1982 %X Two experiments performed in knowledge-based inference are discussed in this paper. The experiments are directed at abstracting structural regularities and patterns inherent in a database of binary relations. A novel graph representation to facilitate abstraction is used in approaching some classical problem areas. 
This representation is compact and powerful, and an efficient algorithm has been developed to help control the exhaustive nature of certain types of inductive problems. .lp One area of experimentation concerns the discovery of intensionally definable relations in a family database. Another is the recognition of alphabetic characters using directional relations defined for points on a grid. Within a test bed system, KBLS, a scheme for computing abstractions is briefly summarized, and implications for future extensions are discussed in light of experimental results. %A Liben Xu %T Solving the Plane Geometry Problem by Learning %R Technical Memorandum DCS-TM-16 %I Rutgers University, Department of Computer Science %D March 1983 %X The top-down technique for solving a geometry problem is described. The top-down method uses "general rules," which are obtained by learning. This report focuses on general heuristics to obtain the general rules for solving a geometry problem. %A R. N. Goldberg %T Using BRIGHT for Maintaining Class Rosters and Grades %R Technical Memorandum LCSR-TM-1 %I Rutgers University, Department of Computer Science %D July 1980 %D January 1982 %X This document describes an application of BRIGHT version 3, an interactive data management system designed and implemented by the author. The development of BRIGHT version 3 was supported by the Biotechnology Resources Program, Division of Research Resources, National Institutes of Health, Department of Health, Education, and Welfare, under Grant 5 P41 RR00643 to the Special Research Resource: Computers in Biomedicine at Rutgers University, New Brunswick, New Jersey. %A Paul E. Utgoff %T Acquisition of Appropriate Bias for Inductive Concept Learning Dissertation Proposal %R Technical Memorandum LCSR-TM-2 %I Rutgers University, Department of Computer Science %D May 1982 %D October 1982 %X The inductive concept learning task is to observe examples and counterexamples of the concept and, based on these observations, to inductively infer the correct classifications for the unobserved instances in the domain. This inductive inference process must be guided by some form of information which we call bias. The proposed thesis is that, based on information available to the concept learner, bias which is needed to guide the inductive inference process in concept learning can be acquired during concept learning. The overall approach is to represent bias as a structure which can be manipulated, specifically as an incomplete concept description language. By forcing such an equivalence between bias and the concept description language, bias can be manipulated by revising the concept description language. For the concept attainment task, Mitchell's Candidate Elimination Algorithm will be employed because it requires that bias be represented as an incomplete concept description language. For the concept formation task, we propose new methods which can be used to revise the vocabulary of concepts to be considered during concept attainment. %A M. Dowd %T Gaussian Elimination Runs in Polynomial Time %R Technical Memorandum LCSR-TM-3 %I Rutgers University, Department of Computer Science %D October 1982 %D November 1982 %X We show that the ordinary Gaussian elimination algorithm runs in polynomial time on a Turing machine. We do this by obtaining an explicit expression for the matrix entries at any stage of the procedure. %A John A. Roach %A Jeffrey S.
Shulman %T VEDIT Users Manual %R Technical Memorandum LCSR-TM-4 %I Rutgers University, Department of Computer Science %D February 1983 %D April 1983 %X VEDIT is a graphics based editor for VLSI circuits. It is based on CIF which defines symbols that contain objects (currently only rectangles) on a given mask layer and/or possible calls to other symbols (the actual internal representation is CIF-like but not CIF, see section 2.1 on how to convert a CIF file to internal representation). %A D. J. Nagel %T An Experiment in Extracting Some Properties of Binary Relations %R Technical Report CBM-TM-84 %I Rutgers University, Department of Computer Science %D April 1980 (revised January 1982) %X A method of overlaying training instances generated from a database in a graph representation is used to learn definitional information about the relations in the database. An experiment is described where algebraic properties of relations such as functionality and symmetry are discovered in a simple family database using this method. Once these properties are discovered the capability for making inferences in the database is extended. The efficiency of answering queries may be improved and new insights into the data may be developed. %A D. J. Nagel %T Some Considerations on Extracting Definitional Information about Relations %R Technical Report CBM-TM-85 %I Rutgers University, Department of Computer Science %D April 1980 (revised January 1982) %X Several of the current systems in Artificial Intelligence are represented in binary relational databases and rely on the semantics of relations as a source of knowledge for information retrieval. Examples of these systems include those developed by Lindsay [5,6], Raphael [10], Elliott [2], Brown [1], and Sridharan [11]. In these systems inferences can be made from a set of properties specified for each relation. Inferences can also be made from specified associations between relations. One interesting aspect is the degree to which making these inferences can be automated. Some methods are proposed in this paper for using machine learning to extract relational properties and recognize semantic ties between relations so that this definitional information will not have to be prespecified. In some cases these methods may not technically be categorized as learning because they primarily involve summarization. It is also difficult to pin down what is encompassed by semantics. However, this paper discusses concepts of learning, and the presentation is directed at capturing semantics. Extracting definitional information or more broadly, learning semantics of relations, provides a base for the study of interesting databases. This could be done in a symbiotic system where the interaction between the researcher and the system provides a means for improving the performance of the system in general and obtaining new insights in the scientific data. It could also be coupled with a system for automatic theory formation. Presently, applications using semantics of relations for making inferences have been most successful in areas where properties and relationships are well understood such as kinship relations. %A N. S. 
Sridharan %T Representational Facilities of AIMDS: A Sampling %R Technical Report CBM-TM-86 %I Rutgers University, Department of Computer Science %D May 1980 (revised January 1982) %X The quest for fundamental and general mechanisms of intelligence, especially problem solving and heuristic search techniques, that guided early research in Artificial Intelligence has given way in the last decade to the search for equally fundamental and general methods for structuring and representing knowledge. This is the result of the realization that a duality exists between knowledge and search: Knowledge of the task domain can abbreviate search and search thru a problem space can yield new knowledge. AIMDS is one of the recently developed systems which permits experimentation with knowledge representation in the course of building an AI program. %A C. F. Schmidt %T The Role of Object Knowledge in Human Planning %R Technical Report CBM-TM-87 %I Rutgers University, Department of Computer Science %D June 1980 (revised January 1982) %X AI research on planning provides an important reference point from which the cognitive psychologist can build an understanding of human planning. It is argued that the human planning context differs from this reference point due to the incomplete knowledge that persons typically possess about the situation within which the plan will be executed. Various types of general functional knowledge about objects are then defined. This knowledge serves as a source of default assumptions for use in the planning process, and thus allows planning to continue despite the absence of complete knowledge of the planning situation. However, such assumption-based expectations must be tested. From this point of view, planning must also include a process for a kind of hypothesis testing and plan revision. The implications of this claim are briefly discussed. %A Saul Amarel %T Initial Thoughts on Characterization of Expert Systems %R Technical Report CBM-TM-88 %I Rutgers University, Department of Computer Science %D September 1980 (revised January 1982) %X Expertise in a given domain is commonly characterized by skillful, high performance, problem solving activity in the domain. An expert solves problems in a domain more rapidly, more accurately, and with less conscious deliberation about his plan of attack than a novice does. An excellent discussion of general characteristics of expert behavior appears in a recent article in ``Science'' by Larkin et al. .lp Expert behavior is equivalent to high performance problem solving behavior in a specific domain. It requires: knowledge of the domain, knowledge of problem solving schemas and methods, knowledge/experience about solution of specific problems in the domain with given methods, knowledge about special properties and regularities in the problem space, and highly effective ways of @u(using) all these bodies of knowledge in approaching the solution of new problems in the domain. Essentially, expert problem solving requires the conceptualization/formulation of a given problem within a framework wherein knowledge is embodied in definitions of states, moves, constraints, evaluation functions, etc. in such a way that solutions are attained with very little search. 
In other words, an expert problem solver works within a highly 'appropriate' problem representation: he describes situations and problem types within 'appropriate' conceptual frameworks, he specifies problem decompositions that minimize subproblem interactions, he often uses hierarchies of abstractions in his planning, he uses 'macromoves' where a novice would painstakingly have to piece together elementary moves, and he has rules for early recognition of unpromising as well as of promising developments. An expert problem solver behaves as if the great variety of knowledge sources needed for his solution-construction activities are available to him in a compiled procedural form. .lp Usually, expertise in a domain requires @u(problem solving experience) in the domain. One can be a scholar in a domain, and not an expert--if he does not know how to effectively @u(apply) domain knowledge to a variety of specific situations. Also, expertise implies a certain amount of robustness in performance--which means that it is not sufficient to know how to handle a few 'textbook' cases; it is important to be able to handle a broad range of variations. %A Saul Amarel %T Review of Characteristics of Current Expert Systems* %R Technical Report CBM-TM-89 %I Rutgers University, Department of Computer Science %D March 1981 (revised January 1982) %X This report does not cover all current work in the area of Expert systems. It is intended to introduce a set of dimensions for characterizing Expert systems and to describe some of the important Expert systems that are now in existence (or are under active development) in terms of these dimensions. .lp We have a dual purpose: (a) to illustrate via concrete examples the dimensions that are being introduced, and (b) to show what is the current state of the field from the perspective of this system of dimensions. .lp We are using here ten main dimensions, and an optional eleventh called ``Special Features'', which provides added flexibility for the presentation of relevant information about a system. Two of the main dimensions, Performance and Utility, are concerned with the quality of the system's behavior and the impact of the system on the domain of application and on AI. Another two dimensions are concerned with the system's scope, its ability to handle situations that are outside its area of major expertise, and its ability to improve: they are called Breadth, Intelligence, Robustness and Expertise Improvement ability. The remaining six dimensions are concerned with the type of tasks performed by the system, its structure and its means of interacting with users: they are called Task type, Main Method, Mode of Knowledge Representation, User Interface for main task, Explanation facilities and Reasoning under Uncertainty. .lp The systems considered are DENDRAL, CASNET/GLAUCOMA, MACSYMA, MYCIN, INTERNIST, PROSPECTOR and CRYSALIS. .lp *This report covers material which was prepared for inclusion in the Chapter 'What are Expert Systems' (co-authored with Ron Brachman, Carl Engelman, Robert Engelmore, Edward Feigenbaum and David Wilkins) of a book on Expert Systems which is currently under preparation; the book is based on the Rand Workshop on Expert Systems which took place in San Diego, California on August 25-28, 1980. %A John Karl Kastner %A Sholom M.
Weiss %T A Precedence Scheme for Selection and Explanation of Therapies %R Technical Report CBM-TM-90 %I Rutgers University, Department of Computer Science %D March 1981 (revised January 1982) %X A general scheme to aid in the selection of therapies is described. A topological sorting procedure within a general production rule representation is introduced. The procedure is used to choose among competing therapies on the basis of precedence rules. This approach has a degree of naturalness that lends itself to automatic explanation of the choices made. An expert system for planning therapies for patients diagnosed as having ocular herpes simplex has been implemented using this approach. An abstracted example of the system's output on an actual case is given. %A P. G. Politakis %A Sholom M. Weiss %T A System for Empirical Experimentation with Expert Knowledge %R Technical Report CBM-TM-91 %I Rutgers University, Department of Computer Science %D March 1981 (revised January 1982) %X An approach to the acquisition of expert knowledge is presented based on the comparison of dual sources of knowledge: expert-modeled rules and cases with known conclusions. A system called SEEK has been implemented to give the expert interactive advice about rule refinement. SEEK uses a simple frame model for expressing expert-modeled rules. The advice takes the form of suggestions of possible experiments in generalizing or specializing rules in the model. This approach has proven particularly valuable in assisting the expert in domains where two diagnoses are difficult to distinguish. Examples are given from an expert consultation system being developed for rheumatology. %A Casimir A. Kulikowski %A George A. Drastal %T Knowledge-Based Acquisition of Rules for Medical Diagnosis %R Technical Report CBM-TM-92 %I Rutgers University, Department of Computer Science %D October 1981 (revised January 1982) %X Medical consultation systems in the EXPERT framework contain rules written under the guidance of expert physicians. We present a methodology and preliminary implementation of a system which learns compiled rule chains from positive case examples of a diagnostic class and negative examples of alternative diagnostic classes. Rule acquisition is guided by the constraints of physiological process models represented in the system. Evaluation of the system is proceeding in the area of glaucoma diagnosis, and an example of an experiment in this domain is included. %A N. S. Sridharan %T AIMDS: Applications and Performance Enhancements %R Technical Report CBM-TM-93 %I Rutgers University, Department of Computer Science %D January 1982 %X AIMDS is a programming environment (language, editors, display drivers, file system) in which several programs are being constructed for modeling commonsense reasoning and legal argumentation. The main obstacle to realistic applications in these and other areas is system performance when the knowledge bases used are scaled up one or two orders of magnitude. The other obstacle is user performance resulting from the complexity of constructing and debugging large scale knowledge bases. This proposal argues that performance enhancement of AIMDS as a system is needed, that the usual solutions of software tuning have been exhausted, and that new hardware ideas fitted to the characteristics of the task need to be experimented with.
We adopt as important constraints: the requirement that existing programs should receive graded enhancement of performance, maintaining continuity of application programs; and that user programs should not reflect changing machine configurations or architectures. Redesign and recoding of AIMDS should provide the necessary opacity to the user. With these constraints in mind, we suggest interim solutions and long-term solutions. The interim solutions include: converting to large-address-space personal Lisp machines with bit-mapped graphics; fast coding of low-level functionalities via microprogramming. The long-term solutions include the building and testing of multiprocessors. The long-term solutions open up a number of rather difficult software and hardware research problems whose solutions depend upon having good facilities to experiment in the search for answers. %A B. Lantz %T The AIMDS Interactive Command-Parser %R Technical Report CBM-TM-94 %I Rutgers University, Department of Computer Science %D September 1982 %X Characters entered by the user are parsed immediately in order to provide interactive services to the user while he is entering commands. Services provided to the user include immediate verification of syntax, supplying the user with information about the correct syntax and semantics of a command, completion of long descriptive atom names, pretty printing the entered command, and defining special functions for selected characters. The parser accepts user-defined grammars, thus providing a useful command parser for a great variety of applications. %A B. Lantz %T The AIMDS On-Line Documentation Facility %R Technical Report CBM-TM-95 %I Rutgers University, Department of Computer Science %D September 1982 %X The documentation system for the AIMDS language is designed to be suitable for both beginning and expert users, and to be capable of serving the needs of a changing system such as AIMDS. The documentation must be quickly and easily updatable, and the updated information should be available to, and easily used by, a wide variety of users. .lp This paper is a short description of the documentation system for the AIMDS language. It includes a discussion of the considerations taken during the design of the documentation system, a description of the implemented system, and instructions for using the system for other documentation tasks. %A John A. Roach %A N. S. Sridharan %T Implementing AIMDS on a Multiprocessor Machine. Some Considerations %R Technical Report CBM-TM-96 %I Rutgers University, Department of Computer Science %D September 1982 (revised April 1983) %X As a possible long-term solution for performance enhancement of AIMDS, a Lisp-based multiprocessor system was proposed. Converting an existing AI knowledge-based system from the current uniprocessor environment into a multiprocessor-based regime is a largely unexplored research question. This report discusses some of the issues raised by such a proposal and attempts to evaluate some of the current models of parallel processing in regard to implementing an AIMDS-based system. An extensive bibliography with commentary is included. %A George A. Drastal %A Casimir A. Kulikowski %T Knowledge-Based Acquisition of Rules for Medical Diagnosis %R Technical Report CBM-TM-97 %I Rutgers University, Department of Computer Science %D January 1982 (revised November 1982) %X Medical consultation systems in the EXPERT framework contain rules written under the guidance of expert physicians.
We present a methodology and preliminary implementation of a system that learns compiled rule chains from positive case examples of a diagnostic class and negative examples of alternative diagnostic classes. Rule acquisition is guided by the constraints of physiological process models represented in the system. Evaluation of the system is proceeding in the area of glaucoma diagnosis, and an example of an experiment in this domain is included. %A R. Banerji %A Tom M. Mitchell %T Description Languages and Learning Algorithms: A Paradigm for Comparison %R Technical Report CBM-TR-107 %I Rutgers University, Department of Computer Science %D January 1980 %D January 1982 %K inductive inference, learning, generalization, description languages %X We propose and apply a framework for comparing various methods for learning descriptions of classes of objects given a set of training exemplars. Such systems may be usefully characterized in terms of their descriptive languages, and the learning algorithms they employ. The basis for our characterization and comparison is a general-to-specific partial ordering over the description language, which allows characterizing learning algorithms independent of the description language with which they are associated. Two existing learning systems are characterized within this framework, and correspondences between them made clear. %A ?? %T Plausible Inference in a Structured Concept Space %R Technical Report CBM-TR-108 %I Rutgers University, Department of Computer Science %D January 1982 %O (NOT ISSUED) %A N. S. Sridharan %A C. F. Schmidt %A J. L. Goodson %T The Role of World Knowledge in Planning %R Technical Report CBM-TR-109 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X Common-sense planning demands a rich variety of world knowledge. We have examined here the view that world knowledge can be structured to form the interface between a hierarchy of action types and a hierarchy of types of objects. World knowledge forming this interface includes not only the traditional statements about preconditions and outcomes of actions, but also the normal states of objects participating in the actions and normative actions associated with the objects. Common-sense plans are decomposed into goal-directed, preparation, and the normative components. This has heuristic value and may serve to simplify the planning algorithm. The algorithm invokes world knowledge for goal customization, action specification, computation of preconditions and outcomes, object selection, and for setting up subgoals. %A R. N. Goldberg %A Sholom M. Weiss %T An Experimental Transformation of a Large Expert Knowledge-Base %R Technical Report CBM-TR-110 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X An experiment is described in which a significant part of the INTERNIST knowledge base for diagnosis in internal medicine is translated into an EXPERT model. INTERNIST employs the largest and broadest knowledge base of all the medical consultation systems which have been developed in recent years. EXPERT is a general system for designing consultation models. The translated model shows reasonable competence in the final diagnostic classification of 431 test cases. There are differences in the internal representation and reasoning strategies of the two systems. 
However, this experiment demonstrates the feasibility of transferring knowledge between large-scale expert systems when a knowledge base has been encoded in a relatively uniform manner. %A J. L. Goodson %T A Process for Evaluating Tree-Consistency %R Technical Report CBM-TR-111 %I Rutgers University, Department of Computer Science %D June 1980 %D January 1982 %X General knowledge about conceptual classes represented in a concept hierarchy can provide a basis for various types of inferences about an individual. However, the various sources of inference may not lead to a consistent set of conclusions about the individual. This paper provides a brief glimpse at how we represent beliefs about specific individuals and conceptual knowledge, discusses some of the sources of inference we have defined, and describes procedures and structures that can be used to evaluate agreement among sources whose conclusions can be viewed as advocating various values in a tree partition of alternate values. %A V. Ciesielski %T A Methodology for the Construction of Natural Language Front Ends for Medical Consultation Systems %R Technical Report CBM-TR-112 %I Rutgers University, Department of Computer Science %D September 1980 %D January 1982 %X A methodology for constructing natural language front ends for Associational Knowledge type (AK-type) medical consultation systems is described. AK-type consultation systems use associational knowledge of the form "if A and B and C then conclude D with a weight of w" to perform diagnostic reasoning. It is shown that the knowledge needed to "understand" patient descriptions is not the associational knowledge in the consultation system but rather knowledge of structural relations and the way they are expressed in surface language. The two main structural relations involved are: (1) ATTRIBUTE of OBJECT = VALUE. Surface forms of this relation are variants and augmentations of the template "The X of Y is V". (2) OBJECT have-component COMPONENT. Surface forms of this relation are variants and augmentations of the template "The X has/contains/includes Y". This kind of knowledge can be represented in the Attribute-Component/Structured Object (AC/SO) package which was developed as part of this research. The AC/SO package is given a definition of the @u(concept) "PATIENT" for a disease area and the corresponding lexicon. %A P. G. Politakis %A Sholom M. Weiss %T Designing Consistent Knowledge Bases: A Knowledge Acquisition Approach to Expert Systems %R Technical Report CBM-TR-113 %I Rutgers University, Department of Computer Science %D September 1980 %D March 1982 %X This paper reports on an empirical evaluation approach to assist in the acquisition of expert knowledge for high performance knowledge-based consultation systems. A framework is provided by a general consultation system EXPERT and a tabular scheme for acquiring expert decision rules. Two levels of abstraction are identified in the knowledge acquisition process: logical analysis of the tables and expectation-driven analysis of the knowledge base. Possible ways of recognizing redundancy and "gaps" in knowledge within a table are illustrated. Tools are described which facilitate the recognition of potential problems in the knowledge base. These include the extraction of incorrectly diagnosed cases organized by their known conclusions; the grouping of rules with a tabulation of their utility on all cases; and the determination of the impact of rule changes on all cases.
Possible approaches to learning and modifying rules in the knowledge base are described. %A Tom M. Mitchell %A Paul E. Utgoff %A R. Banerji %T Learning Problem-Solving Heuristics by Experimentation %R Technical Report CBM-TR-114 %I Rutgers University, Department of Computer Science %D September 1980 %D January 1982 %X This paper concerns building heuristic problem solvers capable of improving their performance through experience. Specifically, we consider improving performance by learning domain-specific heuristics for directing the problem solving search. Each heuristic characterizes conditions under which a specific operator will be useful in leading to problem solutions. .lp The discussion takes the form of a progress report on the design of a computer program, called LEX, to acquire problem solving heuristics by experimentation. LEX is designed to (1) propose practice problems, (2) attempt to solve them, (3) analyze the steps it performed in solving the problem, and then (4) formulate domain-specific heuristics designed to improve its performance on subsequent problems. The major results reported here are an initial design for LEX in the domain of symbolic integration, a hand trace illustrating its operation, and an analysis of the successes and failures of this initial design. %A C. F. Schmidt %T Plan Recognition and Revision: Understanding the Observed Actions of Another Actor %R Technical Report CBM-TR-115 %I Rutgers University, Department of Computer Science %D September 1980 %D January 1982 %X [Introduction] In Heider's (1959) classic work in cognitive social psychology, he developed the claim that a naive or commonsense theory of social causation underlies our understanding of the observed actions of others. Heider arrived at this conclusion largely by carefully considering the systematic ways in which linguistic terms that refer to persons and actions are used. More recently, philosophers, linguists, logicians, and computer scientists in the area of artificial intelligence have contributed to our understanding of the relations between the concepts that we use to describe behavior from an intentional point of view. This includes terms such as can, believe, know, want, and the various verbs of action. .lp These investigations alert us to the structure of intentional terms but tell us nothing about the processes that are involved in using these intentional concepts to understand and describe the actions of others. Process models of knowledge-directed cognitive tasks such as action understanding can be constructed, evaluated, and tested if we adapt the tools of AI research to this task. %A Donald E. Smith %T FOCUSER: A Strategic Interaction Paradigm for Language Acquisition %R Technical Report CBM-TR-116 %I Rutgers University, Department of Computer Science %D October 1980 %D January 1982 %X FOCUSER learns language: it acquires lexicon, syntax, and semantics from a cooperative teacher using a strategic interaction paradigm. The system uses knowledge of teaching/learning strategies to establish and maintain the foci of the interaction. This paper presents several focusing mechanisms and demonstrates the use of these mechanisms in a teaching/learning environment. %A Tom M.
Mitchell %T The Need for Biases in Learning Generalizations %R Technical Report CBM-TR-117 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X The ability to make an appropriate "inductive leap" when generalizing from a small set of @u(training) instances is possible only under a priori biases for choosing an appropriate generalization out of the many possible. Understanding the origins and justification of such biases is critical to further progress in the field of machine learning. The notion of an UNbiased learner is defined, then the notion of bias, its usefulness, and some classes of justifiable biases are considered. %A Saul Amarel %T Problems of Representation in Heuristic Problem Solving; Related Issues in the Development of Expert Systems %R Technical Report CBM-TR-118 %I Rutgers University, Department of Computer Science %D February 1981 %D January 1982 %X The problem of representation in problem solving is concerned with the choice of formulation of a problem for a system which is organized in accordance with a given problem solving schema. Key issues are: (a) understanding the relationships between such a choice and the efficiency with which the system can be expected to find a solution to the problem; (b) finding ways of choosing an 'appropriate' problem formulation -- from the point of view of minimizing the computational effort needed to construct a solution; and (c) finding ways of shifting from one problem formulation to another in a manner that increases problem solving performance. Progress in this difficult area has been stimulated in recent years by work on expert systems and by related research on theory formation and expertise acquisition. In this paper, a conceptual framework is presented for handling problem representations. The grammatical specification of solutions for a problem class plays an important role in this framework. A detailed analysis of representational shifts in a specific class of problems -- the Tower of Hanoi problems -- is presented in terms of the proposed framework. Ten formulations of this class of problems are considered in the context of production, reduction and relaxed reduction schemas of problem solving. Possible developmental paths for moving between these formulations are explored. The emphasis is on an analysis of the acquisition and transformation of various bodies of knowledge that are components of problem formulations, and on the characterization of problems encountered in trying to mechanize knowledge transformations that lead to problem formulations of increased power. %A N. S. Sridharan %A C. F. Schmidt %A J. L. Goodson %T Reasoning by Default %R Technical Report CBM-TR-119 %I Rutgers University, Department of Computer Science %D April 1981 %D January 1982 %X Plausible reasoning is used on many occasions when deductive reasoning fails. One form of such reasoning permits making assumptions that are consistent with what is known provided there is support for it. Support refers to the availability of a default rule whose prerequisites are true or believable. In this paper we relate a concept hierarchy to default theories, advance several ``principles of default rule formulation'' and discuss how these principles may be partially ordered by their ``permissiveness''. %A Bernard Nudel %A Paul E. 
Utgoff %T A Bibliography on Machine Learning %R Technical Report CBM-TR-120 %I Rutgers University, Department of Computer Science %D April 1981 %D January 1982 %X The following is a bibliography on machine learning. It is structured by having references labelled with pointers to one or more of a set of six categories from which we make a taxonomy of the field. We exclude, or de-emphasize, areas where extensive bibliographies already exist, such as induction of numerical discriminants (pattern recognition), the adaptive system approach to learning (within control theory), and data interpolation and approximation by curve fitting (within numerical analysis). %A Sholom M. Weiss %A Casimir A. Kulikowski %A R. Galen %T Developing Microprocessor-Based Expert Models for Instrument Interpretation %R Technical Report CBM-TR-121 %I Rutgers University, Department of Computer Science %D June 1981 %D January 1982 %X We describe a scheme for developing expert interpretive systems and transferring them to a microprocessor environment. The scheme has been successfully implemented and tested by producing a program for interpreting results from a widely used medical laboratory instrument: a scanning densitometer. Specialists in the field of serum protein electrophoresis analysis provided the knowledge needed to build an interpretive model using EXPERT, a general production rule system for developing consultation models. By constraining a few of the structures used in the general model, it was possible to develop procedures for automatically translating the model to a specialized application program and then to a microprocessor assembly language program. Thus, the model development can take place on a large machine, using established techniques for capturing and conveniently updating expert knowledge structures, while the final interpretive program can be targeted to a microprocessor depending on the application. Our experience in carrying out the complete process illustrates many of the requirements involved in taking an expert system from its early development phase to actual implementation and use in a real world application. %A N. S. Sridharan %A J. L. Goodson %A C. F. Schmidt %T A Research Strategy for Computational Studies of Event and Action Perception %R Technical Report CBM-TR-122 %I Rutgers University, Department of Computer Science %D July 1981 %D January 1982 %X An initial step towards an information processing theory of human event and action perception can be taken by defining the computational problem and constructing a framework in which plausible processes and representations can be explored. The problem of event and action perception is defined as moving from a sequence of "images," descriptions of objects and their positions, into representations of changes, movements, events, and actions. A computational framework for exploring various expectation-driven and data-driven process disciplines is proposed. This framework addresses two problems of choice posed to the human information processing system: 1) the @b[selection] of observations; and 2) the selection of @b[grain] or levels of representation. %A John L. Bresina %T An Interactive Planner that Creates a Structured, Annotated Trace of its Operation %R Technical Report CBM-TR-123 %I Rutgers University, Department of Computer Science %D September 1981 %D March 1982 %X This report presents a planning system which is capable of performing commonsense reasoning in incompletely specified real world domains.
The major focus is on the plan generation/extension component, PLANX10, which makes plausible assumptions, is capable of being restarted, and is guided by plan critics and/or interactively by the user. PLANX10 creates and uses an explicit structured, annotated trace of its planning process. %A R. Schwanke %T Common Virtual Arrays in PDP-11 Fortran: An Exercise in Software Engineering %R Technical Report CBM-TR-124 %I Rutgers University, Department of Computer Science %D October 1981 %D January 1982 %X DEC has added ``virtual arrays'' to RSX-11 Fortran-IV-Plus, as a means of allowing user programs to contain more than 32K words of code and data. However, these arrays may not be used in COMMON or EQUIVALENCE statements, which severely limits their usefulness. The Expert consultation system at Rutgers ran headlong into the 32K word limit when it was transported from a DECSystem-20 to a PDP-11/60 running RSX-11M. This provided the impetus to make virtual arrays more useful. I describe a simple program which, in combination with existing DEC software and a simple programming standard, makes virtual arrays behave as if they resided in an unnamed COMMON block of virtual storage. The mechanism imposes no additional run time overhead. This technique is an example of modifying a language by means of simple auxiliary processors, and of applying software engineering principles to a severely constrained program adaptation problem. %A N. S. Sridharan %T Representing Knowledge in AIMDS: Introduction using TAXMAN Examples %R Technical Report CBM-TR-125 %I Rutgers University, Department of Computer Science %D November 1981 %D January 1982 %O also listed as LRP-TR-12 %X This report begins with a quick overview of artificial intelligence and comes to focus on the idea of knowledge as the essence of how our intelligence operates. Representation of knowledge in a machine is the systematic expression of knowledge as symbol structures. The remaining sections illustrate the facilities of the language AIMDS for representing knowledge in a machine. Examples and illustrations are drawn from the TAXMAN application, so that you will see some representation of legal concepts and facts. The main representations are of objects, object classes, intension and extension of classes, class hierarchies, state changes, actions and patterns of actions forming action hierarchies. These are the elements of a "logical template" representation of those concepts of law that are precise and unambiguous. The report is aimed at a multi-disciplinary audience comprised of logicians, philosophers, lawyers, politicians and computer scientists, who gathered for the conference "Logica, Informatica, Diritto" [Logic, Informatics and Law], held in Florence, Italy, in April 1981. Therefore, I have tried to keep my presentation at a very simple level. My purpose will be served if I arouse enough interest in you that you will follow up the newly developing field of Artificial Intelligence. %A Saul Amarel %T Expert Behavior and Problem Representations %R Technical Report CBM-TR-126 %I Rutgers University, Department of Computer Science %D February 1982 %D January 1982 %X Expert behavior is characterized by high-performance problem solving behavior in a specific domain. Commonly, this type of behavior requires the conceptualization/formulation of a given problem within a highly 'appropriate' representational framework - where 'appropriateness' refers to computational efficiency, and it is task specific.
The problem of how to choose an 'appropriate' problem formulation and how to change it to fit the characteristics of a task is at the heart of the problem of problem representations in AI. In this paper, relationships between characteristics of 'intelligent expert' systems and developmental processes of problem reformulation are discussed. The nature of problem reformulation sequences is explored in the context of expertise acquisition in the domain of Tower of Hanoi problems. Our work suggests that 'intelligent expert' systems must have available large amounts of knowledge of various kinds - both domain independent and domain specific; they must have the ability to maintain several alternative formulations of a problem class and to move among them in a flexible manner as required by the specific problem on hand; and they must be able to perform a variety of theory formation and program synthesis tasks in order to improve their performance with experience. %A N. S. Sridharan %A John L. Bresina %T Plan Formation in Large, Realistic Domains %R Technical Report CBM-TR-127 %I Rutgers University, Department of Computer Science %D March 1982 %X In this paper we focus on the engineering issue of how a planning system may be organized to work effectively in large and realistic task domains. We consider three aspects of this issue: (i) generating and manipulating large and complex plans, (ii) handling large and complex knowledge bases, and (iii) operating with an incomplete knowledge base and/or world model. A knowledge description formalism is introduced, which allows a rule base to be well-structured for efficient retrieval by inheritance. An interactive planning system, PLANX10, is described which builds an explicit plan graph that allows interactive guidance, plan refinement, and revision. %A N. S. Sridharan %A John L. Bresina %A C. F. Schmidt %T Evolution of a Plan Generation System %R Technical Report CBM-TR-128 %I Rutgers University, Department of Computer Science %D September 1982 %X We begin by discussing the role of a plan generation system in a plan recognition process. A planning system in this context should be able to plan robustly despite the absence of specific plan-relevant knowledge, making assumptions where appropriate, be capable of revising plans, and be able to make no more commitments to specifics than needed. Three plan generators (PGEN, PLANX10, and PLANX10-D), forming stages in the evolution, are described and critiqued. The sequence moves toward plan representations supporting greater flexibility and modularity of the planner's control structure, containing appropriate information (annotations, constraint descriptions and ordered description spaces) to enable plan revision. Out of this emerges a set of uniform plan modification operators usable for plan generation as well as plan revision. We then briefly discuss plan execution monitoring and revision within this framework. %A N. S. Sridharan %T A Future for Knowledge Representation Systems %R Technical Report CBM-TR-129 %I Rutgers University, Department of Computer Science %D September 1982 %A P. G. Politakis %T Using Empirical Analysis to Refine Expert System Knowledge Bases %R Technical Report CBM-TR-130 %I Rutgers University, Department of Computer Science %D January 1983 %D November 1982 %X This thesis describes a system that provides a unified design framework for building and empirically verifying an expert system knowledge base.
SEEK is a system which gives interactive advice about rule refinement during the design of an expert system. The advice takes the form of suggestions for possible experiments in generalizing or specializing rules in a model of reasoning rules produced by an expert. Case experience, in the form of stored cases with known conclusions, is used to interactively guide the expert in refining the rules of a model. This approach is most effective when a model of the expert's knowledge is relatively accurate and small changes in the model may improve performance. The system is interactive; we rely on the expert to focus the system on those experiments that appear to be most consistent with his domain knowledge. The design framework of SEEK consists of a tabular model for expressing expert-derived rules and a general consultation system for applying a model to specific cases. The system has been used in building large-scale expert medical consultation systems, with examples taken from an expert consultation model for the diagnosis of rheumatic diseases. %A Yao Yuchuan %A Casimir A. Kulikowski %T Multiple Strategies of Reasoning for Expert Systems %R Technical Report CBM-TR-131 %I Rutgers University, Department of Computer Science %D October 1982 %D November 1982 %X In expert systems the heuristics used for combining the weight of evidence can be based on probabilistic, fuzzy set, or subjective confidence factors. Although the underlying assumptions for each of the methods differ, it can be shown that there are correspondences between them and that it is possible to develop a model of expert reasoning for medical consultation using any one of the methods. We have developed a system for representing expert knowledge, called ESMES which is an outgrowth of the EXPERT scheme developed earlier at Rutgers. ESMES allows the use of alternative strategies in the solution of a consultation problem. In this paper we report on the performance of ESMES for a prototype glaucoma consultation model, using reasoning mechanisms similar to those of the EXPERT, MYCIN, INTERNIST I, and PROSPECTOR systems. %A John Karl Kastner %A Sholom M. Weiss %A Casimir A. Kulikowski %T Treatment Selection and Explanation in Expert Medical Consultation: Application to a Model of Ocular Herpes Simplex %R Technical Report CBM-TR-132 %I Rutgers University, Department of Computer Science %D September 1982 %D November 1982 %X This paper describes a general scheme for use in expert consultation systems to aid in the selection of therapies. The scheme consists of a topological sorting procedure within a general production rule representation. The procedure is used to choose among competing therapies on the basis of precedence rules. This approach has a degree of naturalness that lends itself to automatic explanation of the choices made. An expert system for planning therapies for patients diagnosed as having ocular herpes simplex has been implemented using this approach. Examples of the system's output on actual cases are given. %A N. S. Sridharan %A John L. Bresina %T Knowledge Structures for a Modular Planning System %R Technical Report CBM-TR-133 %I Rutgers University, Department of Computer Science %D July 1983 %X In this paper, we discuss knowledge structures used in a modular system for user-interactive planning in realistic task domains. A theme which is reflected throughout the paper is the policy of developing common representations useful in many modules of the planning system. 
We believe these representations will be useful in other Artificial Intelligence systems with similar demands on knowledge and reasoning skills. We present uniform representations for (i) descriptions of objects and actions, including partial and indefinite descriptions, (ii) world knowledge of task domains, (iii) a trace of the planning process which includes alternate solutions, and (iv) an organizational mechanism for both the world knowledge and the knowledge gathered during the planning process (e.g., the constraints among the objects and actions contained in a plan solution). %A N. S. Sridharan %A John L. Bresina %T A Mechanism for the Management of Partial and Indefinite Descriptions %R Technical Report CBM-TR-134 %I Rutgers University, Department of Computer Science %D July 1983 %X Schemes of knowledge representation are designed to represent definite and indefinite descriptions. We relate here our experience in using partial and indefinite descriptions of actions and objects arising in a planner. We identify three sources of indefinite descriptions: (i) the need to refer to objects and actions that are yet unrealized (even in a complete world model), (ii) the incompleteness of the world model, and (iii) the need for the planner to postpone choices by expressing constraints on the selection of objects and the sequencing of actions. We demonstrate a uniform mechanism for the representation of descriptions that allows computing referents, maintaining co-references, and detecting inconsistent indefinite descriptions. The mechanism allows the grouping of object and action descriptions, structured by @i[specificity] and equivalence relationships. It is hoped that our experience may be valuable to anyone concerned with knowledge representation, whether it is applied to planning, natural language processing, knowledge acquisition, or expert system engineering. %A Saul Amarel %T Program Synthesis as a Theory Formation Task -- Problem Representations and Solution Method %R Technical Report CBM-TR-135 %I Rutgers University, Department of Computer Science %D October 1983 %X This paper is concerned with theory formation processes in the context of a program synthesis task. The problem is to synthesize a program, in a given programming language, that satisfies a given set of input-output data associations in the domain of finite partially ordered structures. The associations are presented in a data-space, and the possible solutions/programs, i.e., the programming language, define a program-space. Several formulations of the theory formation problem are presented. The movement to formulations of higher effectiveness involves structuring of program-space in a way that facilitates the establishment of links with data-space, and increasing the amount of reasoning that takes place in data-space. The role of models (algebraic, geometric) is extremely important in the strong problem formulations. Three main approaches are discussed. The first two were developed in previous work: one is based on heuristic hill climbing in program-space, and the second is more goal-oriented, and it involves "navigation" in program-space under the guidance of an algebraic model. Work on the third approach is more recent, and it is based on detailed reasoning in data-space. Two methods are presented in connection with the data-driven approach: a combination method and an elimination method.
By shifting the reasoning towards the data-space, several interesting domain-dependent problems of concept specialization and generalization are encountered. Basic AI issues that are identified, and on which more work is needed, include: the formation of macromoves in "appropriate" representations of program-space and data-space; development of methods for combining "partially correct" programs, and for modifying "almost correct" programs; and exploring the interplay between the choice of a domain of a theory and the formation of an "interesting" theory for the domain. %A John Karl Kastner %T Strategies for Expert Consultation in Therapy Planning %R Technical Report CBM-TR-136 %I Rutgers University, Department of Computer Science %D January 1984 %X This thesis deals with the problem of designing tools for building consultation systems for therapy planning. In the past, several therapy planning systems have been developed for which the necessary code was specially crafted for a particular problem or a particular domain. In contrast, this work presents a set of procedures that have been found to be useful in the development of consultation models for several different problems in several different domains. In addition, the procedures are applicable to a general class of problems known as "classification problems". %A Allen Ginsberg %T Hardware Fault Diagnosis & Expert Systems %R Technical Report CBM-TR-138 %I Rutgers University, Department of Computer Science %D May 1984 %X Recent research in Expert Systems has begun to deal with problem domains that do not fit into the "classification problem" mode, the latter being the sort of problems that have been most amenable to Expert System technology. Hardware Fault Diagnosis (HFD) is an example of such a problem domain. Problems in HFD typically involve a "localization" problem as a component, i.e., *where* is the location of the fault? This paper takes a critical look at some current work in HFD, viz. that of Genesereth and Davis, with a view towards determining the differences between classification and localization problems that are likely to necessitate new approaches to knowledge representation and acquisition if Expert Systems are to be successful in such a domain. %A Allen Ginsberg %T Localization Problems and Expert Systems %R Technical Report CBM-TR-139 %I Rutgers University, Department of Computer Science %D May 1984 %X Expert systems approaches to problem solving have recently had enormous success and influence in the field of AI. The most successful of these systems tend to deal with a certain kind of problem which has been called "classification problems." Very recently, we have seen the emergence of a number of expert systems that deal with a different category of problem, a category that I will call "localization problems." The purpose of this paper is to characterize this class of problems, contrast it with the classification problem category, give some examples of localization problems, and suggest some new avenues for expert system research dealing with problems in this category.
%A Allen Ginsberg %T Investigations in the Mathematical Theory of Problem Space Representations and Problem Solving Methods %R Technical Report CBM-TR-140 %I Rutgers University, Department of Computer Science %D May 1984 %X In this paper I address the issue of how a system that has the ability to do problem solving and planning - in the sense of being in possession of generalized schemas or templates for carrying out these activities - can know whether a particular type of planning or, if you will, problem solving strategy, is a "good" one to employ in solving problems in a particular domain. It seems to me that, in general, in order to make such judgements in a reasonable fashion a problem solver must either be in possession of some general theoretical facts concerning the nature and structure of problem types, i.e., a theory of problem types, or at the very least, have been programmed by someone having such a theory. This paper is a step in the direction of constructing such a theory. .lp The structure of the paper is as follows. First I discuss the nature of problem solving and planning in general, and give a preliminary description of a particular planning template. Next I describe and illustrate a mathematical framework within which one can formulate problem representations. Finally I deal with the question of what facts about the structure of a problem representation are relevant to the determination of whether or not the aforementioned planning template is applicable to the problem at hand. %A Allen Ginsberg %T Representation & Problem Solving: Theoretical Foundations %R Technical Report CBM-TR-141 %I Rutgers University, Department of Computer Science %D May 1984 %D October 1984 %X The word "representation" and its cognates are probably the most popular words in AI today. If anything qualifies as "the fundamental assumption of AI," it is probably the view that intelligence is essentially the ability to construct and manipulate symbolic representations of some "reality" in order to achieve desired ends. Furthermore, probably every researcher in AI would agree that the key to AI's success lies with the general area known as "knowledge representation." This point of view has been buttressed not only by the failures of early "general purpose" AI systems, but much more so by the recent success of expert systems. The philosophy behind the expert systems approach is one that has, rightfully, come to infect the entire field of AI: intelligence essentially depends upon the ability to represent and store a potentially vast amount of knowledge in ways that enable it to be easily accessed and utilized in the performance of various tasks. The key concept here is ``representation''. .lp Given the fact that AI has come to embrace these doctrines, and the likelihood that there is a good deal of truth in them, it is incumbent upon us to examine their foundations, for better or for worse. It would be nice to have answers to questions such as: What is a representation? When are two or more representations representations of the same or different real world situations? What are the ways in which representations can be "manipulated"? It would be even nicer if the answers to such questions were provided by a general formal theory of representation. In this paper I attempt to lay some of the groundwork for such a theory, with emphasis on the role of representation in problem solving.
%A Allen Ginsberg %T A Model for Automated Theory Formation for Problem Solving Systems %R Technical Report CBM-TR-142 %I Rutgers University, Department of Computer Science %D May 1984 %X The goal of this paper is to contribute towards the understanding and eventual mechanization of the processes whereby an ``intelligent'' problem solver learns to improve its performance in a given task domain by formulating and using theories regarding that domain. In order to achieve this goal it is necessary for us, as designers of such a system, to have a fairly good idea of a) the various sorts of knowledge that are required for a problem solver to acquire new knowledge that will hopefully improve performance, and of b) how each of these types or sources of knowledge comes into play in this process. In this paper I give an abstract description of the domains of knowledge required for theory formation, and also illustrate the ideas with a concrete example. The type of system contemplated in this paper incorporates ways of structuring background knowledge that are natural and will, I believe, prove to be useful in designing self-improving AI programs. .lp In this paper I address the issue of how a system that has the ability to do problem solving and planning - in the sense of being in possession of generalized schemas or templates for carrying out these activities - can know whether a particular type of planning or, if you will, problem solving strategy, is a "good" one to employ in solving problems in a particular domain. It seems to me that, in general, in order to make such judgements in a reasonable fashion a problem solver must either be in possession of some general theoretical facts concerning the nature and structure of problem types, i.e., a theory of problem types, or at the very least, have been programmed by someone having such a theory. This paper is a step in the direction of constructing such a theory. .lp The structure of the paper is as follows. First I discuss the nature of problem solving and planning in general, and give a preliminary description of a particular planning template. Next I describe and illustrate a mathematical framework within which one can formulate problem representations. Finally I deal with the question of what facts about the structure of a problem representation are relevant to the determination of whether or not the aforementioned planning template is applicable to the problem at hand. %A Chidanand V. Apte %A Sholom M. Weiss %T A Knowledge Representation Framework for Expert Control of Interactive Software Systems %R Technical Report CBM-TR-143 %I Rutgers University, Department of Computer Science %D May 1984 %D October 1984 %X Expert problem solving strategies in many domains make use of detailed quantitative or mathematical techniques coupled with experiential knowledge about how these techniques can be used to solve problems. In many such domains, these techniques are available as part of complex software packages. In attempting to build expert systems in these domains, we wish to make use of these existing packages, and are therefore faced with an important problem: how to integrate the existing software, and knowledge about its use, into a practical expert system. We define a framework of a hybrid model for representing problem solving knowledge in such domains. A hybrid model consists of a ``surface'' and a ``deep'' model.
The surface model is the production rule-based expert subsystem that is driven by domain specific control and interpretive knowledge. The deep model is the existing software, reorganized as necessary for its interpretation by the surface model. We present an outline of a specialized form-based system for acquisition and representation of expert knowledge required for this hybrid modeling. %A Chidanand V. Apte %T A Framework for Expert Control of Interactive Software Systems %R Technical Report CBM-TR-144 (THESIS) %I Rutgers University, Department of Computer Science %D September 1984 %D October 1984 %X Expert problem-solving strategies in many domains require the use of detailed mathematical techniques coupled with experiential knowledge about how and when to use the appropriate techniques. In many of these domains, such techniques are made available to experts in large software packages. In attempting to build expert systems for these domains, we wish to make use of these existing packages, and are therefore faced with an important problem: how to integrate the existing software, and knowledge about its use, into a practical expert system. The expert knowledge is used, in dynamic selection of appropriate programs and parameters, to reach a successful goal in the problem-solving. This kind of expert problem-solving is achieved through two interacting bodies of knowledge: problem domain knowledge, and knowledge about the programs that comprise the software package. .lp This thesis describes the framework of a hybrid expert system for representing problem-solving knowledge in these domains. This hybrid system may be characterized as consisting of a ``surface'' model and a ``deep'' model. The surface model is a production-rule based expert subsystem that consists of heuristics used by an expert. The deep model is a collection of methods, each parameterized by a set of controlling and observed parameters. The methods and their results are reasoned about using their parameter sets. The existing software is reorganized as necessary to map it into the deep model structure of a hybrid system. This framework has evolved out of an effort to build an expert system for performing well-log analysis (ELAS - Expert Log Analysis System). A generalized expert-system building methodology based upon principles drawn from ELAS is introduced. The use of ``method-abstractions'' in assembling a hybrid system is discussed. The notion of ``worksheet-reasoning'' is defined and discussed. %A Paul E. Utgoff %T Shift of Bias for Inductive Concept Learning %R Technical Report CBM-TR-145 (THESIS) %I Rutgers University, Department of Computer Science %D October 1984 %X We identify and examine the fundamental role that bias plays in inductive concept learning. Bias is the set of all influences, procedural or declarative, that causes a concept learner to prefer one hypothesis to another. Much of the success of concept learning programs to date results from the program's author having provided the learning program with appropriate bias. To date there has been no good mechanical method for shifting from one bias to another that is better. Instead, the author of a learning program has himself had to search for a better bias. The program author manually generates a bias, from scratch or by revising a previous bias, and then tests it in his program. If the author is not satisfied with the induced concepts, then he repeats the manual-generate and program-test cycle.
If the author is satisfied, then he deems his program successful. Too often, he does not recognize his own role in the learning process. .lp Our thesis is that search for appropriate bias is itself a major part of the learning task, and that we can create mechanical procedures for conducting a well-directed search for an appropriate bias. We would like to understand better how a program author goes about doing his search for appropriate bias. What insights does he have? What does he learn when he observes that a particular bias produces poor performance? What domain knowledge does he apply? .lp We explore the problem of mechanizing the search for appropriate bias. To that end, we develop a framework for a procedure that shifts bias. We then build two instantiations of the procedure in a program called STABB, which we then incorporate in the LEX learning program. One, called "constraint back propagation", uses analytic deduction. We report experiments with the implementations that both demonstrate the usefulness of the framework, and uncover important issues for this kind of learning. %A Von-Wun Soo %A Casimir A. Kulikowski %A David Garfinkel %T A Framework for Representation of Expertise in Experimental Design for Enzyme Kinetics %R Technical Report CBM-TR-146 %I Rutgers University, Department of Computer Science %D May 1985 %X In this paper, we present part of our current research on expert systems in enzyme kinetics. Because of the richness and diversity of the problem solving knowledge required in this domain, we have found it to be an excellent vehicle for studying issues of knowledge representation and expert reasoning in AI. Biochemical experimental design, the focus of this paper, is a major problem solving activity of the enzyme kineticist that has not been explored by expert systems researchers. The kineticist's problem solving expertise can usually be described as the application of a sequence of methods. In designing a complicated biochemical experiment, the experimenter has several methods to choose from at any stage. These methods are represented as computer programs which can be organized into a hierarchy. This paper proposes a structure for these problem solving methods and an expert consultation system for experimental design. .lp We have found that problem solving expertise in experimental design can be divided into three phases. In the first phase, we deal with problems of selecting the experimental methods that satisfy an experimenter's goal, given certain postulated models. The experimental conditions and optimal design points can be derived if the model is given and the goal and the assumptions of the optimal design criterion are satisfied. In the second phase, we deal with the problems of preparing an enzyme assay. The interactions among experimental conditions and other influencing factors must be carefully controlled so that the correct concentration of a given species can be calculated. In the third phase, we face the problem of analyzing and interpreting the experimental data and recommending further refinement of the experiment. %A Sholom M. Weiss %A K. Kern %A Casimir A. Kulikowski %A Michael Uschold %T A Guide to the Use of the EXPERT Consultation System %R Technical Report CBM-TR-94 %I Rutgers University, Department of Computer Science %D May 1981 (Revised January 1982) %X EXPERT is a system for designing and applying consultation models. An EXPERT model consists of hypotheses (conclusions), findings (observations), and rules for logically relating findings to hypotheses.
Three phases of model development are outlined for users of the system. These include: the design of a decision-making model, compilation of the model, and consultation using the model. The facilities of the system are described, and examples of models and consultation sessions are presented. %A R. Vichnevetsky %T Energy and Group Velocity in Semi-Discretizations of Hyperbolic Equations %R Technical Report DCS-TR-100 %I Rutgers University, Department of Computer Science %D February 1981 %D January 1982 %X Propagation properties of a numerical semi-discretization of a hyperbolic equation are analyzed, using time-Fourier Transforms. It is shown that numerical solutions are of two types, corresponding to the two roots of a characteristic equation which is associated with the semi-discretization. The properties of those two types of solutions in terms of phase velocity, wavelength and group velocity are derived. While solutions of the first type converge to genuine solutions of the equation, solutions of the second type have group velocities opposite to the direction of flow, and are entirely spurious. %A R. Vichnevetsky %T Propagation Through Numerical Mesh Refinement for Hyperbolic Equations %R Technical Report DCS-TR-101 %I Rutgers University, Department of Computer Science %D February 1981 %D January 1982 %X Spurious reflection that occurs at the interface between coarse and fine mesh in the numerical finite-difference approximation of hyperbolic equations is analyzed. By using time-Fourier Transforms, a reflection ratio at the interface can be derived. It is then possible to describe exactly the reflection that occurs, in the form of a Fourier integral using this reflection ratio. Energy conservation and convergence rates are examined for a "standard" and a "modified" treatment of the interface point in the mesh refinement. %A Apostolos Gerasoulis %A R. P. Srivastav %T On The Solvability of Singular Integral Equations Via Gauss-Jacobi Quadrature %R Technical Report DCS-TR-102 %I Rutgers University, Department of Computer Science %D May 1981 %D January 1982 %X It is shown that the coefficient matrix obtained by the discretization of singular integral equations using Gauss-Jacobi quadrature formulae is non-singular for any n, if @ @ is not an eigenvalue of the equation. %A R. Vichnevetsky %A E. Sciubba %A Y. Pak %T Numerical Upstream Boundary Conditions that Reduce Spurious Reflection %R Technical Report DCS-TR-103 %I Rutgers University, Department of Computer Science %D August 1981 %D January 1982 %X It is commonly (but erroneously) assumed that the best way to treat upstream boundaries for hyperbolic equations is to let the numerical value be equal to the imposed value. What is erroneous in this assumption is that it ignores the presence of spurious numerical solutions which may have originated inside the computational domain and which may be present near the boundary. Such spurious solutions are characterized by short wavelength spatial oscillations, with a group velocity which is opposed to the direction of flow, and are therefore moving as "packets" toward the boundary. They are reflected by the "standard" treatment, whereas a better numerical treatment of the boundary should attempt to absorb them. .lp This paper describes two methods for the modified numerical treatment of upstream boundaries of hyperbolic equations, which are effective in absorbing those spurious solutions with a remainder which decreases as ?? and ?? respectively, where ?? is the frequency and ?? is the mesh size.
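The group-velocity argument invoked in DCS-TR-100 and DCS-TR-103 can be made concrete with a minimal worked example; this is a sketch assuming the standard centered-difference semi-discretization (the reports themselves treat a broader family of schemes). For the advection equation u_t + c u_x = 0 on a uniform mesh of size h, substituting a discrete sinusoidal wave gives a dispersion relation:
\[
\frac{du_j}{dt} + c\,\frac{u_{j+1}-u_{j-1}}{2h} = 0 ,\qquad
u_j(t) = e^{\,i(\xi j h - \omega t)} \;\Longrightarrow\;
\omega(\xi) = \frac{c\,\sin(\xi h)}{h}.
\]
The phase velocity is \(\omega/\xi = c\,\sin(\xi h)/(\xi h)\), but the group velocity is \(d\omega/d\xi = c\,\cos(\xi h)\), which changes sign for \(\xi h > \pi/2\): wave packets whose wavelength is shorter than 4h travel against the flow. These are exactly the spurious, upstream-moving components that the modified boundary treatments described above are designed to absorb.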
%A E. Seneta %A W. L. Steiger %T A New LAD Curve Fitting Algorithm: Slightly Overdetermined Equation Systems in L1 %R Technical Report DCS-TR-104 %I Rutgers University, Department of Computer Science %D August 1981 %D January 1982 %X We present a new algorithm for the discrete LAD curve-fitting problem for n points in k less than or equal to n dimensional space. When k is about n/3 it begins to out-perform the best current methods, and the advantage increases with k. The algorithm should be of interest in approximation theory and in robust regression. Moreover, our approach may be useful in linear optimization, especially in linear programming. %A R. N. Goldberg %T User's Guide for MEDIT, A Machine Independent Text Editor %R Technical Report DCS-TR-105 %I Rutgers University, Department of Computer Science %D March 1981 %D January 1982 %X A text editor is a program used to create and modify text files stored in the computer. Text editors are useful for creating and modifying computer programs, documentation, data files, letters, and many other kinds of text. .lp MEDIT is a text editor written in MAINSAIL. It can run on any computer for which a MAINSAIL implementation exists, and on any terminal type. Both with regard to the computer on which it runs and the terminal used, it is machine-independent. .lp The basic line editing commands have been adopted from the powerful SOS text editor for the PDP-10. The intraline editing capability of SOS has been extended in MEDIT to a video editor which keeps the screen of the terminal constantly updated to reflect the true state of the file as modifications are made. Video editing greatly increases user productivity, especially in editing of English and other unrepetitive text. On hard copy terminals and on unsupported CRT terminals, MEDIT cannot provide video editing, but it knows how to utilize full duplex capabilities to provide character by character modifications within a line of text. The commands for making changes within a line are the same for all terminals. .lp This document contains both an introductory user's manual and a reference manual for MEDIT. The introductory manual introduces the user to the editor while the reference manual provides complete, detailed specifications of the editor's features. The style of the introductory manual is informal to aid the learning process. For clarity, when examples of interaction with MEDIT are given, those parts that were typed by the user will be shown in italics while those parts typed by MEDIT will be in a more standard font. %A C. V. Srinivasan %T Knowledge Representation and Problem Solving in MDS %R Technical Report DCS-TR-106 %I Rutgers University, Department of Computer Science %D October 1981 %D January 1982 %X This work presents a new approach for using a first order theory to generate procedures for solving goal satisfaction problems without using general theorem proving. The core of the problem solving system has three basic components: an inferencing mechanism based on ``residues'', a control structure for "means-end" analysis that uses ``natural deduction'', and a generalization scheme that is based on the structure of statements in the domain theory itself. .lp The work represents a beginning in the development of knowledge based systems that can generate their own problem solving programs, evolve with experience and adapt to a changing domain theory. %A C. V.
Srinivasan %T Note on Learning in MDS Based on Predicate Signatures %R Technical Report DCS-TR-107 %I Rutgers University, Department of Computer Science %D October 1981 %D January 1982 %X This note illustrates a simple learning scheme in the context of two examples. In the first example the system learns the distinguishing features of the letters in the English alphabet, where each letter is described in terms of a relational system of features. In the second example the domain is a set of family relationships. In this case the system identifies invariant properties like "father.father=grandfather" that exist in the domain. .lp In both examples the system first creates an abstraction of the given set of relations and uses the abstraction to identify invariant (or distinguishing) features of the given set of relations. The abstraction scheme is based on the concept of "predicate signatures" that is described in the note. .lp The method is a general one. It can be used to identify large classes of invariant (or distinguishing) features of sets of objects where each object is described in terms of a set of relations that hold true for the object. %A Apostolos Gerasoulis %T Singular Integral Convergence of the Nystrom Interpolant of the Gauss-Chebyshev Method %R Technical Report DCS-TR-108 %I Rutgers University, Department of Computer Science %D October 1981 %D January 1982 %X Nystrom's interpolation formula is applied to the numerical solution of singular integral equations. For the Gauss-Chebyshev method, it is shown that this approximation converges uniformly, provided that the kernel and the input functions possess a continuous derivative. Moreover, the error of the Nystrom interpolant is bounded from above by the Gaussian quadrature errors and thus converges fast, especially for smooth functions. For C?? input functions, a sharp upper bound for the error is obtained. Finally, numerical examples are considered. It is found that the actual computational error agrees well with the theoretically derived bounds. %A Bernard Nudel %T Equations - The ``Improved Constraint Satisfaction Algorithms using Inter-Variable Compatibilities'' %R Technical Report DCS-TR-109 %I Rutgers University, Department of Computer Science %D December 1981 %D January 1982 %X This report addresses the problem of improving algorithms for solving ``consistent-labeling'' (also called ``constraint-satisfaction'') problems. The concept of ``compatibility'' between variables in such problems is introduced. How to obtain compatibilities analytically and empirically is discussed, and various compatibility-based heuristics (as well as some useful but less effective non-compatibility-based heuristics) are developed to improve a version of the Waltz algorithm which was found best of a set of consistent-labeling problem algorithms tested by Haralick. Empirical results with these heuristics are very encouraging, with over an order of magnitude improvement in performance with respect to the basic algorithm on a set of randomly generated consistent-labeling problems. %A R. N. Goldberg %T Software Design Issues in the Architecture and Implementation of Distributed Text Editors %R Technical Report DCS-TR-110 (Thesis) %I Rutgers University, Department of Computer Science %D December 1981 %D January 1982 %X This thesis analyzes the performance improvements achieved by distributing text editing software between a dedicated microprocessor and a timeshared host computer.
Theoretical analysis of the problem, coupled with detailed simulations of possible architectures using data collected from over 15,000 actual editing sessions, provides strong evidence that simple, easily implemented schemes produce significant improvements in response time to user commands, communications bandwidth requirements, and host CPU utilization. .lp Consisting of an editor server program running on a host and a local editor running on a microprocessor, a distributed editor would a) provide general, flexible editing capabilities that are functionally indistinguishable from those of conventional video editors, b) provide the same average response on 1200 baud communications lines as conventional editors running at 9600 baud, c) isolate the user from most delays due to host timesharing pauses and communications network slowdowns, and d) reduce considerably the number of program activations and CPU cycles at the host, making distributed video editors consume fewer host resources than conventional line editors. The implementation of a prototype distributed editor based on the conclusions of the study demonstrates the practicality of implementing the required software on existing, inexpensive microprocessors such as those found in conventional terminals. %A Apostolos Gerasoulis %T Nystrom's Interpolation Formula in the Solution of Singular Integral Equations Discretized by the Gauss-Jacobi Quadrature %R Technical Report DCS-TR-111 %I Rutgers University, Department of Computer Science %D March 1982 %X The numerical solution of Singular Integral Equations of Cauchy-type at a discrete set of points t[i], is obtained through discretization of the original equation with the Gauss-Jacobi quadrature. The natural or Nystrom's interpolation formula is used to approximate the solution of the equation for points different than t[i]. Uniform convergence of the interpolation formula is shown for C^[1] functions. Finally, error bounds are derived, and for smooth functions it is shown that Nystrom's formula converges faster than Lagrange's interpolation polynomials. %A Bernard Nudel %T Understand Consistent-Labelling Problems and Their Algorithms: Part I %R Technical Report DCS-TR-112 %I Rutgers University, Department of Computer Science %D April 1982 %O forthcoming %A Bernard Nudel %T Consistent-Labeling Problems and Their Algorithms: Part II %R Technical Report DCS-TR-113 %I Rutgers University, Department of Computer Science %D April 1982 %D October 1982 (revised February 1983) %X A new parameter is introduced to characterize a type of search problem of broad relevance in Artificial Intelligence, Operations Research and Symbolic Logic. This parameter, which we call inter-variable compatibility, is particularly important in that complexity analyses incorporating it are able to capture the dependence of problem complexity on the search order used by an algorithm. Thus compatibility-based theories can provide a theoretical basis for the extraction of heuristics for choosing good search orderings - a long-sought goal for such problems, since it can lead to significant savings during search. We carry out expected complexity analyses for the traditional Backtrack algorithm as well as for two more recent algorithms that have been found empirically to be significant improvements, Forward Checking and word-wise Forward Checking. We extract compatibility-based ordering-heuristics from the theory for Forward Checking. Preliminary experimental results are presented showing the large savings that result from their use.
Similar savings can be expected for other algorithms when heuristics taking account of inter-variable compatibilities are used. Our compatibility-based theories also provide a more precise way of predicting which algorithm is best for a given problem. %A Abe Lockman %A David Klappholz %T The Control of Inferencing in Natural Language Understanding %R Technical Report DCS-TR-114 %I Rutgers University, Department of Computer Science %D March 1982 %D April 1982 %X The understanding of a natural language text requires that a reader (human or computer program) be able to resolve ambiguities at the syntactic and lexical levels; it also requires that a reader be able to recover that part of the meaning of a text which is over and above the collection of meanings of its individual sentences taken in isolation. .lp The satisfaction of this requirement involves complex inferencing from a large database of world-knowledge. While human readers seem able to perform this task easily, the designer of computer programs for natural language understanding faces the serious difficulty of algorithmically defining precisely the items of world-knowledge required at any point in the processing, i.e., the problem of controlling inferencing. This paper discusses the problems involved in such control of inferencing; an approach to their solution is presented, based on the notion of determining where each successive sentence "fits" into the text as a whole. %A R. M. Keller %T A Survey of Research in Strategy Acquisition %R Technical Report DCS-TR-115 %I Rutgers University, Department of Computer Science %D April 1982 %D July 1982 %X This paper surveys literature in the area of strategy acquisition for artificial and human problem solving systems. A unifying view of the term "strategy" is suggested which places strategies along a continuum from abstract to concrete. Major concerns of strategy acquisition research are described, including (i) strategic component learning, (ii) strategy applicability recognition, (iii) strategy customization and (iv) strategy transformation. Various researchers' approaches to these issues are reviewed and open problems are discussed. %A R. Vichnevetsky %T Group Velocity and Reflection Phenomena in Numerical Approximations of Hyperbolic Equations: A Survey %R Technical Report DCS-TR-116 %I Rutgers University, Department of Computer Science %D March 1982 %D June 1982 %X Recent analyses of spurious phenomena near interfaces and boundaries of numerical approximations of hyperbolic equations have produced a host of interesting, concrete results which are reviewed in this paper. .lp What characterizes these analyses is that they rely on tools of mathematical physics, in particular the systematic use of Fourier transforms and the concept of sinusoidal wave propagation; this leads to a description of numerical inaccuracy in terms of dispersion, and a description of spurious numerical solutions in terms of wave packets and group velocities. .lp It is in providing the mathematics needed to describe spurious reflection at numerical boundaries and at interfaces in mesh refinement that the most applied results of this theory are obtained. %A Barbara G.
Ryder %T Incremental Data Flow Analysis Based on a Unified Model of Elimination Algorithms %R Technical Report DCS-TR-117 (Thesis) %I Rutgers University, Department of Computer Science %D September 1982 %X In this thesis we present our work on the development of incremental update algorithms for global data flow analysis, algorithms which modify a known data flow solution to reflect changes in the problem. We have shown that given a data flow problem defined on a digraph, the effects of a set of localized program changes can be determined without full re-analysis by some global data flow algorithm. .lp Our major research contributions fall into three categories. First, we have modelled three elimination methods for global data flow analysis: Allen/Cocke interval analysis, Hecht/Ullman T1-T2 analysis, and Tarjan interval analysis. Our models allow comparison among the methods and highlight the sources of worst case complexity improvement. Second, we have designed two incremental update data flow algorithms based on the Allen/Cocke algorithm (ACINCF, ACINCB) and the Hecht/Ullman algorithm (HUINC). The complexity and correctness of these algorithms are demonstrated. .lp Third, we have studied our incremental algorithm performance on a robust, Algol-like structured programming language ``L''. We have identified program structures which affect the complexity of incremental updating and established their effects singly and in concert. Specifically, for reducible digraphs we have shown that the elimination phase of ACINCF (and/or ACINCB) updates, on each linear system in the derived sequence, a set of interval head equations and at most the equations in one interval. Further, we have considered program changes within one interval in a nested loop in a program in ``L'' and have characterized the set of all variables whose equations may be affected, in terms of each variable's corresponding program structure and its relation to the original change site. Thus, we have ascertained all the data flow solutions affected by the changes. Our result enables us to analyze a digraph in ``L'' with a set of possible changes, identifying a priori the nodes whose equations will be affected by these changes. %A Robert Paige %T Transformational Programming--Applications to Algorithms and Systems %R Technical Report DCS-TR-118 %I Rutgers University, Department of Computer Science %D September 1982 %X Transformational programming is a nascent software development methodology that promises to reduce programming labor, increase program reliability, and improve program performance. Our research centers around a prototype transformational programming system called RAPTS (Rutgers Abstract Program Transformation System), developed during the past several years at the Laboratory for Computer Science Research. Experiments in RAPTS with algorithm derivations are expected to lead to pragmatic applications to algorithm design, program development, and large system construction. %A W. L. Steiger %A P. M. Neuss %T Fast Minimal Distance Enumeration of Small Combinations %R Technical Report DCS-TR-119 %I Rutgers University, Department of Computer Science %D September 1982 %X We give a history dependent algorithm that satisfies the claims of the title. It has other desirable attributes as well. It is computationally much simpler than algorithms studied in recent work of Payne and Ives when, in enumerating n objects k at a time, k is small compared to n.
In fact SN(n,k)/PI(n,k) decreases as n -> infinity, where SN denotes the complexity of the present method and PI that of Payne-Ives. This is probably due to the savings in overhead required by historyless enumeration. %A Barbara G. Ryder %T Incremental Data Flow Analysis %R Technical Report DCS-TR-120 %I Rutgers University, Department of Computer Science %D November 1982 %X In this paper we present ACINCF and ACINCB, incremental update algorithms for forward and backward data flow problems, which are based on a linear equations model of Allen/Cocke interval analysis [Allen 77, Ryder 82]. We have studied their performance on a robust structured programming language ``L''. Given a set of localized program changes in a program in ``L'', we can identify a priori the nodes in its flow graph whose corresponding data flow equations will be affected by the changes. We can characterize these affected nodes by their corresponding program structures and their relation to the original change sites. %A Ronald I. Becker %A Yehoshua Perl %T Finding the Two-Core of a Tree %R Technical Report DCS-TR-121 %I Rutgers University, Department of Computer Science %D December 1982 %X The 1-core of a graph is a path minimizing the sum of the distances of all vertices of the graph from the path. A linear algorithm for finding a 1-core of a tree was presented by Morgan and Slater. The problem for general graphs is NP-Hard. .lp A 2-core of a graph is a set of two paths minimizing the sum of the distances of all vertices of the graph from any of the two paths. We consider both cases of disjoint paths and intersecting paths for a tree. .lp Interesting relations between the 1-core and 2-core of a tree are found. These relations imply efficient algorithms for finding the 2-core. The complexity of the algorithms is O(|V|.d(T)) where d(T) is the number of edges in the diameter of the tree. These algorithms are applicable for routing highways in a system of roads. .lp A w-point core is a path minimizing the sum of the distances of all vertices of the graph from either the vertex w or the path. A linear algorithm for finding a w-point core of a tree is presented. It is applied as a procedure for the previous algorithms. %A Yehoshua Perl %A Marc Snir %T Circuit Partitioning with Size and Connection Constraints %R Technical Report DCS-TR-122 %I Rutgers University, Department of Computer Science %D December 1982 %X The problem of partitioning a circuit into subcomponents with constraints on the size of each subcomponent and the number of external connections is examined. While this problem is shown to be NP-complete even for very restricted cases, a pseudo-polynomial dynamic programming algorithm is given for the case where the circuit has a tree structure. %A Yehoshua Perl %T The Bitonic and Odd-Even Networks are more than Merging %R Technical Report DCS-TR-123 %I Rutgers University, Department of Computer Science %D February 1983 %X The known bitonic and odd-even merging networks are reinvestigated. For both networks the following results are obtained: The input vectors sorted by the network are characterized. Those vectors are recursively balanced for some definition of balance. The set of those vectors is much larger than the set of vectors known to be sorted by the network. The output vectors obtainable by applying the network to an arbitrary input vector are also characterized. Those vectors satisfy recursive dominance for some definition of dominance. %A Yehoshua Perl %A U.
Vishkin %T Efficient Implementation of a Shifting Algorithm %R Technical Report DCS-TR-124 %I Rutgers University, Department of Computer Science %D February 1983 %X An efficient implementation of the shifting algorithm (BPS) for min-max tree partitioning is given. The complexity is reduced from O(Rk^[3] + kn) to O(Rk(k + log d) + n), where a tree of n vertices, radius of R edges, and maximum degree d is partitioned into k + 1 subtrees. The improvement is mainly due to the new junction tree data structure which suggests a succinct representation for subsets of edges, of a given tree, that preserves the interrelation between the edges on the tree. %A Yehoshua Perl %T Optimum Split Trees %R Technical Report DCS-TR-125 %I Rutgers University, Department of Computer Science %D March 1983 %X A split tree is a special kind of a binary search tree introduced by Sheil. He conjectured that constructing an optimum split tree is an intractable problem since there is a difficulty in applying the dynamic programming technique. We realize that the difficulty arises since top down decisions are required while applying the bottom up dynamic programming technique. .lp We demonstrate that it is possible in this case to overcome such a difficulty, and present a polynomial algorithm for constructing an optimum split tree. This algorithm incorporates top down decisions into a dynamic programming procedure similar to Knuth's algorithm for constructing an optimum binary search tree. An improved algorithm of complexity O(n^4) is finally presented. A modification for the case that equiprobable keys are permitted is discussed. %A D. Ronen %A Yehoshua Perl %T Heuristics for Finding a Maximum Number of Disjoint Bounded Paths %R Technical Report DCS-TR-126 %I Rutgers University, Department of Computer Science %D March 1983 %D April 1983 %X We consider the following problem: Given an integer k and a network G with two distinct vertices s and t, find a maximum number of vertex disjoint paths from s to t of length bounded by k. .lp In a recent work [IPS] it was shown that for length greater than four this problem is NP-Hard. In this paper we present a polynomial heuristic algorithm for the problem for general length. The algorithm is proved to give an optimal solution for length less than five. Experiments show very good results for the algorithm. %A M. Dowd %A Yehoshua Perl %A L. Rudolph %A M. Saks %T The Balanced Sorting Network %R Technical Report DCS-TR-127 %I Rutgers University, Department of Computer Science %D March 1983 %X This paper introduces a new sorting network, called the ``balanced sorting network''. To sort input vectors of size n=2^k the network requires O((log n)^2) time and uses n(log n)/2 comparators, which are comparable to the bounds for the well-known odd-even and bitonic networks of Batcher. On the other hand our network has certain advantages over these two networks. In particular the balanced sorting network consists of a sequence of k identical blocks called balanced merging networks. Thus implementations of the network as a hardware network or on the shuffle exchange interconnection model have some advantages over the corresponding implementations of the other networks mentioned above.
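The odd-even and bitonic networks used as benchmarks in DCS-TR-123 and DCS-TR-127 are Batcher's classical constructions. The following sketch is an illustration of Batcher's standard odd-even merge sorting network (not of the new balanced network introduced above), assuming the input size n is a power of two; it simply enumerates the data-independent comparators such a network applies.

def odd_even_merge_sort_comparators(n):
    # Enumerate the comparators (i, j), i < j, of Batcher's odd-even
    # merge sorting network for n = 2^k inputs, in application order.
    def merge(lo, length, r):
        step = 2 * r
        if step < length:
            yield from merge(lo, length, step)        # merge even subsequence
            yield from merge(lo + r, length, step)    # merge odd subsequence
            for i in range(lo + r, lo + length - r, step):
                yield (i, i + r)                      # clean-up comparisons
        else:
            yield (lo, lo + r)
    def sort(lo, length):
        if length > 1:
            half = length // 2
            yield from sort(lo, half)                 # sort lower half
            yield from sort(lo + half, half)          # sort upper half
            yield from merge(lo, length, 1)           # merge the two halves
    yield from sort(0, n)

def apply_network(values):
    # Sort a list whose length is a power of two by applying the comparators.
    a = list(values)
    for i, j in odd_even_merge_sort_comparators(len(a)):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a

For n = 8 this yields 19 comparators arranged in 6 parallel stages; the balanced network described above trades a slightly different comparator layout for the regularity of k identical blocks.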
%A Bernard Nudel %T Solving the General Consistent Labeling (or Constraint Satisfaction) Problem: Two Algorithms and their Expected Complexities %R Technical Report DCS-TR-128 %I Rutgers University, Department of Computer Science %D June 1983 %X The Consistent Labeling Problem is of considerable importance in Artificial Intelligence, Operations Research and Symbolic Logic. It has received much attention, but most work has addressed the specialized binary form of the problem. Furthermore, none of the relatively few papers that treat the general problem have dealt analytically with the issue of complexity. In this paper we present two algorithms for solving the general Consistent Labeling Problem and for each of these the expected complexity is given under a simple statistical model for the distribution of problems. This model is sufficient to expose certain interesting aspects of complexity for the two algorithms. Work currently in progress will address more subtle aspects by extension to more refined statistical models. %A R. Vichnevetsky %T Fourier Methods in Computational Fluid and Field Dynamics %R Technical Report DCS-TR-129 %I Rutgers University, Department of Computer Science %D April 1983 %X This paper is a review, with examples, of those areas where the theory of Fourier transforms has played a role in the development of computational methods in fluid and field dynamics. These may be separated into (i) the development of computing algorithms using Fourier transforms and (ii) the analysis of standard finite difference and finite element algorithms by Fourier methods. .lp Recent results in the analysis of spurious reflection phenomena at computational boundaries of fluid flow simulation, which invoke the theory of wave propagation and the concept of group velocity, are given. %A Ravinderpal Singh Sandhu %T Design and Analysis of Protection Schemes Based on the Send-Receive Transport Mechanism %R Technical Report DCS-TR-130 (Thesis) %I Rutgers University, Department of Computer Science %D April 1983 %X In a protection mechanism based on authorization, the ability of a subject (i.e., a user or a process) to operate on the system is determined by privileges in its domain. A mechanism for transport of privileges must accommodate a variety of policies, while permitting analysis of the privileges which a given subject might obtain. The ``send-receive transport mechanism'' was designed by Minsky with these objectives in mind. In this mechanism, a transport operation is explicitly authorized at both the source and destination, and the authorization is selective with respect to which privileges can be transported. .lp Here we study a restricted version of this mechanism. Under our restrictions a protected system is designed in two stages. Firstly, a protection scheme is defined by specifying the values of certain parameters, which determine the static component of every subject's domain. Secondly, the initial state is defined by specifying the dynamic component of every subject's domain. This state then evolves as permitted by the protection scheme. .lp We formulate the flow-analysis problem which is concerned with determining a bound on the authorization for transport of privileges, given a protection scheme and an initial state. We develop techniques for deriving and improving the desired bound. The major complication in doing so is the create operation, which permits the protection state to evolve in an unbounded manner.
We investigate conditions which enable us to ignore the create operation. We also investigate conditions under which the initial authorization for transport of privileges remains invariant in every derived state. .lp We study additional analysis issues in the context of sub-classes of our design framework. The questions raised in such detailed analysis depend on the structure of these sub-classes. %A Marvin C. Paull %A Barbara G. Ryder %T Incremental Data Flow Analysis Algorithms %R Technical Report DCS-TR-131 %I Rutgers University, Department of Computer Science %D July 1983 %X ACINCF and ACINCB are two incremental update algorithms for global data flow analysis which are based on our linear equations models of Allen/Cocke interval analysis. We have studied their performance on a robust structured programming language ``L''. Given a set of localized program changes in a program in ``L'', we can identify a priori the nodes in its flow graph whose corresponding data flow equations may be affected by the changes. We can characterize these affected nodes by their corresponding program structures and their relation to the original change sites, and do so without actually performing the incremental update algorithm. Our results can be refined to characterize the reduced equations affected when the structured loop exit mechanisms are used singly or together, thereby relating the richness of programming language usage to the ease of incremental updating. %A R. Vichnevetsky %A E. C. Pariser %T High Order Numerical Sommerfeld Boundary Conditions: Theory and Experiments %R Technical Report DCS-TR-132 %I Rutgers University, Department of Computer Science %D August 1983 (revised October 1983) %X We develop high-order, non-reflecting boundary equations for a semi-discrete approximation of the simple (hyperbolic) advection equation U[t] + cU[x] = 0. These boundary equations are based on a discrete interpretation of Sommerfeld's radiation condition for a second order wave equation which is associated with the semi-discrete equations. The performance of these schemes is expressed by an exact measure of the energy reflected at the boundary. For low order cases, the discrete Sommerfeld boundary equations are identical with the standard finite difference equations, but for high orders of approximation (starting with 4 points), the discrete Sommerfeld schemes differ from standard finite differences. It is shown, and verified experimentally, that the discrete Sommerfeld schemes are optimal, in the sense that they produce the least amount of reflected energy. .lp Moreover, it is known theoretically, and we verify experimentally, that the reflected energy remains invariant when the semi-discrete equations are time-discretized with the trapezoidal (Crank-Nicolson) method. The corresponding fully discrete boundary equations are thus also optimal in the sense that they minimize the reflected energy. %A Apostolos Gerasoulis %T Singular Integral Equations - The Singular Value Decomposition of the Gauss-Chebyshev and Lobatto-Chebyshev Matrices %R Technical Report DCS-TR-133 %I Rutgers University, Department of Computer Science %D July 1983 %D August 1983 %X A closed form expression for the singular values and singular vectors of the Gauss-Chebyshev and Lobatto-Chebyshev matrices is obtained and the singular value decomposition for both matrices is derived. We use the singular value decomposition to investigate the convergence of an iterative method applied in the solution of Singular Integral Equations of Cauchy type.
We show that this iterative method converges only under very restrictive conditions. %A S. L. Epstein %T Knowledge Representation in Mathematics: A Case Study in Graph Theory %R Technical Report DCS-TR-134 %I Rutgers University, Department of Computer Science %D April 1983 %D September 1983 %X In this dissertation we present our work on representational languages for graph theory. We have shown that a knowledge representation can be structured to provide both expressive and procedural power. .lp Our major research contributions are three. First, we have defined representations of infinite sets and recommended that mathematical concepts be considered as sets of objects with relations among them. Second, we have demonstrated how a carefully controlled hierarchy of representations is available through formal languages. Third, we have employed a recursive formulation of concepts which enables their application to many of the behaviors of a research mathematician. .lp Two major families of representations are described: edge-set languages and recursive languages. The edge-set languages have finite expressive power and an interesting potential for hashing digraphs, characterizing classes of graphs and detecting differences among them. The recursive languages have extensible expressive power and impressive procedural power. Recursive languages appear to be an excellent implementation technique for artificial intelligence programs in mathematical research. .lp Our results enable us to compare the complexity of mathematical concepts (via floors). Concepts represented in our languages can be inverted (to test for the presence of a property) and merged (to combine properties). Conjectures are available through simple search, and most theorems are easily proved under the representation. %A D. G. Corneil %A Yehoshua Perl %T Clustering and Domination in Perfect Graphs %R Technical Report DCS-TR-135 %I Rutgers University, Department of Computer Science %D October 1983 %X A k-cluster in a graph is an induced subgraph on k vertices which maximizes the number of edges. Both the k-cluster problem and the k-dominating set problem are NP-complete for graphs in general. In this paper we investigate the complexity status of these problems on various subclasses of perfect graphs. In particular, we examine comparability graphs, chordal graphs, bipartite graphs, split graphs, cographs and k-trees. For example, it is shown that the k-cluster problem is NP-complete for both bipartite and chordal graphs and the independent k-dominating set problem is NP-complete for bipartite graphs. Furthermore, where the k-cluster problem is polynomial, we study the weighted and connected versions as well. Similarly we also look at the minimum k-dominating set problem on families which have polynomial k-dominating set algorithms. %A R. Vichnevetsky %T THE ENERGY FLOW EQUATION %R Technical Report DCS-TR-136 %I Rutgers University, Department of Computer Science %D November 1983 %X An equation which describes the flow of energy in energy conservative semi- and full discretizations of hyperbolic equations is derived. While the form of this equation for semi-discretizations verifies known principles of group velocity and wave propagation in periodic structures, its form and strict applicability to discrete-space-discrete-time systems like those resulting from the full discretization of hyperbolic equations are new results. %A R.
Vichnevetsky %T INVARIANCE THEOREMS CONCERNING REFLECTION AT NUMERICAL BOUNDARIES %R Technical Report DCS-TR-137 %I Rutgers University, Department of Computer Science %D December 1983 %X In the numerical approximation of hyperbolic equations, outflow boundaries are in general not transparent, and they create spurious reflection. A useful measure of this is given by the energy (or usual sum of squares) of the reflected solution in response to an arbitrary solution which originates from within the computing domain. .lp We prove, in that respect, a somewhat unexpected property: namely, that for those full discretizations which are obtained by applying to a space-discretization of the problem an energy conservative discrete time-marching method, the energy reflected at the boundary is independent of which time-marching method is used, of the value of delta-t, and is strictly equal to the reflected energy in the semi-discrete case. This is also verified by numerical experiments. .lp Optimal boundary equations may be defined in the semi-discrete case as those which maximize the rate of convergence to zero of the reflected energy when h->0. A corollary of the preceding result is that those boundary equations remain optimal, in the same sense, when used in an energy conservative full discretization. %A G. Richter %T A Finite Element Method for Hyperbolic Equations %R Technical Report DCS-TR-138 %I Rutgers University, Department of Computer Science %D December 1983 %A A. Michael Berman %A Marvin C. Paull %A Cheng %T Exploring the Structure of Incremental Algorithms %R Technical Report DCS-TR-139 %I Rutgers University, Department of Computer Science %D April 1984 %D August 1984 %X A study of the general properties of incremental algorithms is presented. First, it is shown that within certain limitations it is not possible for an incremental algorithm to be of complexity less than 1/n of that of the best non-incremental algorithm for any given function. Next we describe a model for incremental algorithms based on a generalization of finite state machines. It is demonstrated that all finite state machines have either constant or linear complexity, and this is extended to a subclass of incremental algorithms. Then the requirements for machines with good incremental algorithms are discussed, and two classes are presented, one which can be updated in constant time, and one in sqrt(n) time, when averaged over a sequence of changes. %A Barbara G. Ryder %A Marvin C. Paull %T A Unified Model of Elimination Algorithms %R Technical Report DCS-TR-140 %I Rutgers University, Department of Computer Science %D January 1984 %X A data flow algorithm is one which gathers information about the definition and use of data in a program or a set of programs. A unified model of a family of data flow algorithms called elimination methods is presented. The algorithms are characterized by the manner in which they solve the systems of linear equations which describe data flow problems of interest. These implementation independent descriptions of the algorithms facilitate comparisons among them and illustrate the sources of worst case complexity improvement. This tutorial is valuable as a study in algorithm design and improvement; it presents a new view of these algorithms and their interrelations. %A A.
Israeli %A Yehoshua Perl %T On Scheduling the Construction of a Treelike Communication Network %R Technical Report DCS-TR-141 %I Rutgers University, Department of Computer Science %D February 1984 %X In a treelike communication network, transmission equipment is required for the internal vertices. We consider optimal schedules for the construction of the edges of the network, minimizing the cost of hiring the transmission equipment during the construction process. .lp Properties of optimal schedules are investigated. It is shown that an optimal schedule is constructed in three parts: The initial matching, the stars part and the residual subgraph. Rules concerning the construction of each one of the parts are presented. Efficient algorithms are presented for some special cases of paths of stars. %A Tomasz Imielinski %A D. Rozenshtein %T Towards a Flexible User Interface to Relational Database Systems: Processing Simple Second Order Queries %R Technical Report DCS-TR-142 %I Rutgers University, Department of Computer Science %D April 1984 %X In this paper, we present a mechanism for answering simple second order queries of the form "retrieve connection between attributes A and B." By "connection between A and B" we mean any and all meaningful relationships that can be established between A and B in a given database. .lp What gives our queries their second order nature is the fact that semantics of connections is independent of a particular set of predicates comprising the database scheme. The answers to them, however, depend on which relationships are considered meaningful; this, in turn, depends on the particular structure of the database scheme and, also, on additional semantic information carried by roles. While we can treat our queries as intentionally second order queries directed at the database instance, we can also treat them as incompletely specified queries posed by a user whose knowledge of the data base structure is partial. .lp In this paper, we present a formal characterization of "meaningful relationships," present algorithms for computing connections, and describe how our method can be extended to sets containing any number of attributes. %A R. Vichnevetsky %T Propation and Reflection in Discrete-Space-Discrete-Time Structures %R Technical Report DCS-TR-143 %I Rutgers University, Department of Computer Science %D May 1984 %X We describe the essential mathematics that are to be invoked to arrive at a coherent description of energy conservation, propagation and reflection in discrete-space-discrete-time structures. These are the structures which occur when hyperbolic equations are approximated numerically on a regular mesh in space-time. .lp The combined use of discrete Fourier Transforms and energy measures produces a set of new and particularly elegant results relating to spurious reflection at computational boundaries. These results are sketched in this paper. %A ? %T ? %R Technical Report DCS-TR-144 %I Rutgers University, Department of Computer Science %D ? 1984 %A ? %T ? %R Technical Report DCS-TR-145 %I Rutgers University, Department of Computer Science %D ? 1984 %A S. Kedar-Cabelli %T Analogy - From a Unified Perspective %R Technical Report DCS-TR-146 %I Rutgers University, Department of Computer Science %D June 1984 %O not received yet %A B. Kalantari %A J. B. 
Rosen %T An Algorithm for large-scale global minimization of linearly constrained concave quadratic functions %R Technical Report DCS-TR-147 %I Rutgers University, Department of Computer Science %D September 1985 %O not received yet %A B. Kalantari %A J. B. Rosen %T Penalty Formulation for Zero-One Programming %R Technical Report DCS-TR-148 %I Rutgers University, Department of Computer Science %D September 1984 %X Raghavachari has shown the equivalence of zero-one integer programming and a concave quadratic penalty function for a sufficiently large value of the penalty. A lower bound for this penalty was found by Kalantari and Rosen. It was also shown that this penalty could not be reduced in specific cases. We show that the results generalize to the case where the objective function is any concave function. Equivalent penalty formulation for non-concave functions is also considered. %A J. M. Steele %A W. L. Steiger %T Algorithms and Complexity for a statistical problem. Minimum median residual fitting %R Technical Report DCS-TR-149 %I Rutgers University, Department of Computer Science %D November 1984 %X Given n points {(x[i],y[i])} in the plane we study the problem of fitting the minimum median squared residual ([MM^2.R]) line. This involves the study of the function f(g[a],g[b]) = median (|y[i] - (g[a] + g[b] x[i])|); it is piecewise linear and can have a quadratic number of local minima. Several algorithms that locate a minimizer of f are presented. The best of these has time complexity O(n^3) in the worst case. Our most practical algorithm appears to be one which has worst case behavior of only O(n^3 log(n)), but which we conjecture to have expected time complexity O((n log(n))^2). Generalizations to k dimensions are mentioned. %A E. A. Akkoyunlu %A Richard M. Nemes %T Modular Verification of Communicating Sequential Processes %R Technical Report DCS-TR-150 %I Rutgers University, Department of Computer Science %D November 1984 %X The semantics of communication in a distributed computing environment without shared objects are investigated from the viewpoint of modularity and hierarchical system structure. Communicating Sequential Processes (CSP), Hoare's language for parallel programming, is modified and expanded to support process modularity and hierarchical structure using a port construction. A formal axiomatic verification methodology for partial correctness is developed that extends the Hoare axiomatic proof methodology for sequential (nonparallel) programs to CSP-like programs, without resort to global invariants. Hierarchical structure and modularity are fully supported within the proof system. Processes are verified against an abstract entity, the interface, thereby achieving a formal notion of process specification and plug-compatibility. Alongside maintenance to a system is maintenance to its correctness proof, the two evolving side by side in an isomorphic fashion. The formalism is broadened further to include shared ports by the introduction of a universal assertion termed Kirchhoff's Law. Several examples are provided that demonstrate the methodology, including a modular proof of the generic single-entry, multiple-user CSP subroutine process. %A R. 
Vichnevetsky %T The Mathematics of Energy Propagation in Numerical Approximations of Hyperbolic Equations %R Technical Report DCS-TR-151 %I Rutgers University, Department of Computer Science %D January 1985 %D February 1985 %X This paper is mostly a review and discussion of tools: there are errors in the numerical approximations of hyperbolic equations which manifest themselves by the appearance of spurious waves. It is not until one introduces in the analysis the concepts of energy and group velocity (which originated in physics) that a reasonably complete analytic description of those phenomena can be reached. .lp To produce results (in particular those concerning energy) which are consistent with the mathematics of numerical analysis, one must select carefully the setting and the tools which are used. It is on this selection process that the emphasis of the following is placed. %A R. Vichnevetsky %A Venkatakrishnan %T Tensor Form of Numerical Diffusion in Multi-Dimensional Flow Computations %R Technical Report DCS-TR-152 %I Rutgers University, Department of Computer Science %D January 1985 %D February 1985 %X The numerical approximation of hyperbolic equations describing advection often introduces spurious diffusion. In the two and three dimensional case, this spurious diffusion is anisotropic, and may be described by a matrix or a tensor. After briefly describing the corresponding theory (a detailed derivation is given elsewhere), we show experimental results which verify the theoretical results. %A C. V. Srinivasan %T CK-LOG, A Calculus for Knowledge Processing in Logic %R Technical Report DCS-TR-153 %I Rutgers University, Department of Computer Science %D September 1984 %D February 1985 %X This paper introduces the principal concepts in the organization and operation of the logic based knowledge processing system, called CK-LOG (A Calculus for Knowledge in LOGic). CK-LOG uses the frame based system MDS (the Meta Description System) for knowledge representation and for modelling world states. It uses an inference engine based on Natural Deduction for stating and solving problems. %A A. Michael Berman %A Marvin C. Paull %A Barbara G. Ryder %T Proving Relative Lower Bounds for Incremental Algorithms %R Technical Report DCS-TR-154 %I Rutgers University, Department of Computer Science %D March 1985 %D June 1985 %X A general, powerful method that permits simple proofs of relative lower bounds for incremental update algorithms is presented. This method is applied to derive a hierarchy of relative incremental complexity, which classifies functions by relative lower bounds. We demonstrate our technique by bounding a number of incremental algorithms drawn from various domains. The method described expands upon work by Paull, Berman, and Cheng and generalizes a result of Even and Gazit. Our results have interesting implications with respect to the optimality of an incremental algorithm previously developed by Ryder. We also show that for certain graphs, Frederickson's update algorithm for minimum spanning tree is nearly optimal. Perhaps most importantly, the proof method and hierarchy suggest which types of problems are likely to yield good incremental algorithms (i.e., of lower complexity) and which cannot be improved by an incremental approach. 
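As a side note for readers new to the area, the following minimal Python sketch (illustrative only; it is not taken from DCS-TR-139 or DCS-TR-154, and the names and figures are hypothetical) contrasts batch recomputation with incremental update of a simple aggregate, which is the basic distinction these reports on incremental algorithms analyze.

    # Illustrative only; not from the reports above. Maintaining an aggregate
    # of a sequence under single-element changes, batch vs. incrementally.
    def batch_sum(values):
        # recompute from scratch: O(n) per change
        return sum(values)

    class IncrementalSum:
        # maintain the same aggregate incrementally: O(1) per change
        def __init__(self, values):
            self.values = list(values)
            self.total = sum(self.values)          # one-time O(n) setup

        def update(self, index, new_value):
            self.total += new_value - self.values[index]   # adjust, do not recompute
            self.values[index] = new_value
            return self.total

    inc = IncrementalSum([3, 1, 4, 1, 5])
    assert inc.update(2, 9) == batch_sum([3, 1, 9, 1, 5])
    # A sequence of k changes costs O(n + k) incrementally versus O(n * k) by recomputation.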
%A Hsu Arding %A Tomasz Imielinski %T Integrity Checking for Multiple Updates %R Technical Report DCS-TR-155 %I Rutgers University, Department of Computer Science %D March 1985 %X In this paper we generalize methods for fast checking of integrity constraints to deal with multiple (multiple-tuple and multiple-range) updates. This enables us to consider practically important cases when updates are neither restricted to a single range (relation) nor to a single tuple and could be grouped in possibly complex transactions. .lp We provide a simplification method for arbitrary transactions, and also for arbitrary constraints expressed in prenex normal form in relational tuple calculus with no restrictions on the number of variables ranging over the same relation. Based on the prefix of a constraint, our simplification algorithms transform the constraint into an AND-OR combination of simpler constraints which are easier to check. The analysis of certain patterns in the prefixes of constraints enables us to conclude when for a given type of updates no simplification is possible at all and when a significant improvement can be achieved. .lp In effect we can recommend this method as a tool for a possibly preliminary but fast simplification of a constraint when not much of the structure of a transaction (update) is known. In case further simplification is needed, one must look for methods requiring more information about updates and constraints. %A Apostolos Gerasoulis %T Piecewise-Polynomial Quadratures for Cauchy Singular Integrals %R Technical Report DCS-TR-156 %I Rutgers University, Department of Computer Science %D March 1985 %X In this paper we propose piecewise-polynomial methods for the approximation of Cauchy principal value integrals and develop a simple, efficient and numerically stable algorithm for the evaluation of the weights of the resulting piecewise-polynomial quadratures. We present two examples to illustrate the advantages of these quadratures versus the Gauss-Jacobi quadratures. %A Richard M. Nemes %T Communication Denial in CSP %R Technical Report DCS-TR-157 %I Rutgers University, Department of Computer Science %D April 1985 %X In Communicating Sequential Processes (CSP), Hoare's language for concurrent programming using synchronous message-passing, nondeterministic process synchronization is expressed by including I/O commands in the guards of guarded commands. When the guards of a repetitive command contain only I/O commands - a common paradigm occurring in many well-known problems (producer-consumer, for example) - then loop termination is controlled solely by the termination of addressed processes, the so-called distributed termination convention (DTC). If loop termination is to be controlled by external processes without resorting to DTC, then additional guarded commands and boolean flags are required. The result is an excessively complex loop with diminished readability. .lp In this paper we explore a new communication primitive, the DENY command, that causes false evaluation of a matching I/O command contained in a loop guard. The results are loops with the clarity of DTC, but with more explicit control structuring than DTC. Artificial flag mechanisms and extra guarded commands are avoided. We describe the formal proof-rule semantics of the DENY command and show its integration into the modular proof system developed by the author. We show how the establishment of loop postconditions is simplified. %A Barbara G. 
Ryder %T Incremental Algorithms for Software Systems %R Technical Report DCS-TR-158 %I Rutgers University, Department of Computer Science %D July 1985 %X This report is a 2-year NSF research proposal in the area of incremental algorithms for analyzing software systems. The systems of interest can be characterized as large and complex, in the sense that they are written and maintained by many people. The algorithms we are designing will enable people to alter the intermodular structure of such systems and a priori see the side effects of such changes. %A T. Marlowe %A Marvin C. Paull %A Barbara G. Ryder %T Applicability of Incremental Iterative Algorithms %R Technical Report DCS-TR-159 %I Rutgers University, Department of Computer Science %D August 1985 %X In computer science there has been much interest in iteration as a procedure for obtaining the solution of a system of equations. The applicability of iteration does not depend strongly on the form or properties of the system of equations. Its use ranges from solution of numerical equations to data-flow analysis. Also the problem of finding incremental algorithms which adjust a solution to a small change in parameters has received much attention recently and is of particular importance for large problems such as arise in data-flow analysis. .lp In this paper we show that one cannot always continue iterating from a previous solution after even a small change in parameters; we give conditions under which it is legitimate to do so. This result follows a brief discussion of iteration in general. .lp We consider finding a fixed point, X, by iteration of a monotonic function, W, on a partially ordered set Q. This differs somewhat from previous developments in that, beyond monotonicity, W is no further constrained, Q need not be a semi-lattice and X need not be reachable by a finite number of iterations. %A C. V. Srinivasan %T Knowledge Processing vs Programming: CK-LOG vs PROLOG %R Technical Report DCS-TR-160 %I Rutgers University, Department of Computer Science %D February 1984 %D September 1985 %X This paper introduces the concept of knowledge processing as one that involves both problem solving and learning, and contrasts knowledge processing in CK-LOG with programming in PROLOG. CK-LOG is a logic based knowledge processing system just as PROLOG is a logic based programming system. CK-LOG uses the frame representation system, MDS, for knowledge representation and uses a general theorem proving system based on natural deduction for problem solving. The theorem proving system uses the truth maintenance system of MDS to build an evolving set of partial models during the theorem proving process, and uses these models to guide its search for proofs. The truth maintenance system works in three-valued logic of T, ? (unknown) and F, which gives CK-LOG the capability to use model based default reasoning, and the capability to function effectively in the context of incomplete knowledge. Knowledge in CK-LOG pertains to a universe which may consist of objects, actions and time. CK-LOG has the capability to represent and reason about dynamically changing world states in which multiple simultaneous actions may occur. The organization, logic and operation of CK-LOG are discussed here. It is argued that the kinds of knowledge processing capabilities exhibited by CK-LOG are fundamental to knowledge processing in general, and it is impossible to incorporate them into PROLOG. %A B. Reed %A J. S. Salowe %A W. L. 
Steiger %T A Note on Unmerging %R Technical Report DCS-TR-161 %I Rutgers University, Department of Computer Science %D September 1985 %X In the Summer '85 issue of SIGACT News, N. Santoro enquired about the space-time complexity of unmerging. Suppose that lists A and B, each of size n/2, are merged to produce the list L. The problem is to separate L into A and B in their original order. By studying a simple but non-optimal merging algorithm, we obtain an unmerging algorithm which, using extra space O(k), runs in time O(n log_k(n)). Thus, given ε > 0, the algorithm can unmerge in linear time provided O(n^ε) space is available. %A J. S. Salowe %A W. L. Steiger %T Stable Unmerging in Linear Time and Constant Space %R Technical Report DCS-TR-162 %I Rutgers University, Department of Computer Science %D September 1985 %X In the Summer 1985 issue of SIGACT News, N. Santoro inquired about the space-time complexity of unmerging. Suppose that two sorted lists A and B, each of size n/2, are merged to produce the list L. The problem is to separate L into A and B in sorted order. An optimal algorithm is presented which unmerges in time O(n) using O(1) extra space, and which is stable. %A L. Rudolph %A W. L. Steiger %T Subset Size in Parallel %R Technical Report DCS-TR-163 %I Rutgers University, Department of Computer Science %D September 1985 %X In this paper we consider the problem of counting and packing the elements of a subset using a shared memory model of parallel processing. That is, given a set of n items, (i) compute the cardinality of the subset of items that satisfy some predicate and (ii) place all the items in the subset contiguously in the first k locations of an array, where k is the cardinality. Using n processors, it is easy to solve this problem in O(min[k, log(n)]) parallel time. Our solution is a probabilistic algorithm that uses only k processors and O(k) storage. With high probability its running time is logarithmic in k, the size of the subset, so it is independent of n. The analysis of the expected running time uses a technique which may be interesting in its own right and might be applicable to the analysis of other parallel algorithms. %A D. Sandford %T Parts I, II, III of KNOWLEDGE BASED LEARNING SYSTEMS DS + CVS = A Proposal for Research CVS = An Introduction to the Meta-Theory & Logical Foundations %R Technical Report DCS-TR-89 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X Current state of the art experience in designing domain specific, intelligent, automated problem solving systems argues convincingly that: Firstly, large amounts of what is known as domain dependent or domain specific knowledge are crucial to achieving acceptable efficiency in realistic problem solving situations; and secondly that the task of implementing such systems "from scratch" is such a formidable one that it has impeded experimental research into the nature and role of domain specific knowledge in problem solving. .lp This project is directed towards attaining an understanding of the processes and types of organizations required for an automated system to be able to learn for itself the relevant domain dependent knowledge from its experience with the domain. The research is based on a meta-theory of knowledge based learning systems, systems that can discover domain knowledge and use it to solve problems in a domain. The research project will employ both experimentation with implemented systems and theoretical analysis of systems. 
The goals are both to shed light on the detailed mechanisms by which domain dependent knowledge increases search efficiency, and to understand the types of innate biases that an automated system needs in order to analyze a domain and discover the appropriate domain knowledge. The research is based on a meta-theory of systems that are both knowledge based systems and learning systems. .lp The research focusses on two kinds of systems: Systems that can build and use empirical theories of domains, and systems that use axiomatic theories and theorem proving. The nature of domain knowledge and ways of using it in both these systems are investigated. %A C. V. Srinivasan %T Knowledge-based learning, an Example %R Technical Report DCS-TR-90 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X Introduction - How may a machine "learn" from examples of situations that are presented to it? What may constitute the "knowledge" of a set of such situations? How should the examples be presented to the machine? Are there general principles which a machine can use to acquire the knowledge automatically by examining the examples presented to it, and to use the knowledge so obtained to solve problems in a domain? These are the general concerns of my research. %A M. Ruschitzka %T Policy Function Scheduling %R Technical Report DCS-TR-91 %I Rutgers University, Department of Computer Science %D July 1980 %D January 1982 %X Scheduling disciplines have traditionally been specified in terms of queues and algorithms for routing jobs between the queues. Alternatively, a discipline may be formally defined by a policy function, a function of job and system parameters. A policy function scheduler is a parameterized scheduler that - when supplied with a specific policy function - behaves like the specified discipline. The formal definition allows performance measures of a discipline (e.g., the response function) to be expressed in terms of the defining policy function. We review the principles of formal definitions, summarize previous queueing-theoretical results concerning response functions of policy function schedulers, and extend them to multiple preemptive job classes with processor-sharing subclasses. For a large variety of disciplines and job classes, we also express the policy functions in terms of the resulting response functions. Given a desired realizable performance goal, this relation serves to determine the discipline that achieves it. Policy function schedulers, with their explicit relation between policy and response functions (which we plot for several different job characteristics), thus offer increased precision in controlling the performance of a computer system. %A George Shrier %T Average Case Behavior of the Alpha-Beta Tree Pruning Algorithm %R Technical Report DCS-TR-92 %I Rutgers University, Department of Computer Science %D April 1980 %D January 1982 %A Lou I. 
Steinberg %T A Dialogue Moderation for Program Specification - Dialogues in the PSI System %R Technical Report DCS-TR-93 %I Rutgers University, Department of Computer Science %D August 1980 %D January 1982 %A Apostolos Gerasoulis %T On The Existence of Approximate Solutions for Singular Integral Equations of Cauchy Type Discretized by Gauss-Chebyshev Quadrature Formulae %R Technical Report DCS-TR-94 %I Rutgers University, Department of Computer Science %D September 1980 %D January 1982 %X It is shown that the direct Gauss-Chebyshev method used for the numerical solution of singular integral equations of Cauchy-type possesses a unique solution for any n, as long as ω( ) is not an eigenvalue of the equation. %A W. L. Steiger %T A Mini-Max Problem %R Technical Report DCS-TR-95 %I Rutgers University, Department of Computer Science %D October 1980 %D January 1982 %X Determine an algorithm, better than complete enumeration, for the following problem: given a non-negative integer matrix, permute the entries in each column independently so as to minimize the largest row sum. This problem had arisen in determining an optimal scheduling for a factory work force. %A W. L. Steiger %T A Comparison of Methods for Discrete L1 Curve-Fitting %R Technical Report DCS-TR-96 %I Rutgers University, Department of Computer Science %D January 1981 %D January 1982 %X We study discrete L1 curve-fitting of n points in k dimensional space. Execution times for the algorithms of Barrodale-Roberts (BR), Bartels-Conn-Sinclair (BCS), and Bloomfield-Steiger (BS) - three of the best LAD curve-fitting procedures - are compared over a variety of deterministic and random curve-fitting problems. Analysis of the results allows us to make surprisingly precise statements about the computational complexity of these algorithms. .lp In particular, BR is in a different complexity class than BCS and BS as the number of points, n, increases. All algorithms are linear in the dimension, k, and BS is less complex than BCS. %A W. L. Steiger %T Linear Programming via Discrete L1 Curve-fitting %R Technical Report DCS-TR-97 %I Rutgers University, Department of Computer Science %D May 1981 %D January 1982 %X A bounded linear programming problem with feasible solutions may be cast as a discrete L1 curve-fitting problem of the same size. This may be usefully exploited in solving dense LP problems: On the average, a recent L1 algorithm solves the equivalent L1 curve fit in far fewer steps, and in far less time, than would be required by the one-phase Simplex method applied to the original problem. The relative advantage increases with problem size and the comparison is even more favorable against the two-phase Simplex method. Finally, the Klee-Minty problems, on which the Simplex method is of exponential complexity, seem to be "easy" problems as equivalent L1 curve fits. %A M. D. Grigoriadis %T Partitioning Methods for Block-Diagonal Linear Systems and Programs %R Technical Report DCS-TR-98 %I Rutgers University, Department of Computer Science %D November 1980 %D January 1982 %A M. D. Grigoriadis %T Network Optimization Problems and Algorithms: An Annotated Bibliography %R Technical Report DCS-TR-99 %I Rutgers University, Department of Computer Science %D December 1980 %D January 1982 %X This annotated bibliography is intended as a repository of notes and reminders of contributions to the field of network optimization and it is by no means historically or topically complete. 
It represents the current state of a computerized bibliography database and the author's attempt to track the large number of technical papers in specific areas of network optimization. While it may never reach perfect completeness, future versions of this report will be released with updates and additional references. %A R. N. Goldberg %T User's Guide for MEDIT, A Machine Independent Text Editor %R Technical Report LCSR-TR-10 %I Rutgers University, Department of Computer Science %D March 1981 %D January 1982 %O also published as DCS-TR-105 %A Tom M. Mitchell %A Lou I. Steinberg %A R. G. Smith %A P. Schooley %A Van E. Kelly %T Representations for Reasoning about Digital Circuits %R Technical Report LCSR-TR-11 %I Rutgers University, Department of Computer Science %D March 1981 %D January 1982 %X We are interested in developing programs that reason about digital electronic circuits, in order to design, redesign, and debug them. The first step toward developing such programs is to determine a useful way of representing the design and operation of circuits. A useful representation must make apparent the roles of various circuit components in implementing the overall circuit function, and must allow a program to reason about the operation of the circuit at various levels of abstraction. This paper summarizes our efforts to develop such a representation. This work is closely related to other AI work on representing plans and on representing and reasoning about complex physical processes. %A Naftaly H. Minsky %T Privileges for Distributed Authorization %R Technical Report LCSR-TR-12 %I Rutgers University, Department of Computer Science %D January 1982 %X This paper deals with two kinds of authorization policies which we call "distributed" and "knowledge based" policies. The attempt to support these policies leads to a deeper understanding of the concept of privilege, which contributes to the design of more powerful and more efficient authorization schemes. The specific type of privileges being proposed in this paper is a simplification of the "Activator", which is the main privilege used in the Operation Control scheme, and in the ongoing COINS (Controlled Information System) project at Rutgers University. %A Naftaly H. Minsky %A Abe Lockman %T Unidirectional Transport of Rights and Take-Grant Control* %R Technical Report LCSR-TR-13 ??CBM-TR-137?? %I Rutgers University, Department of Computer Science %D June 1981 %D January 1982 %O This paper is a revision of a report entitled ``How to Control Grants'', July 1979. %X One of the fundamental problems of protection is the exercise of control over the movement of rights in a system. The Take-Grant model, which captures certain essential aspects of several popular protection schemes, suffers from a serious limitation: it cannot enforce strictly unidirectional channels for the flow of rights. This property limits the applicability of this model and therefore that of the protection schemes which it models. We analyze the nature and ramifications of this limitation and demonstrate that its root cause is the fact that a right residing in the sender suffices to authorize a movement of rights. We propose an alternative, "Take-Receive", model in which this limitation is eliminated, thus enabling the implementation of more useful protection disciplines. We prove this result by analyzing the dynamic behavior of the proposed model. %A Naftaly H. 
Minsky %T Synergistic Authorization in Database Systems %R Technical Report LCSR-TR-14 %I Rutgers University, Department of Computer Science %D May 1981 %D January 1982 %X We say that an authorization scheme exhibits synergistic effects if it allows for the power provided by the union of two privileges to be larger than the union of the powers of the individual privileges. This phenomenon, where the whole is larger than the sum of its parts, is shown to be essential for the support of some important classes of authorization policies, and useful for the optimization of the enforcement of authorization-based rules. An authorization scheme which exhibits synergistic effects is introduced and some of its applications are discussed. %A Tom M. Mitchell %A Paul E. Utgoff %A Bernard Nudel %A R. Banerji %T Learning Problem-Solving Heuristics through Practice %R Technical Report LCSR-TR-15 %I Rutgers University, Department of Computer Science %D July 1981 %D January 1982 %X We are interested in automatic improvement of problem solving strategies through practice. A computer program called LEX is described, which acquires problem solving heuristics in the domain of symbolic integration. LEX learns heuristics from a sequence of presented training problems by (1) using available heuristics to solve the practice problem, (2) analyzing the steps it performed in solving the problem, and (3) proposing and refining domain-specific heuristics designed to improve its performance on subsequent problems. We describe the methods used by LEX, results obtained by the program, and strengths and weaknesses in the current system. %A J. S. Hall %T System Design and Programming Methodology for a CAM-based General-purpose Computer %R Technical Report LCSR-TR-16 %I Rutgers University, Department of Computer Science %D July 1981 %D January 1982 %X VLSI makes feasible the massive use of content addressable memory in a general purpose computer. This report presents a design for memory which is addressable conventionally and by content, and which supports low-level bit-serial word-parallel algorithms. CAM provides one of the most easily understood and programmed frameworks for massively parallel computations. We present a programming methodology for the use of our design. A programming language, CAML, which provides higher-level constructs to take advantage of the capabilities of CAM, is discussed in detail. A number of algorithms from various fields which demonstrate the generality of the design and the language, and techniques for transforming algorithms from conventional to CAM-based structures and methods, indicate the feasibility of the general-purpose use of CAM. %A Naftaly H. Minsky %T Selective and Locally Controlled Transport of Privileges %R Technical Report LCSR-TR-17 %I Rutgers University, Department of Computer Science %D July 1981 (revised March 1984) %D March 1984 %X In a system based on authorization, the ability of a subject to operate on the system is a function of the privileges that he possesses. In this paper we introduce and study a mechanism, called the Send-Receive Mechanism, for the transport of such privileges. The control provided by this mechanism over the movement of privileges has two notable properties. .lp The control is selective, in the sense that it permits the creation of transport-channels which allow for the movement of only certain types of privileges and only between certain kinds of subjects. 
.lp The control is local, in the sense that every movement of privileges into and out of the domain of a given subject must be authorized by privileges already in his domain. .lp We show that the proposed transport mechanism allows one to impose a local upper bound on the power of any given subject. This bound is independent of the rest of the system and can, therefore, be viewed as an intrinsic property of the subject. The ability to impose such bounds is considered essential for effective modularization of computer systems. In addition, the locality of our control has beneficial global effects on the flow of privileges. In particular, it helps remove the undesirable symmetry of transport, exhibited by the conventional Take-Grant mechanism. %A C. V. Srinivasan %T Problem Solving in the Meta Description System %R Technical Report LCSR-TR-18 %I Rutgers University, Department of Computer Science %D August 1981 %D January 1982 %O forthcoming %A C. V. Srinivasan %T Notes on Object Centered Associative Memory Organization %R Technical Report LCSR-TR-19 %I Rutgers University, Department of Computer Science %D July 1981 %D January 1982 %X An associative memory organization is proposed here that incorporates features of "pages" and "chapters", which are used to organize the information in a relational database in a manner that facilitates efficient evaluation of relational algebra expressions: Set intersection and union operations are done in one memory access time, independent of the size of the sets, limited only by the size of the memory. Join operations and cross products may be done in at most three memory accesses for binary relations, and on the average in 3(k/2) memory accesses for k-ary relations, when k > 2. The basic organizational concepts are briefly described in this memo. %A Charles M. Eastman %A Gilles M. E. Lafue %T Semantic Integrity Transactions in Design Databases %R Technical Report LCSR-TR-20 %I Rutgers University, Department of Computer Science %D October 1981 %D November 1982 %X This paper views engineering design as a task involving simultaneously defining and assigning values to a database. Such a database (called a ``design database'') represents the artifact being designed throughout many phases, from early specifications all the way to manufacturing instructions. This paper examines some of the semantic integrity requirements of design databases in relation to current research, and proposes a refinement of the notion of integrity transaction, i.e., a structure to manage the semantic integrity of a database without necessarily guaranteeing it all the time. Semantic integrity is also discussed within the larger framework of confidence in data values of design databases. %A Tom M. Mitchell %T Combining Empirical and Analytical Methods for Inferring Heuristics %R Technical Report LCSR-TR-21 %I Rutgers University, Department of Computer Science %D January 1982 %O (see LCSR-TR-27) %A Richard King %T A File Organization Based on Extensible Hashing %R Technical Report LCSR-TR-22 %I Rutgers University, Department of Computer Science %D December 1981 %D January 1982 %X Direct access secondary storage organization allows a record corresponding to a given key to be obtained in approximately one access to secondary storage. Direct access is accomplished by choosing a hash function, H, that maps a key space, K, into an address space, A. Usually, the record with a given key, k, can be located in a single access at address H(k) of secondary storage. 
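.lp A deliberately naive, in-memory Python sketch of the direct access scheme just described might look as follows (the names and values are hypothetical; this is not code from LCSR-TR-22, and it exhibits exactly the shortcomings listed next).

    # Hypothetical illustration, not from the report: a fixed-size table
    # addressed by H(k). Collisions overwrite, and the table size is fixed
    # in advance -- the two problems an extensible organization addresses.
    TABLE_SIZE = 8                       # address space chosen in advance

    def H(key):
        return hash(key) % TABLE_SIZE    # maps key space K into address space A

    table = [None] * TABLE_SIZE

    def store(key, record):
        table[H(key)] = record           # one "access"; colliding keys clobber each other

    def fetch(key):
        return table[H(key)]             # one "access" to retrieve

    store("cs-101", {"title": "example record"})
    print(fetch("cs-101"))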
.lp However, a straightforward direct access method has two shortcomings: .lp (collision problem) The number of different records mapped to the same secondary storage address can exceed the available space. .lp (allocation problem) The amount of space reserved for the file must be chosen in advance to fit the range size of the hash function. .lp We propose a novel way to maintain dynamic files efficiently. We believe that this method has particular relevance to very high level languages and query languages where sets and maps are important. Our algorithm will be compared to several other algorithms that have been proposed to partially overcome the shortcomings mentioned above. %A Shaye Koenig %T A Transformational Framework for Automatic Derived Data Control and its Applications in an Entity-Relationship Data Model %R Technical Report LCSR-TR-23 %I Rutgers University, Department of Computer Science %D December 1981 %D January 1982 %X This thesis investigates the specification, implementation and application of derived data in the context of MADAM, an entity-relationship oriented, map-based data model/programming language for database conceptual schema representation and processing. The data representation and manipulation facilities of MADAM, described in chapter 2, represent a synthesis of ideas from the areas of very high level languages, in particular SETL, and the binary association and entity-relationship approaches to data modeling. Derived data refers to data that appears to exist in its declared form, but is actually derived from related data in the database. Previous approaches to the materialization of derived data have been based on a global recalculation strategy in which derived data is recomputed whenever it is referenced. In this thesis we present an alternative approach in which derived data is explicitly stored and incrementally maintained. In chapter 3, we describe the definition of derived data in MADAM; discuss its importance as a means of fostering logical data independence, providing access control mechanisms, and supporting semantic relativism; and present a unified framework for the automatic maintenance of derived data. This framework is based on the transformational techniques of finite differencing in which repeated costly computations are replaced by more efficient incremental counterparts. In addition to its importance for supporting alternative views of the same data, further applications of our incremental maintenance approach to the implementation of summary data, integrity control, and triggers are discussed in chapter 4. %A Ravinderpal Singh Sandhu %T The Case for a SetL Based Query Language %R Technical Report LCSR-TR-24 %I Rutgers University, Department of Computer Science %D December 1981 %D January 1982 %X Lacroix and Pirotte formulated sixty-six queries in nine different query languages for the relational data model. We augment their study by formulating these queries in a SETL based query language. We emphasize the functional notation of SETL and transform a relational schema into SETL declarations so as to exploit this aspect of SETL. We also propose additional dictions in SETL for convenience in query formulation. %A Tom M. 
Mitchell %T AI Research at Rutgers %R Technical Report LCSR-TR-25 %I Rutgers University, Department of Computer Science %D March 1982 %X Current research at Rutgers covers a broad range of AI concerns, from the development of expert systems in a variety of domains, to the study of basic issues of representation and inference. This report provides summaries of many of the current research projects. %A Gilles M. E. Lafue %T Semantic Integrity Dependencies and Delayed Integrity Checking %R Technical Report LCSR-TR-26 %I Rutgers University, Department of Computer Science %D March 1982 %D April 1982 %X This paper's approach to semantic integrity management is that in an integrity constraint, some variables may be operated on in order to maintain the constraint, while others may be declared unaffected by it. This defines integrity dependencies between variables. Various examples of integrity dependencies and their meanings are discussed. In addition to corresponding to real world practice, integrity dependencies improve the efficiency of checking constraints. This is achieved by delaying the checking (and maintenance) of data which depends on, but does not affect, the data currently operated on. It also gives delayed checking and maintenance a chance to be performed in parallel with applications. Simulation results are presented to support this claim. %A Tom M. Mitchell %T Toward Combining Empirical and Analytical Methods for Inferring Heuristics %R Technical Report LCSR-TR-27 %I Rutgers University, Department of Computer Science %D April 1982 %X Inferring problem solving heuristics from examples of worked out solutions is one kind of generalizing from examples. A spectrum of techniques for this generalization problem is considered, ranging from purely empirical, data-driven methods, to purely deductive, analytic methods. It is argued that a combination of empirical and analytical methods is appropriate for inferring heuristics. One way of combining empirical and analytical mechanisms for inferring generalizations is suggested and illustrated for the task of inferring search heuristics for symbolic integration. %A Gilles M. E. Lafue %A Tom M. Mitchell %T Data Base Management Systems and Expert Systems for CAD %R Technical Report LCSR-TR-28 %I Rutgers University, Department of Computer Science %D May 1982 %X This paper examines some issues that Data Base Management Systems (DBMS) and Expert Systems (ES) begin to share, or will probably share soon. This convergence is based on the observation that DBMS's take more active control of database semantic integrity, and ES's are faced with growing amounts of data to manage. Furthermore, this convergence is reinforced by applications in Computer-Aided Design. The issues discussed concern data/knowledge representation and data manipulation control. %A Naftaly H. Minsky %T Localization of Power in Software Systems %R Technical Report LCSR-TR-29 %I Rutgers University, Department of Computer Science %D September 1982 %X This paper proposes a technique for what we call localization of power in computer systems, which can be viewed as a generalization of such linguistic disciplines as scope rules, strong typing and data-abstraction. Although the proposed technique is conceptually based on the theory of protection, it is presented as a rather simple extension of the package construct of the Ada language. This technique is expected to be beneficial for software engineering in several ways. 
In particular: .lp It facilitates reasoning about large scale systems, by allowing one to ignore most of the details of the system when reasoning about specific aspects of it. .lp It provides us with a generalization of the conventional concept of data-abstraction, by allowing the formation of several different abstractions for the same type of objects, and by supporting "interactions" between the abstractions of different types. .lp It allows us to provide parts of a system with a certain ability to control the activity of the rest of it. .lp It supports a broad spectrum of policies for the design and management of large scale, evolving systems. %A Van E. Kelly %A Lou I. Steinberg %T The Critter System: Analyzing Digital Circuits by Propagating Behaviors and Specifications %R Technical Report LCSR-TR-30 %I Rutgers University, Department of Computer Science %D July 1982 %X CRITTER is a system that reasons about digital hardware designs, using a declarative representation that can represent components and signals at arbitrary levels of abstraction. CRITTER can derive the behaviors of a component's outputs given the behaviors of the inputs, it can derive the specifications a component's inputs must meet in order for some given specifications on the outputs to be met, and it can verify that a given signal behavior satisfies a given specification. By combining these operations, it evaluates both the correctness and the robustness of the overall design. %A Tom M. Mitchell %A Paul E. Utgoff %A R. Banerji %T Learning by Experimentation: Acquiring and Modifying Problem-Solving Heuristics %R Technical Report LCSR-TR-31 %I Rutgers University, Department of Computer Science %D September 1982 %X This chapter concerns learning heuristic problem solving strategies through experience. In particular, we focus on the issue of learning heuristics to guide a forward-search problem solver, and describe a computer program called LEX, which acquires problem solving heuristics in the domain of symbolic integration. LEX acquires and modifies heuristics by iteratively applying the following process: (1) generate a practice problem, (2) use available heuristics to solve this problem, (3) analyze the search steps performed in obtaining the solution, and (4) propose and refine new domain-specific heuristics to improve performance on subsequent problems. We describe the methods currently used by LEX, analyze strengths and weaknesses of these methods, and discuss our current research toward more powerful approaches to learning heuristics. %A Gilles M. E. Lafue %T Semantic Integrity Management of Databases: A Survey %R Technical Report LCSR-TR-32 %I Rutgers University, Department of Computer Science %D October 1982 %X This is a survey of the state-of-the-art in research on management of database semantic integrity with an emphasis on automatic integrity checking. About half a dozen of the most significant approaches are considered. It is illustrated how these approaches use the major principles for reducing the cost of automatic integrity checking, namely, compile-time selection of the constraints to check after database updates, compile-time simplification of constraints, and use of redundant data. %A M. Dowd %T Some New Sequencings of Groups %R Technical Report LCSR-TR-33 %I Rutgers University, Department of Computer Science %D October 1982 %D November 1982 %X Several new results concerning sequencings of groups are obtained. 
We study the classification of sequencings, analyzing their possible symmetries under automorphisms of the group. We construct some large families of sequencings of the cyclic group, and show that the dihedral groups of order 8k+2 are sequenceable. Finally we extend the list of known sequenceable groups by computer to include the non-Abelian groups of order 16 and S[4]. %A M. Dowd %T Isomorphism of Complete Sets %R Technical Report LCSR-TR-34 %I Rutgers University, Department of Computer Science %D October 1982 %D November 1982 %X We show that the linear space many-one complete sets are linear space isomorphic; that the polynomial time complete sets are polynomial time one-one equivalent; and that these facts hold for the sets complete for certain complexity classes. We show that if the polynomial time complete sets are not polynomial time isomorphic then P = NP. We also show that linear space one-one equivalence implies linear space isomorphism. %A M. Dowd %T Forcing and the P Hierarchy %R Technical Report LCSR-TR-35 %I Rutgers University, Department of Computer Science %D October 1982 %D November 1982 %O (Preliminary Version) %X We introduce the notion of a generic oracle, and show that generally speaking if two relativized complexity classes can be separated by some oracle then they are separated by every generic oracle. We also show that if the hierarchy collapses it does so with respect to any sparse oracle. %A Donald E. Smith %T FOCUSER: A Strategic Interaction Paradigm for Language Acquisition %R Technical Report LCSR-TR-36 (Thesis) %I Rutgers University, Department of Computer Science %D November 1982 %D January 1982 %X Acquisition tasks in which the presentation of information to a learning system (LS) is controlled by a motivated, intelligent presenter are considered. Techniques are presented that explicitly deal with the structure and strategy of the presentation. These techniques make use of a layered decision mechanism that considers the presented information at successive levels of detail. The first level determines the strategies being employed by the presenter; the second level, the focus communicated by the presentation; and the last determines a representation for the acquired knowledge. .lp The use of strategic techniques within an acquisition task is shown to increase the acquisition efficiency of LS's whose inputs are constructed by intelligent presenters while not requiring that the presenter have in-depth knowledge of the LS. .lp The use and workings of these techniques are exemplified by an LS named FOCUSER that acquires language for a blocks world domain. The strategic features employed by FOCUSER and the resultant benefits to the acquisition task, as well as possible problems, are discussed in detail. %A M. D. Grigoriadis %T Min-Cost Network Flows, Part I: A Network Simplex Method %R Technical Report LCSR-TR-37 %I Rutgers University, Department of Computer Science %D November 1982 %D December 1982 %A M. D. Grigoriadis %T ? %R Technical Report LCSR-TR-38 %I Rutgers University, Department of Computer Science %D December 1982 %A M. D. Grigoriadis %T ? %R Technical Report LCSR-TR-39 %I Rutgers University, Department of Computer Science %D December 1982 %A Sholom M. Weiss %A Casimir A. Kulikowski %A Chidanand V. 
Apte %A Michael Uschold %T Building Expert Systems for Controlling Complex Programs %R Technical Report LCSR-TR-40 %I Rutgers University, Department of Computer Science %D September 1982 %D November 1982 %X Production rule schemes have proven quite effective in concisely representing expert knowledge in several application areas. Yet, there are many problems for which one would like to take advantage of additional knowledge that may not be easily represented in these surface level models. One class of problems of particular practical interest is those in which we would like to have a computer-based system give interactive advice on how to control, and interpret results from, a set of complex and interrelated applications programs. The advice may refer to interpretations of current results, possible experiments that should be performed with the help of the applications programs, and indications of inconsistencies in specific analytical procedures and in problem solving sequences followed by the user. In the present paper we report on our experiences in designing an expert system (ELAS), of the type described above, for well log analysis in oil exploration and production. We have integrated a production rule advice model (using the EXPERT system) with existing Amoco software for well-log analysis. The software was reorganized so that its knowledge is structured according to the types and sequences of methods used by expert analysts, and so that the assumptions and parameters used in the different individual analyses can be varied. Our goal is to make available interactive interpretations of the alternative approaches that an expert might take to a complex problem of well-log analysis. %A Chidanand V. Apte %T Expert Knowledge Management for Multi-level Modelling -- with an application to Well-Log Analysis %R Technical Report LCSR-TR-41 %I Rutgers University, Department of Computer Science %D December 1982 %X Expert problem solving strategies in many problem domains make use of detailed quantitative or mathematical techniques coupled with experiential knowledge about how such techniques can be used in solving a problem. Engineering Analysis and Multiple Signal Interpretation are two instances of this class of problems. In many such domains, these techniques are available as part of large software packages. In attempting to build expert systems in these domains, we wish to make use of such existing packages, and are therefore faced with an important problem: How to intelligently control large-scale complex software, during the course of its use, in the analysis of a task. .lp I am therefore proposing a multi-level model for representing problem solving knowledge in such domains. These levels capture the mathematical/quantitative methods, associated interpretive knowledge, and control information on how they interact. I also present an outline of a form-based system for acquisition and representation of expert knowledge which controls and interprets these methods. .lp To support this proposal, I can report some preliminary, successful results in using these techniques to build an expert system for Well-log analysis, ELAS. Well logs are the various electromagnetic, sonic and nuclear signals obtained from instruments placed downhole in a well, which characterize the properties of the rock and fluid formations around the borehole. The purpose of the overall analysis is to determine the likely presence of hydrocarbons in the well for potential production. 
.lp This research proposal has evolved within a project concerned with the introduction of Expert Systems methods in Well-log analysis. Some of the issues raised by Hart, coupled with previous experience in expert knowledge modelling, are influencing the direction of this research. %A Naftaly H. Minsky %A Abe Lockman %T Principles for Database Authorization %R Technical Report LCSR-TR-42 %I Rutgers University, Department of Computer Science %D December 1982 %X While most database systems incorporate some sort of authorization mechanism, we contend that existing mechanisms are not quite adequate for two tasks which they should support: a) enforcing the complex authorization policies of real-life organizations; b) presenting users with useful abstractions of the database. This paper proposes a number of guiding principles for the design of database authorization mechanisms which provide better support for these two tasks. We illustrate how these principles might be implemented by giving examples from our Operation Control authorization mechanism. %A M. D. Grigoriadis %A Tau Hsu %T Numerical Methods for Basic Solutions of Generalized Flow Networks %R Technical Report LCSR-TR-43 %I Rutgers University, Department of Computer Science %D April 1983 %D May 1983 %X A central problem in implementing specializations of the simplex method for solving large minimum-cost generalized network flow problems is the accurate and efficient computation of basic solutions. In general, bases for such problems are block-diagonal, with each block corresponding to a "one-tree", i.e. a connected graph which contains exactly one cycle. This paper reviews two previously proposed techniques for solving such one-tree systems and develops a new algorithm based on Gaussian elimination for use within a generalized network flow code. In addition to error bounds, the paper gives numerical results and a computational comparison for all known methods. %A R. M. Keller %T Learning by Re-expressing Concepts for Efficient Recognition %R Technical Report LCSR-TR-44 %I Rutgers University, Department of Computer Science %D June 1983 %X Much attention in the field of machine learning has been directed at the problem of inferring concept descriptions from examples. But in many learning situations, we are initially presented with a fully-formed concept description, and our goal is instead to re-express that description with some particular task in mind. In this paper, we specifically consider the task of recognizing concept instances efficiently. We describe how concepts that are accurate, though computationally inefficient for use in recognizing instances, can be re-expressed in an efficient form through a process we call ``concept operationalization''. Various techniques for concept operationalization are illustrated in the context of the LEX learning system. %A Tom M. Mitchell %T Learning and Problem Solving %R Technical Report LCSR-TR-45 %I Rutgers University, Department of Computer Science %D June 1983 %X One mark of intelligence is the ability to improve one's problem-solving abilities with experience. Modelling this kind of learning is an important goal for AI, both because it is bound to lead to general insights on the nature of intelligence, and because of its practical importance for developing high-performance problem solving programs. We discuss some recent progress in the area of learning problem-solving expertise, as well as some directions in which research in this area is headed. %A A. 
Goldberg %A Robert Paige %T Stream Processing %R Technical Report LCSR-TR-46 %I Rutgers University, Department of Computer Science %D August 1983 %X Stream processing is a database query optimization related to loop fusion that can improve space and speed by a process of loop combination. Previous attempts at implementation have either been highly restrictive, required manual intervention, or involved naive strategies resulting in impractically slow running times. This paper defines a general stream processing problem that is formulated graph theoretically and shown to be NP-Hard, even in the presence of constraints. Two new efficient algorithms will be presented. One uses a greedy strategy to solve the general problem, but yields suboptimal solutions on all but a modest class of problem instances. The other algorithm solves a restricted problem but will yield optimal results for a wide subclass of problem instances. %A Lou I. Steinberg %A Tom M. Mitchell %T A Knowledge Based Approach to VLSI CAD %R Technical Report LCSR-TR-47 %I Rutgers University, Department of Computer Science %D September 1983 %D February 1984 %X Artificial Intelligence (AI) techniques offer one possible avenue toward new CAD tools to handle the complexities of VLSI. This paper summarizes the experience of the Rutgers AI/VLSI group in exploring applications of AI to VLSI design over the past few years. In particular, it summarizes our experience in developing REDESIGN, a knowledge-based system for providing interactive aid in the functional redesign of digital circuits. Given a desired change to the function of a circuit, REDESIGN combines rule-based knowledge of design tactics with its ability to analyze signal propagation through circuits, in order to (1) help the user focus on an appropriate portion of the circuit to redesign, (2) suggest local redesign alternatives, and (3) determine side effects of possible redesigns. We also summarize our more recent research toward constructing a knowledge-based system for VLSI design and a system for chip debugging, both based on extending the techniques used by the REDESIGN system. %A Naftaly H. Minsky %A A. Borgida %T Controlling Software Evolution: an Overview of Darwin %R Technical Report LCSR-TR-48 %I Rutgers University, Department of Computer Science %D November 1983 %D December 1983 %X There has been considerable past effort expended on software engineering techniques which attempt to control the power of one part of a program to affect other parts, but little or no attention has been paid to the problem of controlling the evolution of a software system. .lp This report describes a software development environment which provides a unified, authorization-based regime for controlling both the operations of a system and its evolutionary development. %A Naftaly H. Minsky %T Power and its Distribution in Software Systems %R Technical Report LCSR-TR-49 %I Rutgers University, Department of Computer Science %D December 1983 %X This paper introduces what we call a "power-based regime", which can be viewed as a generalization of such linguistic disciplines as scope rules, information hiding and data-abstraction. This technique is expected to be beneficial for software engineering in several ways. In particular: .lp It facilitates reasoning about large scale systems, by allowing one to ignore most of the details of the system when reasoning about specific aspects of it. 
.lp It allows us to provide parts of a system with a certain ability to control the activity of the rest of it. .lp It supports a broad spectrum of policies for the design and management of large scale, evolving systems. %A A. Borgida %T Features of Languages for the Development of Information Systems at the Conceptual Level %R Technical Report LCSR-TR-52 %I Rutgers University, Department of Computer Science %D December 1983 %D January 1984 %X A computer system which stores, retrieves and manipulates information about some portion of the real world can be viewed as a model of that domain of discourse. There has been considerable research recently on languages which allow one to capture more of the semantics of the real world in these computerized Information Systems -- research which has variously been labelled as Semantic Data Modeling, Semantic Modeling or Conceptual Modeling. This review paper presents a list of the features which appear to distinguish these languages from those traditionally used to describe and develop database-intensive applications, and considers the motivation for these features as well as the potential advantages to be gained through their use. The paper, which is intended for those familiar with current data processing practices, also compares in greater detail four programming languages which incorporate semantic modeling facilities, and discusses some of the methodologies and tools for Information System development based on these languages. %A N. S. Sridharan %A John L. Bresina %T ??? %R Technical Report LCSR-TR-53 %I Rutgers University, Department of Computer Science %D March 1984 %X The problem solving ability of an AI program is critically dependent on the nature of the symbolic formulation of the problem given to the program. Improvement in performance of the problem solving program can be made not only by improving the strategy of directing search but also by shifting the problem formulation to a more appropriate form. Certain formulations are more amenable to incremental acquisition of problem solving strategy than others. With this in mind, an ``extensible problem reduction method'' is developed that allows ``incremental strategy construction''. .lp The overall objective of our present research is to study strategy acquisition and problem reformulation for planning problems. We illustrate the combined use of analytical and empirical methods. We demonstrate Case Study and Plan Recognition as techniques that permit improvement of problem solving strategy using single problem instances. The work we propose here builds on our earlier work on Plan Recognition and on Plan Generation. %A Naftaly H. Minsky %A A. Borgida %T The Darwin Programming Environment %R Technical Report LCSR-TR-54 %I Rutgers University, Department of Computer Science %D February 1984 %O forthcoming %A Van E. Kelly %T The Critter System: Automated Critiquing of Digital Circuit Designs %R Technical Report LCSR-TR-55 %I Rutgers University, Department of Computer Science %D May 1984 %D September 1984 %X CRITTER is an exploratory prototype design aid, built using Artificial Intelligence ("Expert Systems") technology, to aid in "critiquing" digital circuit designs, encompassing issues of functional correctness, operating speed, timing robustness, and circuit sensitivity to changes in device parameters. 
Its non-procedural representation for both real-time circuit behavior and circuit specifications has led to a streamlined circuit modeling formalism based on ordinary mathematical function composition. In its interactions with the user it strives to be as concise as possible, concentrating only on findings it judges to be unexpected or unusual. After successful tests with circuits of up to a dozen TTL SSI/MSI packages, CRITTER is being extended for use in an automated VLSI design environment. %A Tom M. Mitchell %A Lou I. Steinberg %A Jeffrey S. Shulman %T VEXED: A Knowledge-Based VLSI Design Consultant %R Technical Report LCSR-TR-57 %I Rutgers University, Department of Computer Science %D July 1984 %D September 1984 %X This paper presents a framework for knowledge-based aids for design of VLSI circuits. In particular, we describe the design of an interactive knowledge-based consultant for VLSI design (called VEXED for Vlsi EXpert EDitor), a prototype implementation of VEXED, and several issues that have arisen from implementing and experimenting with this prototype. .lp This research is of significance to both the field of Artificial Intelligence and the field of Computer-Aided Design. Given the current state of the art of Expert Systems technology, it is now fairly well understood how to build rule-based expert systems for diagnostic and classification tasks, and several general-purpose frameworks are now available for constructing such systems. However, very few knowledge-based systems for design tasks have been developed, and as a field we have yet to produce a generic model of design tasks and generic frameworks for developing knowledge-based systems for design. VEXED represents an attempt to build a system for circuit design, in a way which we anticipate should extend to other design domains as well. %A John A. Roach %T The Rectangle Placement Language %R Technical Report LCSR-TR-58 %I Rutgers University, Department of Computer Science %D September 1984 %X Constraint-based symbolic layout of VLSI designs has received growing attention during the past few years. Several systems have been developed which can offer graphical or textual media for the expression of designs and can provide automatic compaction to technology specific design rules. The ability of these systems to allow for the expression of more generalized relationships is a topic open for further research. RPL is a constraint-based symbolic layout language intended to address the problem of providing a more flexible set of constraints and restraints to the designer for expressing the relationships and dependencies of the design. This paper describes some of the ideas behind this work and details some of the progress that has been made. %A M. D. Grigoriadis %A B. Kalantari %T A Lower Bound to the Complexity of Euclidean Matching Algorithms %R Technical Report LCSR-TR-59 %I Rutgers University, Department of Computer Science %D October 1984 %O out of print - see LCSR-TR-69 %X We show that the time complexity of any exact algorithm for the Euclidean minimum-weight perfect matching problem over n vertices is, in the worst-case, bounded below by O(n log n). This lower bound also applies to any heuristic algorithm whose worst-case ratio of the weight of the approximate solution it produces to the weight of the optimal solution depends only on n. 
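.lp In LaTeX notation, and writing T_A(n) for the worst-case running time of such an algorithm A on n input points (a symbol introduced here only as an illustrative sketch, not taken from the report itself), the claimed bound, together with the reduction from sorting spelled out in LCSR-TR-69 below, amounts to
.lp
\[ T_A(n) \;\ge\; \inf_{S \in \mathrm{SORT}(n)} T_S(n) \;=\; \Omega(n \log n) \]  % T_A and SORT(n) are illustrative notation only
.lp
where SORT(n) denotes the set of algorithms that sort n real numbers; the final equality holds when these algorithms are comparison-based.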
%A Saul Amarel %T Artificial Intelligence and the Social Sciences: A Preliminary Report %R Technical Report LCSR-TR-60 %I Rutgers University, Department of Computer Science %D November 1984 %X As part of the work of an NSF-sponsored panel on 'Methodological Research Frontiers and the Social Sciences' (which met in Washington D.C. on September 7, 1984), the relevance of Artificial Intelligence (AI) methodologies to the Social Sciences was reviewed and appraised. The purpose of this preliminary report is to describe a number of AI activities that bear on this question, and to anticipate some contributions that AI methodologies may make to the conduct of social science research and to professional work in related areas. %A A. Borgida %A S. Greenspan %A J. Mylopoulos %T Knowledge Representation as the Basis for Requirements %R Technical Report LCSR-TR-61 %I Rutgers University, Department of Computer Science %D December 1984 %X Specification of many kinds of knowledge about the world is essential to requirements engineering. Research on knowledge representation in Artificial Intelligence addresses fundamental issues that Software Engineering must eventually face, and provides a wealth of relevant techniques which can be incorporated into specification languages. We elaborate on these claims by using examples from our research on Requirements Modeling. %A Saul Amarel %T Introduction to the Comtex Microfiche Edition of the Rutgers University Artificial Intelligence Research Reports %R Technical Report LCSR-TR-62 %I Rutgers University, Department of Computer Science %D August 1984 %D December 1984 %X In this introduction I will provide historical highlights of the development of AI research at Rutgers, emphasizing the major strands and themes that brought us to the current state of our research. Naturally, the story will reflect my personal perspective on this development. The present collection of reports covers most of our work in AI from 1970 through 1983. %A Naftaly H. Minsky %T Controlling the Evolution of Large Scale Software Systems %R Technical Report LCSR-TR-63 %I Rutgers University, Department of Computer Science %D January 1985 %X This paper deals with certain fundamental issues involved in the development and operation of large scale evolving software systems. We are particularly, but not exclusively, concerned with systems that evolve in vivo, that is, systems whose development and maintenance take place in the same environment, and at the same time, in which the system itself operates. Such evolution involves several serious difficulties: First, it provides programmers with an inordinate amount of power with respect to the enterprise being served by the system, such as the power to manipulate money wielded by the programmers of a computerized financial system, if they are allowed to change the system on the fly. The second, related, difficulty is the unpredictability of the behavior of evolving systems, emanating from the inherent unpredictability of the programmers who manipulate them. .lp We argue that in order to be able to treat an evolving system as a single "organism" that exhibits a reasonable degree of behavior predictability, it is necessary to control and constrain the process of system evolution itself. A Software Development Environment, called DARWIN, which provides such control, is described. Some implications of controlled evolution for the management of software projects, and for the reliability of software, are briefly discussed. %A Tom M.
Mitchell %A Sridhar Mahadevan %A Lou I. Steinberg %T LEAP: A Learning Apprentice for VLSI Design %R Technical Report LCSR-TR-64 %I Rutgers University, Department of Computer Science %D January 1985 %X It is by now well-recognized that a major impediment to developing knowledge-based systems is the knowledge acquisition bottleneck: the task of building up a complete enough and correct enough knowledge base to provide high-level performance. This paper proposes a new class of knowledge-based systems designed to address this knowledge-acquisition bottleneck by incorporating a learning component to acquire new knowledge through experience. In particular, we define a Learning Apprentice System as a knowledge-based system that provides interactive aid in solving some problem, and that acquires new domain knowledge by generalizing from training examples acquired through the normal course of its use. This paper describes a specific Learning Apprentice System, called LEAP, which is presently being developed in the domain of VLSI design. We also discuss design issues for Learning Apprentice Systems more generally, as well as restrictions on the generality of our current approach. %A Tom M. Mitchell %A Lou I. Steinberg %A Jeffrey S. Shulman %T A Knowledge-Based Approach to Design %R Technical Report LCSR-TR-65 %I Rutgers University, Department of Computer Science %D January 1985 %O forthcoming %A Sridhar Mahadevan %T Verification-Based Learning: A Generalization Strategy for Inferring Problem-Decomposition Methods %R Technical Report LCSR-TR-66 %I Rutgers University, Department of Computer Science %D January 1985 %X A new technique for learning problem-solving operators from examples is described in this paper. The technique is illustrated by demonstrating how it can be used to acquire a specific type of operator, namely a problem-decomposition method. The important notion of a ``verification'' is introduced as a basis for obtaining general problem-decomposition methods from single examples. Two detailed examples of the application of the technique from the domains of Circuit Design and Symbolic Integration illustrate the generality of the technique. %A Naftaly H. Minsky %A Abe Lockman %T Extending Authorization by Adding Obligations to Permissions %R Technical Report LCSR-TR-67 %I Rutgers University, Department of Computer Science %D January 1985 %D February 1985 %X Conventional authorization mechanisms provide actors with permissions to act, without the actor ever incurring any obligations as a result of executing the permitted action. There exist, however, many situations where system integrity requires that certain actions always be followed by others, within some reasonable time frame. We propose an extension to conventional authorization that allows the explicit association of obligations with permissions, and enforces them. We demonstrate that the extended mechanism can be used to support and enforce several general types of control policies and integrity constraints which are otherwise difficult or impossible to support. %A Naftaly H. Minsky %A D. Rozenshtein %A Jan Chomicki %T Unifying the Use and Evolution of Database Systems: A Case Study in Prolog %R Technical Report LCSR-TR-68 %I Rutgers University, Department of Computer Science %D February 1985 %X This paper presents a model that provides a single uniform mechanism to control the use, operation and evolution of database systems. This model unifies several concepts generally considered to be quite distinct. 
In particular, it minimizes the formal distinction between the users of the database, the programs embedded in it, and even the programmers maintaining it. Furthermore, under this model, the concepts of subschema and of program module are replaced with a single concept of frame which serves as the locus of power and of activity in the database system. Moreover, the proposed control mechanism is closed, in the sense that the process of establishing control is itself controllable by the same mechanism. .lp Besides its considerable conceptual parsimony, this model supports the formalization of a wide variety of managerial policies regarding the operation and evolution of database systems, and has a potential to enhance their reliability. %A M. D. Grigoriadis %A B. Kalantari %T A Lower Bound to the Complexity of Euclidean and Rectilinear Matching Algorithms %R Technical Report LCSR-TR-69 %I Rutgers University, Department of Computer Science %D February 1985 %O replaces LCSR-TR-59 %X The worst-case time complexity of any exact algorithm for the Euclidean or rectilinear minimum-weight perfect matching problem, which takes as input the list of coordinates of n points in ????, is shown to be bounded below by the infimum of the worst-case time complexities of all algorithms which sort n real numbers. This result also applies to any heuristic algorithm for which the worst-case ratio of the weight of the approximate matching it produces to the weight of the optimal matching depends only upon n. %A A. Borgida %T Language Features for Flexible Handling of Exceptions in Information Systems %R Technical Report LCSR-TR-70 %I Rutgers University, Department of Computer Science %D March 1985 %X We present an exception handling facility suitable for languages used to implement database-intensive Information Systems. Such a mechanism facilitates the development and maintenance of more flexible software systems by supporting the abstraction of details concerning special or abnormal occurrences. We consider the type constraints imposed by the schema and various semantic integrity assertions to be normalcy conditions, and the key contribution of this work is to allow exceptions to these constraints to persist. To achieve this, we propose solutions to a range of problems, including the sharing of exceptional information, computing with exceptional data, implementation issues, accountability, exception handling by users, and the logic of constraints with exceptions. We also illustrate the use of exception handling in dealing with null values, estimates, and measurements. %A Jan Chomicki %A Naftaly H. Minsky %T Towards a Programming Environment for Large Prolog Programs %R Technical Report LCSR-TR-71 %I Rutgers University, Department of Computer Science %D June 1985 %X We discuss here two issues of programming in-the-large in Prolog: modularization and system evolution. We propose a mechanism which controls both module interactions and programmers' actions. In this way, certain invariants of a large Prolog program can be maintained not only during a single execution, but also throughout the lifetime of the program. Traditional constructs, such as scope rules and modules, can be modelled in our framework. Moreover, various kinds of managerial policies in software projects can be enforced. An experimental programming environment, called Elog, incorporating our ideas, has been implemented (in Prolog). %A Van E.
Kelly %T The Critter System -- An Artificial Intelligence Approach to Digital Circuit Design Critiquing %R Technical Report LCSR-TR-72 %I Rutgers University, Department of Computer Science %D June 1985 %X The CRITTER system is an exploratory prototype digital circuit design aid, built using Artificial Intelligence technology, for comprehensive 'critiquing' of digital circuit designs. This critiquing encompasses issues of a circuit's functional correctness, operating speed, timing robustness, and sensitivity to changes in device parameters. In contrast to most circuit design aids, CRITTER is not only concerned with modeling circuit performance, but also with reporting the results of that modeling in the light of the concerns of the circuit designer, i.e., concisely summarizing only those findings it judges to be unexpected or unusual. .lp CRITTER uses a non-procedural, applicative language to describe real-time circuit behavior, and a predicate calculus notation for expressing an exhaustive set of specifications a circuit must satisfy, including both user-specified requirements and constraints arising as side effects of implementation choices. CRITTER evaluates the satisfaction of all these specifications by a process of symbolic constraint propagation, a generalization of traditional constraint propagation systems which employs symbolic algebra. By detecting, comparing and classifying all the violated specifications, CRITTER tries to pinpoint design errors and performance bottlenecks. Additionally, CRITTER has a performance bounding approach to evaluating timing specifications in the presence of uncertain timing information. .lp CRITTER has been used to validate circuits consisting of up to a dozen TTL SSI/MSI packages. It successfully detected a marginal timing design in one of these that had been missed during less formal design evaluations. CRITTER is currently being adapted for use in an automated VLSI design environment. %A B. Kalantari %T Quadratic Functions with Exponential Number of Local Maxima %R Technical Report LCSR-TR-73 %I Rutgers University, Department of Computer Science %D August 1985 %X We demonstrate complexity in quadratic optimization in the following sense: For all natural numbers n and k > 2, we construct a quadratic function f with n x k variables and exhibit 2^n vertices of the unit hypercube of dimension n x k over which f takes distinct values and where each vertex is a strong local maximum of f in the continuous sense as well as in the discrete sense. %A M. D. Grigoriadis %A B. Kalantari %T A Good Heuristic for the Chinese Postman Problem %R Technical Report LCSR-TR-74 %I Rutgers University, Department of Computer Science %D August 1985 %X We present a two-phase heuristic for the Chinese postman problem over undirected connected edge-weighted graphs of arbitrary density. The first phase is a generalization of Papadimitriou's minimum spanning tree heuristic for the minimum perfect matching problem on complete graphs satisfying the triangle inequality. This phase obtains an approximate solution for which the weight of the "duplicate" edge traversals does not exceed (n - k/2) times that in the optimal solution, where n is the total number of vertices in the given graph and k is the number of odd vertices. We show that this bound is tight in the worst case. The second phase improves upon this approximation by solving a minimum cost transshipment problem.
Our experimental results indicate that, for graphs of various sizes and densities, the two-phase heuristic, which is much faster than the best implementation of Edmonds and Johnson's exact algorithm, consistently produces approximate solutions with very small relative errors. %A Robert Paige %A Shaye Koenig %T Finite Differencing of Computable Expressions %R Technical Report LCSR-TR-8 %I Rutgers University, Department of Computer Science %D August 1980 (revised December 1981) %D January 1982 %X One of the fundamental problems of protection is the exercise of control over the movement of rights in a system. The Take-Grant model, which captures certain essential aspects of several popular protection schemes, suffers from a serious limitation in its implementation of such control: any channel established for the movement of rights may be reversed. This property limits the applicability of this model and, by implication, the protection schemes which it models. We analyze the nature and ramifications of this limitation and demonstrate that it is due to the manner in which the GRANT operation is authorized by the grant right. We propose an alternative, "Receive-Right" model in which this limitation is eliminated, thus enabling the implementation of more useful protection disciplines. The properties of the Receive-Right model are discussed and results are presented on the analysis of the dynamic behavior of its protection states. %A Robert Paige %T ? %R Technical Report LCSR-TR-9 %I Rutgers University, Department of Computer Science %D January 1981 %O forthcoming %A L. Thorne McCarty %T The Applications of Artificial Intelligence to Law: A Survey of Six Current Projects %R Technical Report LRP-TR-10 %I Rutgers University, Department of Computer Science %D August 1981 %D January 1982 %X This paper is a survey of six current projects on the applications of artificial intelligence to legal problem domains, as presented in a panel discussion at the 1981 National Computer Conference. .lp Two of the projects are concerned primarily with practical applications: Hafner, represented in Section I, has explored the use of a conceptual knowledge-base in an enhanced legal retrieval system, and Sprowl, in Section II, has developed and tested a system which assists an attorney in the drafting of routine legal documents. Several of the projects have considered the general problem of designing a language in which legal rules and legal concepts might be easily expressed: the LEGOL project (Section IV) and the TAXMAN project (Section V) fit within this category, as does the work of Meldman (Section III) on the design of a system for computer-aided legal analysis. Finally, two of the projects are exploring some general theoretical issues about the legal process: McCarty and Sridharan (Section V) are attempting to understand the patterns of argument that appear in a contested corporate tax case, and Waterman and Peterson (Section VI) are attempting to understand the decision-making procedures of attorneys and claims adjusters in product liability cases. %A L. Thorne McCarty %A N. S. Sridharan %T The Representation of an Evolving System of Legal Concepts: II. Prototypes and Deformations %R Technical Report LRP-TR-11 %I Rutgers University, Department of Computer Science %D August 1981 %D January 1982 %X One of the principal goals of the TAXMAN project is to develop a theory about the structure and dynamics of legal concepts, using corporate tax law as an experimental problem domain.
In this paper we describe the "prototype-plus-deformation" model of legal conceptual structure: a concept is represented here by a prototypical description plus a sequence of deformations of the prototype, where the deformations are selected from among the possible "mappings" of one concrete description into another. The paper focuses on the set of mappings, which is the most important component of the model because it makes manifest the basic coherence of the conceptual space. The syntax and semantics of the mappings are described, and their role in the process of legal argument is suggested. The formal model is then illustrated by examples drawn from Eisner v. Macomber, an early stock dividend case. %A L. Thorne McCarty %A N. S. Sridharan %T The Representation of Conceptual Structures of TAXMAN II. Part One: Logical Templates %R Technical Report LRP-TR-4 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X This report presents some basic information about the organization of the TAXMAN II system, as of January, 1980. We assume that the reader is familiar with the general outline of the TAXMAN project, as described in McCarty (1980) and McCarty, Sridharan and Sangster (1979). In particular, we assume a familiarity with the basic distinction between Logical Template structures and Prototype-plus-Deformation structures, of which only the former will be discussed in these pages. An abbreviated version of this report will be presented at the Third National Conference of the Canadian Society for Computational Studies of Intelligence, May 14-16, 1980, under the title: "The Representation of an Evolving System of Legal Concepts: I. Logical Templates." .lp The report is divided into four sections. Section I describes AIMDS, the basic language for the TAXMAN II implementation, and compares this language to several other languages in the current AI literature. Section II describes an extension of AIMDS which is unique to TAXMAN II: a set of descriptions (called DDNs and PDNs) which can be arranged in an abstraction/expansion hierarchy to represent a complex system of concepts. Section III then describes the pattern-matcher for this abstraction/expansion hierarchy, with emphasis on our current techniques for incorporating several diverse pattern-matching strategies into a single system. Finally, Section IV outlines our current work on the semantics of the TAXMAN II domain, and explains the relationship between our present representation of Logical Template structures and our projected representation of Prototype-plus-Deformation structures. Part Two of this report (which has not yet been written) will discuss these latter structures in greater detail. %A D. Brody %T The Post-Macomber Cases in a TAXMAN II Framework: A Preliminary Analysis %R Technical Report LRP-TR-5 %I Rutgers University, Department of Computer Science %D March 1980 %D January 1982 %X In his revised proposal for Taxman II (The Implementation of Taxman II: An Experiment in Artificial Intelligence and Legal Reasoning) (hereinafter cited as "McCarty"), L. Thorne McCarty examined the arguments of Justices Pitney and Brandeis concerning the nature of taxable income under the 16th amendment to the United States Constitution. These arguments were seen to exhibit respectively what McCarty called template and prototype-plus-deformation form.
It was hoped that analysis of the non-statutory reorganization cases and the later stock dividend cases would show repetition of these styles of argument and provide evidence of the process of conceptual change. "Most significantly, we hope to be able to see in greater detail why some conceptual structures work well, and others work poorly; why some arguments are persuasive, and others are not." %A L. Thorne McCarty %T Some Notes on the MAP Formalism of TAXMAN II, with Applications to Eisner v. Macomber %R Technical Report LRP-TR-6 %I Rutgers University, Department of Computer Science %D May 1980 %D January 1982 %X We have talked for some time within the TAXMAN project about a magical entity called a MAP, but our specification of the structure and function of this entity has so far been exceedingly vague. These notes are an attempt to fix some ideas about the MAP formalism as they occur to me at the present time (early November, 1979). I will try to be fairly precise about the syntax and semantics of MAPs, and I will try to analyze the case of Eisner v. Macomber within this framework. I think we are now close to having a computational theory of the Macomber case worked out in full, and I will try to set this down as completely as possible. %A L. Thorne McCarty %A N. S. Sridharan %T The Representation of an Evolving System of Legal Concepts: I. Logical Templates %R Technical Report LRP-TR-7 %I Rutgers University, Department of Computer Science %D March 1980 %D January 1982 %X Although our earlier work on the TAXMAN Project (McCarty, 1977) has demonstrated the basic feasibility of applying artificial intelligence techniques to the field of corporate tax law, the original TAXMAN system was seriously deficient as a model of "legal reasoning". More recently (McCarty, Sridharan and Sangster, 1979), we have proposed an alternative model of conceptual structure, and an approach to the process of conceptual change, in an attempt to remedy these deficiencies. In the TAXMAN II system, which is currently under development, we distinguish between two different kinds of legal concepts. Precise statutory rules are represented as logical templates, a term intended to suggest the way in which a "logical" pattern is "matched" to a lower-level factual network during the analysis of a corporate tax case. But the more amorphous concepts of corporate tax law, the concepts typically constructed and reconstructed in the process of a judicial decision, are represented by a prototype and a sequence of deformations of the prototype. The prototype is a relatively concrete description selected from the lower-level factual network itself, and the deformations are selected from among the possible mappings of one concrete description into another. We have suggested that these prototype-plus-deformation structures play a crucial role in the process of legal argument, and that they contribute a degree of stability and flexibility to a system of legal concepts that would not exist with the template structures alone (see McCarty, 1980). %A L. Thorne McCarty %T Some Requirements for a Computer-based Legal Consultant %R Technical Report LRP-TR-8 %I Rutgers University, Department of Computer Science %D July 1980 %D January 1982 %X Although the literature on computer-based consultation systems has often suggested the possibility of building an expert system in the field of law (see, e.g., Buchanan and Headrick, 1970), it is only recently that several AI researchers have begun to explore this possibility seriously.
Recent projects include: the development of a computational theory of legal reasoning, using corporate tax law as an experimental problem domain (McCarty, 1977; McCarty, 1980; McCarty and Sridharan, 1980); the development of a language for expressing legal rules within a data-base management environment (Jones, Mason and Stamper, 1979; Stamper, 1980); the design of an information retrieval system based on a computer model of legal knowledge (Hafner, 1978); and the design of an artificial intelligence system to analyze simple tort cases (Meldman, 1975; Meldman, 1977). .lp This paper attempts to identify the principal obstacles to the development of a legal consultation system, given the current state of artificial intelligence research, and argues that there are only certain areas of the law which are amenable to such treatment at the present time. The paper then suggests several criteria for selecting the most promising areas of application, and indicates the kinds of results that might be expected, using our current work on the TAXMAN project (McCarty, Sridharan and Sangster, 1979) as an example. %A L. Thorne McCarty %T A Computational Theory of Eisner v. Macomber %R Technical Report LRP-TR-9 %I Rutgers University, Department of Computer Science %D October 1980 %D January 1982 %X This article pulls together some of the technical ideas from our earlier papers on the TAXMAN project, and fashions them into a coordinated analysis of Eisner v. Macomber, 252 U.S. 189 (1920), the early stock dividend case. I discussed this case in detail in (McCarty, 1980), but the analysis there was quite loose: the paper spoke vaguely of "templates" and "prototypes", of invariants and "transformations", without attempting to give these concepts a rigorous specification as computer data structures. Another earlier paper (McCarty and Sridharan, 1980) described somewhat more rigorously our current representation of the logical template structures of TAXMAN II, but made no attempt to define what we meant by prototypes and deformations. In this paper I will attempt to remedy these deficiencies in a modest way. I will start with the representational machinery of (McCarty and Sridharan, 1980), and add just enough of the "prototype-plus-deformation" machinery to represent the basic patterns of legal argument observed in (McCarty 1980). The result, I claim, is a computational theory of Eisner v. Macomber: a specification of the initial cognitive structures adopted by the participants in the case, a specification of the strategic choices made at each step in the construction of the arguments, and a specification of the final positions set forth by Justice Pitney and Justice Brandeis for the majority and the dissent. .lp Several caveats are in order. First, given the space limitations for these conference proceedings, it is not possible to write an entirely self-contained paper. I will thus assume that the reader has access to (McCarty and Sridharan 1980) for the details of the TAXMAN II representation, and I will include here only a brief outline of the essential points. I will also assume that the reader is generally familiar with the discussion of Eisner v. Macomber in (McCarty 1980), so that I may proceed directly to the computational version. Second, although the theory outlined in this paper is intended to be implemented in a computer program, our implementation is not, at the time of writing, complete.
The present theory is therefore only a "hand simulation": it poses computational problems at critical points (e.g., "generate a data structure with the following characteristics...") and then, without actually performing the computation, it exhibits the data structures which Justice Pitney and Justice Brandeis seem to have constructed in the analogous situations. Given the lead time required for the preparation of this paper, however, we hope to have an initial implementation available at the time of the conference, and it should be interesting at that time to compare the present hand simulation with the real thing.