Rule Based System Research Papers

Machine learning is an emerging field of computer science concerned with learning knowledge by exploring already stored data. However, effective utilization of the extracted knowledge is an important issue. Extracted knowledge may be best utilized by feeding it into a knowledge-based system. To this end, the work reported in this paper is based on a novel idea to enhance the productivity of previously developed systems. This paper presents the proposed architecture of a Learning Apprentice System for a Medical Billing system being developed for medical claim processing. A new dimension is added whereby the processes of extracting and utilizing knowledge are implemented in a relational database environment for improved performance. This opens enormous application areas, as most business data is in relational format managed by some relational database management server. The major components of the proposed system include the knowledge base, rule engine, knowledge editor, and data mini...
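
To make the idea of extracting and reusing knowledge inside a relational database concrete, here is a minimal sketch (not the system described above): a simple validation rule is derived from historical claim rows and stored in a rules table so a knowledge-based component can reuse it. The schema, column names, and the max-charge heuristic are invented for illustration.

```python
# Hedged sketch, not the paper's system: "learning" a simple validation rule
# from already stored claim data and feeding it into a rules table so a
# knowledge-based component can reuse it. Schema and heuristic are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (cpt_code TEXT, charge REAL, rejected INTEGER);
CREATE TABLE rules  (description TEXT, predicate TEXT);
INSERT INTO claims VALUES
  ('99213', 75.0, 0), ('99213', 900.0, 1), ('99213', 80.0, 0);
""")

# Derive an upper charge bound per CPT code from historically accepted claims
# and store it as a reusable rule.
for cpt, max_ok in conn.execute(
        "SELECT cpt_code, MAX(charge) FROM claims WHERE rejected = 0 GROUP BY cpt_code").fetchall():
    conn.execute("INSERT INTO rules VALUES (?, ?)",
                 (f"charge above observed maximum for {cpt}",
                  f"cpt_code = '{cpt}' AND charge > {max_ok}"))

print(conn.execute("SELECT * FROM rules").fetchall())
```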

ONTO-H is a semi-automatic collaborative tool for the semantic annotation of documents, built as a Protege 3.0 tab plug-in. Among its multiple functionalities aimed at easing the document annotation process, ONTO-H uses a rule-based system to create cascading annotations from a single drag-and-drop operation from a part of a document onto an already existing concept or instance of the domain ontology being used for annotation. It also supports the detection of name conflicts and instance duplications in the creation of the annotations. The rule system runs on top of the open source rule engine DROOLS and is connected to the domain ontology used for annotation by means of an ad hoc Java proxy.
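
As an illustration of what cascading annotations from a single drag-and-drop can look like, here is a toy forward-chaining sketch in Python rather than DROOLS; the concept names and the single rule are invented and do not come from ONTO-H.

```python
# Illustrative sketch only: a tiny forward-chaining cascade in the spirit of
# ONTO-H's DROOLS rules. Concept names and the rule set are invented examples.
def cascade(annotation, rules):
    """Expand one drag-and-drop annotation into the set of implied annotations."""
    agenda, produced = [annotation], set()
    while agenda:
        fact = agenda.pop()
        if fact in produced:
            continue
        produced.add(fact)
        for condition, consequence in rules:
            if condition(fact):
                agenda.append(consequence(fact))
    return produced

# Example rule: annotating a fragment as a 'Building' also annotates it as a
# 'Construction' (a hypothetical super-concept in the domain ontology).
rules = [
    (lambda f: f[1] == "Building", lambda f: (f[0], "Construction")),
]
print(cascade(("fragment-17", "Building"), rules))
```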

Rule-based systems belong to a well-established branch of Artificial Intelligence. So far thousands of rule-based systems and related systems have been built and successfully used. Recently a rule-based engine was designed and developed using Structured Query Language and applied in the medical claim processing domain. The rule engine has been integrated with medical billing software to identify billing errors in medical claims in real time. Performance of the engine has been good, giving promising results. To further improve the efficiency of the system and to exploit the power of rule-based system techniques, enhancements to the existing rule-based engine are proposed in this research paper. Besides explaining the design of the new rule-based engine, this paper also reviews the design of the current engine, which is already in operation, and the overall architecture of the whole system. The enhanced rule engine proposed here can be implemented in any domain ...
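
A minimal sketch of the general pattern, rules stored as SQL predicates and evaluated against claim rows, is shown below; it assumes an invented schema and is not the engine described in the paper.

```python
# Minimal sketch (invented schema, not the engine described above): rules are
# stored as SQL predicates over a claims table, and every claim matching a
# rule's error predicate is flagged at submission time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (claim_id INTEGER, cpt_code TEXT, units INTEGER, charge REAL);
CREATE TABLE rules  (rule_id INTEGER, description TEXT, predicate TEXT);
INSERT INTO claims VALUES (1, '99213', 1, 75.0), (2, '99213', 0, 75.0);
INSERT INTO rules  VALUES (1, 'units must be positive', 'units <= 0');
""")

# The engine is just one query per rule; a production system would validate
# stored predicates rather than interpolating them directly into SQL.
for rule_id, description, predicate in conn.execute("SELECT * FROM rules").fetchall():
    for (claim_id,) in conn.execute(f"SELECT claim_id FROM claims WHERE {predicate}"):
        print(f"claim {claim_id} violates rule {rule_id}: {description}")
```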

A system is described for applying hierarchical unsupervised neural networks (self-organizing feature maps) to the intruder detection problem. Specific emphasis is given to the representation of time and the incremental development of a hierarchy. Preliminary results are given for the DARPA 1998 Intrusion Detection Problem.
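
For readers unfamiliar with self-organizing feature maps, the following is a small NumPy sketch of plain SOM training on feature vectors; the map size, learning schedule, and random "connection" features are illustrative assumptions and do not reproduce the hierarchical, time-aware setup of the paper.

```python
# A minimal self-organizing map sketch (NumPy only), loosely in the spirit of
# the paper's hierarchy of SOMs over connection features. Map size, schedule,
# and data are illustrative assumptions, not the DARPA-98 configuration.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=6, cols=6, epochs=20, lr=0.5, sigma=2.0):
    """Train a 2-D SOM; returns a weight grid of shape (rows, cols, n_features)."""
    w = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)
        for x in data:
            # Best-matching unit for this feature vector.
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
            dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2 * (sigma * decay) ** 2))[..., None]
            w += lr * decay * h * (x - w)
    return w

# Hypothetical connection features, e.g. sliding-window counts over time.
connections = rng.random((200, 4))
som = train_som(connections)
print(som.shape)
```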

One of the main problems in building rule-based systems is that the knowledge elicited from experts is not always correct. Therefore there is a need for means of revising the rule base whenever an inaccuracy is discovered. Rule base revision is the problem of how best to revise a deficient rule base using information contained in cases that expose inaccuracies. The revision process is very sensitive to implicit and explicit biases that are encoded in the specific revision algorithm employed. In a sense, each revision algorithm must provide two forms of bias. The first bias governs the preferred location in the rule base for the correction, while the second bias governs the type of correction performed. In this paper we present a system for incremental revision of rule bases called FRST (Forward chaining Revision SysTem). This system enables the user to analyze the impact of different revisions and to select the most appropriate revision operator. The user provides t...
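
The two biases can be illustrated with a toy enumeration of candidate revisions, where the location bias picks which rule to edit and the type bias picks specialization versus generalization; the rule format and candidate generation below are simplifying assumptions, not FRST itself.

```python
# Illustrative sketch of the two revision biases the abstract describes:
# where to revise (which rule) and how (specialize vs. generalize). The rule
# format and candidate generation are simplifying assumptions, not FRST.

def fires(rule, case):
    ok = {"==": lambda a, b: a == b, "!=": lambda a, b: a != b}
    return all(ok[op](case.get(attr), val) for attr, op, val in rule["if"])

def candidate_revisions(rules, bad_case):
    """Single-edit revisions that could stop a rule from misfiring on bad_case."""
    for i, rule in enumerate(rules):          # location bias: which rule to touch
        if not fires(rule, bad_case):
            continue
        # Type-of-correction bias: specialize so the rule excludes the bad case.
        for attr, value in bad_case.items():
            if all(attr != a for a, _, _ in rule["if"]):
                yield ("specialize", i,
                       {"if": rule["if"] + [(attr, "!=", value)], "then": rule["then"]})
        # Generalization (dropping conditions from rules that failed to fire)
        # would be enumerated analogously; omitted to keep the sketch short.

rules = [{"if": [("fever", "==", "yes")], "then": "flu"}]
bad_case = {"fever": "yes", "rash": "yes"}   # the expert says: measles, not flu
for kind, where, revised in candidate_revisions(rules, bad_case):
    print(kind, "rule", where, "->", revised)
```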

In this paper we describe how the Predictive Model Markup Language (PMML) standard enhances the JBoss Drools production rule engine with native support for using predictive models in business rules. The historic debate between symbolic and connectionist approaches to rule/model orchestration provides numerous examples of hybrid systems combining "hard" and "soft" computing techniques to achieve different levels of integration. Rules...
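
As a rough illustration of scoring a predictive model and consuming its output inside a business rule, here is a Python sketch over a drastically simplified PMML fragment; real PMML documents carry an XML namespace and far more structure, and Drools evaluates models natively rather than like this.

```python
# Hedged sketch: evaluate a (drastically simplified) PMML regression model and
# use its score as a fact in a business rule, analogous in spirit to the
# Drools/PMML integration described above. Field names and the threshold are
# invented examples.
import xml.etree.ElementTree as ET

PMML = """
<PMML version="4.2">
  <RegressionModel functionName="regression">
    <RegressionTable intercept="0.2">
      <NumericPredictor name="amount" coefficient="0.001"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

def score(pmml_xml, fields):
    table = ET.fromstring(pmml_xml).find(".//RegressionTable")
    y = float(table.get("intercept"))
    for p in table.findall("NumericPredictor"):
        y += float(p.get("coefficient")) * fields[p.get("name")]
    return y

# Business rule: route high-risk claims (per the predictive model) to review.
claim = {"amount": 1200.0}
if score(PMML, claim) > 1.0:
    print("route claim to manual review")
```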

Many learning algorithms have been developed to solve various problems. Machine learning practitioners must use their knowledge of the merits of the algorithms they know to decide which to use for each task. This process often raises questions such as: (1) If performance is poor after trying certain algorithms, which should be tried next? (2) Are some learning algorithms essentially the same in terms of actual task classification? (3) Which algorithms are most different from each other? (4) How different? (5) Which algorithms should be tried for a particular problem? This research uses the COD (Classifier Output Difference) distance metric to measure how similar or different learning algorithms are. The COD quantifies the difference in output behavior between pairs of learning algorithms. We construct a distance matrix from the individual COD values and use the matrix to show the spectrum of differences among families of learning algorithms. Results show that individual algorithms tend to...
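
The metric itself is simple to state: the COD between two algorithms is the fraction of test instances on which their predictions disagree. A small sketch with invented prediction vectors:

```python
# Sketch of the COD (Classifier Output Difference) idea: the distance between
# two learning algorithms is the fraction of test instances on which their
# predictions disagree. The toy prediction vectors below are invented.
import numpy as np

def cod(pred_a, pred_b):
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

predictions = {
    "naive_bayes":   [0, 1, 1, 0, 1, 0],
    "decision_tree": [0, 1, 0, 0, 1, 1],
    "knn":           [0, 1, 1, 0, 0, 0],
}

names = sorted(predictions)
matrix = np.array([[cod(predictions[a], predictions[b]) for b in names] for a in names])
print(names)
print(matrix)   # symmetric distance matrix; could feed hierarchical clustering
```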

This paper describes a method of analysing rule-based systems, which models the procedural semantics of such languages. Through a process of 'abstract interpretation', the program, AbsPS, derives a description of the mapping between a rule base's inputs and outputs. In contrast to earlier approaches, AbsPS can analyse the effects of: conflict resolution, closed-world negation and the retraction of facts. This considerably reduces the size of the search space because, in the abstract domain, AbsPS takes advantage of the very same control information which guides the inference engine in the concrete domain. AbsPS can detect redundancies which would be missed if the procedural semantics were ignored. Furthermore, the abstract description of a rule base's input-output mapping can be used to prove that the rule base meets its specification.
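
To give a flavor of this style of analysis, the sketch below over-approximates the facts derivable from the declared inputs and flags rules that can never fire; unlike AbsPS, it deliberately ignores conflict resolution, closed-world negation, and retraction.

```python
# Very small sketch of the idea behind analysing a rule base abstractly:
# over-approximate the facts derivable from the declared inputs and flag rules
# that can never fire. AbsPS also models conflict resolution, closed-world
# negation, and retraction, none of which appears here.
def possibly_derivable(input_facts, rules):
    derivable = set(input_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= derivable and conclusion not in derivable:
                derivable.add(conclusion)
                changed = True
    return derivable

rules = [({"a", "b"}, "c"),
         ({"c"}, "d"),
         ({"z"}, "e")]          # 'z' is never an input: this rule is dead
inputs = {"a", "b"}
facts = possibly_derivable(inputs, rules)
dead = [r for r in rules if not set(r[0]) <= facts]
print("derivable:", facts)
print("dead rules:", dead)
```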

This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing. Given a subtracted digital angiogram of the chest, ANGY identifies and isolates the coronary vessels, while ignoring any nonvessel ...

This paper describes a comparison between a statistical and a rule-based MT system. The first section describes the setup and the evaluation results; the second section analyses the strengths and weaknesses of the respective approaches, and the third tries to define an architecture for a hybrid system, based on a rule-based backbone and enhanced by statistical intelligence. This contribution originated in a project called "Translation Quality for Professionals" (TQPro), which aimed at developing translation tools for professional translators. One of the interests in this project was to find a baseline for machine translation quality, and to extend MT quality beyond it. The baseline should compare state-of-the-art techniques for both statistical packages and rule-based systems, and draw conclusions from the comparison. This paper presents some insights into the results of this work. The experiment was to compare the state-of-the-art quality of MT, and it...

This paper investigates attributes of user search behavior and develops ‘search satisfaction metrics’ to determine user satisfaction with a search engine. Unlike most previous Web studies that have analyzed user behavior through search engine logs, this work focuses primarily on remotely observing the user as he queries the search engine. A survey was conducted to identify patterns of search among a large sample of...

The semantic web is expected to have an impact at least as big as that of the existing HTML-based web, if not greater. However, the challenge lies in creating this semantic web and in converting existing web information into the semantic paradigm. One of the core technologies that can help in the migration process is automatic markup, the semantic markup of content, providing the semantic tags to describe the raw content. This paper describes a hybrid statistical and knowledge-based information extraction model, able to extract entities and relations at the sentence level. The model attempts to retain and improve the high accuracy levels of knowledge-based systems while drastically reducing the amount of manual labor by relying on statistics drawn from a training corpus. The implementation of the model, called TEG (Trainable Extraction Grammar), can be adapted to any IE domain by writing a suitable set of rules in a SCFG (Stochastic Context Free Grammar) based extraction language, and t...
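
The flavor of SCFG-driven extraction can be sketched with NLTK's PCFG and Viterbi parser, where hand-written rules carry probabilities (in TEG these would be estimated from the training corpus); the grammar and sentence below are toy examples, not TEG's extraction language, and the sketch requires the nltk package.

```python
# Sketch of SCFG-style extraction: grammar rules with probabilities pick the
# best parse of a sentence, and the labelled subtrees give the extracted
# entities. Toy grammar and sentence; not TEG's extraction language.
from nltk import PCFG
from nltk.parse import ViterbiParser

grammar = PCFG.fromstring("""
S -> PERSON REL ORG   [1.0]
REL -> WORKS FOR      [1.0]
WORKS -> 'works'      [1.0]
FOR -> 'for'          [1.0]
PERSON -> 'alice'     [0.6]
PERSON -> 'bob'       [0.4]
ORG -> 'acme'         [0.7]
ORG -> 'initech'      [0.3]
""")

parser = ViterbiParser(grammar)
for tree in parser.parse("alice works for acme".split()):
    print(tree)   # the PERSON/ORG subtrees are the extracted entities
```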

In this paper the conversion of a spaghetti data set into a topologically structured topographic base map is discussed. The first step, called structuring, comprises node computation and the handling of overshoots, undershoots and overlapping line segments. Node computation is a computational geometry problem driven by tolerances and a weighting scheme. The next step is placing additional lines by hand to close certain areas. Finally, area features are classified using a rule-based system. Specifically, the rule-based classification of the areas has been implemented, tested and compared to human classification. In these experiments automatic classification leads to a speed-up by a factor of 2 while maintaining classification performance similar to manual classification.
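
A minimal sketch of rule-based area classification: each closed area is described by a few attributes and the first matching rule assigns its class. The attributes, thresholds, and classes below are invented and not taken from the paper.

```python
# Minimal sketch of rule-based area classification over closed areas of a
# topographic base map. Attributes, thresholds, and classes are illustrative.
AREA_RULES = [
    (lambda a: a["boundary_type"] == "water", "water"),
    (lambda a: a["area_m2"] < 500 and a["compactness"] > 0.7, "building"),
    (lambda a: a["area_m2"] >= 500 and a["has_road_centreline"], "road"),
    (lambda a: True, "terrain"),                       # default class
]

def classify_area(area):
    """Return the class of the first rule whose condition the area satisfies."""
    for condition, label in AREA_RULES:
        if condition(area):
            return label

area = {"boundary_type": "line", "area_m2": 320.0,
        "compactness": 0.82, "has_road_centreline": False}
print(classify_area(area))   # -> 'building'
```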

This paper discusses successive rule refinement as a method for belief and evidence fusion. The set of “beliefs” is encoded in rule form (disjunctive normal form) and is the main basis for decision-making. These beliefs in a rule-based system are then confirmed, modified, challenged, or left unsupported by the “evidence” available. New evidence that does not figure in any existing belief is assimilated as a new belief. This fusion of belief and evidence is done through successive rule refinement. Evidence is in the form of raw data which have to be converted into rule form so that they can be integrated with the existing beliefs about the domain. Converting evidence into rule form is done through a rule extraction system that trains a neural network using the available evidence and extracts rules from the network once it has been sufficiently trained. From the experiments conducted to demonstrate the applicability of the approach, it can be seen that the system’s set of belief...
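
The refinement step alone can be sketched as follows, assuming evidence has already been converted to rule form: each evidence rule confirms a matching belief, challenges a contradictory one, or is assimilated as a new belief. The neural-network rule-extraction stage is not shown, and the belief representation is an assumption made for this sketch.

```python
# Sketch of the successive-refinement step only: beliefs are conjunctive rules
# (a set of attribute=value conjuncts with a conclusion), and evidence already
# converted to rule form either confirms, challenges, or extends the belief set.
def refine(beliefs, evidence_rule):
    conds, concl = evidence_rule
    for belief in beliefs:
        if belief["if"] == conds:
            if belief["then"] == concl:
                belief["support"] += 1            # confirmed by the evidence
            else:
                belief["challenged"] = True       # contradicted by the evidence
            return beliefs
    beliefs.append({"if": conds, "then": concl,   # assimilated as a new belief
                    "support": 1, "challenged": False})
    return beliefs

beliefs = [{"if": frozenset({("fever", "yes")}), "then": "flu",
            "support": 3, "challenged": False}]
refine(beliefs, (frozenset({("fever", "yes")}), "flu"))      # confirms
refine(beliefs, (frozenset({("rash", "yes")}), "measles"))   # new belief
for b in beliefs:
    print(b)
```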