Rule Based System Research Papers

2025, Journal of Emerging Technologies and Innovative Research

Due to the vast amount of information generated online, a huge number of opportunities and challenges are created for both consumers and internet users. Opinion Mining is the digital acknowledgement of public feelings, emotions, opinions and services. It also helps individual users, businesses and organizations to monitor what other people say about particular items, services, issues and topics. The objective of this paper is to exploit the wealth of user opinions collected from micro-blogging sites to analyse the semantic orientation of reviews by categorizing them into positive and negative classes, and to identify and classify negations and intensifiers along with slang words and emoticons.
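
As a concrete illustration of the kind of rule handling the abstract mentions (negations, intensifiers, slang, emoticons), here is a minimal lexicon-based sketch; the word lists, weights and thresholds are illustrative assumptions, not the paper's actual resources.

```python
# Illustrative lexicon-based polarity scoring with negation, intensifier,
# slang and emoticon handling (hypothetical lexicons, not the paper's own).
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "poor", "hate"}
SLANG = {"gr8": "great", "luv": "love"}
EMOTICONS = {":)": 1, ":(": -1}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "really": 1.5}

def classify(text: str) -> str:
    score, weight, negate = 0.0, 1.0, False
    for token in text.lower().split():
        token = SLANG.get(token, token)          # normalise slang
        if token in EMOTICONS:
            score += EMOTICONS[token]
            continue
        if token in NEGATIONS:
            negate = True                        # flip polarity of next hit
            continue
        if token in INTENSIFIERS:
            weight = INTENSIFIERS[token]         # boost next polar word
            continue
        polarity = 1 if token in POSITIVE else -1 if token in NEGATIVE else 0
        if polarity:
            score += (-polarity if negate else polarity) * weight
            negate, weight = False, 1.0
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("not really good :("))   # -> negative
```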

2025, Proceedings of the 2011 workshop on Predictive markup language modeling

In this paper we describe how the Predictive Model Markup Language (PMML) standard enhances the JBoss Drools production rule engine with native support for using predictive models in business rules. The historic debate between symbolic and connectionist approaches to rule/model orchestration provides numerous examples of hybrid systems combining "hard" and "soft" computing techniques to achieve different levels of integration. Rules are often used to decide when and which model to invoke; model outputs, in turn, can be used to evaluate the preconditions of a rule. In a loosely coupled system, the rule engine calls an external component implementing the predictive model, but this has several disadvantages, most notably the need to set up proper communications and reconcile any difference in the way the components encode the data. We propose, instead, a tightly integrated system where predictive models and rules become part of the same reasoning framework. The models, encoded using the PMML 4 standard, are loaded and processed by a compiler implemented using the rule engine itself. The PMML document is transformed into a set of facts that define the model, and a series of rules that formalize the model's behavior. In addition, most PMML data processing, validation, and transformation procedures are also implemented using auto-generated rules. Finally, in order to integrate model inputs and outputs seamlessly in the inference process, we exploit an extension of the Drools engine which adds native support for uncertainty and/or fuzziness.
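
The paper's compiler targets Drools; as a language-neutral illustration of the general idea (a model document compiled into facts plus a rule that evaluates it), here is a small Python sketch over a simplified PMML-like regression fragment. The element handling is an assumption for illustration, not the actual PMML 4 processing.

```python
# Sketch: turn a (very small) PMML-like regression model into facts plus a
# rule that evaluates it.  Hypothetical, simplified structure; not the
# Drools PMML compiler described in the paper.
import xml.etree.ElementTree as ET

PMML = """<RegressionModel functionName="regression">
  <RegressionTable intercept="0.5">
    <NumericPredictor name="age" coefficient="0.02"/>
    <NumericPredictor name="income" coefficient="0.0001"/>
  </RegressionTable>
</RegressionModel>"""

def compile_model(pmml_text):
    root = ET.fromstring(pmml_text)
    table = root.find("RegressionTable")
    # "facts" describing the model
    facts = {
        "intercept": float(table.get("intercept")),
        "predictors": {p.get("name"): float(p.get("coefficient"))
                       for p in table.findall("NumericPredictor")},
    }
    # "rule": given input facts, assert the model output as a new fact
    def rule(inputs):
        score = facts["intercept"] + sum(
            coef * inputs[name] for name, coef in facts["predictors"].items())
        return {"predictedValue": score}
    return facts, rule

facts, rule = compile_model(PMML)
print(facts["predictors"])                 # model structure as data
print(rule({"age": 40, "income": 30000}))  # {'predictedValue': 4.3}
```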

2025, AIP Conference Proceedings

The presence of outliers in a dataset can cause the outcome of classical statistical tools to be inaccurate, especially in a multivariate context, where researchers have to deal with either or both cellwise and casewise outliers. This study investigated the accuracy of the Multiple Discriminant Rule (MDR) when both cellwise and casewise outliers exist in a proportionate manner. Classical MDR (CMDR) was constructed using the classical sample mean (x̄) and sample covariance (S), while Robust MDR (RMDRHL) was constructed using the Hodges-Lehmann estimator (θ̂_HL) and a robust covariance (S_R). The simulation was carried out with cellwise outliers shifted in location value and casewise outliers involving location, covariance and dual influence. Based on the simulation results, although the performance of CMDR and RMDRHL is quite close when dealing with cellwise-location and casewise-location outliers, RMDRHL outperformed CMDR in both the cellwise-location with casewise-covariance and the cellwise-location with casewise-dual conditions. In summary, the use of θ̂_HL in robustifying MDR was competent even when dealing with an outlier percentage beyond its tolerance.
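
A rough sketch of how the same discriminant allocation rule can be instantiated with either classical or robust location/scatter estimates. The estimators below (the coordinatewise median as a crude robust stand-in) and the data are assumptions for illustration, not the study's θ̂_HL / S_R construction.

```python
# Sketch of a two-group discriminant rule where the location and scatter
# estimators are pluggable (classical mean/covariance vs. a robust pair).
# Illustrative only; the robust estimators here are stand-ins.
import numpy as np

def make_mdr(x1, x2, location=np.mean, scatter=np.cov):
    m1, m2 = location(x1, axis=0), location(x2, axis=0)
    s = (scatter(x1.T) + scatter(x2.T)) / 2.0          # pooled scatter
    s_inv = np.linalg.inv(s)
    def classify(x):
        d1 = (x - m1) @ s_inv @ (x - m1)                # squared distances
        d2 = (x - m2) @ s_inv @ (x - m2)
        return 1 if d1 <= d2 else 2
    return classify

rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, size=(100, 3))
g2 = rng.normal(2.0, 1.0, size=(100, 3))
cmdr = make_mdr(g1, g2)                                 # classical version
rmdr = make_mdr(g1, g2, location=np.median)             # crude robust stand-in
print(cmdr(np.array([1.8, 2.1, 1.9])), rmdr(np.array([0.1, -0.2, 0.3])))
```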

2025, Proceedings of the Annual Meeting of the Berkeley Linguistics Society

2025, Revista Iberoamericana de Automática e Informática Industrial RIAI

This work addresses the diagnosis of voltage events or disturbances recorded in distribution substations. These events arise from diverse causes, ranging from network faults to the starting of induction motors, transformer energisation and capacitor bank switching. The characterization of these voltage events is proposed on the basis of attributes extracted directly from the waveform, which are related to the physical phenomena associated both with the causes of the events and with their location relative to the measurement point. The relevance of these attributes has been studied by means of a multivariate analysis of variance (MANOVA). The most relevant attributes have been used to obtain classification rules through machine learning algorithms. The results were obtained using data from 484 real events and 38 simulated events.
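
A minimal sketch of the kind of classification rules such machine learning algorithms can produce over waveform attributes; the feature names and thresholds are hypothetical, not the rules learned in the paper.

```python
# Illustrative decision rules over waveform features for voltage-event
# diagnosis.  Feature names and thresholds are hypothetical examples of the
# kind of learned rules the paper derives, not its actual rules.
def classify_event(depth_pu, duration_s, inrush_2nd_harmonic, oscillatory):
    if oscillatory and duration_s < 0.05:
        return "capacitor bank switching"
    if inrush_2nd_harmonic > 0.15:
        return "transformer energisation"
    if 0.1 < depth_pu < 0.4 and duration_s > 0.5:
        return "induction motor starting"
    if depth_pu >= 0.4:
        return "network fault"
    return "unclassified"

print(classify_event(depth_pu=0.6, duration_s=0.08,
                     inrush_2nd_harmonic=0.02, oscillatory=False))
```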

2025, Lecture Notes in Computer Science

This paper proposes an approach to integrate a software product line (Aurora) with reflective rule-based business process modeling (RUMBA). Aurora is a service-oriented application development and execution platform supporting today's well known "Rich Internet Applications" and "Enterprise Internet Applications" concepts. On the other hand, RUMBA is a rule-based model in which rules and rule-sets can be expressed in terms of dynamic aspects and delegated facts. The proposed approach mainly addresses "Reflective Aspect" and "Reflective Rule" patterns for the seamless integration of Aurora and RUMBA. Both architectural patterns introduce a "generative" approach for developing the basic aspects, dynamic rules and rule-sets so that all can be implemented in the Adaptive Object Model (AOM). The proposed model will be explained in detail and exemplified with existing projects using both Aurora and RUMBA approaches.

2025, Perception & Psychophysics

Many researchers have proposed that, for the purpose of recognition, human vision parses shapes into component parts. Precisely how is not yet known. The minima rule for silhouettes (Hoffman & Richards, 1984) defines boundary points at which to parse but does not tell how to use these points to cut silhouettes and, therefore, does not tell what the parts are. In this paper, we propose the short-cut rule, which states that, other things being equal, human vision prefers to use the shortest possible cuts to parse silhouettes. We motivate this rule, and the well-known Petter's rule for modal completion, by the principle of transversality. We present five psychophysical experiments that test the short-cut rule, show that it successfully predicts part cuts that connect boundary points given by the minima rule, and show that it can also create new boundary points.

2025, Journal of Business Research

Armstrong, Green and Graefe (this issue) propose the Golden Rule in forecasting: "be conservative". According to the authors, the successful application of the Golden Rule comes through a checklist of 28 guidelines. While the authors of this commentary embrace the main ideas behind the Golden Rule, which aims to address the "average" situation, they believe that this rule should not be applied automatically. There is no universal extrapolation method that can tackle every forecasting problem; nor are there simple rules that automatically apply without reference to the data. Similarly, it is demonstrated that for a specific causal regression model the recommended conservative rule leads to unnecessary inaccuracy. In this commentary the authors demonstrate, using the power of counterexamples, two cases where the Golden Rule fails. Forecasting performance is context-dependent and, as such, forecasters (researchers and practitioners) should take into account the specific features of the situation faced.

2025, Journal of Mathematics

Named entity recognition (NER) is fundamental in several natural language processing applications. It involves finding and categorizing text into predefined categories such as a person's name, location, and so on. One of the most famous approaches to identifying named entities is the rule-based approach. This paper introduces a rule-based NER method that can be used to examine Classical Arabic documents. The proposed method relies on trigger words, patterns, gazetteers, rules, and blacklists generated from linguistic information about entities named in Arabic. The method operates in three stages: an operational stage, a preprocessing stage, and a rule application stage. The proposed approach was evaluated, and the results indicate that it achieved a 90.2% rate of precision, an 89.3% level of recall, and an F-measure of 89.5%. This new approach was introduced to overcome the challenges related to coverage in rule-based NER systems, especially when dealing with Classical Arabic.
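
A tiny sketch of one trigger-word/gazetteer/blacklist rule pass of the sort the pipeline describes. The lists below are illustrative stand-ins, not the paper's Classical Arabic resources, and the real system combines many more patterns and stages.

```python
# Sketch of a trigger-word / gazetteer / blacklist pass for rule-based NER.
# The word lists are tiny illustrative stand-ins.
PERSON_TRIGGERS = {"قال", "الإمام"}        # e.g. "said", "the Imam"
LOCATION_GAZETTEER = {"مكة", "بغداد"}      # Mecca, Baghdad
BLACKLIST = {"الله"}                        # never tag as PERSON

def tag(tokens):
    entities = []
    for i, tok in enumerate(tokens):
        if tok in LOCATION_GAZETTEER:
            entities.append((tok, "LOCATION"))
        elif i > 0 and tokens[i - 1] in PERSON_TRIGGERS and tok not in BLACKLIST:
            entities.append((tok, "PERSON"))   # rule: word after a trigger
    return entities

print(tag(["قال", "أحمد", "في", "بغداد"]))   # [('أحمد', 'PERSON'), ('بغداد', 'LOCATION')]
```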

2025, 2007 International Conference on Intelligent Systems Applications to Power Systems

The electric power distribution systems of Naval Ships require autonomous control under all operating conditions. In some instances, natural or forced disturbances lead to arc faults between conductors and/or the hull of the ship. This paper develops a Rule-Base (RB) approach for arcing fault control with appropriate recommendations that minimize the effect of the arc current. The proposed technique applies an Intelligent Systems (IS) solution strategy to solve a unique and complex problem that is of value to the Navy.

2025

This paper presents a semantically oriented, rule-based method for single-sentence text generation and discusses its implementation in the Kafka generator. This generator is part of the XCALIBUR natural language interface developed at CMU to provide natural language facilities for a wide range of expert systems and databases. Kafka takes as input the knowledge representation used in the XCALIBUR system and incrementally transforms it first into conceptual dependency graphs and then into English.

2025, HAL (Le Centre pour la Communication Scientifique Directe)

Rule-based modelling languages, such as Kappa, allow for the description of very detailed mechanistic models. Yet, as the rules become more and more numerous, there is a need for formal methods to enhance the level of confidence in the models that are described with these languages. We develop abstract interpretation tools to capture invariants about the biochemical structure of the biomolecular species that may occur in a given model. In previous works, we have focused on the relationships between the states of the sites that belong to the same instance of a protein. This comes down to detecting, for a specific set of patterns, which ones may be reachable during the execution of the model. In this paper, we generalise this approach to a broader family of abstract domains that we call orthogonal sets of patterns. More precisely, an orthogonal set of patterns is obtained by refining recursively the information about some patterns containing a given protein, so as to partition the set of occurrences of this protein in any mixture. We show that orthogonal sets of patterns offer a convenient choice for designing scalable and accurate static analyses. As an example, we use them to infer properties in models with transport of molecules (more precisely, we show that each pair of proteins that are connected always belongs to the same compartment), and models involving double bindings (we show that whenever a protein of type A is bound twice to proteins of type B, then the protein A is necessarily bound twice to the same instance of the protein B).

2025, Elsevier eBooks

In recognition of the potential benefits of exploiting the desirable features of both model-based and rule-based control strategies, this paper proposes a combined approach which offers robustness at the expense of a modest increase in computational complexity. By combining the adaptivity of a model-based self-tuning controller with the simplicity of a rule-based fuzzy logic controller, the overall combined controller is shown to offer a realistic approach which produces results that are superior, in terms of robustness, to either the model-based or rule-based scheme in isolation.

2025, Synthese

For various domains in proportional reasoning cognitive development is characterized as a progression through a series of increasingly complex rules. A multiplicative relationship between two task features, such as weight and distance information of blocks placed at both sides of the fulcrum of a balance scale, appears difficult to discover. During development, children change their beliefs about the balance scale several times: from a focus on the weight dimension (Rule I) to occasionally considering the distance dimension (Rule II), guessing (Rule III), and applying multiplication (Rule IV). Because of the detailed empirical findings, the balance scale task has become a benchmark task for computational models of proportional reasoning. In this article, we present a large empirical study (N = 420) of which the findings provide a challenge for computational models. The effect of feedback and the effect of individually adapted training items on rule transition were tested for children using Rule I or Rule II. Presenting adapted training items initiates belief revision for Rule I but not for Rule II. The experience of making mistakes (by providing feedback) induces a change for both Rule I and Rule II. However, a delayed posttest shows that these changes are preserved after 2 weeks only for children using Rule I. We conclude that the transition from Rule I to Rule II differs from the transition from Rule II to a more complex rule. In light of these empirical findings, we review the performance of computational models and the implications for a future belief revision model.

2025, Computer Speech & Language

Accurate prediction of segmental duration from text in a text-to-speech system is difficult for several reasons. One which is especially relevant is the great quantity of contextual factors that affect timing, and it is difficult to find the right way to model them. There are many parameters that affect duration, but not all of them are always relevant and some can even be counterproductive because of the possibility of overtraining. The main motivation of this paper has been to reduce the error in the duration estimation. To this end, it is of the utmost importance to find the factors that most influence duration in a given language. The approach we have taken is to use a neural network, which is completely configurable, and experiment with the different combinations of parameters that yield the minimum error in the estimation. We have oriented our work mainly towards the following aspects: the most significant parameters that can be used as input to the automatic model, and the best way to code these parameters. We have studied first the effect of each parameter alone and, after that, we have included all parameters together to obtain our final system. Another important aspect of this study is the generation of a suite of software tools and design protocols that will be used in future tasks with different speakers and databases. The applications for automatic modelling are obvious: adapting the prosody to a new speaker, to a new environment, to "restricted-domain" sentences, etc., in a fast, semi-automatic and inexpensive way. After the database labelling, it is a matter of minutes to prepare the inputs to the network for the new situation, and the network is trained in 1 h. The result has been a system that predicts duration with very good results (an RMS error of 19 ms) and that clearly improves on our previous rule-based system.
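
A minimal sketch of the overall setup (coded contextual factors in, segment durations out, RMS error as the criterion), assuming synthetic data and scikit-learn's MLPRegressor; the actual feature coding and network configuration are exactly what the paper tunes.

```python
# Sketch of duration prediction from coded contextual factors with a small
# neural network.  Feature choice and coding are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# hypothetical coded factors: phone id, stressed?, phrase-final?, speaking rate
X = rng.random((500, 4))
y = 60 + 40 * X[:, 1] + 30 * X[:, 2] + rng.normal(0, 5, 500)   # synthetic ms

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
rms = np.sqrt(np.mean((pred - y[400:]) ** 2))
print(f"RMS error on held-out items: {rms:.1f} ms")
```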

2025, Lecture Notes in Computer Science

This paper is a contribution to the modelling of adaptive hypermedia. The main feature of such a system is that it is able to construct its adaptive behaviour easily. It is based on the concept of the active rule (Event-Condition-Action). The advantage of this concept resides in the integration of all existing adaptation techniques in the system. Furthermore, this system allows new adaptation strategies to be introduced easily. The system architecture consists of three main parts: first, the traditional hypermedia subsystem that contains the navigational model, the interface model, and the multimedia resources; second, the adaptive subsystem that contains the user model, the semantic network model, the events analyser, and the adaptive behaviour specification entity; third, the tasks model, which contains the strategic rule bases associated with the adaptive behaviour specification entity, the navigational rule bases associated with the navigational model, and the interface rule bases associated with the interface model.
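
A compact sketch of the Event-Condition-Action mechanism the architecture is built on, with a hypothetical user-model dictionary and a single adaptation rule; the paper's strategic, navigational and interface rule bases would be populated with many such rules.

```python
# Minimal Event-Condition-Action rule sketch for adaptive hypermedia.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ActiveRule:
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

@dataclass
class AdaptiveEngine:
    rules: List[ActiveRule] = field(default_factory=list)

    def notify(self, event: str, user_model: Dict) -> None:
        # fire every rule whose event matches and whose condition holds
        for rule in self.rules:
            if rule.event == event and rule.condition(user_model):
                rule.action(user_model)

engine = AdaptiveEngine([
    ActiveRule(
        event="page_visited",
        condition=lambda um: um["visits"] >= 3,
        action=lambda um: um.setdefault("hints", []).append("hide_intro_links"),
    )
])

user = {"visits": 3}
engine.notify("page_visited", user)
print(user)   # {'visits': 3, 'hints': ['hide_intro_links']}
```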

2025, Information Retrieval

The ability to correctly classify sentences that describe events is an important task for many natural language applications such as Question Answering (QA) and Text Summarisation. In this paper, we treat event detection as a sentence level text classification problem. We compare the performance of two approaches to this task: a Support Vector Machine (SVM) classifier and a Language Modeling (LM) approach. We also investigate a rule-based method that uses hand-crafted lists of 'trigger' terms derived from WordNet. We use two datasets in our experiments and test each approach using six different event types, i.e., Die, Attack, Injure, Meet, Transport and Charge-Indict. Our experimental results indicate that although the trained SVM classifier consistently outperforms the language modeling approach, our rule-based system marginally outperforms the trained SVM classifier on three of our six event types. We also observe that overall performance is greatly affected by the type of corpus used to train the algorithms. Specifically, we have found that a homogeneous training corpus that contains many instances of a specific event type (i.e., Die events in the recent Iraqi war) produces a poorer performing classifier than one trained on a heterogeneous dataset containing more diverse instances of the event (i.e., Die events in many different settings, for example, traffic accidents, natural disasters etc.). Our heterogeneous dataset is provided by the ACE (Automatic Content Extraction) initiative, while our novel homogeneous dataset consists of news articles and annotated Die events from the Iraq Body Count (IBC) database. Overall, our results show that the techniques presented here are effective solutions to the event classification task described in this paper, where F1 scores of over 90% are achieved.
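
A sketch of the rule-based baseline idea (label a sentence with an event type when it contains a trigger term for that type); the trigger lists are tiny stand-ins, not the hand-crafted WordNet-derived lists used in the paper.

```python
# Sketch of trigger-term event detection at the sentence level.
TRIGGERS = {
    "Die": {"died", "killed", "perished"},
    "Attack": {"bombed", "attacked", "shelled"},
    "Meet": {"met", "summit", "talks"},
}

def detect_events(sentence: str):
    tokens = set(sentence.lower().split())
    # a sentence is labelled with every type whose trigger list it hits
    return [etype for etype, terms in TRIGGERS.items() if tokens & terms]

print(detect_events("Three soldiers died when the convoy was attacked"))
# -> ['Die', 'Attack']
```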

2025, Journal of Pharmacy and Pharmaceutical Sciences

Clinical data is complex, context-dependent, and multi-dimensional, and such data generates an amalgamation of computing research challenges. Extracting and interpreting useful information from raw data is a challenging job. This study aims at developing an automated predictive model to diagnose the state of an epileptic patient using EEG signals. The segmented EEG signals are utilized to extract various statistical features which are used for prediction. Strategically, we have designed a fully automated neural network model, capable of classifying the seizure activity into ictal, interictal and normal states with an accuracy as high as 99.3%, a maximum sensitivity of 100% and a specificity as high as 98.3% for all the classes. With an appropriate set of parameters and an optimum number of neurons in the hidden layer, the ANN proved to be a superior model for the classification task.

2025

The development of efficient algorithms to correct faults in rule-based systems is very crucial in extending the verification and validation of rule sets and in the development of rule-based systems. While it is important to detect various kinds of faults in rule sets, it is also equally important to provide a user/expert with a set of heuristics that can aid in correcting these faults. In this paper, a set of correction algorithms/heuristics for inconsistency, contradiction, circularity, redundancy, and unreachability faults are presented.
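
As an illustration of the kind of fault such heuristics must first locate, here is a small sketch that detects circular chains in a rule set; the single-antecedent rule encoding is a simplifying assumption, not the paper's representation.

```python
# Sketch of a circularity check over a propositional rule set.
from typing import Dict, List, Set

def find_cycles(rules: List[tuple]) -> List[List[str]]:
    """rules are (antecedent, consequent) pairs; return circular chains."""
    graph: Dict[str, Set[str]] = {}
    for ante, cons in rules:
        graph.setdefault(ante, set()).add(cons)

    cycles, path = [], []

    def visit(node, start):
        path.append(node)
        for nxt in graph.get(node, ()):
            if nxt == start:
                cycles.append(path + [start])      # closed a circular chain
            elif nxt not in path:
                visit(nxt, start)
        path.pop()

    for start in graph:
        visit(start, start)
    return cycles

print(find_cycles([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))
# e.g. [['a', 'b', 'c', 'a'], ['b', 'c', 'a', 'b'], ['c', 'a', 'b', 'c']]
```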

2025, International Journal of Computing and …

Abstract: The development of efficient algorithms to correct faults in rule-based systems is very crucial in extending the verification and validation of rule sets and in the development of rule-based systems. While it is important to detect various kinds of faults in rule sets, it is ...

2025, Journal of Advanced Mechanical Design Systems and Manufacturing

In this paper, an approach to complement legacy rules for the elevator operation is proposed. The approach is derived from the analysis that the elevator operation in the real world often obeys a heuristic rule, and such a rule can be divided into a legacy rule and ad-hoc rules. In the approach, ad-hoc rules are represented as polysemous rules, and a Genetics-Based Machine Learning (GBML) method is applied to acquire such rules. Here, a polysemous rule encodes, not a set of environments' states as the well-known if-then rule does, but a relative attribute vector of an arbitrary elevator. The elevator selection rule based on polysemous rules is simple: if there is a polysemous rule which matches one of attribute vectors of the elevators, select the elevator which corresponds to the matching vector; otherwise select an elevator according to a legacy rule. In computer illustrations, the GBML method is applied to 3 traffic patterns formed by the system's users. It is shown that the resultant polysemous rules seem to complement an existing (legacy) operational rule. Furthermore, polysemous rules, which are selected among those acquired by the GBML method, are successfully applied to harder problems with more elevators than those used in learning.
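
A sketch of the selection scheme just described: polysemous rules are matched against each elevator's relative attribute vector, with a legacy nearest-car rule as the fallback; the attributes and the example rule are illustrative assumptions.

```python
# Sketch: polysemous rules over relative attribute vectors, legacy fallback.
def nearest_car_legacy(call_floor, elevators):
    return min(elevators, key=lambda e: abs(e["floor"] - call_floor))

def select_elevator(call_floor, elevators, polysemous_rules):
    for elevator in elevators:
        vector = (abs(elevator["floor"] - call_floor), elevator["load"])
        for rule in polysemous_rules:
            if rule(vector):                   # rule matches this car's vector
                return elevator
    return nearest_car_legacy(call_floor, elevators)   # legacy rule

elevators = [{"id": 1, "floor": 9, "load": 0.1},
             {"id": 2, "floor": 3, "load": 0.9}]
# hypothetical ad-hoc rule: prefer a lightly loaded car that is reasonably close
rules = [lambda v: v[0] <= 6 and v[1] < 0.3]
print(select_elevator(call_floor=5, elevators=elevators, polysemous_rules=rules)["id"])
```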

2025, IEEJ Transactions on Electronics, Information and Systems

2025, International Journal of Engineering

Electrical energy is one of the most important foundations of modern life, and the demand for it is increasing continuously. This creates challenges for operators, who must increase generation to feed the growing demand. This growth forces them to extend centralized plants to supply the distribution network with the required energy, and also to develop the transmission system. This approach is costly, since extending the power system requires considerable investment. The appearance of distributed generation (DG) is considered a solution to these problems and can reduce cost, as it supplies the load with electricity on the customer side without passing through the transmission system. The targets of DG are supporting the grid and improving power quality, the voltage profile, power-loss reduction and reliability. This paper discusses the sizing and siting of distributed generation in the grid. A hybrid method is introduced that uses the 2/3 rule and a harmony search algorithm to find the optimum location and size of the DG.

2025, Soft Computing

This paper investigates the capabilities of evolutionary on-line rule-based systems, also called learning classifier systems (LCSs), for extracting knowledge from imbalanced data. While some learners may suffer from class imbalances and instances sparsely distributed around the feature space, we show that LCSs are flexible methods that can be adapted to detect such cases and find suitable models. Results on artificial data sets specifically designed for testing the capabilities of LCSs in imbalanced data show that LCSs are able to extract knowledge from highly imbalanced domains. When LCSs are used with real-world problems, they prove to be among the most robust methods compared with instance-based learners, decision trees, and support vector machines. Moreover, all the learners benefit from re-sampling techniques. Although there is no re-sampling technique that performs best in all data sets and for all learners, those based on over-sampling seem to perform better on average. The paper adapts and analyzes LCSs for challenging imbalanced data sets and establishes the bases for further studying the combination of re-sampling technique and learner best suited to a specific kind of problem.

2025, Springer eBooks

As part of the access control process an authorization decision needs to be taken based on a certain authorization model. Depending on the environment different models are applicable (e.g., RBAC in organizations, MAC in the military field). An authorization model contains all necessary elements needed for the decision (e.g., subjects, objects, and roles) as well as their relations. As these elements are usually inherent in the software architecture of an access control module, such modules limit themselves to the use of a certain specific authorization model. A later change of the model consequently results in a substantial effort for revising the software architecture of the given module. Rule-based systems are well suited to represent authorization models by mapping them to facts and rules, which can be modified in a flexible manner. In this paper we present a generic authorization module, which can take authorization decisions on the basis of arbitrary models utilizing rule-based technology. The implementation of the popular RBAC and ABAC (attribute-based access control) models is demonstrated.
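
A minimal sketch of the facts-plus-rules idea, with an RBAC decision rule and an ABAC alternative over hypothetical users, roles and attributes; this is not the module's actual API or rule language.

```python
# Sketch: authorization model as facts plus rules (simplified).
FACTS = {
    "user_role": {("alice", "nurse"), ("bob", "doctor")},
    "role_perm": {("doctor", "prescribe"), ("nurse", "read_chart"),
                  ("doctor", "read_chart")},
}

def rbac_decision(user: str, permission: str) -> bool:
    # RBAC rule: allow if some role assigned to the user grants the permission
    return any((user, role) in FACTS["user_role"] and
               (role, permission) in FACTS["role_perm"]
               for _, role in FACTS["user_role"])

# Swapping the model only means swapping facts/rules, e.g. an ABAC-style rule:
def abac_decision(attrs: dict, permission: str) -> bool:
    return permission == "read_chart" and attrs.get("department") == "cardiology"

print(rbac_decision("alice", "prescribe"), rbac_decision("bob", "prescribe"))
print(abac_decision({"department": "cardiology"}, "read_chart"))
```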

2025

“Data Warehouse Design for Knowledge Discovery from Healthcare Data”

2025, Knowledge Based Systems

Fuzzy rule-based systems (FRBSs) are a common alternative for applying fuzzy logic in different areas and real-world problems. The schemes and algorithms used to generate these types of systems imply that their performance can be analyzed from different points of view, not only model accuracy. Any model, including fuzzy models, needs to be sufficiently accurate, but other perspectives, such as interpretability, are also possible for the FRBSs. Thus, the Accuracy-Interpretability trade-off arises as a challenge for fuzzy systems, as approaches are currently able to generate FRBSs with different trade-offs. Here, rule Relevance is added to Accuracy and Interpretability for a better trade-off in FRBSs. These three factors are involved in this approach to make a rule selection using a multi-objective evolutionary algorithm. The proposal has been tested and compared with nine datasets, two linguistic and two scatter fuzzy algorithms, four measures of interpretability and two rule relevance formulations. The results have been analyzed for different views of Interpretability, Accuracy and Relevance, and the statistical tests have shown that significant improvements have been achieved. On the other hand, the Relevance-based role of fuzzy rules has been checked, and it has been shown that low Relevance rules have a relevant role in the trade-off. (This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project no. DPI2015-67341-C2-2-R.)

2025

Interposing a backward chaining reasoner between a knowledge base and a query manager yields an architecture that can support reasoning in the face of frequent changes. However, such an interposition of the reasoning introduces uncertainty regarding the size and effort measurements typically exploited during query optimization. This paper presents an algorithm for dynamic query optimization in such an architecture. Experimental results confirming its effectiveness are presented.
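
A toy sketch of the interposed backward chaining step over Horn-style rules (with bindings enumerated rather than full unification, and without the cost model used for query optimization), to make the architecture concrete.

```python
# Sketch of a backward chaining reasoner between a fact base and queries.
FACTS = {"parent(ann,bob)", "parent(bob,cal)"}
RULES = [
    # grandparent(X,Z) :- parent(X,Y), parent(Y,Z); bindings enumerated here
    # for brevity rather than performing full unification.
    ("grandparent(ann,cal)", ["parent(ann,bob)", "parent(bob,cal)"]),
]

def prove(goal, depth=0):
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head == goal and all(prove(sub, depth + 1) for sub in body):
            return True
    return False

print(prove("grandparent(ann,cal)"))   # True
print(prove("grandparent(bob,ann)"))   # False
```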

2025, Applied Optics

The slow execution speed of current rule-based systems has restricted their application areas. Multiprocessor architectures have been proposed to overcome this limitation. However, as the number of processors in a multiprocessor system grows, so does the cost of communication between processors or between processor and memory units. The use of optics for a fast and parallel implementation of rule-based systems is proposed. The proposed optical system is hybrid in nature, using electronics for the user interface and optics for the rule-based inference engine. The proposed system uses two-dimensional planes as basic computational entities and is therefore able to provide concurrent rule processing. Furthermore, it provides highly efficient implementation of the basic operations needed in rule-based systems; namely, matching, selection, and rule firing. The execution speed of the proposed system is theoretically estimated and is shown to be potentially orders of magnitude faster than that of current electronic systems.

2025, Lecture Notes in Computer Science

Structural analysis of web pages has been proposed several times and for a number of reasons and purposes, such as the re-flowing of standard web pages to fit a smaller PDA screen. elISA is a rule-based system for the analysis of regularities and structures within web pages that is used for a fairly different task, the determination of editable text blocks within standard web pages, as needed by the IsaWiki collaborative editing environment. The elISA analysis engine is implemented as a XSLT meta-stylesheet that applied to a rule set generates an XSLT stylesheet that, in turn, applied to the original HTML document generates the requested analysis.

2025, Rule Representation, Interchange and Reasoning on the Web

2025, National Conference on Artificial Intelligence

We argue that the question selection processes used in the existing AI in Medicine programs are inadequate. We trace these inadequacies to their use of purely surface-level models of disease and to the lack of planning in sequencing their inquiries. We then describe the information gathering component of ABEL (a program for Acid-Base and Electrolyte disturbances) with emphasis on its use of a causal model of a patient's illness in guiding its diagnostic inquiries. Diagnostic programs built using the AI in Medicine (AIM) methodology typically consist of two parts: one to form hypotheses that explain the data known to the program, and one to acquire new information from the program's users to clarify difficulties remaining with the hypotheses under consideration. In this paper we suggest that the questioning behavior of most existing diagnostic programs is inadequate because it is not very responsive to the totality of the program's hypotheses. Both rule-based systems such as Mycin [Shortliffe76] and frame-based systems such as Internist-I [Pople77] and PIP [Pauker76] suffer from this defect, though for diverse reasons. Mycin [Shortliffe76] is a rule-based expert program for the diagnosis and therapy of infectious diseases. It uses a backward chaining control structure to sequence through its rule base in search of information relevant to the diagnosis of possible infections. Mycin always pursues the antecedent clauses of a rule in their order of appearance in the rule. Thus during any execution of the program the antecedent clauses of every rule are expanded in exactly the same order, resulting in a pre-determined order in which the questions may be asked. This rigidity in Mycin results from its use of rules to encode both its medical knowledge and the control component of expert reasoning. The resulting limitations of the Mycin system were recognized by Davis [Davis76], who proposed meta-rules as a means of separating these two aspects of expert behavior. Meta-rules are used in his system, Teiresias [Davis76], to encode strategic knowledge for the control of the backward chaining rule interpreter by dynamically re-ordering and selecting rules to be pursued. It is important to note that the meta-rules do not alter the program's domain knowledge; they provide additional knowledge that allows it to make more efficient use of this knowledge.

2025

Urdu language raises several challenges to Natural Language Processing (NLP), largely due to its rich morphology. In this language, morphological processing becomes particularly important for Information Retrieval (IR). The core tool of IR is a stemmer, which reduces a word to its stem form. Due to the diverse nature of Urdu, developing a stemmer is a challenging task. In Urdu, there are large numbers of variant forms (derivational and inflectional forms) for a single word form. The aim of this paper is to present issues pertaining to the development of an Urdu stemmer (a rule-based stemmer).
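
A minimal sketch of one suffix-stripping rule of the kind such a stemmer applies; the suffix list and the minimum-stem-length guard are illustrative assumptions, not the stemmer's actual rule set.

```python
# Sketch of a rule-based suffix-stripping step (illustrative suffixes only).
SUFFIX_RULES = ["وں", "یں", "یاں", "ے", "ی"]

def stem(word: str, min_stem_len: int = 2) -> str:
    # try longer suffixes first; keep the result only if a minimal stem remains
    for suffix in sorted(SUFFIX_RULES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
            return word[: -len(suffix)]
    return word

print(stem("کتابوں"))   # strips "وں" -> "کتاب"
```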

2024

An appropriate combination of object-oriented programming concepts allows the development not only of single reusable components but also of semi-finished architectures (= frameworks). The paper discusses an adequate way to represent the know-how of software engineers in the realm of developing and adapting frameworks. Active cookbooks rely on a particular knowledge base. These electronic books form an integration basis for various tools adjusted to a domain-specific framework. Active cookbooks guide programmers and end users through typical framework adaptation steps. Examples taken from a prototype implementation of such a cookbook illustrate the concepts that allow active support of framework-centered software development.

2024

This research work will help the customer with decision making in the share market, based on a fuzzy rule-based system. Past performance data can be used to overcome the uncertainty, vagueness and imprecision of the share market. Six factors affecting share prices, namely Market Growth, Monsoon, Policy/Political, Foreign Investment, Ratio Analysis and Agent Activity, are very helpful for taking share-price decisions. Technical and fundamental approaches are used in parallel to estimate short-term and long-term return and risk. The customer will be more secure in taking risk at the time of share trading. A fuzzy logic framework is developed to carry out the required analysis for arriving at the governance rating of the firms.

2024, International Journal of Computer Applications

This paper explores the possibility of applying fuzzy logic theory for handling the vagueness and imprecision that characterize the decision-making process. A fuzzy metagraph framework is developed to carry out the required analysis for arriving at the governance rating of the firms. A person holding a share of a company holds that part of the ownership in that company: a share is nothing but the ownership of the company divided into small parts, and each part is called a share or stock. The share market is the place where buying and selling of shares takes place, with data displayed on computer screens. A person holding the maximum number of shares carries maximum ownership and is designated director, chairman, etc. This research work will help the customer with decision making in the share market.

2024, International Joint Conference on Artificial Intelligence

One of the major weaknesses of current automated reasoning systems is that they lack the ability to control inference in a sophisticated, context-directed fashion. General strategies such as the set-of-support strategy are useful, but have proven inadequate for many individual problems. A strategy component is needed that possesses knowledge about many particular domains and problems. Such a body of knowledge would require a prohibitive amount of time to construct by hand. This leads us to consider means of automatically acquiring control knowledge from example proofs. One particular means of learning is explanation-based learning. This paper analyzes the basis of explanations - finding weakest preconditions that enable a particular rule to fire - to derive a representation within which explanations can be extracted from examples, generalized and used to guide the actions of a problem-solving system.

2024, IEEE Transactions on Computers

L.0 is an executable specification language designed for describing communications protocols and similar reactive systems. L.0 is synchronous and rule-based. The rules are either cause-effect rules or constraints. Rules can be activated and deactivated dynamically. Several can be fired simultaneously. L.0 has modern notions of encapsulation and data sharing. Indirection, quantification, and recursive definition of modules increase its expressiveness. L.0 has been used in several development projects to design, simulate, and prototype protocols. It seems to provide a paradigm that is precise and yet can be used by designers and engineers who are not specialists in specification languages. This paper gives an overview of L.0 and its applications to protocols.

2024, Proceedings of SYNASC

Abstract. We describe a new methodology to program evaluation strategies, which relies on some advanced features of the rule-based programming language ρLog. We illustrate how our approach works for a number of important evaluation strategies.

2024

Around ten percent of patients admitted to National Health Service (NHS) hospitals have experienced a patient safety incident, and an important reason for the high rate of patient safety incidents is medical errors. Research shows that an appropriate increase in the use of clinical decision support systems (CDSSs) could help to reduce medical errors and result in substantial improvement in patient safety. However, several barriers continue to impede the effective implementation of CDSSs in clinical settings, among which the representation of, and reasoning about, medical knowledge, particularly under uncertainty, are areas that require refined methodologies and techniques. In particular, the knowledge base in a CDSS needs to be updated automatically based on accumulated clinical cases to provide evidence-based clinical decision support. In this research, we employed the recently developed belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) for the design and development of such a system.

2024, Il Nuovo Cimento A (1965-1970)

As an alternative to the Lagrangian method the chain rule may be used to obtain the new momenta after a canonical transformation. In the latter case the imposition of constraints is highly nontrivial. Here some typical examples are investigated and it is shown how the constraints are implemented in the chain rule method.
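
For the unconstrained case of a point transformation, the chain-rule route mentioned above reduces to a standard textbook identity (shown here only for orientation; the paper's interest lies in how the imposition of constraints modifies this picture).

```latex
% For a canonical point transformation Q_i = Q_i(q), the new momenta follow
% from the chain rule applied to the invariance of p_j dq_j = P_i dQ_i:
\[
  P_i \;=\; \sum_j \frac{\partial q_j}{\partial Q_i}\, p_j ,
  \qquad\text{equivalently}\qquad
  p_j \;=\; \sum_i \frac{\partial Q_i}{\partial q_j}\, P_i .
\]
```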

2024

In online social networks such as Facebook, Twitter, etc., it is possible to display any type of data on the wall of a user. These data can contain unwanted messages such as policy statements, vulgar data, account-staff teasing, etc. Other users can see the data and may also comment on such a post, and such a post can affect the user's social image. So, the safety of the personal wall is an important issue. To some extent Facebook allows users to specify who is allowed to post messages on their walls (i.e., friends, friends of friends, or defined groups of friends). However, no content-based preferences are supported, so it is impossible to prevent the display of these unwanted messages. Protecting the user's wall from spam messages and protecting the user's social image are therefore important issues in social networking sites. To filter out unwanted messages, we offer a three-level architecture containing a content-based message classifier that uses machine learning techniques.

2024, International Journal of Applied Information Systems

2024

Asthma, a chronic inflammatory disease of the airways, is the most common chronic disorder of children, affecting as many as 10-12% of children in the USA. The purpose of this paper is to propose an integrated computerized decision support system for managing asthma in school-age children. The system is intended to cover various aspects of the patient's life, including the home environment, the school and the doctor's office, thus improving the treatment of asthma overall.