Multiple proportion case-basing driven CBRE and its application in the evaluation of possible failure of firms
Related papers
Case representation issues for case-based reasoning from ensemble research
2001
Ensembles of classifiers will produce lower errors than the member classifiers if there is diversity in the ensemble. One means of producing this diversity in nearest neighbour classifiers is to base the member classifiers on different feature subsets. In this paper we show four examples where this is the case. This has implications for the practice of feature subset selection (an important issue in CBR and data-mining) because it shows that, in some situations, there is no single best feature subset to represent a problem.
Case-Based Reasoning Research and Development
Lecture Notes in Computer Science, 2005
Ensembles of classifiers will produce lower errors than the member classifiers if there is diversity in the ensemble. One means of producing this diversity in nearest neighbour classifiers is to base the member classifiers on different feature subsets. In this paper we show four examples where this is the case. This has implications for the practice of feature subset selection (an important issue in CBR and data-mining) because it shows that there is no single best feature subset to represent a problem. We show that, if diversity is emphasised in the development of the ensemble, the ensemble members appear to be local learners specializing in sub-domains of the problem space. The paper concludes with some proposals on how analysis of ensembles of local learners might provide insight into problem-space decomposition for hierarchical CBR.
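The feature-subset route to ensemble diversity described in the two abstracts above can be sketched in a few lines: each member is a 1-NN classifier restricted to its own feature subset, and the ensemble takes a majority vote. Everything below (the toy data, the subsets, the function names) is an illustrative assumption, not code from the papers.

```python
import random
from collections import Counter

def nn_predict(train, features, query):
    """1-NN vote from a member that only sees the given feature subset."""
    def sq_dist(x):
        return sum((x[f] - query[f]) ** 2 for f in features)
    _, label = min(train, key=lambda pair: sq_dist(pair[0]))
    return label

def ensemble_predict(train, subsets, query):
    """Majority vote over 1-NN members built on different feature subsets."""
    votes = [nn_predict(train, s, query) for s in subsets]
    return Counter(votes).most_common(1)[0][0]

# Toy data: four features, but the class depends only on features 0 and 1.
rng = random.Random(0)
train = []
for _ in range(60):
    x = [rng.random() for _ in range(4)]
    train.append((x, 1 if x[0] + x[1] > 1.0 else 0))

# Three members, each with its own feature subset -- the source of diversity.
subsets = [(0, 1), (0, 2), (1, 3)]
print(ensemble_predict(train, subsets, [0.9, 0.8, 0.1, 0.2]))
```

Members trained on subsets that miss a relevant feature act as the "local learners" of the second abstract: accurate only in parts of the problem space, yet still useful in the vote.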
Ensemble machine learning algorithm optimization of bankruptcy prediction of bank
IAES International Journal of Artificial Intelligence, 2021
An ensemble consists of a set of individually trained models whose predictions are combined when classifying new cases; building a good classification model requires diversity among the individual models. Logistic regression, support vector machine, random forest, and neural network algorithms serve as the single models and as alternative sources of diversity. Previous research has shown that ensembles are more accurate than single models. A single model and a modified ensemble bagging model are the techniques we study in this paper, experimenting with financial ratios from the banking industry. Our observations are: first, the ensemble is consistently more accurate than a single model. Second, the modified ensemble bagging model shows improved classification performance on balanced datasets, as it can adjust its behaviour to suit relatively small datasets. The bagging ensemble learning model achieves an accuracy rate of 97%, an improvement of up to 16% over models that use unbalanced datasets.
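As a rough illustration of the bagging technique this study builds on, the sketch below trains a decision stump on each bootstrap resample of the data and combines the stumps by majority vote. The data, the stump learner, and all names are assumptions made for illustration; the paper's members (logistic regression, SVM, random forest, neural network) are far richer models.

```python
import random
from collections import Counter

def train_stump(data):
    """Exhaustively pick the (feature, threshold, label) stump with the
    lowest training error; predicts `label` when x[f] <= thresh."""
    best = None
    for f in range(len(data[0][0])):
        for thresh in sorted({x[f] for x, _ in data}):
            for label in (0, 1):
                err = sum((label if x[f] <= thresh else 1 - label) != y
                          for x, y in data)
                if best is None or err < best[0]:
                    best = (err, f, thresh, label)
    _, f, thresh, label = best
    return lambda x: label if x[f] <= thresh else 1 - label

def bagging(data, n_models, rng):
    """Bagging: train each member on a bootstrap resample, vote at predict time."""
    models = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        models.append(train_stump(sample))
    return lambda x: Counter(m(x) for m in models).most_common(1)[0][0]

rng = random.Random(42)
# Toy "financial ratio" data: class 1 (failure) when the first ratio is low.
data = []
for _ in range(40):
    x = [rng.random(), rng.random()]
    data.append((x, 1 if x[0] < 0.4 else 0))

predict = bagging(data, n_models=11, rng=rng)
print(predict([0.1, 0.5]), predict([0.9, 0.5]))
```

The bootstrap resampling is what makes the otherwise identical members diverse: each stump sees a slightly different sample and so can learn a slightly different split.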
Business failure prediction: A comparison of classification methods
Operational Research, 2002
Business failure prediction is one of the most essential problems in the field of finance. Research on developing business failure prediction models has focused on building classification models to distinguish between failed and non-failed firms. Such models are of major importance to financial decision makers (credit managers, managers of firms, investors, etc.); they serve as early warning systems of the failure probability of a corporate entity. The significance of business failure prediction models has been a major motivation for researchers to develop efficient approaches to building such models. This paper considers several such approaches, including multicriteria decision aid (MCDA) techniques and linear programming, and performs a thorough comparison with traditional statistical techniques such as linear discriminant analysis and logit analysis. The comparison is performed using a sample of 144 US firms for a period of up to five years prior to failure.
A comparative study of classifier ensembles for bankruptcy prediction
Applied Soft Computing, 2014
The aim of bankruptcy prediction in the areas of data mining and machine learning is to develop an effective model which can provide high prediction accuracy. In the prior literature, various classification techniques have been developed and studied, among which classifier ensembles, which combine multiple classifiers, have been shown to outperform many single classifiers. However, in terms of constructing classifier ensembles, there are three critical issues which can affect their performance. The first is the classification technique adopted; the other two are the combination method used to combine multiple classifiers and the number of classifiers to be combined, respectively. Since there are few relevant studies examining these issues, this paper conducts a comprehensive study comparing classifier ensembles built from three widely used classification techniques, namely multilayer perceptron (MLP) neural networks, support vector machines (SVM), and decision trees (DT), based on two well-known combination methods, bagging and boosting, and different numbers of combined classifiers. Our experimental results on three public datasets show that DT ensembles composed of 80-100 classifiers using the boosting method perform best. The Wilcoxon signed-rank test also demonstrates that DT ensembles built by boosting perform significantly differently from the other classifier ensembles. Moreover, a further study over a real-world case using a Taiwan bankruptcy dataset was conducted, which also demonstrates the superiority of DT ensembles built by boosting over the others.
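The boosting combination method this study compares against bagging can be illustrated with a minimal AdaBoost over decision stumps, a stand-in for the paper's decision-tree members. The 1D interval dataset and round count below are assumptions for illustration: no single stump can separate the classes, but the boosted combination can.

```python
import math

def stump(f, t, s):
    """Stump predicting s if x[f] <= t else -s, with s in {+1, -1}."""
    return lambda x: s if x[f] <= t else -s

def best_stump(data, w):
    """Weighted-error-minimising stump over all (feature, threshold, sign)."""
    best = None
    for f in range(len(data[0][0])):
        for t in sorted({x[f] for x, _ in data}):
            for s in (1, -1):
                h = stump(f, t, s)
                err = sum(wi for (x, y), wi in zip(data, w) if h(x) != y)
                if best is None or err < best[0]:
                    best = (err, h)
    return best

def adaboost(data, rounds):
    n = len(data)
    w = [1.0 / n] * n          # example weights, initially uniform
    members = []               # (alpha, stump) pairs
    for _ in range(rounds):
        err, h = best_stump(data, w)
        err = max(err, 1e-10)                    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # member weight
        members.append((alpha, h))
        # Re-weight: increase weight on misclassified examples.
        w = [wi * math.exp(-alpha * y * h(x)) for (x, y), wi in zip(data, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in members) >= 0 else -1

# 1D toy data: +1 inside the interval [0.4, 0.6], -1 outside.
data = [([0.1], -1), ([0.2], -1), ([0.45], 1), ([0.5], 1),
        ([0.55], 1), ([0.8], -1), ([0.9], -1)]
model = adaboost(data, rounds=5)
print([model(x) for x, _ in data])
```

Each round concentrates weight on the examples the previous members got wrong, which is why later stumps specialise on the hard boundary cases.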
This study empirically explores the use of a group, or ensemble, of classifiers to support managerial decision making in domains characterized by asymmetric misclassification costs. The approach developed in this study is intended to assist a decision maker in determining whether a current situation warrants the choice of an ensemble over an individual classifier. The decision is based primarily on the misclassification costs in the decision context and the associated basis on which performance is assessed. We show that the criteria for evaluating classifier performance are fundamentally dependent on the symmetry or asymmetry of misclassification costs. The result of this study is a set of heuristics for identifying highly- and poorly-performing ensembles.
The application of ensemble methods in forecasting bankruptcy
In practice, one chosen method is generally used to solve classification tasks. Although the most modern procedures yield excellent accuracy rates, international research findings show that a concurrent (ensemble) application of methods with weaker classification performance achieves comparably high accuracy rates. This article’s main objective is to compare the predictive power of the two ensemble methods (AdaBoost and bagging) most commonly used in bankruptcy prediction, using a sample consisting of 976 Hungarian corporations. The article’s other objective is to compare the accuracy rates of bankruptcy models built on the deviations of specific financial ratios from industry averages to those of models built on financial ratios and variables factoring in their dynamics.
International Journal of Electrical and Computer Engineering (IJECE), 2021
Company bankruptcy is often a very big problem for companies. The impact of bankruptcy can cause losses to elements of the company such as owners, investors, employees, and consumers. One way to prevent bankruptcy is to predict the possibility of bankruptcy based on the company's financial data. Therefore, this study aims to find the best predictive model or method for predicting company bankruptcy using a Polish company bankruptcy dataset. The prediction analysis process uses the best feature selection and ensemble learning. The best features are selected using feature importance from XGBoost with a weight value filter of 10. The ensemble learning method used is stacking, which is composed of a base model and a meta learner. The base model consists of K-nearest neighbor, decision tree, support vector machines (SVM), and random forest, while the meta learner used is LightGBM. The stacking model outperforms the base models, achieving an accuracy rate of 97%.
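The stacking architecture described above, base models feeding a trained meta-learner, can be sketched without any ML libraries. Here the base models are 1-NN classifiers on different feature subsets and the meta-learner is a simple lookup table trained on holdout predictions, standing in for the paper's KNN/DT/SVM/random-forest bases and LightGBM meta-learner; all names and data are illustrative assumptions.

```python
import random
from collections import Counter, defaultdict

def fit_1nn(train, feats):
    """1-NN base learner restricted to a feature subset (for diversity)."""
    def predict(x):
        _, label = min(train,
                       key=lambda p: sum((p[0][f] - x[f]) ** 2 for f in feats))
        return label
    return predict

def fit_stacking(train, holdout, base_fits):
    """Stacking: fit base models, then a meta-learner on holdout predictions."""
    bases = [fit(train) for fit in base_fits]
    # Meta-learner: map each vector of base predictions to the majority
    # true label observed for that vector on the holdout split.
    table = defaultdict(Counter)
    for x, y in holdout:
        table[tuple(b(x) for b in bases)][y] += 1
    default = Counter(y for _, y in holdout).most_common(1)[0][0]
    def predict(x):
        key = tuple(b(x) for b in bases)
        return table[key].most_common(1)[0][0] if table[key] else default
    return predict

rng = random.Random(7)
points = []
for _ in range(80):
    x = [rng.random() for _ in range(3)]
    points.append((x, 1 if x[0] + x[1] > 1.0 else 0))
train, holdout = points[:50], points[50:]

base_fits = [lambda t: fit_1nn(t, (0,)),
             lambda t: fit_1nn(t, (1,)),
             lambda t: fit_1nn(t, (0, 1))]
model = fit_stacking(train, holdout, base_fits)
print(model([0.9, 0.9, 0.2]))
```

The key design point stacking adds over plain voting is that the meta-learner is itself trained, so it can learn which base models to trust for which prediction patterns.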
Business failure prediction using statistical techniques: A review
2012
Accurate business failure prediction models would be extremely valuable to many industry sectors, particularly in financial investment and lending. The potential value of such models has been recently emphasised by the extremely costly failure of high profile businesses in both Australia and overseas, such as HIH (Australia) and Enron (USA). Consequently, there has been a significant increase in interest in business failure prediction from both industry and academia.