Semi-Supervised Clustering with Partial Background Information
Related papers
Two phase semi-supervised clustering using background knowledge
2006
Abstract. Using background knowledge in clustering, known as semi-supervised clustering, is one of the actively researched areas in data mining. In this paper, we illustrate how to use background knowledge related to a domain more efficiently. For a given dataset, the number of classes is estimated from the must-link constraints before clustering, and the must-linked data are assigned to the corresponding classes. When the clustering algorithm is applied, the cannot-link constraints are used during assignment.
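The two-phase procedure described above lends itself to a short sketch: must-link pairs are first merged into connected components (each component becomes a seed class, and the component count estimates the number of classes), and cannot-link constraints are then consulted when assigning the remaining points. A minimal sketch assuming Euclidean distance; the function names and the greedy nearest-feasible-centroid rule are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def must_link_components(n_points, must_links):
    """Phase 1: transitive closure of the must-link pairs via union-find.
    Each connected component becomes one seed class, so the component
    count estimates the number of classes."""
    parent = list(range(n_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in must_links:
        parent[find(a)] = find(b)

    seeds = {}
    for a, b in must_links:
        for p in (a, b):
            seeds.setdefault(find(p), set()).add(p)
    return [sorted(members) for members in seeds.values()]

def assign_with_cannot_links(X, centroids, cannot_links, labels):
    """Phase 2: nearest-centroid assignment that skips any centroid whose
    cluster already holds a cannot-link partner of the point (labels start
    at -1 for unassigned points, seeded from the phase-1 components)."""
    partners = {}
    for a, b in cannot_links:
        partners.setdefault(a, set()).add(b)
        partners.setdefault(b, set()).add(a)
    for i, x in enumerate(X):
        if labels[i] >= 0:
            continue  # already seeded by a must-link component
        for k in np.argsort(((centroids - x) ** 2).sum(axis=1)):
            if all(labels[j] != k for j in partners.get(i, ())):
                labels[i] = k
                break
    return labels
```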
arXiv (Cornell University), 2023
This study addresses the problem of performing clustering in the presence of two types of background knowledge: pairwise constraints and monotonicity constraints. To achieve this, a formal framework for clustering under monotonicity constraints is first defined, resulting in a specific distance measure. Pairwise constraints are integrated afterwards by designing an objective function that combines the proposed distance measure with a pairwise constraint-based penalty term, in order to fuse both types of information. This objective function can be optimized with an EM optimization scheme. The proposed method is, to our knowledge, the first designed to work with these two types of background knowledge simultaneously. Our proposal is tested on a variety of benchmark datasets and on a real-world case study.
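The objective described, a distance term plus a pairwise-constraint penalty optimized in an EM-like loop, can be sketched as follows. The squared-Euclidean default stands in for the paper's monotonicity-aware distance measure, which the abstract does not specify, so `dist`, the penalty weight `w`, and the greedy assignment step are all assumptions for illustration.

```python
import numpy as np

def penalized_em_clustering(X, k, must_links, cannot_links, dist=None,
                            w=1.0, n_iter=50, seed=0):
    """EM-style loop for an objective of the form
        J = sum_i dist(x_i, c_{z_i}) + w * (number of violated constraints).
    `dist` defaults to squared Euclidean, standing in for the paper's
    monotonicity-aware distance measure."""
    X = np.asarray(X, dtype=float)
    if dist is None:
        dist = lambda x, c: ((x - c) ** 2).sum()
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    z = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):  # E-step: constraint-penalized assignment
            costs = np.array([dist(x, c) for c in centroids])
            for a, b in must_links:      # penalize separating must-links
                j = b if a == i else a if b == i else None
                if j is not None:
                    costs[np.arange(k) != z[j]] += w
            for a, b in cannot_links:    # penalize joining cannot-links
                j = b if a == i else a if b == i else None
                if j is not None:
                    costs[z[j]] += w
            z[i] = int(costs.argmin())
        for c in range(k):               # M-step: recompute centroids
            if (z == c).any():
                centroids[c] = X[z == c].mean(axis=0)
    return z, centroids
```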
Data mining is the process of finding previously unknown and potentially interesting patterns and relations in databases; it is a step in the knowledge discovery in databases (KDD) process. The structures produced by data mining must meet certain conditions to be considered knowledge: validity, understandability, utility, novelty, and interestingness. Researchers identify two fundamental goals of data mining: prediction and description. The proposed research work addresses the semi-supervised clustering problem, where it is known (with varying degrees of certainty) that some sample pairs are, or are not, in the same class. We present a probabilistic model for semi-supervised clustering based on Shared Semi-supervised Neighbor Clustering (SSNC), which provides a principled framework for incorporating supervision into prototype-based clustering and combines the constraint-based and fitness-based approaches in a unified model. The proposed method performs constraint-sensitive assignment of instances to clusters: points are assigned so that the overall distortion of the points from the cluster centroids is minimized, while the number of violated must-link and cannot-link constraints is kept to a minimum. Experimental results on UCI Machine Learning semi-supervised datasets show that the proposed method achieves higher F-measures than many existing semi-supervised clustering methods.
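Since the abstract reports results as F-measures, here is one common way such a score is computed for clusterings: every pair of points is treated as a binary "same cluster or not" decision, and precision and recall are taken over those decisions. The paper does not state which F-measure variant it uses, so this pairwise version is an assumption.

```python
from itertools import combinations

def pairwise_f_measure(true_labels, pred_labels):
    """Pairwise F-measure: score each pair of points as a binary
    'same cluster' vs. 'different cluster' decision."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(true_labels)), 2):
        same_true = true_labels[i] == true_labels[j]
        same_pred = pred_labels[i] == pred_labels[j]
        if same_pred and same_true:
            tp += 1
        elif same_pred and not same_true:
            fp += 1
        elif same_true and not same_pred:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```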
Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised, with class labels unknown a priori. In real application domains, however, some "weak" form of side information about the domain or data sets is often available or derivable. In particular, information in the form of instance-level pairwise constraints is general and relatively easy to derive. The problem with traditional clustering techniques is that they cannot benefit from side information even when it is available. I study the problem of semi-supervised clustering, which aims to partition a set of unlabeled data items into coherent groups given a collection of constraints. Because semi-supervised clustering promises higher quality with little extra human effort, it is of great interest both in theory and in practice. Semi-supervised clu...
A classification-based approach to semi-supervised clustering with pairwise constraints
Neural Networks, 2020
In this paper, we introduce a neural network framework for semi-supervised clustering (SSC) with pairwise (must-link or cannot-link) constraints. In contrast to existing approaches, we decompose SSC into two simpler classification tasks/stages: the first stage uses a pair of Siamese neural networks to label the unlabeled pairs of points as must-link or cannot-link; the second stage uses the fully pairwise-labeled dataset produced by the first stage in a supervised neural-network-based clustering method. The proposed approach, S3C2 (Semi-Supervised Siamese Classifiers for Clustering), is motivated by the observation that binary classification (such as assigning pairwise relations) is usually easier than multi-class clustering with partial supervision. Moreover, being classification-based, our method solves only well-defined classification problems, rather than less well specified clustering tasks. Extensive experiments on various datasets demonstrate the high performance of the proposed method.
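The first stage, a Siamese pair classifier, can be sketched briefly: two inputs pass through a shared encoder, and the pair is labeled must-link or cannot-link from the difference of their embeddings. A minimal PyTorch sketch; the layer sizes, the absolute-difference fusion, and the training snippet are illustrative assumptions rather than the S3C2 configuration reported in the paper.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Stage-1 sketch: a Siamese encoder labels a pair of points as
    must-link (1) or cannot-link (0)."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.head = nn.Sequential(
            nn.Linear(emb_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)       # shared weights
        return self.head(torch.abs(e1 - e2)).squeeze(-1)  # pair logit

# One training step on labeled pairs (must-link = 1, cannot-link = 0):
model = PairClassifier(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
x1, x2 = torch.randn(8, 10), torch.randn(8, 10)  # toy pair batch
y = torch.randint(0, 2, (8,)).float()
loss = loss_fn(model(x1, x2), y)
opt.zero_grad(); loss.backward(); opt.step()
```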
2019
Clustering algorithms with constraints (also known as semi-supervised clustering algorithms) have been introduced to the field of machine learning as a significant variant of conventional unsupervised clustering algorithms. They have been demonstrated to achieve better performance by integrating prior knowledge during the clustering process, which enables uncovering relevant, useful information from the data being clustered. However, the development of semi-supervised hierarchical clustering techniques is still an open and active investigation area. The majority of current semi-supervised clustering algorithms are developed as partitional clustering (PC) methods, and only a few research efforts have been made to develop semi-supervised hierarchical clustering methods. The aim of this research is to enhance hierarchical clustering (HC) algorithms based on prior knowledge by adopting novel methodologies. [Continues.]
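To make the setting concrete, a common way to inject prior knowledge into hierarchical clustering is to forbid agglomerative merges that would join cannot-linked points. The sketch below illustrates that generic idea only; the abstract does not describe the thesis's actual methodologies, and the naive single-linkage search shown here is an assumption chosen for brevity.

```python
import numpy as np

def constrained_agglomerative(X, cannot_links, n_clusters):
    """Single-linkage agglomeration that refuses to merge two clusters
    joined by a cannot-link pair (naive O(n^2) merge search)."""
    clusters = [{i} for i in range(len(X))]
    cl = {frozenset(p) for p in cannot_links}

    def violates(a, b):
        return any(frozenset((i, j)) in cl for i in a for j in b)

    def linkage(a, b):  # single linkage: closest pair of members
        return min(np.linalg.norm(X[i] - X[j]) for i in a for j in b)

    while len(clusters) > n_clusters:
        best = None
        for p in range(len(clusters)):
            for q in range(p + 1, len(clusters)):
                if violates(clusters[p], clusters[q]):
                    continue  # constraint blocks this merge
                d = linkage(clusters[p], clusters[q])
                if best is None or d < best[0]:
                    best = (d, p, q)
        if best is None:
            break  # no feasible merge remains
        _, p, q = best
        clusters[p] |= clusters[q]
        del clusters[q]
    return clusters
```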
Semi-supervised learning using multiple clusterings with limited labeled data
Information Sciences, 2016
Supervised classification consists of learning a predictive model from a set of labeled samples. It is accepted that a predictive model's accuracy usually increases as more labeled samples become available. Labeled samples are generally difficult to obtain, as the labeling step is often performed manually; unlabeled samples, on the contrary, are easily available. Because the labeling task is tedious and time consuming, users generally provide a very limited number of labeled objects, and designing approaches able to work efficiently with such limited labels is highly challenging. In this context, semi-supervised approaches have been proposed to leverage both labeled and unlabeled data. In this paper, we focus on cases where the number of labeled samples is very limited. We review and formalize eight semi-supervised learning algorithms and introduce a new method that combines supervised and unsupervised learning in order to use both labeled and unlabeled data. The main idea of this method is to produce new features derived from a first step of data clustering; these features are then used to enrich the description of the input data, leading to a better use of the data distribution. The efficiency of all the methods is compared on various artificial and UCI datasets, and on the classification of a very high resolution remote sensing image. The experiments reveal that our method gives good results, especially when the number of labeled samples is very limited, and confirm that combining labeled and unlabeled data is very useful in pattern recognition.
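The core idea, enriching the input description with features derived from clusterings of all samples (labeled and unlabeled alike), can be sketched as follows. The choice of k-means, the set of k values, one-hot membership encoding, and logistic regression as the final classifier are assumptions for illustration; the paper compares several such combination schemes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def cluster_enriched_features(X, ks=(2, 4, 8)):
    """Run several clusterings of ALL samples and append their one-hot
    memberships to the original features."""
    cols = [X]
    for k in ks:
        m = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        cols.append((m[:, None] == np.arange(k)).astype(float))
    return np.hstack(cols)

# Train on the few labeled rows, with features informed by every row.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
labeled = rng.choice(200, size=10, replace=False)  # very limited labels
Z = cluster_enriched_features(X)
clf = LogisticRegression(max_iter=1000).fit(Z[labeled], y[labeled])
```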
Unsupervised and semi-supervised clustering with learnable cluster dependent kernels
Despite the large number of existing clustering methods, clustering remains a challenging task, especially when the structure of the data does not correspond to easily separable categories and when clusters vary in size, density, and shape. Existing kernel-based approaches make it possible to adapt a specific similarity measure in order to make the problem easier. Although good results have been obtained using the Gaussian kernel function, its performance depends on the selection of the scaling parameter. Moreover, since one global parameter is used for the entire data set, it may not be possible to find a single optimal scaling parameter when there are large variations between the distributions of the different clusters in the feature space. One way to learn optimal scaling parameters is through an exhaustive search for one optimal scaling parameter per cluster. However, this approach is not practical, since it is computationally expensive, especially when the data include a large number of clusters and when the dynamic range of possible scaling-parameter values is large; moreover, it is not trivial to evaluate the resulting partition in order to select the optimal parameters. To overcome this limitation, we introduce two new fuzzy relational clustering techniques that learn cluster-dependent Gaussian kernels. The first algorithm, called the clustering and Local Scale Learning algorithm (LSL), minimizes one objective function for both the optimal partition and cluster-dependent scaling parameters that reflect the intra-cluster characteristics of the data. The second algorithm, called Fuzzy clustering with Learnable Cluster-dependent Kernels (FLeCK), learns the scaling parameters by optimizing both the intra-cluster and the inter-cluster dissimilarities. Consequently, the learned scale parameters reflect the relative density, size, and position of each cluster with respect to the other clusters. We also introduce semi-supervised versions of LSL and FLeCK. These algorithms generate a fuzzy partition of the data and learn the optimal kernel resolution of each cluster simultaneously. We show that incorporating a small set of constraints can guide the clustering process to better learn the scaling parameters and the fuzzy memberships and so obtain a better partition of the data. In particular, we show that partial supervision is even more useful on real high-dimensional data sets, where the algorithms are more susceptible to local minima. All of the proposed algorithms are optimized iteratively by dynamically updating the partition and the scaling parameters in each iteration, which makes them simple and fast. Moreover, our algorithms are formulated to work on relational data, which makes them applicable when objects cannot be represented by vectors or when clusters of similar objects cannot be represented efficiently by a single prototype. Our extensive experiments show that FLeCK and SS-FLeCK outperform existing algorithms. In particular, we show that when data include clusters with various inter-cluster and intra-cluster distances, learning cluster-dependent kernels is crucial to obtaining a good partition.
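The notion of a cluster-dependent Gaussian kernel can be illustrated with a short sketch: rather than one global scale, each cluster gets its own sigma estimated from its intra-cluster distances. LSL and FLeCK learn these scales inside the clustering objective itself; estimating them from a fixed partition, as done below, is a simplifying assumption.

```python
import numpy as np

def cluster_dependent_kernel(X, labels):
    """Estimate one Gaussian scale per cluster from its intra-cluster
    distances, then score points against each cluster with that scale."""
    sigmas = {}
    for c in np.unique(labels):
        pts = X[labels == c]
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        sigmas[c] = d[d > 0].mean() if (d > 0).any() else 1.0

    def kernel(x, c):
        center = X[labels == c].mean(axis=0)  # cluster prototype
        return np.exp(-np.linalg.norm(x - center) ** 2 / (2 * sigmas[c] ** 2))

    return kernel
```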
Active Learning for Semi-Supervised Clustering Framework for High Dimensional Data
isara solutions, 2019
In certain clustering tasks it is possible to obtain limited supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to the same or different clusters. The resulting problem is known as semi-supervised clustering, an instance of semi-supervised learning stemming from a traditional unsupervised learning setting. Several algorithms exist for enhancing clustering quality by using supervision in the form of constraints [2]. These algorithms typically utilize the pairwise constraints either to modify the clustering objective function or to learn the clustering distortion measure. Semi-supervised clustering employs limited supervision in the form of labeled instances or pairwise instance constraints to aid unsupervised clustering, and it often significantly improves clustering performance. Despite the vast amount of expert knowledge spent on this problem, most existing work is not designed for handling high-dimensional sparse data [4]. One typical approach specifies a limited number of must-link and cannot-link constraints between pairs of examples. This work presents a pairwise constrained clustering framework and a new method for actively selecting informative pairwise constraints to improve clustering performance [6]. The clustering and active learning methods are both easily scalable to large datasets and can handle very high-dimensional data. Experimental and theoretical results confirm that this active querying of pairwise constraints significantly improves the accuracy of clustering given a relatively small amount of supervision [5].
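One simple reading of active constraint selection is a farthest-first "explore" strategy: repeatedly query the point farthest from everything selected so far, since far-apart points tend to expose distinct clusters. The sketch below shows only that idea, not the full active pairwise-constrained clustering scheme the cited work describes.

```python
import numpy as np

def farthest_first_queries(X, n_queries):
    """Pick points farthest from everything chosen so far and pair each new
    pick with the existing picks; a human oracle would then label each pair
    must-link or cannot-link."""
    picked = [0]  # arbitrary starting point
    queries = []
    while len(queries) < n_queries:
        d = np.min([np.linalg.norm(X - X[p], axis=1) for p in picked], axis=0)
        nxt = int(d.argmax())                      # farthest from all picks
        queries.extend((p, nxt) for p in picked)   # pairs to ask the oracle
        picked.append(nxt)
    return queries[:n_queries]
```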
Improving Semi-Supervised Classification using Clustering
2020
Supervised classification techniques broadly depend on the availability of labeled data. However, collecting this labeled data is always a tedious and costly process. To reduce this effort and improve the performance of the classification process, this paper proposes a new framework that combines a basic classification technique with a semi-supervised clustering process. Semi-supervised clustering algorithms aim to increase the accuracy of the clustering process by effectively exploiting the supervision available from a limited amount of labeled data, and they help to label the unlabeled data. In our paper, semi-supervised clustering is integrated with the naive Bayes classification technique, which helps to better train the classifier. To evaluate the performance of the proposed technique, we conduct experiments on several real-world benchmark datasets. The experimental results show that the proposed approach surpasses competing approaches in both accuracy and efficiency.
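The flow of such a framework can be sketched under simple assumptions: cluster all the data, propagate labels within each cluster by majority vote of its few labeled members, and train naive Bayes on the result. Plain k-means plus majority vote stands in for the paper's own semi-supervised clustering step, which is richer than this.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def cluster_then_train_nb(X, y_partial, n_clusters):
    """Cluster all data, propagate each cluster's majority label to its
    unlabeled members (y_partial uses -1 for unlabeled), then train
    naive Bayes on everything that ended up labeled."""
    z = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    y_full = np.array(y_partial)
    for c in range(n_clusters):
        members = np.where(z == c)[0]
        known = y_full[members][y_full[members] >= 0]
        if len(known):  # propagate majority label to unlabeled members only
            unlabeled = members[y_full[members] < 0]
            y_full[unlabeled] = np.bincount(known).argmax()
    mask = y_full >= 0
    return GaussianNB().fit(X[mask], y_full[mask])
```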