Propagation Kernels for Partially Labeled Graphs

Graph Kernels for Object Category Prediction in Task-Dependent Robot Grasping

Robot grasping is a critical and difficult problem in robotics. Simply finding a stable grasp is hard enough, but to perform a useful grasp we must also consider other aspects of the task: the object, its properties, and any task-related constraints. The choice of grasping region depends strongly on the category of the object, and the automated prediction of object category is the problem we focus on here. In this paper, we incorporate manifold information and semantic object parts into a graph kernel to predict the categories of a large variety of household objects such as cups, pots, pans, bottles, and various tools. Similarity-based category prediction is achieved by employing propagation kernels, a recently introduced graph kernel for partially labeled graphs, on graph representations of 3D point clouds of objects. Our work highlights the importance of moving towards structured machine learning approaches in order to achieve the dream of autonomous and intelligent robot grasping: learning to map low-level visual features to good grasping points while accounting for object-task affordances and high-level world knowledge. We evaluate propagation kernels for object category prediction on a (synthetic) dataset of 41 objects in 11 categories and on a dataset of 126 point clouds derived from laser range data, with part labels estimated by a part detector. Further, we point out the benefit of leveraging kernel-based object category distributions for task-dependent robot grasping.
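The propagation-kernel idea summarized above can be sketched as follows. The graph encodings, the number of iterations, and the bin quantization are all illustrative assumptions; in particular, the quantization step stands in for the locality-sensitive hashing used in the actual method:

```python
from collections import Counter

def propagation_steps(adj, dist, t_max):
    """Yield node label distributions after 0..t_max rounds of neighbor averaging.

    adj:  {node: [neighbors]}                 (illustrative encoding)
    dist: {node: (p_class0, p_class1, ...)}   partial labels as distributions
    """
    yield dict(dist)
    for _ in range(t_max):
        new = {}
        for v, nbrs in adj.items():
            k = len(dist[v])
            sums = [0.0] * k
            for u in nbrs:
                for i, p in enumerate(dist[u]):
                    sums[i] += p
            new[v] = tuple(s / max(len(nbrs), 1) for s in sums)
        dist = new
        yield dict(dist)

def bin_key(d, width=0.25):
    # Quantize a distribution so near-identical distributions share a bin
    # (a crude stand-in for the locality-sensitive hashing of the real method).
    return tuple(int(p / width) for p in d)

def propagation_kernel(adj1, dist1, adj2, dist2, t_max=2):
    """k(G1, G2): per iteration, dot product of the bin-count histograms."""
    total = 0
    for d1, d2 in zip(propagation_steps(adj1, dist1, t_max),
                      propagation_steps(adj2, dist2, t_max)):
        h1 = Counter(bin_key(d) for d in d1.values())
        h2 = Counter(bin_key(d) for d in d2.values())
        total += sum(h1[b] * h2[b] for b in h1)
    return total
```

Two point-cloud graphs would be compared by calling `propagation_kernel` on their adjacency and partial-label encodings; nodes with unknown category labels simply start from a uniform distribution.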

A graph-matching kernel for object categorization

2011 International Conference on Computer Vision, 2011

This paper addresses the problem of category-level image classification. The underlying image model is a graph whose nodes correspond to a dense set of regions, and edges reflect the underlying grid structure of the image and act as springs to guarantee the geometric consistency of nearby regions during matching. A fast approximate algorithm for matching the graphs associated with two images is presented. This algorithm is used to construct a kernel appropriate for SVM-based image classification, and experiments with the Caltech 101, Caltech 256, and Scenes datasets demonstrate performance that matches or exceeds the state of the art for methods using a single type of features.

On Graph Kernels: Hardness Results and Efficient Alternatives

2003

As most 'real-world' data is structured, research in kernel methods has begun investigating kernels for various kinds of structured data. One of the most widely used tools for modeling structured data is the graph. An interesting and important challenge is thus to investigate kernels on instances that are represented by graphs. So far, only very specific graphs such as trees and strings have been considered. This paper investigates kernels on labeled directed graphs with general structure. It is shown that computing a strictly positive definite graph kernel is at least as hard as solving the graph isomorphism problem. It is also shown that computing an inner product in a feature space indexed by all possible graphs, where each feature counts the number of subgraphs isomorphic to that graph, is NP-hard. On the other hand, inner products in an alternative feature space, based on walks in the graph, can be computed in polynomial time. Such kernels are defined in this paper.
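The walk-based feature space mentioned in the last sentence can be sketched via the direct product graph: common walks of two graphs correspond to walks in the product of their label-matching node pairs. The truncated, unweighted count below is an assumption for brevity; the paper's kernels weight walks (e.g. geometrically) so that the infinite sum converges:

```python
def walk_kernel(g1, g2, length=3):
    """Count label-matching common walks up to `length` using the direct product graph.

    Each graph is (adjacency dict, label dict) -- an illustrative encoding.
    """
    adj1, lab1 = g1
    adj2, lab2 = g2
    # Product-graph nodes: pairs of nodes with equal labels.
    nodes = [(u, v) for u in adj1 for v in adj2 if lab1[u] == lab2[v]]
    index = {p: i for i, p in enumerate(nodes)}
    n = len(nodes)
    # Adjacency of the product graph.
    A = [[0] * n for _ in range(n)]
    for (u, v), i in index.items():
        for u2 in adj1[u]:
            for v2 in adj2[v]:
                if (u2, v2) in index:
                    A[i][index[(u2, v2)]] = 1
    # Dynamic program: walks[i] = number of walks of the current length starting at i.
    total = n            # walks of length 0
    walks = [1] * n
    for _ in range(length):
        walks = [sum(A[i][j] * walks[j] for j in range(n)) for i in range(n)]
        total += sum(walks)
    return total
```

The polynomial-time claim is visible here: the product graph has at most |V1|·|V2| nodes, and each walk length costs one matrix-vector product.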

On Valid Optimal Assignment Kernels and Applications to Graph Classification

The success of kernel methods has initiated the design of novel positive semidefinite functions, in particular for structured data. A leading design paradigm for this is the convolution kernel, which decomposes structured objects into their parts and sums over all pairs of parts. Assignment kernels, in contrast, are obtained from an optimal bijection between parts, which can provide a more valid notion of similarity. In general however, optimal assignments yield indefinite functions, which complicates their use in kernel methods. We characterize a class of base kernels used to compare parts that guarantees positive semidefinite optimal assignment kernels. These base kernels give rise to hierarchies from which the optimal assignment kernels are computed in linear time by histogram intersection. We apply these results by developing the Weisfeiler-Lehman optimal assignment kernel for graphs. It provides high classification accuracy on widely-used benchmark data sets improving over the original Weisfeiler-Lehman kernel.
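The histogram-intersection computation can be sketched as follows. The hierarchy, weights, and part labels here are toy assumptions; in the Weisfeiler-Lehman optimal assignment kernel, the hierarchy arises from WL label refinement:

```python
from collections import Counter

def hierarchy_histogram(labels, parent):
    """Count each part label together with all of its ancestors in the hierarchy.

    parent: {node: parent_node}, with the root mapping to None.
    """
    h = Counter()
    for lab in labels:
        v = lab
        while v is not None:
            h[v] += 1
            v = parent[v]
    return h

def assignment_kernel(labels1, labels2, parent, weight):
    """Optimal assignment kernel as a weighted histogram intersection over the
    hierarchy nodes -- linear time in the number of parts."""
    h1 = hierarchy_histogram(labels1, parent)
    h2 = hierarchy_histogram(labels2, parent)
    return sum(weight[v] * min(h1[v], h2[v]) for v in set(h1) | set(h2))
```

The key point from the abstract is visible in the code: no explicit bijection between parts is ever constructed, yet the intersection of ancestor histograms equals the value of the optimal assignment under hierarchy-induced base kernels.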

Graph Kernel Neural Networks

arXiv (Cornell University), 2021

The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this paper we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similarly to what happens for convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyper-parameters and we show that our model achieves competitive performance on standard graph classification datasets.

Global graph kernels using geometric embeddings

Applications of machine learning methods increasingly deal with graph structured data through kernels. Most existing graph kernels compare graphs in terms of features defined on small subgraphs such as walks, paths or graphlets, adopting an inherently local perspective. However, several interesting properties such as girth or chromatic number are global properties of the graph, and are not captured in local substructures. This paper presents two graph kernels defined on unlabeled graphs which capture global properties of graphs using the celebrated Lovász number and its associated orthonormal representation. We make progress towards theoretical results aiding kernel choice, proving a result about the separation margin of our kernel for classes of graphs. We give empirical results on classification of synthesized graphs with important global properties as well as established benchmark graph datasets, showing that the accuracy of our kernels is better than or competitive to existing graph kernels.

A Novel Graph Embedding Framework for Object Recognition

Lecture Notes in Computer Science, 2015

A great deal of research has been devoted to understanding image content. In this field, many well-known methods exploit Bag of Words (BoW) features, describing image content as an appearance-frequency histogram of visual words. These approaches have a major drawback: location information and the relationships between features are lost. To overcome this limitation, we propose a novel methodology for the object recognition task. A digital image is described as a feature vector computed by means of a new graph embedding paradigm on the Attributed Relational SIFT Regions Graph. The final classification is performed by a Logistic Label Propagation classifier. Our framework is evaluated on standard databases (such as ETH-80, COIL-100 and ALOI), and the results, compared with those obtained by well-known methodologies, confirm its quality.

Rethinking Kernel Methods for Node Representation Learning on Graphs

2019

Graph kernels are kernel methods measuring graph similarity and serve as a standard tool for graph classification. However, the use of kernel methods for node classification, a problem closely related to graph representation learning, is still ill-posed, and the state-of-the-art methods are heavily based on heuristics. Here, we present a novel theoretical kernel-based framework for node classification that can bridge the gap between these two representation learning problems on graphs. Our approach is motivated by graph kernel methodology but extended to learn node representations capturing the structural information in a graph. We theoretically show that our formulation is as powerful as any positive semidefinite kernels. To efficiently learn the kernel, we propose a novel mechanism for node feature aggregation and a data-driven similarity metric employed during the training phase. More importantly, our framework is flexible and complementary to other graph-based deep learning ...
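The node-feature-aggregation step can be illustrated with a fixed, non-learned sketch. Everything below is an assumption for illustration: the paper learns both the aggregation and the similarity metric during training, whereas here the aggregation is plain mean pooling and the metric a plain dot product:

```python
def aggregate_features(adj, feats, hops=2):
    """Build node representations by concatenating `hops` rounds of
    mean-pooled neighbor features onto each node's own features."""
    reps = {v: list(f) for v, f in feats.items()}
    cur = {v: list(f) for v, f in feats.items()}
    for _ in range(hops):
        nxt = {}
        for v, nbrs in adj.items():
            k = len(cur[v])
            nxt[v] = [sum(cur[u][i] for u in nbrs) / max(len(nbrs), 1)
                      for i in range(k)]
            reps[v].extend(nxt[v])
        cur = nxt
    return reps

def node_similarity(r1, r2):
    """A fixed dot-product similarity; the paper instead learns this metric."""
    return sum(a * b for a, b in zip(r1, r2))
```

After `hops` rounds, each representation has `(hops + 1) * d` entries, so structurally equivalent nodes with identical features receive identical representations.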

Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification

2005

There have been many graph-based approaches for semi-supervised classification. One problem is that of hyperparameter learning: performance depends greatly on the hyperparameters of the similarity graph, the transformation of the graph Laplacian, and the noise model. We present a Bayesian framework for learning hyperparameters for graph-based semi-supervised classification. Given some labeled data, which can contain inaccurate labels, we pose semi-supervised classification as an inference problem over the unknown labels. Expectation Propagation is used for approximate inference, and the mean of the posterior is used for classification. The hyperparameters are learned using EM for evidence maximization. We also show that the posterior mean can be written in terms of the kernel matrix, providing a Bayesian classifier to classify new points. Tests on synthetic and real datasets show cases where there are significant improvements in performance over the existing approaches.
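The graph-based semi-supervised setup the paper builds on can be illustrated by the classical harmonic solution, where each unlabeled node's score becomes the mean of its neighbors' scores. This sketch is an assumption covering only the base classifier; the paper's contributions (EP for approximate inference, EM over the graph and noise-model hyperparameters) sit on top of it:

```python
def harmonic_labels(adj, labeled, iters=200):
    """Gauss-Seidel iteration toward the harmonic solution on a similarity graph.

    adj:     {node: [neighbors]}   (unweighted graph for simplicity)
    labeled: {node: score in [0, 1]} for the labeled nodes
    """
    f = {v: labeled.get(v, 0.5) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in labeled:
                # Each unlabeled node relaxes to the mean of its neighbors.
                f[v] = sum(f[u] for u in adj[v]) / len(adj[v])
    return f
```

Thresholding the resulting scores at 0.5 yields binary class predictions; the quality of these predictions is exactly what depends on the graph hyperparameters the paper learns.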