Abhinav Gupta - Academia.edu

Papers by Abhinav Gupta

An Uncertain Future: Forecasting from Static Images Using Variational Autoencoders

Computer Vision – ECCV 2016, 2016

In a given scene, humans can easily predict a set of immediate future events that might happen. However, pixel-level anticipation in computer vision is difficult because machine learning struggles with the ambiguity in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene: what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode information that is not available in the image. We show that our method predicts events in a variety of scenes and can produce multiple different predictions for an ambiguous future. We also find that our method learns a representation that is applicable to semantic vision tasks.
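
For readers unfamiliar with conditional VAEs, the quantity being maximized is the standard conditional evidence lower bound; a minimal sketch follows (notation assumed here: x the input image, y the dense pixel trajectories, z the latent code; the paper's exact parameterization may differ):

```latex
\log p_\theta(y \mid x) \;\geq\;
\mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(y \mid x, z)\big]
\;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x, y) \,\|\, p(z \mid x)\big)
```

At test time, sampling different z from the prior p(z | x) yields the multiple distinct predictions for an ambiguous future mentioned above.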

A "Shape Aware" Model for semi-supervised Learning of Objects and its Context

We present an approach that combines bag-of-words and spatial models to perform semantic and syntactic analysis for recognition of an object based on its internal appearance and its context. We argue that while object recognition requires modeling the relative spatial locations of image features within the object, a bag-of-words model is sufficient for representing context. Learning such a model from weakly labeled data involves labeling features into two classes: foreground (object) or "informative" background (context). We present a "shape-aware" model which utilizes contour information for efficient and accurate labeling of features in the image. Our approach iterates between an MCMC-based labeling and a contour-based labeling of features to integrate co-occurrence of features and shape similarity.

Understanding videos, constructing plots: Learning a visually grounded storyline model from annotated videos

2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009

Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story/plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data. [Figure: training videos paired with long and short captions, e.g., "After the pitcher pitches the ball, the batter hits it. The batter hits the ball and runs towards the base. Meanwhile the fielder catches the ball and runs towards the base." / "The pitcher pitches the ball and then the batter swings the bat but misses it."]
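
As a rough illustration of the Integer Programming framework, a generic MAP-labeling ILP over the storyline graph might look like the sketch below. This is an assumption-laden simplification: s(v, a) is a per-node appearance score, c(a, a') a causal-compatibility score for an AND-OR edge, and the y variables are the usual linearization of pairwise products; the paper's actual formulation additionally encodes the spatio-temporal constraints.

```latex
\max_{x,\,y}\; \sum_{v}\sum_{a} s(v,a)\, x_{v,a}
\;+\; \sum_{(u,v)\in E}\sum_{a,a'} c(a,a')\, y_{u,v}^{a,a'}
\quad\text{s.t.}\quad
\sum_{a} x_{v,a} = 1 \;\;\forall v,\qquad
y_{u,v}^{a,a'} \leq x_{u,a},\qquad
y_{u,v}^{a,a'} \leq x_{v,a'},\qquad
x,\,y \in \{0,1\}
```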

Learning What and How of Contextual Models for Scene Labeling

Lecture Notes in Computer Science, 2010

We present a data-driven approach to predict the importance of edges and construct a Markov network for image analysis based on statistical models of global and local image features. We also address the coupled problem of predicting the feature weights associated with each edge of a Markov network for evaluation of context. Experimental results indicate that this scene-dependent structure construction model eliminates spurious edges and improves performance over fully-connected and neighborhood-connected Markov networks.

Objects in Action: An Approach for Combining Action Understanding and Object Perception

2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007

Representing Videos Using Mid-level Discriminative Patches

2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013

How should a video be represented? We propose a new representation for videos based on mid-level discriminative spatio-temporal patches. These spatio-temporal patches might correspond to a primitive human action, a semantic object, or perhaps a random but informative spatio-temporal patch in the video. What defines these spatio-temporal patches is their discriminative and representative properties. We automatically mine these patches from hundreds of training videos and experimentally demonstrate that these patches establish correspondence across videos and align the videos for label transfer techniques. Furthermore, these patches can be used as a discriminative vocabulary for action classification, where they demonstrate state-of-the-art performance on the UCF50 and Olympics datasets.

Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009

Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding the scene/event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rates improve when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates the various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape/appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information.
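
Schematically, the Bayesian integration can be pictured as a joint posterior over object class O and action class A given their image evidence I_O and I_A, with a compatibility potential carrying the spatial and functional constraints (the symbols here are illustrative, not the paper's notation):

```latex
P(O, A \mid I_O, I_A) \;\propto\; P(I_O \mid O)\; P(I_A \mid A)\; \Psi(O, A)
```

Even when P(I_O | O) alone cannot disambiguate the object, a strong compatibility term Ψ(O, A) (e.g., a spraying motion is compatible with a spray bottle) can pull the joint estimate toward the right interpretation.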

Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases

arXiv, 2020

Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification. Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class. In this work, we first present quantitative experiments to demystify these gains. We demonstrate that approaches like MoCo and PIRL learn occlusion-invariant representations. However, they fail to capture viewpoint and category instance invariance, which are crucial components for object recognition. Second, we demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like ImageNet. Finally, we propose an approach to leverage unstructured videos to learn representations that possess higher viewpoint invariance. Our results show that the learned representations outperform...
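
One way to quantify such invariances, sketched below under stated assumptions (this is an illustrative proxy, not the paper's exact measure), is to embed pairs of images that differ only by the nuisance factor and compare their mean cosine similarity:

```python
import numpy as np

def invariance_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Mean cosine similarity between embeddings of image pairs that differ
    only by a nuisance transformation (e.g., an occluding crop, or a
    viewpoint change). Higher values mean the representation is more
    invariant to that factor. Illustrative proxy, not the paper's metric."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```

Comparing this score on occlusion pairs versus viewpoint pairs would surface exactly the asymmetry the abstract describes.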

Bounce and Learn: Modeling Scene Dynamics with Real-World Bounces

arXiv, 2019

We introduce an approach to model surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer two underlying physical properties that govern bouncing: restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules: a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories. To achieve our results, we introduce the Bounce Dataset, comprising 5K RGB-D videos of bouncing trajectories of a foam ball used to probe surfaces of varying shapes and materials in everyday scenes, including homes and offices. Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional informat...
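
The two physical properties have a simple closed-form role in an idealized bounce, which may help ground the abstract; the sketch below is textbook collision response (frictionless, spin-free), not the learned PIM itself:

```python
import numpy as np

def post_bounce_velocity(v_in: np.ndarray, normal: np.ndarray,
                         restitution: float) -> np.ndarray:
    """Idealized bounce: keep the tangential velocity component and flip
    the normal component, scaled by the coefficient of restitution.
    Ignores friction and spin; the paper's PIM learns this mapping."""
    n = normal / np.linalg.norm(normal)
    v_n = np.dot(v_in, n) * n      # component along the collision normal
    v_t = v_in - v_n               # tangential component
    return v_t - restitution * v_n

# A ball hitting the floor at 45 degrees with restitution 0.8:
print(post_bounce_velocity(np.array([1.0, -1.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]), 0.8))
# -> [1.  0.8 0. ]
```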

Marr Revisited: 2D-3D Alignment via Surface Normal Prediction

2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016

Figure 1. Given a single 2D image, we predict surface normals that capture detailed object surfaces. We use the image and predicted surface normals to retrieve a 3D model from a large library of object CAD models.

Mid-level visual element discovery as discriminative mode seeking

Recent work on mid-level visual representations aims to capture information at a level of complexity higher than typical "visual words", but lower than full-blown semantic objects. Several approaches [5, 6, 12, 23] have been proposed to discover mid-level visual elements that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.
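
As background, the classic mean-shift iteration that this work generalizes fits in a few lines; a minimal sketch (Gaussian kernel, fixed bandwidth; the paper replaces the plain density with a discriminative ratio of positive to negative densities):

```python
import numpy as np

def mean_shift_mode(points: np.ndarray, start: np.ndarray,
                    bandwidth: float, iters: int = 50) -> np.ndarray:
    """Repeatedly move the estimate to the kernel-weighted mean of nearby
    points; the fixed point is a local mode of the kernel density."""
    x = start.copy()
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x
```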

Unsupervised Learning of Visual Representations Using Videos

2015 IEEE International Conference on Computer Vision (ICCV), 2015

Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNNs. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have a similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.
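
The ranking loss over triplets can be sketched as follows (a minimal PyTorch version; the margin value and the Euclidean distance are illustrative choices, not necessarily the paper's exact ones):

```python
import torch

def triplet_ranking_loss(anchor: torch.Tensor, positive: torch.Tensor,
                         negative: torch.Tensor,
                         margin: float = 0.5) -> torch.Tensor:
    """The tracked patch (positive) should sit closer to the first patch
    of the track (anchor) than a random patch (negative), by at least
    the margin."""
    d_pos = torch.norm(anchor - positive, dim=1)
    d_neg = torch.norm(anchor - negative, dim=1)
    return torch.relu(d_pos - d_neg + margin).mean()
```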

Beyond Grids: Learning Graph Representations for Visual Recognition

We propose learning graph representations from 2D feature maps for visual recognition. Our method draws inspiration from region-based recognition, and learns to transform a 2D image into a graph structure. The vertices of the graph define clusters of pixels ("regions"), and the edges measure the similarity between these clusters in a feature space. Our method further learns to propagate information across all vertices on the graph, and is able to project the learned graph representation back into 2D grids. Our graph representation facilitates reasoning beyond regular grids and can capture long-range dependencies among regions. We demonstrate that our model can be trained end-to-end, and is easily integrated into existing networks. Finally, we evaluate our method on three challenging recognition tasks: semantic segmentation, object detection and object instance segmentation. For all tasks, our method outperforms state-of-the-art methods.
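
A minimal numpy sketch of the grid-to-graph-to-grid round trip described above (in the paper these steps are learned, differentiable layers; the soft-assignment and affinity matrices here are placeholders):

```python
import numpy as np

def grid_graph_grid(feats: np.ndarray, assign_logits: np.ndarray,
                    affinity: np.ndarray) -> np.ndarray:
    """feats: (HW, C) pixel features; assign_logits: (HW, K) soft
    pixel-to-region assignments; affinity: (K, K) edge weights."""
    A = np.exp(assign_logits)
    A /= A.sum(axis=1, keepdims=True)            # softmax over K regions
    Z = A.T @ feats                              # (K, C) region features
    Z /= A.sum(axis=0)[:, None] + 1e-6           # normalize by region mass
    W = affinity / (affinity.sum(axis=1, keepdims=True) + 1e-6)
    Z = W @ Z                                    # propagate along graph edges
    return A @ Z                                 # project back to the 2D grid
```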

Compositional Learning for Human Object Interaction

Computer Vision – ECCV 2018, 2018

The world of human-object interactions is rich. While generally we sit on chairs and sofas, if need be we can even sit on TVs or on top of shelves. In recent years, there has been progress in modeling actions and human-object interactions. However, most of these approaches require lots of data, and it is not clear whether the learned representations of actions generalize to new categories. In this paper, we explore the problem of zero-shot learning of human-object interactions: given limited verb-noun interactions in training data, we want to learn a model that can work even on unseen combinations. To deal with this problem, we propose a novel method using an external knowledge graph and graph convolutional networks, which learns how to compose classifiers for verb-noun pairs. We also provide benchmarks on several datasets for zero-shot learning, including both images and videos. We hope our method, dataset and baselines will facilitate future research in this direction.
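
For concreteness, one graph-convolution layer over the verb/noun knowledge graph could look like the following (the standard GCN propagation rule; the graph, feature dimensions, and weights here are hypothetical):

```python
import numpy as np

def gcn_layer(H: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """H: (N, D) node features, A: (N, N) adjacency, W: (D, D_out) weights.
    Standard normalization A_hat = D^-1/2 (A + I) D^-1/2, then ReLU."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))
    return np.maximum(A_hat @ H @ W, 0.0)
```

Stacking such layers lets a verb node and a noun node exchange information, so a composed verb-noun classifier can borrow evidence from related, seen combinations.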

The visual object tracking VOT2013 challenge results

Proc. of the ICCV workshop on Visual Object Tracking, 2013

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One...

Context as Supervisory Signal: Discovering Objects with Predictable Context

Lecture Notes in Computer Science, 2014

This paper addresses the well-established problem of unsupervised object discovery with a novel method inspired by weakly-supervised approaches. In particular, the ability of an object patch to predict the rest of the object (its context) is used as a supervisory signal to help discover visually consistent object clusters. The main contributions of this work are: 1) framing unsupervised clustering as a leave-one-out context prediction task; 2) evaluating the quality of context prediction by statistical hypothesis testing between thing and stuff appearance models; and 3) an iterative region prediction and context alignment approach that gradually discovers a visual object cluster together with a segmentation mask and fine-grained correspondences. The proposed method outperforms previous unsupervised as well as weakly-supervised object discovery approaches, and is shown to provide correspondences detailed enough to transfer keypoint annotations.
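
The thing-vs-stuff test in contribution 2) has a likelihood-ratio flavour; a hedged sketch follows (the scoring functions logp_thing and logp_stuff are hypothetical stand-ins for the paper's appearance models):

```python
def prefers_thing_model(context_feats, logp_thing, logp_stuff,
                        threshold: float = 0.0) -> bool:
    """Keep a candidate object cluster only if its predicted context is
    explained better by the cluster-specific 'thing' model than by a
    generic 'stuff' model (log-likelihood ratio above a threshold)."""
    llr = sum(logp_thing(f) - logp_stuff(f) for f in context_feats)
    return llr > threshold
```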

Data-driven visual similarity for cross-domain image matching

ACM Transactions on Graphics, 2011

The goal of this work is to find visually similar images even if they appear quite different at the raw pixel level. This task is particularly important for matching images across visual domains, such as photos taken over different seasons or lighting conditions, paintings, hand-drawn sketches, etc. We propose a surprisingly simple method that estimates the relative importance of different features in a query image based on the notion of "data-driven uniqueness". We employ standard tools from discriminative object detection in a novel way, yielding a generic approach that does not depend on a particular image representation or a specific visual domain. Our approach shows good performance on a number of difficult cross-domain visual tasks, e.g., matching paintings or sketches to real photographs. The method also allows us to demonstrate novel applications such as Internet re-photography and painting2gps. While at present the technique is too computationally intensive to be pra...
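
In the spirit of "data-driven uniqueness", a cheap stand-in (an assumption on my part; the paper instead trains a discriminative detector per query) is to whiten the query descriptor against the statistics of a large corpus of unrelated images, so globally common gradients get down-weighted:

```python
import numpy as np

def uniqueness_weights(query: np.ndarray, corpus_mean: np.ndarray,
                       corpus_cov: np.ndarray) -> np.ndarray:
    """LDA-style whitening: features common across the 'rest of the world'
    corpus are suppressed; rare, distinctive ones dominate matching."""
    reg = corpus_cov + 1e-6 * np.eye(corpus_cov.shape[0])
    return np.linalg.solve(reg, query - corpus_mean)
```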

Exemplar-SVMs for Visual Object Detection, Label Transfer and Image Retrieval
