Mohamed Elhoseiny - Academia.edu

Papers by Mohamed Elhoseiny

Automatic Annotation of Structured Facts in Images

arXiv (Cornell University), Apr 2, 2016

Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions. Example structured facts include attributed objects (e.g., <flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man, walking, dog>), and positional information (e.g., <vase, on, table>). The collected annotations are in the form of fact-image pairs (e.g., <man, walking, dog> and an image region containing this fact). With a language approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with an accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from images with captions and localized them in images in less than one day of processing time on standard CPU platforms.
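The extraction pipeline itself is not reproduced on this page; as a rough illustration of how subject-verb-object facts such as <man, walking, dog> can be pulled from a caption with a dependency parser, here is a minimal sketch (spaCy and its small English model are assumed; the authors' language approach may differ substantially):

```python
# Minimal sketch (not the authors' pipeline): extract <subject, verb, object>
# style facts from a caption using a dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def extract_facts(caption: str):
    """Return rough <S, V, O> / <S, V> tuples from one caption."""
    facts = []
    doc = nlp(caption)
    for token in doc:
        if token.pos_ != "VERB":
            continue
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
        for s in subjects:
            if objects:
                for o in objects:
                    facts.append((s.lemma_, token.lemma_, o.lemma_))  # e.g. <man, walk, dog>
            else:
                facts.append((s.lemma_, token.lemma_))                # e.g. <baby, smile>
    return facts

print(extract_facts("A man is walking his dog near a red flower."))
```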

Write a Classifier: Predicting Visual Classifiers from Unstructured Text

arXiv (Cornell University), Dec 31, 2015

People typically learn through exposure to visual concepts associated with linguistic descriptions. For instance, teaching visual object categories to children is often accompanied by descriptions in text or speech. In a machine learning context, these observations motivate us to ask whether this learning process could be computationally modeled to learn visual classifiers. More specifically, the main question of this work is how to utilize a purely textual description of visual classes, with no training images, to learn explicit visual classifiers for them. We propose and investigate two baseline formulations, based on regression and domain transfer, that predict a linear classifier. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the parameters of a linear classifier. We also propose generic kernelized models where a kernel classifier is predicted in the form defined by the representer theorem. The kernelized models allow defining and utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in the visual space and text space, respectively. We finally propose a kernel function between unstructured text descriptions that builds on distributional semantics, which shows an advantage in our setting and could be useful for other applications. We applied all the studied models to predict visual classifiers on two fine-grained and challenging categorization datasets (the CU Birds and Flower datasets), and the results indicate successful predictions of our final model over several baselines that we designed.
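As a toy illustration of the regression baseline mentioned above (predicting the weights of a linear classifier directly from a class-level text embedding), here is a hedged sketch with placeholder data; the Ridge and LinearSVC choices are assumptions, not the paper's exact estimators:

```python
# Sketch of the regression baseline only (not the constrained or kernelized
# models): learn a map from class-level text embeddings to linear-classifier
# weights on seen classes, then predict classifiers for unseen classes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d_text, d_img, n_seen = 300, 512, 40

# Assumed placeholders: text embeddings and image features for seen classes.
text_seen = rng.normal(size=(n_seen, d_text))
img_feats = rng.normal(size=(n_seen * 30, d_img))
labels = np.repeat(np.arange(n_seen), 30)

# 1) Fit one-vs-rest linear classifiers on seen classes to get target weights.
svm = LinearSVC(C=1.0).fit(img_feats, labels)
W_seen = svm.coef_                       # shape: (n_seen, d_img)

# 2) Regress classifier weights from text embeddings.
reg = Ridge(alpha=1.0).fit(text_seen, W_seen)

# 3) "Write" a classifier for an unseen class from its text description alone.
text_unseen = rng.normal(size=(1, d_text))
w_unseen = reg.predict(text_unseen)      # predicted hyperplane, shape (1, d_img)
scores = img_feats @ w_unseen.T          # higher score = more likely the unseen class
```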

Sherlock: Modeling Structured Knowledge in Images

arXiv (Cornell University), Nov 16, 2015

We study scalable and uniform understanding of facts in images. Existing visual recognition systems are typically modeled differently for each fact type, such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously, with the capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse>). Each fact has a semantic language view (e.g., <boy, playing>) and a visual view (an image with this fact). We show that learning visual facts in a structured way enables not only uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce two learning representation models as potential baselines. We applied the investigated methods on several datasets that we augmented with structured facts and on a large-scale dataset of more than 202,000 facts and 814,000 images. Our experiments show the advantage of relating facts by their structure in the proposed models compared to the designed baselines on bidirectional fact retrieval.
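The bidirectional fact retrieval evaluated above reduces, at inference time, to nearest-neighbor search between a language view and a visual view embedded in a shared space. The sketch below is only illustrative; the embeddings are random placeholders standing in for trained encoders:

```python
# Generic sketch of bidirectional fact retrieval in a shared embedding space
# (illustrative only; the paper studies several multiview learning models).
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

# Assumed placeholders: outputs of a language encoder and a visual encoder
# trained to agree on matching fact/image pairs.
rng = np.random.default_rng(0)
fact_emb = l2_normalize(rng.normal(size=(1000, 256)))   # <subject, predicate, object> view
image_emb = l2_normalize(rng.normal(size=(1000, 256)))  # image-region view

def retrieve(query, gallery, k=5):
    """Cosine-similarity retrieval in either direction (fact->image or image->fact)."""
    sims = gallery @ query
    return np.argsort(-sims)[:k]

top_images_for_fact = retrieve(fact_emb[0], image_emb)
top_facts_for_image = retrieve(image_emb[0], fact_emb)
```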

Write a Classifier: Predicting Visual Classifiers from Unstructured Text Descriptions

arXiv (Cornell University), 2016

People typically learn through exposure to visual concepts associated with linguistic descriptions. For instance, teaching visual object categories to children is often accompanied by descriptions in text or speech. In a machine learning context, these observations motivate us to ask whether this learning process could be computationally modeled to learn visual classifiers. More specifically, the main question of this work is how to utilize a purely textual description of visual classes, with no training images, to learn explicit visual classifiers for them. We propose and investigate two baseline formulations, based on regression and domain transfer, that predict a linear classifier. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the parameters of a linear classifier. We also propose generic kernelized models where a kernel classifier is predicted in the form defined by the representer theorem. The kernelized models allow defining and utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in the visual space and text space, respectively. We finally propose a kernel function between unstructured text descriptions that builds on distributional semantics, which shows an advantage in our setting and could be useful for other applications. We applied all the studied models to predict visual classifiers on two fine-grained and challenging categorization datasets (the CU Birds and Flower datasets), and the results indicate successful predictions of our final model over several baselines that we designed.
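For the distributional-semantics text kernel mentioned above, a minimal (assumed, not the paper's exact) form is a cosine kernel over averaged word vectors of the two descriptions:

```python
# Sketch of a distributional-semantics text kernel: represent each description
# by the average of its word vectors and compare descriptions with a cosine
# kernel. The paper's exact kernel may differ; this is illustrative.
import numpy as np

# Assumed placeholder: a word -> vector table (e.g., pretrained embeddings).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=100) for w in
         "the cardinal is a red bird with a short thick bill".split()}

def doc_vector(text):
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

def text_kernel(text_a, text_b):
    a, b = doc_vector(text_a), doc_vector(text_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
    return float(a @ b) / denom   # in [-1, 1]; plugs into the kernelized model

print(text_kernel("a red bird with a thick bill", "the cardinal is a red bird"))
```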

Imagine it for me: Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts

Automatic Annotation of Structured Facts in Images

Proceedings of the 5th Workshop on Vision and Language, 2016

Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions. Example structured facts include attributed objects (e.g., <flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man, walking, dog>), and positional information (e.g., <vase, on, table>). The collected annotations are in the form of fact-image pairs (e.g., <man, walking, dog> and an image region containing this fact). With a language approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with an accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from images with captions and localized them in images in less than one day of processing time on standard CPU platforms.

SRI-Sarnoff AURORA System at TRECVID 2014 Multimedia Event Detection and Recounting

TREC Video Retrieval Evaluation, 2014

In the Multimedia Event Detection 2014 evaluation [20], the SRI Aurora team participated in the 000Ex, 010Ex, and 100Ex tasks with a full system evaluation. The Aurora system extracts multi-modality features, including motion, static image, and audio features, from videos, and represents a video with Bag-of-Words (BoW) and Fisher Vector models. In addition, various high-level concept features have been explored. Beyond the action concept features and SIN features, deep-learning-based semantic features, including both DeCAF and OverFeat implementations, have been explored. The deep-learning features achieve good performance for MED, but they are not the right features for MER. In particular, we performed a further study on semi-supervised automatic annotation to expand our action concepts. To distinguish event categories efficiently and effectively, we introduce a linear SVM into our system, as well as a feature-mapping technique to approximate the Histogram Intersection Kernel for the BoW video model. All the modalities are fused by an ensemble of classifiers including techniques such as logistic regression, SVR, and boosting. Eventually, we achieved satisfactory results. In the MER task, we developed an approach to provide a breakdown of the evidence for why the MED decision was made by exploring the SVM-based event detector.
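As a simplified illustration of the late-fusion step ("all the modalities are fused by an ensemble of classifiers"), the sketch below fuses per-modality detector scores with a single logistic-regression layer; the modality list and data are placeholders, and the real system combines several classifier types:

```python
# Late-fusion sketch: combine per-modality event-detector scores with a
# logistic-regression fusion layer (illustrative; the AURORA system uses an
# ensemble of fusion classifiers, not just this one).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos = 500

# Assumed placeholders: scores from modality-specific detectors
# (motion BoW, static-image Fisher Vector, audio, concept features, ...).
scores = np.column_stack([
    rng.uniform(size=n_videos),   # motion-feature detector
    rng.uniform(size=n_videos),   # static-image detector
    rng.uniform(size=n_videos),   # audio detector
    rng.uniform(size=n_videos),   # semantic-concept detector
])
is_event = rng.integers(0, 2, size=n_videos)  # 1 if the video contains the event

fusion = LogisticRegression(max_iter=1000).fit(scores, is_event)
fused_scores = fusion.predict_proba(scores)[:, 1]  # final MED detection scores
```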

Semi-Supervised Few-Shot Learning with Prototypical Random Walks

arXiv (Cornell University), Mar 5, 2019

Recent progress has shown that few-shot learning can be improved with access to unlabelled data, known as semi-supervised few-shot learning (SS-FSL). We introduce an SS-FSL approach, dubbed Prototypical Random Walk Networks (PRWN), built on top of Prototypical Networks (PN). We develop a random walk semi-supervised loss that enables the network to learn representations that are compact and well-separated. Our work is related to the very recent development of graph-based approaches for few-shot learning. However, we show that compact and well-separated class representations can be achieved by modeling our prototypical random walk notion without needing additional graph-NN parameters or requiring a transductive setting where a collective test set is provided. Our model outperforms baselines in most benchmarks, with significant improvements in some cases. Our model, trained with 40% of the data labeled, compares competitively against fully supervised prototypical networks trained on 100% of the labels, even outperforming them in the 1-shot mini-ImageNet case with 50.89% versus 49.4% accuracy. We also show that our loss is resistant to distractors, unlabeled data that does not belong to any of the training classes, reflecting robustness to labeled/unlabeled class distribution mismatch. The associated GitHub page can be found at https://prototypical-random-walk.github.io.
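A minimal sketch of a prototypical random-walk loss is given below: a walk steps from class prototypes to unlabeled embeddings and back, and each prototype is penalized if it does not land back on itself. This is a simplified reading of the idea, not the exact PRWN objective:

```python
# Simplified prototypical random-walk loss in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def prototypical_random_walk_loss(prototypes, unlabeled):
    """prototypes: (K, d) class prototypes from labeled support embeddings.
    unlabeled:  (M, d) embeddings of unlabeled examples."""
    sim_pu = prototypes @ unlabeled.t()          # (K, M) similarities
    p_pu = F.softmax(sim_pu, dim=1)              # step 1: prototype -> unlabeled
    p_up = F.softmax(sim_pu.t(), dim=1)          # step 2: unlabeled -> prototype
    roundtrip = p_pu @ p_up                      # (K, K) landing probabilities
    target = torch.arange(prototypes.size(0))    # each prototype should return home
    walk_loss = F.nll_loss(torch.log(roundtrip + 1e-8), target)
    # "Visit" regularizer: encourage the walk to use all unlabeled points
    # (KL divergence between the visit distribution and uniform).
    visit = p_pu.mean(dim=0)
    visit_loss = (visit * torch.log(visit + 1e-8)).sum() + torch.log(
        torch.tensor(float(unlabeled.size(0))))
    return walk_loss + visit_loss

protos = F.normalize(torch.randn(5, 64), dim=1)
unlab = F.normalize(torch.randn(40, 64), dim=1)
loss = prototypical_random_walk_loss(protos, unlab)
```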

RelTransformer: A Transformer-Based Long-Tail Visual Relationship Recognition

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

generalizes well on many VRR benchmarks. Our model outperforms the best-performing models on two large-scale long-tail VRR benchmarks, VG8K-LT (+2.0% overall acc) and GQA-LT (+26.0% overall acc), both having a highly skewed distribution towards the tail. It also achieves strong results on the VG200 relation detection task. Our code is available at https://github.com/Vision-CAIR/RelTransformer.

StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Videos show continuous events, yet most, if not all, video synthesis frameworks treat them discretely in time. In this work, we think of videos as what they should be: time-continuous signals, and extend the paradigm of neural representations to build a continuous-time video generator. For this, we first design continuous motion representations through the lens of positional embeddings. Then, we explore the question of training on very sparse videos and demonstrate that a good generator can be learned by using as few as 2 frames per clip. After that, we rethink the traditional image and video discriminator pair and propose to use a single hypernetwork-based one. This decreases the training cost and provides a richer learning signal to the generator, making it possible to train directly on 1024² videos for the first time. We build our model on top of StyleGAN2, and it is just ≈5% more expensive to train at the same resolution while achieving almost the same image quality. Moreover, our latent space features similar properties, enabling spatial manipulations that our method can propagate in time. We can generate arbitrarily long videos at arbitrarily high frame rates, while prior work struggles to generate even 64 frames at a fixed rate. Our model achieves state-of-the-art results on four modern 256² video synthesis benchmarks and one 1024²-resolution benchmark.
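As an illustration of the continuous motion representation idea (positional embeddings over real-valued timestamps rather than discrete frame indices), here is a minimal sinusoidal time-embedding sketch; StyleGAN-V's actual motion codes are more elaborate:

```python
# Sketch of a continuous-time code via sinusoidal positional embeddings, so
# frames can be sampled at arbitrary real-valued timestamps (illustrative).
import torch

def time_embedding(t, dim=256, max_period=1e4):
    """t: (N,) tensor of real-valued timestamps in seconds. Returns (N, dim)."""
    half = dim // 2
    freqs = torch.exp(
        -torch.log(torch.tensor(max_period)) * torch.arange(half) / half)
    angles = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)

# Any frame rate at generation time: just evaluate at denser timestamps.
t_train = torch.tensor([0.0, 1.3])          # e.g. only 2 sparse frames per clip
t_test = torch.arange(0, 4, 1 / 60.0)       # 60 fps at inference
emb = time_embedding(t_test)                # fed to the generator alongside latents
```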

The Shape of Art History in the Eyes of the Machine

Proceedings of the AAAI Conference on Artificial Intelligence

How does the machine classify styles in art? And how does it relate to art historians' methods for analyzing style? Several studies showed the ability of the machine to learn and predict styles, such as Renaissance, Baroque, Impressionism, etc., from images of paintings. This implies that the machine can learn an internal representation encoding discriminative features through its visual analysis. However, such a representation is not necessarily interpretable. We conducted a comprehensive study of several of the state-of-the-art convolutional neural networks applied to the task of style classification on 67K images of paintings, and analyzed the learned representation through correlation analysis with concepts derived from art history. Surprisingly, the networks could place the works of art in a smooth temporal arrangement mainly based on learning style labels, without any a priori knowledge of time of creation, the historical time and context of styles, or relations between st...

Exploring Hierarchical Graph Representation for Large-Scale Zero-Shot Image Classification

arXiv (Cornell University), Mar 2, 2022

The main question we address in this paper is how to scale up visual recognition of unseen classes, also known as zero-shot learning, to tens of thousands of categories, as in the ImageNet-21K benchmark. At this scale, especially with many fine-grained categories included in ImageNet-21K, it is critical to learn quality visual semantic representations that are discriminative enough to recognize unseen classes and distinguish them from seen ones. We propose a Hierarchical Graphical knowledge Representation framework for the confidence-based classification method, dubbed HGR-Net. Our experimental results demonstrate that HGR-Net can grasp class inheritance relations by utilizing hierarchical conceptual knowledge. Our method significantly outperformed all existing techniques, boosting the performance by 7% compared to the runner-up approach on the ImageNet-21K benchmark. We show that HGR-Net is learning-efficient in few-shot scenarios. We also analyzed our method on smaller datasets like ImageNet-21K-P, 2-hops, and 3-hops, demonstrating its generalization ability. Our benchmark and code are available at https://kaiyi.me/p/hgrnet.html.
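As a toy illustration of confidence-based classification over a class hierarchy (not HGR-Net itself), the sketch below propagates leaf-class probabilities to every ancestor, so a prediction can back off to a coarser node when leaf confidence is low; the hierarchy and numbers are made up:

```python
# Illustrative hierarchy-aware confidence: aggregate leaf probabilities up a
# class tree and predict the deepest node that clears a confidence threshold.
import numpy as np

# Assumed toy hierarchy: child -> parent.
parent = {"husky": "dog", "beagle": "dog", "tabby": "cat",
          "dog": "animal", "cat": "animal"}

def hierarchical_confidence(leaf_probs):
    """leaf_probs: dict leaf -> probability (sums to 1). Returns node -> prob."""
    conf = dict(leaf_probs)
    for leaf, p in leaf_probs.items():
        node = leaf
        while node in parent:                # push probability mass to ancestors
            node = parent[node]
            conf[node] = conf.get(node, 0.0) + p
    return conf

probs = {"husky": 0.40, "beagle": 0.35, "tabby": 0.25}
conf = hierarchical_confidence(probs)
# e.g. conf["dog"] == 0.75, conf["animal"] == 1.0; predict the deepest node
# whose confidence exceeds a chosen threshold such as 0.7.
print(conf)
```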

M-Elfeki/GDPP: Generator loss to reduce mode-collapse and to improve the generated samples quality

Domain-Aware Continual Zero-Shot Learning

We introduce Domain-Aware Continual Zero-Shot Learning (DACZSL), the task of visually recognizing images of unseen categories in unseen domains sequentially. We created DACZSL on top of the DomainNet dataset by dividing it into a sequence of tasks, where classes are incrementally provided on seen domains during training and evaluation is conducted on unseen domains for both seen and unseen classes. We also propose a novel Domain-Invariant CZSL Network (DIN), which outperforms state-of-the-art baseline models that we adapted to the DACZSL setting. We adopt a structure-based approach to alleviate forgetting knowledge from previous tasks, with a small per-task private network in addition to a global shared network. To encourage the private network to capture the domain- and task-specific representation, we train our model with a novel adversarial knowledge disentanglement setting to make our global network task-invariant and domain-invariant over all the tasks. Our method also learns a class-wise learnable prompt to obtain better class-level text representation, which is used as side information to enable zero-shot prediction of future unseen classes. Our code and benchmarks will be made publicly available.
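A minimal sketch of the shared-global plus per-task-private split is shown below; DIN additionally trains these components adversarially and with learnable class-wise prompts, which are omitted here:

```python
# Sketch of a shared-global / per-task-private encoder (illustrative only).
import torch
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    def __init__(self, feat_dim=512, emb_dim=128, num_tasks=6):
        super().__init__()
        # One global network shared across all tasks (kept task/domain-invariant).
        self.shared = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU(),
                                    nn.Linear(emb_dim, emb_dim))
        # A small private network per task to absorb task/domain-specific signal.
        self.private = nn.ModuleList(
            nn.Linear(feat_dim, emb_dim) for _ in range(num_tasks))

    def forward(self, x, task_id):
        return self.shared(x) + self.private[task_id](x)

enc = SharedPrivateEncoder()
feats = torch.randn(8, 512)
z = enc(feats, task_id=2)   # (8, 128) representation for task 2
```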

Efficiently Disentangle Causal Representations

arXiv (Cornell University), Jan 6, 2022

This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities in the original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be efficiently computed. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to the new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9-11.0× more sample efficient and 9.4-32.4× quicker than the previous method on various tasks. The source code is available at https://github.com/yuanpeng16/EDCR.
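To make the "generalization instead of adaptation speed" idea concrete, here is a toy sketch (an assumption-laden illustration, not the paper's estimator) that scores the two candidate causal directions of a binary pair by held-out negative log-likelihood after a shift that only changes the cause's marginal:

```python
# Toy sketch: keep each factorization's conditional (fit on old data) fixed,
# refit only its marginal on a few new samples, and compare held-out NLL on
# the shifted distribution. The causal direction should generalize better.
import numpy as np

rng = np.random.default_rng(0)

def sample(p_a, p_b_given_a, n):
    a = rng.random(n) < p_a
    b = rng.random(n) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    return a.astype(int), b.astype(int)

# Ground truth: A -> B. Old and new distributions differ only in P(A).
p_b_given_a = np.array([0.2, 0.9])
a_old, b_old = sample(0.3, p_b_given_a, 20000)
a_new, b_new = sample(0.8, p_b_given_a, 2000)

def cond_table(x, y):          # empirical P(y = 1 | x) for binary x, y
    return np.array([y[x == v].mean() for v in (0, 1)])

def nll(marg, cond, x, y):     # average -log P(x) P(y | x)
    px = np.where(x == 1, marg, 1 - marg)
    py = np.where(y == 1, cond[x], 1 - cond[x])
    return -np.mean(np.log(px * py + 1e-12))

adapt, held = slice(0, 50), slice(50, None)   # few new samples refit the marginal
score_causal = nll(a_new[adapt].mean(), cond_table(a_old, b_old),
                   a_new[held], b_new[held])   # A -> B factorization
score_anti = nll(b_new[adapt].mean(), cond_table(b_old, a_old),
                 b_new[held], a_new[held])     # B -> A factorization
print(score_causal < score_anti)               # expected: True
```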

Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation

arXiv (Cornell University), Oct 5, 2020

Understanding the relationships between biomedical terms such as viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce machine learning into the scientific process of hypothesis generation (HG), which refers to the discovery of meaningful implicit connections between biomedical terms. However, most existing methods fail to truly capture the temporal dynamics of scientific term relations and also assume unobserved connections to be irrelevant (i.e., a positive-negative (PN) learning setting). To break these limits, we formulate the HG problem as a future connectivity prediction task on a dynamic attributed graph via positive-unlabeled (PU) learning. The key is then to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data. We propose a variational inference model to estimate the positive prior and incorporate it in the learning of node pair embeddings, which are then used for link prediction. Experimental results on real-world biomedical term relationship datasets and case study analyses on a COVID-19 dataset validate the effectiveness of the proposed model.
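A standard non-negative PU risk estimator, given an estimated positive-class prior, looks like the sketch below; the paper estimates that prior with a variational model and applies the risk to node-pair (link) scores, so treat this as an illustrative form rather than the exact objective:

```python
# Sketch of a non-negative PU (nnPU-style) risk with a logistic loss, given an
# estimated positive-class prior pi (illustrative, not the paper's exact risk).
import torch

def nn_pu_risk(scores_pos, scores_unl, prior, loss=torch.nn.functional.softplus):
    """scores_pos: model scores for known-positive pairs.
    scores_unl: model scores for unlabeled pairs.
    prior: estimated P(y = +1)."""
    risk_pos = prior * loss(-scores_pos).mean()            # positives labeled +1
    risk_neg = loss(scores_unl).mean() - prior * loss(scores_pos).mean()
    risk_neg = torch.clamp(risk_neg, min=0.0)              # non-negativity correction
    return risk_pos + risk_neg

scores_pos = torch.randn(64, requires_grad=True)    # e.g. scores of observed links
scores_unl = torch.randn(256, requires_grad=True)   # scores of unobserved pairs
loss = nn_pu_risk(scores_pos, scores_unl, prior=0.1)
loss.backward()
```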

ArtEmis: Affective Language for Visual Art Supplemental Material

In total, we annotated 80,031 artworks covering the entire WikiArt, as downloaded in 2015 [6]. We note that this version of the WikiArt dataset contains 81,446 artworks; however, our analysis indicated that 1,415 artworks were exact duplicates of the 80,031 unique artworks we kept for annotation purposes. We found these duplicates using the 'fdupes' program [5] and limited manual inspection of pairs of nearest-neighbor artworks (using features of a ResNet-32 pretrained on ImageNet) whose distance was smaller than a manually selected threshold. When displaying the image of an artwork in AMT, we scale down its largest side to 600 pixels, keeping the original aspect ratio (or apply no scaling if the largest side is less than 600 pixels). We do this scaling to homogenize the presentation of our visual stimuli, and crucially also to reduce the loading and scrolling time required with higher-resolution images.
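The display preprocessing described above amounts to a longest-side resize with preserved aspect ratio; a minimal Pillow sketch (function name and defaults are ours):

```python
# Sketch of the AMT display preprocessing: scale the longest side down to 600
# pixels while keeping the aspect ratio, and leave smaller images untouched.
from PIL import Image

def resize_for_display(path, max_side=600):
    img = Image.open(path)
    longest = max(img.size)
    if longest <= max_side:          # no upscaling of small images
        return img
    scale = max_side / longest
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)
```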

Normalization Matters in Zero-Shot Learning

ArXiv, 2020

The ability to grasp new concepts from their descriptions is one of the key features of human intelligence, and zero-shot learning (ZSL) aims to incorporate this property into machine learning models. In this paper, we theoretically investigate two very popular tricks used in ZSL, the "normalize+scale" trick and attribute normalization, and show how they help to preserve a signal's variance in a typical model during a forward pass. Next, we demonstrate that these two tricks are not enough to normalize a deep ZSL network. We derive a new initialization scheme, which allows us to demonstrate strong state-of-the-art results on 4 out of 5 commonly used ZSL datasets (SUN, CUB, AwA1, and AwA2) while being on average two orders of magnitude faster than the closest runner-up. Finally, we generalize ZSL to a broader problem, Continual Zero-Shot Learning (CZSL), and test our ideas in this new setup. The source code to reproduce all the results is available at this https URL.
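The "normalize+scale" trick analyzed above can be written as a scaled cosine similarity between (projected) image features and normalized class attribute vectors; the sketch below is illustrative and omits the learned projection and the proposed initialization scheme:

```python
# Sketch of the "normalize+scale" scoring rule used in many ZSL models.
import torch
import torch.nn.functional as F

def zsl_scores(img_feats, class_attrs, scale=20.0):
    """img_feats: (B, d) image embeddings; class_attrs: (C, d) attribute embeddings."""
    img = F.normalize(img_feats, dim=-1)       # unit-norm image features
    attrs = F.normalize(class_attrs, dim=-1)   # attribute (class) normalization
    return scale * img @ attrs.t()             # (B, C) logits for softmax / cross-entropy

img = torch.randn(4, 312)
attrs = torch.randn(200, 312)                  # e.g. CUB: 200 classes, 312 attributes
logits = zsl_scores(img, attrs)
```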

RelTransformer: Balancing the Visual Relationship Detection from Local Context, Scene and Memory

ArXiv, 2021

Visual relationship recognition (VRR) is a fundamental scene understanding task. The structure that VRR provides is essential to improve AI interpretability in downstream tasks such as image captioning and visual question answering. Several recent studies showed that the long-tail problem in VRR is even more critical than that in object recognition due to compositional complexity and structure. To overcome this limitation, we propose a novel transformer-based framework, dubbed RelTransformer, which performs relationship prediction using rich semantic features from multiple image levels. We assume that more abundant contextual features can generate more accurate and discriminative relationships, which can be useful when sufficient training data are lacking. The key feature of our model is its ability to aggregate three different-level features (local context, scene, and dataset-level) to compositionally predict the visual relationship. We evaluate our model on the visual ge...

Interactive Video Object Segmentation using Teacher-Student Adaptation

Video object segmentation is an essential task in robot manipulation to facilitate grasping and learning affordances. Incremental learning is important for robotics in unstructured environments. Inspired by how children learn, human-robot interaction (HRI) can be utilized to teach robots about the world, guided by humans, similar to how children learn from a parent or a teacher. A human teacher can show potential objects of interest to the robot, which is able to self-adapt to the teaching signal without being given manual segmentation labels. We propose a novel teacher-student learning paradigm to teach robots about their surrounding environment. A two-stream motion and appearance "teacher" network provides pseudo-labels to adapt an appearance "student" network. The student network is able to segment the newly learned objects in other scenes, whether they are static or in motion. We also introduce a carefully designed dataset that serves the proposed HRI setup, denoted as (I...
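A minimal sketch of the teacher-student adaptation loop is shown below: a frozen two-stream teacher produces pseudo-masks from RGB and optical flow, and only confident pixels supervise the appearance-only student. Model definitions, thresholds, and the loss weighting are placeholder assumptions:

```python
# Sketch of teacher-student adaptation with pseudo-labels (illustrative loop
# with placeholder models; no manual segmentation labels are used).
import torch
import torch.nn.functional as F

def adapt_student(student, teacher, frames, flows, optimizer,
                  conf_thresh=0.8, steps=100):
    """frames/flows: lists of (B, 3, H, W) RGB and (B, 2, H, W) optical-flow batches."""
    teacher.eval()
    for step in range(steps):
        rgb, flow = frames[step % len(frames)], flows[step % len(flows)]
        with torch.no_grad():
            pseudo = torch.sigmoid(teacher(rgb, flow))     # motion+appearance teacher
        confident = (pseudo > conf_thresh) | (pseudo < 1 - conf_thresh)
        target = (pseudo > 0.5).float()
        logits = student(rgb)                              # appearance-only student
        loss = F.binary_cross_entropy_with_logits(
            logits, target, weight=confident.float())      # down-weight uncertain pixels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student
```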
