Haoyue Cheng - Academia.edu

Haoyue Cheng


Papers by Haoyue Cheng

Joint-Modal Label Denoising for Weakly-Supervised Audio-Visual Video Parsing

This paper focuses on the weakly-supervised audio-visual video parsing task, which aims to recognize all events belonging to each modality and localize their temporal boundaries. The task is challenging because only overall video-level event labels are provided for training. Moreover, a labeled event may not actually appear in one of the modalities, which results in a modality-specific noisy label problem. In this work, we propose a training strategy that identifies and removes modality-specific noisy labels dynamically. It is motivated by two key observations: 1) networks tend to learn clean samples first; and 2) a labeled event should appear in at least one modality. Specifically, we sort the losses of all instances within a mini-batch separately for each modality, and then select noisy samples according to the relationships between intra-modal and inter-modal losses. In addition, we propose a simple yet effective noise ratio estimation method that computes the proportion of instances whose confidence falls below a preset threshold. Our method yields large improvements over the previous state of the art (e.g., from 60.0% to 63.8% on the segment-level visual metric), demonstrating its effectiveness. Code and trained models are publicly available at https://github.com/MCG-NJU/JoMoLD.
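A minimal sketch of the loss-sorting and threshold-based noise-ratio ideas described in the abstract, assuming a PyTorch setup; the function names, the confidence threshold, and the exact decision rule are illustrative assumptions, not the released JoMoLD implementation.

```python
# Sketch of modality-specific noisy-label selection, assuming per-instance
# losses and confidences for one labeled event class are already available.
import torch

def estimate_noise_ratio(confidences: torch.Tensor, threshold: float = 0.5) -> float:
    """Estimate the noise ratio as the fraction of labeled instances whose
    predicted confidence for the labeled event falls below a preset threshold."""
    return (confidences < threshold).float().mean().item()

def select_noisy_visual_labels(audio_loss: torch.Tensor,
                               visual_loss: torch.Tensor,
                               noise_ratio: float) -> torch.Tensor:
    """Flag instances whose visual label looks noisy: the visual loss is among
    the largest in the mini-batch (intra-modal ranking) AND exceeds the audio
    loss of the same instance (inter-modal comparison), reflecting the prior
    that a labeled event should appear in at least one modality."""
    batch_size = visual_loss.numel()
    num_noisy = int(round(noise_ratio * batch_size))
    noisy_mask = torch.zeros(batch_size, dtype=torch.bool)
    if num_noisy == 0:
        return noisy_mask
    # Intra-modal step: instances with the largest visual losses are candidates.
    candidates = torch.argsort(visual_loss, descending=True)[:num_noisy]
    # Inter-modal step: keep candidates whose visual loss dominates the audio loss.
    dominates = visual_loss[candidates] > audio_loss[candidates]
    noisy_mask[candidates[dominates]] = True
    return noisy_mask
```

In such a sketch, the returned mask could then be used to zero out the corresponding visual labels before computing the weakly-supervised classification loss; the symmetric audio case would be handled analogously with the two modalities swapped.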

NJU MCG - Sensetime Team Submission to Pre-training for Video Understanding Challenge Track II

Proceedings of the 29th ACM International Conference on Multimedia, 2021

This paper presents the method that underlies our submission to the Pre-training for Video Understanding Challenge Track II. We follow the basic pipeline of temporal segment networks [20] and further improve its performance in several aspects. Specifically, we use recent transformer-based architectures, e.g., Swin Transformer, DeiT, and CLIP-ViT, to enhance the representation power. We also analyze different pre-training proxy tasks on the official pre-training datasets and other open-source video datasets. With these techniques, we derive an ensemble of deep models that attains a high classification accuracy (62.28% Top-1) on the testing set and secures first place in Track II of the challenge.
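The segment-consensus-plus-ensemble idea can be sketched as follows, assuming PyTorch classifiers that map a batch of sampled frames to class logits; the model list and fusion weights are placeholders rather than the team's actual submission code.

```python
# Illustrative sketch of TSN-style segment consensus followed by score-level
# ensembling across several backbones (e.g., Swin, DeiT, CLIP-ViT).
import torch

@torch.no_grad()
def tsn_ensemble_predict(models, segments, weights=None):
    """segments: tensor of shape (num_segments, C, H, W), frames sampled
    uniformly from one video. Each backbone scores every segment; scores are
    averaged over segments (the TSN consensus) and then fused across models
    with a weighted mean."""
    weights = weights if weights is not None else [1.0] * len(models)
    fused = None
    for model, w in zip(models, weights):
        model.eval()
        logits = model(segments)                      # (num_segments, num_classes)
        video_score = logits.softmax(dim=-1).mean(0)  # segment consensus
        fused = w * video_score if fused is None else fused + w * video_score
    return fused / sum(weights)
```

Score-level fusion of this kind keeps each backbone independent, so stronger models can simply be given larger weights when combining predictions.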
