Aojun Zhou - Academia.edu

Papers by Aojun Zhou

HBONet: Harmonious Bottleneck on Two Orthogonal Dimensions

arXiv (Cornell University), Aug 11, 2019

MobileNets, a class of top-performing convolutional neural network architectures in terms of the accuracy-efficiency trade-off, are increasingly used in many resource-aware vision applications. In this paper, we present Harmonious Bottleneck on two Orthogonal dimensions (HBO), a novel architecture unit, specially tailored to boost the accuracy of extremely lightweight MobileNets at the level of less than 40 MFLOPs. Unlike existing bottleneck designs that mainly focus on exploring the interdependencies among the channels of either groupwise or depthwise convolutional features, our HBO improves bottleneck representation while maintaining similar complexity by jointly encoding the feature interdependencies across both spatial and channel dimensions. It has two reciprocal components, namely spatial contraction-expansion and channel expansion-contraction, nested in a bilaterally symmetric structure. The combination of two interdependent transformations performed on orthogonal dimensions of the feature maps enhances the representation and generalization ability of our proposed module, guaranteeing compelling performance with limited computational resources and power. By replacing the original bottlenecks in the MobileNetV2 backbone with HBO modules, we construct HBONets, which are evaluated on ImageNet classification, PASCAL VOC object detection, and Market-1501 person re-identification. Extensive experiments show that under a severe computational budget our models outperform their MobileNetV2 counterparts by remarkable margins of up to 6.6%, 6.3%, and 5.0% on the above benchmarks, respectively. Code and pretrained models are available at https://github.com/d-li14/HBONet.
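
To make the two reciprocal components concrete, below is a minimal PyTorch sketch of the harmonious-bottleneck idea; the module name, layer sizes, and the use of interpolation for spatial expansion are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HBOBlockSketch(nn.Module):
    """Illustrative sketch (not the paper's exact design): a spatial
    contraction-expansion pair wrapped around a channel
    expansion-contraction bottleneck, joined by a residual connection."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        # Spatial contraction: depthwise stride-2 conv halves the resolution.
        self.contract = nn.Conv2d(channels, channels, 3, stride=2, padding=1,
                                  groups=channels, bias=False)
        # Channel expansion-contraction: pointwise expand, then project back.
        self.expand_c = nn.Conv2d(channels, hidden, 1, bias=False)
        self.project_c = nn.Conv2d(hidden, channels, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        y = self.contract(x)                  # spatial contraction
        y = F.relu(self.expand_c(y))          # channel expansion
        y = self.project_c(y)                 # channel contraction
        y = F.interpolate(y, size=(h, w))     # spatial expansion
        return x + y                          # residual connection
```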

SnapQuant: A Probabilistic and Nested Parameterization for Binary Networks

Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

CEST: Computation-Efficient N:M Sparse Training for Deep Neural Networks

2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)

Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement

arXiv (Cornell University), Apr 3, 2023

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

arXiv (Cornell University), Mar 28, 2023

FeatAug-DETR: Enriching One-to-Many Matching for DETRs with Feature Augmentation

arXiv (Cornell University), Mar 2, 2023

Omni-Dimensional Dynamic Convolution

arXiv (Cornell University), Sep 16, 2022

Group R-CNN for Weakly Semi-supervised Object Detection with Points

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

We study the problem of weakly semi-supervised object detection with points (WSSOD-P), where the training data combines a small set of fully annotated images with bounding boxes and a large set of weakly-labeled images with only a single point annotated for each instance. The core of this task is to train a point-to-box regressor on well-labeled images that can be used to predict credible bounding boxes for each point annotation. We challenge the prior belief that existing CNN-based detectors are not compatible with this task. Based on the classic R-CNN architecture, we propose an effective point-to-box regressor: Group R-CNN. Group R-CNN first uses instance-level proposal grouping to generate a group of proposals for each point annotation and can thus obtain a high recall rate. To better distinguish different instances and improve precision, we propose instance-level proposal assignment to replace the vanilla assignment strategy adopted in original R-CNN methods. As naive instance-level assignment causes convergence difficulty, we propose instance-aware representation learning, which consists of instance-aware feature enhancement and instance-aware parameter generation, to overcome this issue. Comprehensive experiments on the MS-COCO benchmark demonstrate the effectiveness of our method. Specifically, Group R-CNN significantly outperforms the prior method Point DETR by 3.9 mAP with 5% well-labeled images, which is the most challenging scenario. The source code can be found at https://github.com/jshilong/GroupRCNN

An Algorithm–Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers

IEEE Transactions on Very Large Scale Integration (VLSI) Systems

The Transformer has been an indispensable staple in deep learning. However, for real-life applications, it is very challenging to deploy efficient Transformers due to the immense number of parameters and operations in these models. To relieve this burden, exploiting sparsity is an effective approach to accelerating Transformers. Newly emerging Ampere GPUs leverage a 2:4 sparsity pattern to achieve model acceleration, but this pattern can hardly meet the diverse algorithm and hardware constraints encountered when deploying models. By contrast, we propose an algorithm-hardware co-optimized framework to flexibly and efficiently accelerate Transformers by utilizing general N:M sparsity patterns. (1) From the algorithm perspective, we propose a sparsity inheritance mechanism along with inherited dynamic pruning (IDP) to rapidly obtain a series of N:M sparse candidate Transformers. A model compression scheme is further proposed to significantly reduce the storage requirement for deployment. (2) From the hardware perspective, we present a flexible and efficient hardware architecture, namely STA, to achieve significant speedup when deploying N:M sparse Transformers. STA features not only a computing engine unifying both sparse-dense and dense-dense matrix multiplications with high computational efficiency but also a scalable softmax module eliminating the latency from intermediate off-chip data communication. Experimental results show that, compared to other methods, N:M sparse Transformers generated using IDP achieve an average accuracy improvement of 6.7% with high training efficiency. Moreover, STA achieves 14.47× and 11.33× speedup compared to an Intel i9-9900X and an NVIDIA RTX 2080 Ti, respectively, and performs 2.00∼19.47× faster inference than the state-of-the-art FPGA-based accelerators for Transformers.
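
For reference, a general N:M pattern keeps at most N non-zero values in every group of M consecutive weights. The self-contained sketch below (the function name and the grouping along the flattened weight are assumptions) shows a one-shot magnitude-based projection onto such a pattern; it illustrates the sparsity format itself, not the paper's IDP procedure.

```python
import torch

def nm_sparsify(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Zero out all but the n largest-magnitude values in every group of
    m consecutive weights (e.g. the 2:4 pattern supported by Ampere GPUs).
    Assumes the weight has a multiple of m elements."""
    groups = weight.reshape(-1, m)
    idx = groups.abs().topk(n, dim=1).indices       # keep-n positions per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, idx, 1.0)                      # 1 where a weight survives
    return (groups * mask).reshape(weight.shape)

w = torch.randn(8, 8)
print(nm_sparsify(w, 2, 4))  # each length-4 group keeps its 2 largest entries
```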

Deep Neural Network Compression With Single and Multiple Level Quantization

Proceedings of the AAAI Conference on Artificial Intelligence

Network quantization is an effective solution to compress deep neural networks for practical usage. Existing network quantization methods cannot sufficiently exploit the depth information to generate low-bit compressed networks. In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit (ternary) quantization. We are the first to consider network quantization at both the width and depth levels. At the width level, parameters are divided into two parts: one for quantization and the other for re-training to eliminate the quantization loss. SLQ leverages the distribution of the parameters to improve the width level. At the depth level, we introduce incremental layer compensation to quantize layers iteratively, which decreases the quantization loss in each iteration. The proposed approaches are validated with extensive experiments based on the state...
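
As a rough illustration of the width-level split described above, the sketch below partitions a weight tensor into a to-be-quantized part and a re-trainable part. The magnitude-based criterion here is an assumption chosen for brevity; SLQ itself groups parameters by their distribution.

```python
import torch

def split_for_quantization(weight: torch.Tensor, frac: float = 0.5):
    """Width-level split (sketch): choose a fraction of weights to freeze
    at quantized values while the rest stay full-precision for
    re-training. Here the largest-magnitude weights are quantized first;
    this stands in for SLQ's distribution-based partitioning."""
    k = int(weight.numel() * frac)
    # Threshold at the k-th largest magnitude.
    thresh = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    quant_mask = weight.abs() >= thresh      # part 1: quantize and freeze
    retrain_mask = ~quant_mask               # part 2: keep training
    return quant_mask, retrain_mask
```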

Differentiable Dynamic Wirings for Neural Networks

2021 IEEE/CVF International Conference on Computer Vision (ICCV)

A standard practice when deploying deep neural networks is to apply the same architecture to all input instances. However, a fixed architecture may not be suitable for data with high diversity. To boost model capacity, existing methods usually employ larger convolutional kernels or deeper network layers, which incurs prohibitive computational costs. In this paper, we address this issue by proposing Differentiable Dynamic Wirings (DDW), which learns instance-aware connectivity that creates different wiring patterns for different instances. 1) Specifically, the network is initialized as a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths. 2) We generate edge weights with a learnable module, the Router, and select the edges whose weights are larger than a threshold to adjust the connectivity of the network structure. 3) Instead of using the same path for every input, DDW aggregates features dynamically in each node, which gives the network more representational power. To facilitate effective training, we further represent the network connectivity of each sample as an adjacency matrix. The matrix is updated to aggregate features in the forward pass, cached in memory, and used for gradient computation in the backward pass. We validate the effectiveness of our approach with several mainstream architectures, including MobileNetV2, ResNet, ResNeXt, and RegNet. Extensive experiments on ImageNet classification and COCO object detection demonstrate the effectiveness and generalization ability of our approach.
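
A minimal sketch of the per-instance edge selection in step 2) above: the pooling choice, sigmoid activation, and threshold value are assumptions, but the pattern of a learnable Router producing per-edge weights and keeping only those above a threshold follows the description.

```python
import torch
import torch.nn as nn

class RouterSketch(nn.Module):
    """Sketch of instance-aware edge selection: a tiny module maps
    globally pooled features to one weight per candidate edge, and only
    edges above a threshold are kept for this instance."""
    def __init__(self, in_channels: int, num_edges: int, threshold: float = 0.1):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_edges)
        self.threshold = threshold

    def forward(self, x):                      # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))            # global average pooling
        w = torch.sigmoid(self.fc(pooled))     # edge weights in (0, 1)
        return w * (w > self.threshold)        # per-instance adjacency weights
```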

Pyramid Fusion Transformer for Semantic Segmentation

The recently proposed MaskFormer gives a refreshed perspective on the task of semantic segmentation: it shifts from the popular pixel-level classification paradigm to a mask-level classification method. In essence, it generates paired probabilities and masks corresponding to category segments and combines them during inference to produce the segmentation maps. The segmentation quality thus relies on how well the queries can capture the semantic information of the categories and their spatial locations within the images. In our study, we find that a per-mask classification decoder on top of a single-scale feature is not effective enough to extract reliable probabilities or masks. To mine rich semantic information across the feature pyramid, we propose the transformer-based Pyramid Fusion Transformer (PFT) for per-mask semantic segmentation on top of multi-scale features. To efficiently utilize image features of different resolutions without incurring too much computatio...

Group Fisher Pruning for Practical Network Compression

arXiv (Cornell University), Aug 2, 2021

Network compression has been widely studied since it can reduce memory and computation cost during inference. However, previous methods seldom deal with complicated structures such as residual connections, group/depthwise convolution, and feature pyramid networks, where the channels of multiple layers are coupled and need to be pruned simultaneously. In this paper, we present a general channel pruning approach that can be applied to various complicated structures. In particular, we propose a layer grouping algorithm to find coupled channels automatically. We then derive a unified metric based on Fisher information to evaluate the importance of a single channel and of coupled channels. Moreover, we find that inference speedup on GPUs correlates more with the reduction of memory than with FLOPs, and we thus use the memory reduction of each channel to normalize the importance. Our method can be used to prune any structure, including those with coupled channels. We conduct extensive experiments on various backbones, including the classic ResNet and ResNeXt, the mobile-friendly MobileNetV2, and the NAS-based RegNet, on both image classification and object detection, the latter of which is under-explored. Experimental results validate that our method can effectively prune sophisticated networks, boosting inference speed without sacrificing accuracy.
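
To illustrate how a Fisher-style importance score can be normalized by memory savings, here is a hedged sketch: the per-channel accumulation below is a common Fisher-pruning proxy, and the memory-reduction vector is an assumption standing in for the paper's unified metric.

```python
import torch

def fisher_channel_importance(act: torch.Tensor, grad: torch.Tensor,
                              mem_reduction: torch.Tensor) -> torch.Tensor:
    """Sketch of a Fisher-based importance score: for each channel, square
    the summed activation-gradient product (a first-order estimate of the
    loss change if the channel were removed), then normalize by the memory
    that pruning the channel would free, as the paper advocates."""
    # act, grad: (B, C, H, W); mem_reduction: (C,)
    score = (act * grad).sum(dim=(0, 2, 3)).pow(2)   # per-channel Fisher proxy
    return score / mem_reduction                     # memory-normalized score
```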

Towards Improving Generalization of Deep Networks via Consistent Normalization

Standard convolutional neural networks (CNNs) require consistent image resolutions in both the training and testing phases. However, in practice, testing with smaller image sizes is necessary for fast inference. We show that naively evaluating low-resolution images on networks trained with high-resolution images results in a catastrophic accuracy drop in standard CNN architectures. We propose a novel training regime called Scale-calibrated Training (SCT) that allows networks to learn from various input scales simultaneously. By taking advantage of SCT, a single network can provide decent accuracy at test time across multiple test scales. In our analysis, we surprisingly find that vanilla batch normalization can lead to sub-optimal performance in SCT. Therefore, a novel normalization scheme called Scale-Specific Batch Normalization is incorporated into SCT in place of batch normalization. Experimental results show that SCT improves the accuracy of a single ResNet-50 on ImageNet by 1.7% and 11.5% when testing at image sizes of 224 and 128, respectively.
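
A minimal sketch of the scale-specific idea: keep an independent BatchNorm2d per training resolution and dispatch on the current input scale. The ModuleDict layout and the scale set are assumptions; the paper's exact parameter sharing may differ.

```python
import torch.nn as nn

class ScaleSpecificBN(nn.Module):
    """Sketch of scale-specific batch normalization: one BatchNorm2d per
    training scale, so each resolution keeps its own statistics. The
    scale index must be supplied by the training loop."""
    def __init__(self, channels: int, scales=(224, 128)):
        super().__init__()
        self.bns = nn.ModuleDict(
            {str(s): nn.BatchNorm2d(channels) for s in scales})

    def forward(self, x, scale):
        # Normalize with the statistics belonging to this input resolution.
        return self.bns[str(scale)](x)
```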

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch

arXiv (Cornell University), Feb 8, 2021

Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate models in resource-constrained environments. It can be generally categorized into unstructured fine-grained sparsity, which zeroes out multiple individual weights distributed across the neural network, and structured coarse-grained sparsity, which prunes blocks of sub-networks of a neural network. Fine-grained sparsity can achieve a high compression ratio but is not hardware friendly and hence yields limited speed gains. On the other hand, coarse-grained sparsity cannot concurrently achieve both apparent acceleration on modern GPUs and decent performance. In this paper, we are the first to study training an N:M fine-grained structured sparse network from scratch, which can maintain the advantages of both unstructured fine-grained sparsity and structured coarse-grained sparsity simultaneously on specifically designed GPUs. Specifically, a 2:4 sparse network can achieve a 2× speed-up without performance drop on NVIDIA A100 GPUs. Furthermore, we propose a novel and effective ingredient, the sparse-refined straight-through estimator (SR-STE), to alleviate the negative influence of the approximated gradients computed by the vanilla STE during optimization. We also define a metric, Sparse Architecture Divergence (SAD), to measure the sparse network's topology change during the training process. Finally, we justify SR-STE's advantages with SAD and demonstrate the effectiveness of SR-STE through comprehensive experiments on various tasks. Source code and models are available at https://github.com/NM-sparsity/NM-sparsity.
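
The SR-STE refinement can be summarized in one line: on the backward pass, the dense weights receive the straight-through gradient of the sparse forward pass plus a decay term applied only to the currently pruned weights. The function name and the λ value below are illustrative assumptions.

```python
import torch

def sr_ste_grad(dense_w: torch.Tensor, grad_from_sparse: torch.Tensor,
                mask: torch.Tensor, lam: float = 2e-4) -> torch.Tensor:
    """Gradient applied to the dense weights under SR-STE (sketch): the
    straight-through gradient from the sparse forward pass, plus a
    refinement term that decays only the pruned (mask == 0) weights,
    discouraging the sparse topology from oscillating between steps."""
    return grad_from_sparse + lam * (1.0 - mask) * dense_w
```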

DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks

Neural pruning is a widely-used compression technique for Deep Neural Networks (DNNs). Recent innovations in hardware architectures (e.g., the NVIDIA Ampere Sparse Tensor Core) and N:M fine-grained sparse neural network algorithms (i.e., every M weights contain N non-zero values) reveal a promising research line for neural pruning. However, existing N:M algorithms only address the challenge of training N:M sparse neural networks in a uniform fashion (i.e., every layer has the same N:M sparsity) and suffer from a significant accuracy drop at high sparsity (i.e., when sparsity > 80%). To tackle this problem, we present a novel technique, DominoSearch, to find mixed N:M sparsity schemes from pre-trained dense deep neural networks that achieve higher accuracy than the uniform-sparsity scheme under equivalent complexity constraints (e.g., model size or FLOPs). For instance, for the same model size of 2.1M parameters (87.5% sparsity), our layer-wise N:M sparse ResNet18 outperforms its u...

Incorporating Convolution Designs into Visual Transformers

2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021

Motivated by the success of Transformers in natural language processing (NLP) tasks, some attempts (e.g., ViT and DeiT) have emerged to apply Transformers to the vision domain. However, pure Transformer architectures often require a large amount of training data or extra supervision to obtain performance comparable with convolutional neural networks (CNNs). To overcome these limitations, we analyze the potential drawbacks of directly borrowing Transformer architectures from NLP. We then propose a new Convolution-enhanced image Transformer (CeiT), which combines the advantages of CNNs in extracting low-level features and strengthening locality with the advantages of Transformers in establishing long-range dependencies. Three modifications are made to the original Transformer: 1) instead of the straightforward tokenization of raw input images, we design an Image-to-Tokens (I2T) module that extracts patches from generated low-level features; 2) the feed-forward network in each encoder block is replaced with a Locally-enhanced Feed-Forward (LeFF) layer that promotes correlation among neighboring tokens in the spatial dimension; 3) a Layer-wise Class token Attention (LCA) module is attached at the top of the Transformer to utilize the multi-level representations. Experimental results on ImageNet and seven downstream tasks show the effectiveness and generalization ability of CeiT compared with previous Transformers and state-of-the-art CNNs, without requiring a large amount of training data or extra CNN teachers. Besides, CeiT models also demonstrate better convergence with 3× fewer training iterations, which can reduce the training cost significantly.
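
A sketch of the LeFF pattern described in modification 2): expand the token features, fold them back onto their 2D grid, run a depthwise convolution so neighboring tokens interact, then project back. Class-token handling, normalization, and activations are omitted here, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LeFFSketch(nn.Module):
    """Sketch of a locally-enhanced feed-forward layer: expand token
    features, reshape tokens onto a 2D grid, apply a depthwise conv for
    local spatial interaction, then flatten and project back."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.expand = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1,
                                groups=hidden_dim)
        self.project = nn.Linear(hidden_dim, dim)

    def forward(self, tokens):                  # tokens: (B, N, dim)
        b, n, _ = tokens.shape
        h = w = int(n ** 0.5)                   # assumes a square token grid
        x = self.expand(tokens)                 # (B, N, hidden)
        x = x.transpose(1, 2).reshape(b, -1, h, w)
        x = self.dwconv(x)                      # neighboring tokens interact
        x = x.reshape(b, -1, n).transpose(1, 2)
        return self.project(x)                  # back to (B, N, dim)
```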

Towards Improving Generalization of Deep Networks via Consistent Normalization

arXiv, 2019

Batch Normalization (BN) was shown to accelerate training and improve the generalization of Convolutional Neural Networks (ConvNets), which typically use the “Conv-BN” couple as a building block. However, this work shows a common phenomenon: the Conv-BN module does not necessarily outperform networks trained without BN, especially when data augmentation is present during training. We find that this phenomenon occurs because there is an inconsistency between the distribution of the augmented data and that of the normalized representation. To address this issue, we propose Consistent Normalization (CN), which not only retains the advantages of existing normalization methods but also achieves state-of-the-art performance on various tasks including image classification, segmentation, and machine translation. The code will be released to facilitate reproducibility.

Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks

arXiv, 2020

One practice of employing deep neural networks is to apply the same architecture to all input instances. However, a fixed architecture may not be representative enough for data with high diversity. To promote model capacity, existing approaches usually employ larger convolutional kernels or deeper network structures, which may increase the computational cost. In this paper, we address this issue by proposing the Dynamic Graph Network (DG-Net). The network learns instance-aware connectivity, which creates different forward paths for different instances. Specifically, the network is initialized as a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths. We generate edge weights with a learnable module, the router, and select the edges whose weights are larger than a threshold to adjust the connectivity of the network structure. Instead of using the same path of the network, DG-Net aggregates featu...
