Jianghao Shen - Academia.edu

Papers by Jianghao Shen

A Pixel Is Worth More Than One 3D Gaussians in Single-View 3D Reconstruction

arXiv (Cornell University), May 30, 2024

Learning Spatially-Adaptive Squeeze-Excitation Networks for Few Shot Image Synthesis

2023 IEEE International Conference on Image Processing (ICIP)

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

arXiv (Cornell University), Jan 2, 2020

Learning Spatially-Adaptive Style-Modulation Networks for Single Image Synthesis

2023 IEEE International Conference on Image Processing (ICIP)

Hierarchical temporal and spatial memory for gait pattern recognition

2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2016

This research extends the Hierarchical Temporal Memory (HTM) algorithm and applies it to gait recognition. The gait sequence is first decomposed into temporal sub-sequences of spatial sub-regions. The sub-sequences are defined as the periods of one step and one half-step, and the sub-regions are defined as the areas that correspond to body parts. Each sub-region learns the temporal variation of its body part by constructing Markov chains. Finally, the classification result is the concatenation of the beliefs of all sub-regions. Unlike other methods, which use gait-specific features, our method uses only image patches of the sub-regions. Our extension of previous versions of HTM provides hierarchical temporal inference to accumulate the belief. This generalized approach is evaluated on a dataset of 151 subjects and two walking conditions, and it compares favorably to other current methods on those data without requiring problem-specific inputs.
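
As a rough illustration of the per-region modelling described above, the sketch below fits one Markov chain per body-part region over quantized patch states and concatenates per-region log-likelihood beliefs. The quantization step, function names, and the argmax readout are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch of per-region Markov-chain gait modelling (assumed details).
import numpy as np

def fit_markov_chain(state_seq, n_states):
    """Estimate a transition matrix from a sequence of discrete patch states."""
    counts = np.ones((n_states, n_states))  # Laplace smoothing
    for s, t in zip(state_seq[:-1], state_seq[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def region_belief(test_seq, subject_chains):
    """Log-likelihood of a test sub-sequence under each subject's chain."""
    return np.array([
        sum(np.log(P[s, t]) for s, t in zip(test_seq[:-1], test_seq[1:]))
        for P in subject_chains
    ])

def classify(test_seqs_by_region, chains_by_region):
    # Per-region beliefs are concatenated into one descriptor per the abstract;
    # summing them per subject and taking the argmax is one simple readout,
    # not necessarily the paper's.
    beliefs = np.stack([
        region_belief(seq, chains)
        for seq, chains in zip(test_seqs_by_region, chains_by_region)
    ])
    return int(np.argmax(beliefs.sum(axis=0)))
```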

Learning Inception Attention for Image Synthesis and Image Recognition

arXiv, 2021

Image synthesis and image recognition have witnessed remarkable progress, but often at the expense of computationally expensive training and inference. Learning lightweight yet expressive deep models has emerged as an important and interesting direction. Inspired by the well-known split-transform-aggregate design heuristic of the Inception building block, this paper proposes a Skip-Layer Inception Module (SLIM) that facilitates efficient learning of image synthesis models, and a same-layer variant (also dubbed SLIM) as a stronger alternative to the well-known ResNeXts for image recognition. The proposed method is built on the idea of developing spatially-adaptive feature modulation/attention in a network. Specifically, SLIM operates from an input source feature map to a target one, either skip-layer or same-layer. In SLIM, the input feature map is first split into a number of groups (e.g., 4). Each group is then transformed to a latent style vector (via channel-wise attention) and ...
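
A minimal sketch of the split-transform step as far as the abstract describes it: the feature map is split into groups and each group gets a channel-wise attention (style) vector. The abstract is truncated before the aggregation step, so the per-group scale-and-concatenate shown here is an assumption, as are the class and parameter names.

```python
# Sketch of the split/transform part of a SLIM-style module (assumed details).
import torch
import torch.nn as nn

class SLIMSketch(nn.Module):
    def __init__(self, channels, groups=4, reduction=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        gc = channels // groups
        # one squeeze-excitation-style branch per channel group
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(gc, max(gc // reduction, 1), 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(max(gc // reduction, 1), gc, 1),
                nn.Sigmoid(),
            )
            for _ in range(groups)
        ])

    def forward(self, x):
        chunks = torch.chunk(x, self.groups, dim=1)                  # split
        styled = [g * b(g) for g, b in zip(chunks, self.branches)]   # transform
        return torch.cat(styled, dim=1)                              # aggregate (assumed)
```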

Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity

Despite the prevailing success of deep neural networks (DNNs), several concerns have been raised against their usage, including the lack of interpretability, the gap between DNNs and other well-established machine learning models, and the increasingly expensive computational costs. A number of recent works [1], [2], [3] explored the alternative of sequentially stacking decision tree/random forest building blocks in a purely feed-forward way, with no need for back-propagation. Since decision trees enjoy inherent reasoning transparency, such deep forest models can also facilitate the understanding of the internal decision-making process. This paper further extends the deep forest idea in several important aspects. First, we employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. "soft routing", rather than hard binary decisions; besides enhancing flexibility, this also enables non-greedy optimization for each tree. Second, we propose an innov...
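
The soft-routing idea can be sketched as follows: each internal node emits a sigmoid routing probability, path probabilities are propagated level by level, and leaf class distributions are mixed by those path probabilities. Everything beyond that (depth, gating form, leaf parameterization) is an assumption for illustration, not the paper's formulation.

```python
# Illustrative soft-routing tree (assumed gating and leaf parameterization).
import torch
import torch.nn as nn

class SoftTree(nn.Module):
    def __init__(self, in_dim, n_classes, depth=3):
        super().__init__()
        self.depth = depth
        n_inner, n_leaf = 2 ** depth - 1, 2 ** depth
        self.gates = nn.Linear(in_dim, n_inner)          # one routing gate per inner node
        self.leaves = nn.Parameter(torch.zeros(n_leaf, n_classes))

    def forward(self, x):
        p_right = torch.sigmoid(self.gates(x))           # (B, n_inner), prob. of going right
        path = x.new_ones(x.size(0), 1)                  # path probability of the root
        node = 0
        # propagate path probabilities level by level (breadth-first node layout)
        for d in range(self.depth):
            width = 2 ** d
            g = p_right[:, node:node + width]            # gates at this level
            path = torch.stack([path * (1 - g), path * g], dim=-1).flatten(1)
            node += width
        # mix leaf class distributions by path probability
        return path @ torch.softmax(self.leaves, dim=-1)
```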

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

Proceedings of the AAAI Conference on Artificial Intelligence

While increasingly deep networks are in general still desired for achieving state-of-the-art performance, for many specific inputs a simpler network might already suffice. Existing works exploited this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue that their binary decision scheme, i.e., either fully executing or completely bypassing a layer for a specific input, can be enhanced by introducing finer-grained, "softer" decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate "soft" choices to be made between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both the weights and activations of each layer, where full execution and skipping can be viewed as the two "extremes" (i.e., full bitwidth and zero bitwidth). In this way, DFS can "fractionally" exploit a layer's exp...
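
A toy illustration of the bitwidth-as-soft-choice idea: bitwidth 0 skips a layer, full precision executes it as-is, and intermediate bitwidths quantize both weights and activations. The uniform quantizer and the fact that `bits` is passed in as an argument (rather than chosen by a learned per-input controller) are simplifications, not DFS's actual policy.

```python
# Toy fractional-skipping layer execution (assumed quantizer; bits in {0, 2, 4, 8, 32}).
import torch

def quantize(t, bits):
    """Uniform symmetric quantization of a tensor to the given bitwidth (bits >= 2)."""
    levels = max(2 ** (bits - 1) - 1, 1)
    scale = t.abs().max().clamp(min=1e-8) / levels
    return torch.round(t / scale) * scale

def fractional_layer(x, layer, bits):
    """layer: an nn.Conv2d or nn.Linear; bits: per-input choice for this layer."""
    if bits == 0:                        # zero bitwidth: skip the layer entirely
        return x
    if bits >= 32:                       # full bitwidth: run at full precision
        return layer(x)
    w = layer.weight.data.clone()        # quantize weights and activations
    layer.weight.data = quantize(layer.weight.data, bits)
    out = layer(quantize(x, bits))
    layer.weight.data = w                # restore full-precision weights
    return out
```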

Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

IEEE Journal of Selected Topics in Signal Processing, 2020
