Ramanarayan Mohanty | Indian Institute of Technology Kharagpur
Papers by Ramanarayan Mohanty
The lack of proper class discrimination among the Hyperspectral (HS) data points poses a potential challenge in HS classification. To address this issue, this paper proposes an optimal geometry-aware transformation for enhancing the classification accuracy. The underlying idea of this method is to obtain a linear projection matrix by solving a nonlinear objective function based on the intrinsic geometrical structure of the data. The objective function is constructed to quantify the discrimination between the points from dissimilar classes in the projected data space. The obtained projection matrix is then used to linearly map the data to a more discriminative space. The effectiveness of the proposed transformation is illustrated with three benchmark real-world HS data sets. The experiments reveal that the classification and dimensionality reduction methods on the projected discriminative space outperform their counterparts in the original space.
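The core idea, finding a linear projection that increases between-class separation relative to within-class spread, can be illustrated with a minimal NumPy sketch. This is a generic LDA-style scatter-ratio surrogate under the abstract's general framing, not the paper's exact nonlinear objective; all function names and parameters below are illustrative.

```python
import numpy as np

def discriminative_projection(X, y, k):
    """Toy sketch: find a d x k projection W favoring between-class
    over within-class scatter (an LDA-style surrogate, not the
    paper's geometry-aware objective)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Generalized eigenproblem Sb w = lambda Sw w (Sw regularized for stability)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:k]]

# Two well-separated Gaussian classes in 3-D, projected to 1-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = discriminative_projection(X, y, 1)
Z = X @ W  # data mapped to the more discriminative space
```

Classifiers trained on `Z` rather than `X` would then operate in the projected discriminative space, mirroring the experimental setup the abstract describes.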
Feature selection has been studied widely in the literature. However, the efficacy of the selection criteria for low-sample-size applications is neglected in most cases. Most of the existing feature selection criteria are based on sample similarity. However, distance measures become insignificant for high-dimensional low-sample-size (HDLSS) data. Moreover, the variance of a feature with a few samples is pointless unless it represents the data distribution efficiently. Instead of looking at the samples in groups, we evaluate their efficiency in a pairwise fashion. In our investigation, we noticed that considering a pair of samples at a time and selecting the features that bring them closer or push them far apart is a better choice for feature selection. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method with low sample size, which outperforms many other state-of-the-art feature selection methods.
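The pairwise intuition can be sketched as a simple scoring rule: for every pair of samples, a good feature keeps same-class pairs close and pushes different-class pairs apart. The following toy illustration follows that reading; it is a simplification for exposition, not the paper's exact criterion.

```python
import numpy as np
from itertools import combinations

def pairwise_proximity_scores(X, y):
    """Toy sketch: rank each feature by how far it separates
    between-class sample pairs relative to within-class pairs."""
    n, d = X.shape
    intra = np.zeros(d)
    inter = np.zeros(d)
    n_intra = n_inter = 0
    for i, j in combinations(range(n), 2):
        diff = np.abs(X[i] - X[j])  # per-feature distance for this pair
        if y[i] == y[j]:
            intra += diff
            n_intra += 1
        else:
            inter += diff
            n_inter += 1
    # High score: pushes dissimilar pairs apart, keeps similar pairs close
    return inter / max(n_inter, 1) - intra / max(n_intra, 1)

# Feature 0 is discriminative, feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.column_stack([np.r_[rng.normal(0, .5, 20), rng.normal(3, .5, 20)],
                     rng.normal(0, .5, 40)])
y = np.array([0] * 20 + [1] * 20)
scores = pairwise_proximity_scores(X, y)
best = int(np.argmax(scores))  # index of the most useful feature
```

Because the score is accumulated one pair at a time, it stays meaningful even when there are far fewer samples than features, which is the HDLSS setting the abstract targets.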
Full-batch training of Graph Neural Networks (GNNs) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible. It is challenging due to the large memory capacity and bandwidth requirements on a single compute node and the high communication volumes across multiple nodes. In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters via an efficient shared-memory implementation, communication reduction using a minimum vertex-cut graph partitioning algorithm, and communication avoidance using a family of delayed-update algorithms. Our results on four common GNN benchmark datasets (Reddit, OGB-Products, OGB-Papers and Proteins) show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets, over baseline DGL implementations running on a single CPU socket.
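The delayed-update idea can be illustrated in miniature: a partition aggregates its local neighbors every epoch, but refreshes features of cut (remote) neighbors only every `s` epochs, trading a little staleness for much less communication. This single-process sketch only counts the communication that a vertex-cut partitioning would incur; DistGNN's actual algorithms and data structures are more involved.

```python
import numpy as np

def aggregate(h, edges):
    """Mean-aggregate neighbor features along directed edges (src -> dst)."""
    out = np.zeros_like(h)
    deg = np.zeros(len(h))
    for src, dst in edges:
        out[dst] += h[src]
        deg[dst] += 1
    return out / np.maximum(deg, 1)[:, None]

def train_with_delay(h, local_edges, cut_edges, epochs, s):
    """Refresh remote (cut-edge) features only every s epochs."""
    comms = 0
    cache = {src: h[src].copy() for src, _ in cut_edges}  # stale remote copies
    for epoch in range(epochs):
        if epoch % s == 0:  # communicate only every s epochs
            for src, _ in cut_edges:
                cache[src] = h[src].copy()
                comms += 1
        # Aggregate fresh local features plus (possibly stale) remote ones
        h_remote = h.copy()
        for src, _ in cut_edges:
            h_remote[src] = cache[src]
        h = 0.5 * h + 0.5 * aggregate(h_remote, local_edges + cut_edges)
    return h, comms

h0 = np.arange(8, dtype=float).reshape(4, 2)
local = [(0, 1), (1, 0)]
cut = [(2, 1), (3, 0)]  # edges whose source lives on another partition
_, comms_delayed = train_with_delay(h0, local, cut, epochs=8, s=4)
_, comms_every = train_with_delay(h0, local, cut, epochs=8, s=1)
```

With `s=4` the remote features cross the wire only twice in 8 epochs instead of every epoch, which is the communication-avoidance effect the abstract describes.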
2017 25th European Signal Processing Conference (EUSIPCO), 2017
In this paper, we propose an L1-normalized graph-based dimensionality reduction method for Hyperspectral images, called 'L1-Scaling Cut' (L1-SC). The underlying idea of this method is to generate the optimal projection matrix by retaining the original distribution of the data. Though the L2-norm is generally preferred for computation, it is sensitive to noise and outliers, whereas the L1-norm is robust to them. Therefore, we obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion using the L1-norm. Furthermore, an iterative algorithm is described to solve the optimization problem. The experimental results of the HSI classification confirm the effectiveness of the proposed L1-SC method on both noisy and noiseless data.
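The L1 objective, the ratio of L1 between-class dispersion to L1 within-class dispersion along a projection direction, can be written down directly. The crude accept-if-better search below is only an illustrative stand-in for the paper's iterative algorithm, which is not reproduced here.

```python
import numpy as np

def l1_ratio(w, X, y):
    """L1 between-class dispersion over L1 within-class dispersion
    along a single projection direction w (L1-SC-style objective)."""
    z = X @ w
    between = within = 0.0
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = abs(z[i] - z[j])
            if y[i] == y[j]:
                within += d
            else:
                between += d
    return between / max(within, 1e-12)

def l1_sc_direction(X, y, iters=500, lr=0.25, seed=0):
    """Crude accept-if-better random search maximizing l1_ratio."""
    rng = np.random.default_rng(seed)
    w = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    best = l1_ratio(w, X, y)
    for _ in range(iters):
        cand = w + lr * rng.normal(size=len(w))
        cand /= np.linalg.norm(cand)
        val = l1_ratio(cand, X, y)
        if val > best:
            best, w = val, cand
    return w, best

# Classes separated along feature 0 only; feature 1 is noise
rng = np.random.default_rng(2)
X = np.vstack([np.c_[rng.normal(0, .3, 15), rng.normal(0, 1, 15)],
               np.c_[rng.normal(4, .3, 15), rng.normal(0, 1, 15)]])
y = np.array([0] * 15 + [1] * 15)
w_best, J = l1_sc_direction(X, y)
```

Because each pairwise term enters as an absolute value rather than a square, a single outlying pair contributes linearly instead of quadratically, which is the robustness argument for preferring the L1-norm here.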
International Journal of Circuit Theory and Applications, 2015
We aim to discuss the initial and continuing education of teachers who teach Mathematics, reflecting on the potential of non-formal spaces, especially Science Museums, in situations of Mathematics teaching and learning. To this end, we present the developments of an earlier study, considering the theoretical framework relating to: the history of science museums; learning and interactivity; formal, non-formal and informal education; and mathematics exhibitions. We analyse the results of Watanabe (2013), whose data were collected during visits to four institutions in the cities of São Paulo and the Paulista ABC region. We conclude that mathematics receives little recognition in recreational areas, and thus the pedagogical potential of science museums is not sufficiently explored for mathematics.
International Journal of Services and Operations Management, 2010
... be made as follows: Yadav, O.P., Nepal, B., Goel, P.S., Jain, R. and Mohanty, R.P. (2010) 'Insights and learnings from lean manufacturing implementation practices', Int. J. Services and Operations Management, Vol. 6, No. 4, pp.398–422. Biographical notes: Om Prakash Yadav is ...
Poster Flash at the 7th Heidelberg Laureate Forum. Young researchers had the chance to present their posters in front of a big audience. The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.
The Deep Graph Library (DGL) was designed as a tool to enable structure learning from graphs, by supporting a core abstraction for graphs, including the popular Graph Neural Networks (GNN). DGL contains implementations of all core graph operations for both the CPU and GPU. In this paper, we focus specifically on CPU implementations and present performance analysis, optimizations and results across a set of GNN applications using the latest version of DGL (0.4.3). Across 7 applications, we achieve speed-ups ranging from 1.5x to 13x over the baseline CPU implementations.
This work proposes an adaptive trace lasso regularized L1-norm based graph cut method for dimensionality reduction of Hyperspectral images, called 'Trace Lasso-L1 Graph Cut' (TL-L1GC). The underlying idea of this method is to generate the optimal projection matrix by considering both the sparsity and the correlation of the data samples. The conventional L2-norm used in the objective function is sensitive to noise and outliers. Therefore, in this work the L1-norm is utilized as a robust alternative to the L2-norm. Besides, for further improvement of the results, we use a penalty function of trace lasso with the L1GC method. It adaptively balances the L2-norm and the L1-norm simultaneously by considering the data correlation along with the sparsity. We obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion using the L1-norm with trace lasso as the penalty. Furthermore, an iterative procedure for this TL-L1GC method is pro...
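The trace lasso penalty itself is easy to state: for a data matrix X with unit-norm columns and a coefficient vector w, it is the nuclear norm ||X diag(w)||_*. It equals the L1 norm of w when the features are uncorrelated (orthogonal columns) and the L2 norm when they are perfectly correlated, which is how it adaptively balances the two. A small NumPy check of those two extremes (the matrices here are illustrative, not from the paper):

```python
import numpy as np

def trace_lasso(X, w):
    """Trace lasso penalty ||X diag(w)||_* (nuclear norm of the
    column-scaled data matrix)."""
    return np.linalg.norm(X @ np.diag(w), ord='nuc')

w = np.array([3.0, -4.0])
X_orth = np.eye(4)[:, :2]  # orthonormal columns: uncorrelated features
X_corr = np.tile(np.array([[1.0], [0.0], [0.0], [0.0]]), (1, 2))  # identical columns

# Uncorrelated case -> ||w||_1 = 7; fully correlated case -> ||w||_2 = 5
```

Between these extremes the penalty moves smoothly with the column correlations, so a single regularizer adapts to the data rather than committing to sparsity (L1) or shrinkage (L2) in advance.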
Artifact Digital Object Group
Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
International Journal of Circuit Theory and Applications, 2015
2017 25th European Signal Processing Conference (EUSIPCO), Aug 1, 2017
Feature selection has been studied widely in the literature. However, the efficacy of the selection criteria for low-sample-size applications is neglected in most cases. Most of the existing feature selection criteria are based on sample similarity. However, distance measures become insignificant for high-dimensional low-sample-size (HDLSS) data. Moreover, the variance of a feature with a few samples is pointless unless it represents the data distribution efficiently. Instead of looking at the samples in groups, we evaluate their efficiency in a pairwise fashion. In our investigation, we noticed that considering a pair of samples at a time and selecting the features that bring them closer or push them far apart is a better choice for feature selection. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method with low sample size, which outperforms many other state-of-the-art feature selection methods. Index Terms: Feature selection, pairwise feature proximity, high-dimensional low-sample-size data.
Dimensionality reduction (DR) methods have attracted extensive attention as a way to provide discriminative information and reduce the computational burden of hyperspectral image (HSI) classification. However, DR methods face many challenges due to limited training samples with high-dimensional spectra. To address this issue, a graph-based spatial and spectral regularized local scaling cut (SSRLSC) for DR of HSI data is proposed. The underlying idea of the proposed method is to utilize information from both the spectral and spatial domains to achieve better classification accuracy than its spectral-domain counterpart. In SSRLSC, a guided filter is initially used to smooth and homogenize the pixels of the HSI data in order to preserve the pixel consistency. This is followed by generation of between-class and within-class dissimilarity matrices in both spectral and spatial domains by regularized local scaling cut (RLSC) and neighboring pixel local scaling cut (NPLSC), respectively. Finally, we obtain the projection matrix by optimizing the updated spatial-spectral between-class and total-class dissimilarity. The effectiveness of the proposed DR algorithm is illustrated with two popular real-world HSI datasets. Index Terms: Dimensionality reduction, hyperspectral imaging, neighboring pixel local scaling cut, regularized local scaling cut, spatial-spectral method.
IEEE Geoscience and Remote Sensing Letters
IEEE Transactions on Geoscience and Remote Sensing