Mansoor Rezghi - Academia.edu

Papers by Mansoor Rezghi

An adaptive synchronization approach for weights of deep reinforcement learning

Cornell University - arXiv, Aug 16, 2020

Deep Q-Networks (DQN) is one of the best-known methods of deep reinforcement learning; it uses deep learning to approximate the action-value function. Solving numerous deep reinforcement learning challenges, such as the moving-target problem and the correlation between samples, is the main advantage of this model. Although there have been various extensions of DQN in recent years, they all use a method similar to DQN's to overcome the moving-target problem. Despite the advantages mentioned, synchronizing the network weights at a fixed step size, independently of the agent's behavior, may in some cases cause the loss of some properly learned networks. These lost networks may lead to states with more rewards and hence better samples stored in the replay memory for future training. In this paper, we address this problem in the DQN family and provide an adaptive approach for synchronizing the neural weights used in DQN. In this method, the weights are synchronized based on the recent behavior of the agent, which is measured by a criterion at the end of each interval. To test this method, we adjusted the DQN and Rainbow methods with the proposed adaptive synchronization method and compared the adjusted methods with their standard forms on well-known games; the results confirm the quality of our synchronization method. Keywords: Reinforcement Learning, Deep Learning.
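
A minimal sketch of the adaptive idea, with NumPy stand-ins for the networks and a hypothetical mean-return criterion (the paper's actual criterion and hyperparameters are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

def should_sync(curr_returns, prev_returns):
    # Hypothetical criterion: copy the online weights to the target
    # network only if recent performance has not dropped, so a
    # well-learned online network is not overwritten on a blind schedule.
    return np.mean(curr_returns) >= np.mean(prev_returns)

# NumPy stand-ins for the online and target network weights.
online_w = rng.standard_normal(8)
target_w = online_w.copy()

INTERVAL = 100
prev_window, curr_window = [], []
for step in range(1000):
    online_w += 0.01 * rng.standard_normal(8)          # placeholder for a DQN gradient step
    curr_window.append(float(rng.standard_normal()))   # placeholder return signal
    if (step + 1) % INTERVAL == 0:                     # end of an interval
        if not prev_window or should_sync(curr_window, prev_window):
            target_w = online_w.copy()                 # adaptive synchronization
        prev_window, curr_window = curr_window, []
```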

A block column iteration for nonnegative matrix factorization

Journal of Computational Science

A robust sparse feature selection for hyperspectral images

2016 2nd International Conference of Signal Processing and Intelligent Systems (ICSPIS), 2016

The hyperspectral imaging system is a powerful tool in the field of remote sensing. The extremely high dimensionality of these data sets makes the analysis of HSIs a complicated task, so selecting informative features is important for good learning. Recently, a sparse feature selection method based on a regularized regression model was proposed in [7] to select heterogeneous important features. In applications, however, the data are contaminated with noise from the imaging devices. This noise affects the learning process, especially for high-dimensional data such as hyperspectral images. In this paper, we propose a robust feature selection method based on the method of [7]; our model leads to a total least squares problem. Experimental results confirm the performance of the proposed method.
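
For illustration, a basic total least squares solve via the SVD of the augmented matrix [A | b] (the classical construction of Golub and Van Loan); this is the kind of fidelity problem the abstract says the robust model leads to, not the paper's full selection algorithm:

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS solution of A x ~ b via the SVD of the augmented
    matrix [A | b]: take the right singular vector of the smallest
    singular value and rescale its last entry to -1."""
    n = A.shape[1]
    Z = np.column_stack([A, b])
    V = np.linalg.svd(Z)[2].T
    return -V[:n, n] / V[n, n]

# Toy problem with noise in both the features and the targets.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
print(total_least_squares(A_noisy, b))   # close to x_true
```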

Rainfall Data Analysis of Iran using Complex Networks

2020 10th International Conference on Computer and Knowledge Engineering (ICCKE), 2020

Rainfall zoning is one of the most significant applications in hydro-climatic science, and investigating these regions helps us better interpret the functional mechanism of the climate. A popular way to detect these regions is to apply a typical clustering algorithm such as K-means to the spatial features of the data, but it is better to detect the zones from the rainfall data itself, because the temporal features of rainfall data, unlike its spatial features, lead to better clustering results for this data type. The most challenging part of using temporal data is handling missing values: applying a typical clustering method to these data as a whole block is inappropriate, or even impossible, because of the many missing values. We clustered Iran's rainfall dataset, whose most troublesome property is its missing values, which forced us to change the data representation. To overcome this problem, we used a method named "event synchronization," which provides an appropriate similarity measure for temporal data with many missing values; with this approach, such data can be converted into a network. Then, by adopting a state-of-the-art community detection algorithm, we detected the points most related to each other as rainfall clusters, with promising results on our real-world data.
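
A toy sketch of the pipeline under stated assumptions: a simplified symmetric event-synchronization score (not Quiroga's exact measure), a hypothetical edge threshold, and NetworkX's greedy modularity communities standing in for the paper's community detection algorithm:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def event_sync(tx, ty, tau=2.0):
    """Simplified symmetric event-synchronization score in [0, 1]: the
    fraction of events in each series that has a partner within tau
    days in the other series."""
    if len(tx) == 0 or len(ty) == 0:
        return 0.0
    cx = sum(np.abs(ty - t).min() <= tau for t in tx)
    cy = sum(np.abs(tx - t).min() <= tau for t in ty)
    return (cx + cy) / (len(tx) + len(ty))

# Toy stations: sorted arrays of rainfall-event days; the gaps between
# events play the role of the missing values in the raw series.
rng = np.random.default_rng(2)
stations = [np.sort(rng.choice(365, size=int(rng.integers(10, 30)), replace=False))
            for _ in range(12)]

G = nx.Graph()
G.add_nodes_from(range(len(stations)))
for i in range(len(stations)):
    for j in range(i + 1, len(stations)):
        w = event_sync(stations[i], stations[j])
        if w > 0.2:                       # hypothetical threshold to sparsify
            G.add_edge(i, j, weight=w)

print([sorted(c) for c in greedy_modularity_communities(G, weight="weight")])
```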

Applying inverse stereographic projection to manifold learning and clustering

Applied Intelligence, 2022

In machine learning, a data set is often viewed as a point set distributed on a manifold. Using Euclidean norms to measure the proximity of such data reduces the efficiency of learning methods. Moreover, many algorithms that need to measure similarity, such as Laplacian Eigenmaps or spectral clustering, assume that the k-nearest neighbors of any point, found with Euclidean norms, coincide with the local neighborhood of the point on the manifold. In this paper, we propose a new method that intelligently transforms data on an unknown manifold to an n-sphere by the conformal inverse stereographic projection, which preserves the angles and similarities of the data in the original manifold. The measured similarities therefore represent the actual similarities of the data in the original space. Experimental results on various problems, including clustering and manifold learning, show the effectiveness of our method.
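
The inverse stereographic projection itself is standard; a minimal NumPy version that lifts points from R^n onto the unit n-sphere in R^(n+1):

```python
import numpy as np

def inverse_stereographic(X):
    """Lift points in R^n onto the unit n-sphere in R^(n+1) via the
    inverse stereographic projection from the north pole; the map is
    conformal, so angles (and hence similarities) are preserved."""
    s = np.sum(X**2, axis=1, keepdims=True)          # ||x||^2 per point
    return np.hstack([2 * X, s - 1]) / (s + 1)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 2))
Y = inverse_stereographic(X)                         # points in R^3
print(np.linalg.norm(Y, axis=1))                     # all ~ 1.0
```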

A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations

ArXiv, 2018

Dimensionality reduction is a main step in the learning process and plays an essential role in many applications. The most popular methods in this field, such as SVD, PCA, and LDA, can only be applied to data in vector format. This means that higher-order data such as matrices, or more generally tensors, must be folded into vector format. In this approach the spatial relations of features are not considered, and the probability of over-fitting is also increased. Due to these issues, in recent years methods such as Generalized Low-Rank Approximation of Matrices (GLRAM) and Multilinear PCA (MPCA) have been proposed that deal with the data in its original format, so the spatial relationships of features are preserved and the probability of overfitting is reduced. Their time and space complexities are also lower than those of vector-based methods. However, because of the fewer parameters, the search space in a multilinear approach is much smaller than the search space o...

Effect of Rating Time for Cold Start Problem in Collaborative Filtering

International Journal of Applied Operational Research - An Open Access Journal, 2014

Cold start is one of the main challenges in recommender systems, and solving the sparsity challenge of cold start users is hard. Most cold start users and items are new. Since many general recommender methods overfit on cold start users and items, recommending to new users and items is an important and difficult task. In this work, to overcome the sparsity problem, we present a new recommender method based on tensor decomposition that uses time as an independent dimension. Our method uses the extra information in the sequence of rating times, which specifies the time duration of the ratings. We test our method on the EachMovie dataset with two data types: one with cold start users and items and one without them. The results show that using the time dimension has a greater effect on cold start users and items than on the others.
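
A generic sketch of treating time as a third, independent dimension: plain CP-ALS on a dense user x item x time tensor (the paper's decomposition and its handling of missing ratings may differ):

```python
import numpy as np

def unfold(T, mode):
    # Mode-m matricization (C order; remaining modes in increasing order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product: row (i, j) = A[i] * B[j].
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    """Plain CP-ALS for a 3-way tensor: cycle over modes, solving a
    least squares problem for each factor with the others fixed."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for m in range(3):
            a, b = [F[k] for k in range(3) if k != m]   # fixed factors
            kr = khatri_rao(a, b)
            gram = (a.T @ a) * (b.T @ b)                # kr.T @ kr, cheaply
            F[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return F

# Toy rank-2 user x item x time rating tensor, recovered by CP-ALS.
rng = np.random.default_rng(1)
U, V, W = (rng.random((s, 2)) for s in (6, 5, 4))
T = np.einsum('ur,vr,wr->uvw', U, V, W)
F = cp_als(T, rank=2)
approx = np.einsum('ur,vr,wr->uvw', *F)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # ~ 0
```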

Generalized low-rank approximation of matrices based on multiple transformation pairs

Pattern Recognition, 2020

Dimensionality reduction is a critical step in the learning process that plays an essential role in various applications. The most popular methods for dimensionality reduction, SVD and PCA, for instance, only work on one-dimensional data. This means that higher-order data such as matrices, or more generally tensors, must be folded into vector format. This approach thus ignores the spatial relationships of features and increases the probability of overfitting as well. Due to these issues, several methods such as Generalized Low-Rank Approximation of Matrices (GLRAM) and Multilinear PCA (MPCA) were proposed to deal with multi-dimensional data in its original format. Consequently, the spatial relationships of features are preserved and the probability of overfitting is diminished. Besides, the time and space complexity of such methods is lower than that of vector-based ones. However, since the multilinear approach has fewer parameters, its search space is much smaller than that of the vector-based one. To solve this problem of multilinear methods like GLRAM, we propose a novel extension of GLRAM in which, instead of one transformation pair, multiple left and right transformation pairs are applied to the projected data. Consequently, the problem has a larger feasible region and a smaller reconstruction error. This article provides several analytical discussions and experimental results that confirm the quality of the proposed method.
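
For reference, the single-pair GLRAM baseline that the multi-pair extension generalizes, written as the standard alternating eigen-update (Ye, 2005); the multi-pair variant itself is not reproduced here:

```python
import numpy as np

def glram(As, l1, l2, iters=20, seed=0):
    """Single-pair GLRAM: find orthonormal L (r x l1) and R (c x l2)
    maximizing sum_i ||L^T A_i R||_F^2 by alternating eigen-updates.
    Returns L, R and the compressed cores M_i = L^T A_i R."""
    r, c = As[0].shape
    rng = np.random.default_rng(seed)
    R = np.linalg.qr(rng.standard_normal((c, l2)))[0]   # orthonormal start
    for _ in range(iters):
        ML = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(ML)[1][:, -l1:]              # top-l1 eigenvectors
        MR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(MR)[1][:, -l2:]              # top-l2 eigenvectors
    cores = [L.T @ A @ R for A in As]
    return L, R, cores

# Toy collection of 10 matrices compressed from 20x15 to 5x4.
rng = np.random.default_rng(2)
As = [rng.standard_normal((20, 15)) for _ in range(10)]
L, R, cores = glram(As, l1=5, l2=4)
err = sum(np.linalg.norm(A - L @ C @ R.T)**2 for A, C in zip(As, cores))
print(err / sum(np.linalg.norm(A)**2 for A in As))      # relative error
```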

Well-to-well correlation and identifying lithological boundaries by principal component analysis of well-logs

Computers & Geosciences, 2021

Identifying the location of lithological boundaries is one of the essential steps of reservoir characterization. Manual well-to-well correlation is usually implemented to identify lithological boundaries. Various automated methods have been introduced to accelerate this correlation; however, most of them use single well-log data. As each well-log contains specific information about rock and fluid properties, the simultaneous use of various well-logs can enhance the correlation accuracy. We extend an automatic well-to-well correlation approach from the literature to exploit the benefits of various well-logs by applying principal component analysis to multiple well-logs of a carbonate reservoir. The extracted features (i.e., mean, coefficient of variation, maximum-to-minimum ratio, trend angle, and fractal dimension) from a reference well are examined across observation wells. The energy of the principal components is evaluated to determine the appropriate number of principal components. We examine three different scenarios of applying principal component analysis and determine the best methodology for well-to-well correlation. In the first scenario, principal component analysis reduces the dependency of statistical attributes extracted from a single well-log. We then apply principal component analysis to multiple well-logs to extract their features (Scenario II). Finally, we check whether principal component analysis can be applied at multiple steps (Scenario III). Analysis of variance and Tukey tests are used to compare the accuracy of the scenarios. The results show that identifying lithological boundaries in different wells is significantly improved when the principal component analysis approach combines information from multiple well-logs. Generally, it is concluded that principal component analysis is an effective tool for increasing well-to-well correlation accuracy by reducing the dependency of well-to-well correlation parameters (Scenario I) and by feature extraction from log data (Scenarios II & III).
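
A minimal sketch of the energy criterion with scikit-learn, assuming a hypothetical depth x logs matrix and a 90% energy cutoff (the paper's logs, features, and threshold may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical depth x logs matrix (e.g. GR, RHOB, NPHI, DT readings).
rng = np.random.default_rng(0)
logs = rng.normal(size=(500, 4))
logs = (logs - logs.mean(axis=0)) / logs.std(axis=0)   # z-score each log

pca = PCA()
scores = pca.fit_transform(logs)
energy = np.cumsum(pca.explained_variance_ratio_)      # cumulative "energy"
k = int(np.searchsorted(energy, 0.90)) + 1             # smallest k with >= 90%
fused = scores[:, :k]                                  # fused component(s) for feature extraction
print(k, energy[:k])
```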

A Hybrid Image Denoising Method Based on Integer and Fractional-Order Total Variation

Iranian Journal of Science and Technology, Transactions A: Science, 2020

This paper introduces a new hybrid fractional model for image denoising. The proposed model is a combination of two models, Rudin-Osher-Fatemi and fractional-order total variation, and tries to exploit the advantages of both. After introducing an appropriate norm space, we prove the existence and uniqueness of the presented model. Furthermore, a finite difference method is employed to solve the obtained equation numerically. Finally, the results illustrate the efficiency of the proposed model, which yields good visual effects and a better signal-to-noise ratio.
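
A plausible form of such a hybrid energy, combining the first-order (ROF) and fractional-order total variation terms with a quadratic fidelity term; this is our reading of the abstract, and the paper's exact weighting and norm space may differ:

$$
\min_{u}\; \lambda_1 \int_{\Omega} |\nabla u|\,dx \;+\; \lambda_2 \int_{\Omega} |\nabla^{\alpha} u|\,dx \;+\; \frac{\mu}{2} \int_{\Omega} (u - f)^2\,dx, \qquad 1 < \alpha < 2,
$$

where $f$ is the noisy image, $u$ the denoised image, and $\lambda_1, \lambda_2, \mu$ nonnegative weights.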

Image denoising by a novel variable-order total fractional variation model

Mathematical Methods in the Applied Sciences, 2021

The total variation model performs very well at removing noise while preserving edges. However, it gives a piecewise constant solution, which often leads to the staircase effect; consequently, small details such as textures are filtered out in the denoising process. The fractional-order total variation method is one of the major approaches to overcoming such drawbacks. Despite their good quality, all fractional-order methods use a fixed fractional order for the whole image. In this paper, a novel variable-order total fractional variation model is proposed for image denoising, in which the order of the fractional derivative is allocated automatically for each pixel based on the context of the image. This kind of selection can capture the edges and texture of the image simultaneously. We prove the existence and uniqueness of the presented model, and the split Bregman method is adapted to solve it. Finally, the results illustrate the efficiency of the proposed model, which yields good visual effects and a better signal-to-noise ratio.
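
The per-pixel order idea can be illustrated with the truncated Grunwald-Letnikov discretization, whose coefficients depend on the order alpha; the order map below (smaller alpha near strong gradients) is a hypothetical choice, not the paper's rule:

```python
import numpy as np

def gl_weights(alpha, K):
    # Grunwald-Letnikov coefficients w_k = (-1)^k * binom(alpha, k),
    # via the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_diff_1d(u, alpha, K=8):
    # Truncated GL fractional derivative of a 1-D signal (unit spacing).
    w = gl_weights(alpha, K)
    d = w[0] * u
    for k in range(1, K + 1):
        d[k:] += w[k] * u[:-k]
    return d

# Hypothetical per-pixel order map: closer to 1 near strong gradients
# (edges), closer to 2 in smooth regions; not the paper's actual rule.
rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.standard_normal(200)
grad = np.abs(np.gradient(u))
alpha_map = 2.0 - grad / (grad.max() + 1e-12)          # orders in [1, 2]
d = np.array([frac_diff_1d(u, a)[i] for i, a in enumerate(alpha_map)])
print(d.shape)
```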

Extra-adaptive robust online subspace tracker for anomaly detection from streaming networks

Engineering Applications of Artificial Intelligence, 2020

Anomaly detection in time-evolving networks has many applications, for instance, traffic analysis in transportation networks and intrusion detection in computer networks. One group of popular methods for anomaly detection in evolving networks is robust online subspace trackers. However, these methods suffer from insensitivity to drastic changes in the evolving subspace. To solve this problem, we propose a new robust online subspace and anomaly tracker that is more adaptive and robust against sudden drastic changes in the subspace. More accurate estimation of the low-rank and sparse components by this tracker leads to more accurate anomaly detection. We evaluate the accuracy of our method on real-world dynamic network data sets with varying sparsity levels. The results are promising, and our method outperforms the state of the art.
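
A toy online low-rank-plus-sparse tracker in the general spirit of such methods (GRASTA/ReProCS-style), with soft-thresholded residuals as anomalies and a gradient step on the subspace; this sketches the scheme, not the paper's extra-adaptive tracker:

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def track(stream, rank=3, lam=0.5, lr=0.1):
    """Toy online low-rank + sparse tracker: each arriving column y is
    split as y ~ U w + s; large entries of s are flagged as anomalies,
    and U follows the robust residual by a gradient step."""
    n = stream.shape[0]
    U = np.linalg.qr(np.random.default_rng(0).standard_normal((n, rank)))[0]
    anomalies = []
    for y in stream.T:
        w = np.linalg.lstsq(U, y, rcond=None)[0]        # subspace coefficients
        s = soft(y - U @ w, lam)                        # sparse part = anomalies
        r = y - U @ w - s                               # dense residual
        U = np.linalg.qr(U + lr * np.outer(r, w))[0]    # drift the subspace
        anomalies.append(s)
    return np.array(anomalies).T

# Low-rank stream with one planted spike; the tracker should flag it.
rng = np.random.default_rng(1)
Y = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 200))
Y[10, 120] += 8.0
S = track(Y)
print(np.unravel_index(np.argmax(np.abs(S)), S.shape))  # ideally (10, 120)
```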

A Tensor-Based Framework for rs-fMRI Classification and Functional Connectivity Construction

Frontiers in Neuroinformatics, 2020

Recently, machine learning methods have gained considerable attention from researchers seeking to analyze brain images such as resting-state functional magnetic resonance imaging (rs-fMRI) to obtain a deeper understanding of the brain and of related diseases such as Alzheimer's disease. Finding the common patterns caused by a brain disorder through analysis of the functional connectivity (FC) network, along with discriminating brain diseases from normal controls, have long been the two principal goals in studying rs-fMRI data. The majority of FC extraction methods calculate the FC matrix for each subject and then use simple techniques to combine them into a general FC matrix. In addition, the state-of-the-art classification techniques for finding subjects with brain disorders also rely on calculating an FC for each subject, vectorizing it, and feeding it to the classifier. Considering these problems, and based on the multi-dimensional nature of the data, we have come up with a ...

End-to-end CNN + LSTM deep learning approach for bearing fault diagnosis

Applied Intelligence, 2020

Fault diagnostics and prognostics are important topics in both practice and research. There is intense pressure on industrial plants to keep reducing unscheduled downtime, performance degradation, and safety hazards, which requires detecting and recovering from potential faults in their early stages. Intelligent fault diagnosis is a promising tool due to its ability to rapidly and efficiently process collected signals and provide accurate diagnosis results. Although many studies have developed machine learning (ML) and deep learning (DL) algorithms for detecting bearing faults, the results have generally been limited to relatively small train and test datasets, and the input data have been manipulated (selected features used) to reach high accuracy. In this work, the raw data collected from accelerometers (time-domain features) are taken as the input of a novel temporal sequence prediction algorithm, yielding an end-to-end method for fault detection. We use equivalent temporal sequences as the input of a novel convolutional long short-term memory recurrent neural network (CRNN) to detect the bearing fault with the highest accuracy in the shortest possible time. To the best of the authors' knowledge, the method reaches the highest accuracy in the literature while avoiding any pre-processing or manipulation of the input data. The effectiveness and feasibility of the fault diagnosis method are validated by applying it to two commonly used benchmark real vibration datasets and comparing the results with other intelligent fault diagnosis methods.
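
A minimal PyTorch CNN + LSTM over raw accelerometer windows, with illustrative layer sizes (the paper's architecture and training details are not reproduced):

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal end-to-end CNN + LSTM over raw vibration windows of
    shape (batch, 1, time); layer sizes are illustrative only."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 1, time)
        z = self.conv(x)               # (batch, 32, t')
        z = z.permute(0, 2, 1)         # (batch, t', 32) for the LSTM
        _, (h, _) = self.lstm(z)       # h: (1, batch, 64)
        return self.fc(h[-1])          # class logits

logits = CRNN()(torch.randn(8, 1, 2048))
print(logits.shape)                    # torch.Size([8, 10])
```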

A Novel Enriched Version of Truncated Nuclear Norm Regularization for Matrix Completion of Inexact Observed Data

IEEE Transactions on Knowledge and Data Engineering, 2020

We experimentally investigate mutual information and generalized mutual information for coherent optical transmission systems. The impact of the assumed channel distribution on the achievable rate is investigated for distributions in up to four dimensions. Single channel and wavelength-division multiplexing (WDM) transmission over transmission links with and without inline dispersion compensation are studied. We show that for conventional WDM systems without inline dispersion compensation, a circularly symmetric complex Gaussian distribution is a good approximation of the channel. For other channels, such as with inline dispersion compensation, this is no longer true and gains in the achievable information rate are obtained by considering more sophisticated four-dimensional (4D) distributions. We also show that for nonlinear channels, gains in the achievable information rate can also be achieved by estimating the mean values of the received constellation in four dimensions. The highest gain for such channels is seen for a 4D correlated Gaussian distribution.

Even-order Toeplitz tensor: framework for multidimensional structured linear systems

Computational and Applied Mathematics, 2019

In this paper, we introduce tensors with Toeplitz structure. These structured tensors occur in different kinds of applications, such as the discretization of multidimensional PDEs or Fredholm integral equations with an invariant kernel. We investigate the main properties of the new structured tensor and show that the tensor contractive product with such tensors can be carried out with the fast Fourier transform. Also, we show that the approximation of Toeplitz tensors with a specially structured tensor (that will be named -product tensors) can be reduced to the rank-1 approximation of a smaller tensor. Tensor equations with such -product coefficient tensors can be solved by a direct method, so this approximation of a Toeplitz tensor can be used to find an approximate solution of the original tensor equation or as a preconditioner. Our main goal is to show the ability of the tensor framework to handle structured multidimensional problems in their original format.
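
The FFT trick is easiest to see in the matrix (order-2) case: embed the n x n Toeplitz matrix in a 2n x 2n circulant, which the FFT diagonalizes; the paper lifts the same idea to the tensor contractive product:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the n x n Toeplitz matrix with first column c and first
    row r (c[0] == r[0]) by x in O(n log n): embed it in a 2n x 2n
    circulant, whose matvec is a pointwise product in Fourier space."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])      # circulant's first column
    xz = np.concatenate([x, np.zeros(n)])           # zero-padded input
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xz))
    return y[:n].real

rng = np.random.default_rng(0)
c, r = rng.standard_normal(8), rng.standard_normal(8)
r[0] = c[0]
x = rng.standard_normal(8)
print(np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x)))  # True
```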

A splitting method for total least squares color image restoration problem

Journal of Visual Communication and Image Representation, 2017

Color image restoration is an important problem in image processing. Using structured total least squares (STLS) for the fidelity term of the restoration process gives better results than the least squares (LS) approach, but the main drawback of the STLS approach is its complexity. To overcome this issue, in this paper an appropriate transformation replaces the color image restoration problem with two smaller subproblems corresponding to the smooth and oscillatory parts of the image. The first and second subproblems are modeled via the STLS and LS approaches, respectively. We show that the proposed method is faster than STLS and gives solutions competitive with it. Also, we demonstrate that the Haar wavelet preserves the structure of the blurring operator, which considerably reduces the computational and storage complexity of the proposed method.
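
The smooth/oscillatory split can be sketched with a one-level Haar transform (PyWavelets); the approximation band plays the smooth part and the detail bands the oscillatory part, though the paper's exact partition may differ:

```python
import numpy as np
import pywt

# One-level Haar split of an image into smooth and oscillatory parts:
# reconstruct from the approximation band only, and from the detail
# bands only; by linearity the two parts sum back to the image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))                    # stand-in for a color channel

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
smooth = pywt.idwt2((cA, (None, None, None)), "haar")
oscillatory = pywt.idwt2((None, (cH, cV, cD)), "haar")
print(np.allclose(img, smooth + oscillatory))   # exact split: True
```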

Improving image segmentation by using energy function based on mixture of Gaussian pre-processing

Journal of Visual Communication and Image Representation, 2016

In this paper, we propose a two-stage segmentation method based on an active contour model that improves on former image segmentation methods. The first stage computes the weights, means, and variances of the image using a mixture of Gaussian distributions whose parameters are obtained from the EM algorithm. Once they are obtained, in the second stage the segmentation is achieved by a level set method that minimizes an energy function. We use an adaptive direction function to make the curve evolution robust against the curve's initial position, a nonlinear adaptive velocity to speed up the curve evolution, and a probability-weighted edge and region indicator function to implement a robust segmentation of objects with weak boundaries. The method minimizes a functional containing a penalty term, which maintains the signed distance property in the entire domain, and an external energy term that achieves its minimum when the zero level set of the function is located at the desired position.
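
A sketch of the first stage using scikit-learn's EM-fitted GaussianMixture on pixel intensities; the two-component setup and synthetic image are illustrative only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-region image: background ~ N(0.3, 0.05), object ~ N(0.7, 0.05).
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.3, 0.05, 2000),
                      rng.normal(0.7, 0.05, 2000)]).reshape(40, 100)

# EM-fitted mixture of Gaussians on pixel intensities: yields the
# weights, means, and variances for the first stage, plus per-pixel
# posteriors usable by the level-set energy in the second stage.
gmm = GaussianMixture(n_components=2, random_state=0).fit(img.reshape(-1, 1))
posterior = gmm.predict_proba(img.reshape(-1, 1))[:, 1].reshape(img.shape)
print(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
```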

Integrated single image super resolution based on sparse representation

2015 The International Symposium on Artificial Intelligence and Signal Processing (AISP), 2015

This paper presents a new and efficient approach for single-image super-resolution based on sparse signal recovery. The approach uses a co-occurrence-trained dictionary of image patches obtained from a set of observed low- and high-resolution images. A linear combination of the dictionary patches can recover every patch, so each patch of the low-resolution image can be recovered from the dictionary patches. Since a recovered patch is a linear combination of several patches, the noise of each patch is aggregated in the recovered patch; we therefore prefer a linear combination that is sparser than the others, so that the sparse representation of the patches filters the noise from the solution. Recently this approach has been used for the single-image super-resolution problem. Existing methods calculate the sparse representation of every patch separately and place it in the recovered high-resolution image, so their complexity is very high; moreover, for a suitable solution the parameters of the algorithm must be estimated, and this process (recovering every patch with an iterative algorithm and estimating parameters at each iteration) is very time-consuming. This paper presents an integrated method that recovers the low-resolution image based on the sparse representation of patches in one step, recovering the whole image together.
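
A patch-level sketch of the sparse-coding step with orthogonal matching pursuit over a coupled low-/high-resolution dictionary pair; the random dictionaries stand in for trained ones, and the paper's integrated one-step scheme is not reproduced:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Coupled dictionaries: random stand-ins for the trained low-res (5x5)
# and high-res (10x10) patch atoms.
rng = np.random.default_rng(0)
D_lo = rng.standard_normal((25, 256))
D_hi = rng.standard_normal((100, 256))
D_lo /= np.linalg.norm(D_lo, axis=0)

code_true = rng.standard_normal(256) * (rng.random(256) < 0.02)  # sparse code
y_lo = D_lo @ code_true                                          # observed LR patch

# Sparse-code the low-res patch, then synthesize the high-res patch
# from the coupled dictionary with the same code.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
omp.fit(D_lo, y_lo)
patch_hi = D_hi @ omp.coef_
print(np.count_nonzero(omp.coef_))
```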

Improved NRIF Preconditioner for nonsymmetric positive definite linear systems
