Blind Source Separation Research Papers

This paper is devoted to the blind separation of combustion noise and piston slap in diesel engines. The two phenomena are recovered only from signals from accelerometers placed on one of the cylinders. A blind source separation (BSS) method is developed, based on a convolutive model of non-stationary mixtures. We introduce a new method based on the joint diagonalisation of the time-varying spectral matrices of the observation records, and a new technique to handle the problem of permutation ambiguity in the frequency domain.

Acoustic signals recorded simultaneously in a reverberant environment can be described as sums of differently convolved sources. The task of source separation is to identify the multiple channels and possibly to invert them in order to obtain estimates of the underlying sources. We tackle the problem by explicitly exploiting the nonstationarity of the acoustic sources. Changing cross-correlations at multiple times give a sufficient set of constraints for the unknown channels. A least-squares optimization allows us to estimate a forward model, thus identifying the multipath channel. In the same manner we can find an FIR backward model, which generates well-separated model sources. Furthermore, for more than three channels we have sufficient conditions to estimate underlying additive sensor noise powers. We show good performance in real room environments and demonstrate the algorithm's utility for automatic speech recognition.
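The core idea — that second-order statistics measured at different times give enough constraints to identify the mixing — can be illustrated on an instantaneous (non-convolutive) toy case. The sketch below is not the authors' least-squares multichannel algorithm: it jointly diagonalizes just two block covariances via a generalized eigendecomposition, and the two-block choice, variance profiles, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Two nonstationary sources: variance ramps up for one, down for the other
s1 = rng.standard_normal(n) * np.linspace(0.2, 1.5, n)
s2 = rng.standard_normal(n) * np.linspace(1.5, 0.2, n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                  # unknown mixing matrix
X = A @ S                                   # observed mixtures

# Covariances at two different times: both equal A @ diag(...) @ A.T,
# but with different diagonals, which is the nonstationarity constraint
C1 = np.cov(X[:, : n // 2])
C2 = np.cov(X[:, n // 2 :])

# Solving C1 w = lam * C2 w jointly diagonalizes the pair; the
# eigenvectors recover the unmixing matrix up to scale and permutation
lam, W = np.linalg.eig(np.linalg.solve(C2, C1))
Y = W.real.T @ X                            # separated signals
```

With longer records one would use many time blocks and a proper joint-diagonalization criterion, but the two-matrix pencil already suffices when the variance profiles differ.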

Many practical situations can be modelled with multiple-input multiple-output (MIMO) models. If the input sources are mutually orthogonal, several blind source separation methods can be used to reconstruct the sources and model transfer channels. In this paper, we derive a new approach of this kind, based on the compensation of the model convolution kernel. It detects the triggering instants of individual sources and tolerates their non-orthogonalities and a high amount of additive noise, which qualifies the method for several signal and image analysis applications where other approaches fail. We explain how to implement the convolution kernel compensation (CKC) method in both the 1D and 2D cases. This unified approach enabled us to demonstrate its performance in two different experiments. The 1D version was applied to the decomposition of surface electromyograms (SEMG). Nine healthy males participated in the tests with 5% and 10% maximum voluntary isometric contractions (MVC) of the biceps brachii muscle. We identified 3.4 ± 1.3 (mean ± standard deviation) and 6.2 ± 2.2 motor units (MUs) at 5% and 10% MVC, respectively. At the same time, we applied the 2D version of CKC to range imaging. On the Middlebury Stereo Vision reference set of images, our method found correct matches for 91.3 ± 12.1% of all pixels, while the obtained RMS disparity difference was 3.4 ± 2.5 pixels. These results are comparable to other ranging approaches, but our solution exhibits better robustness and reliability.

Blind source separation problems have recently drawn a lot of attention in unsupervised neural learning. In the current approaches, the number of sources is typically assumed to be known in advance, but this does not usually hold in practical applications. In this paper, various neural network architectures and associated adaptive learning algorithms are discussed for handling the cases where the number of sources is unknown. These techniques include estimation of the number of sources, redundancy removal among the outputs of the networks, and extraction of the sources one at a time. Validity and performance of the described approaches are demonstrated by extensive computer simulations for natural image and magnetoencephalographic (MEG) data.

In this paper we propose new algorithms for 3D tensor decomposition/factorization with many potential applications, especially in multi-way Blind Source Separation (BSS), multidimensional data analysis, and sparse signal/image representations. We derive and compare three classes of algorithms: Multiplicative, Fixed-Point Alternating Least Squares (FPALS) and Alternating Interior-Point Gradient (AIPG) algorithms. Some of the proposed algorithms are characterized by improved robustness, efficiency and convergence rates and can be applied for various distributions of data and additive noise.

This paper studies a novel decomposition technique, suitable for blind separation of linear mixtures of signals comprising finite-length symbols. The observed symbols are first modeled as channel responses in a multiple-input-multiple-output (MIMO) model, while the channel inputs are conceptually considered sparse positive pulse trains carrying the information about the symbol arising times. Our decomposition approach compensates channel responses and aims at reconstructing the input pulse trains directly. The algorithm is derived first for the overdetermined noiseless MIMO case. A generalized scheme is then provided for underdetermined mixtures in noisy environments. Although blind, the proposed technique approaches the Bayesian-optimal linear minimum mean square error estimator and is, hence, significantly noise resistant. The results of simulation tests show that it can be applied to considerably underdetermined convolutive mixtures and even to mixtures of moderately correlated input pulse trains, with their cross-correlation up to 10% of its maximum possible value.

In this paper we present denoising algorithms for enhancing noisy signals based on Local ICA (LICA), Delayed AMUSE (dAMUSE) and Kernel PCA (KPCA). The LICA algorithm relies on applying ICA locally to clusters of signals embedded in a high-dimensional feature space of delayed coordinates. The components resembling the signals can be detected by various criteria, such as estimators of kurtosis or the variance of autocorrelations, depending on the statistical nature of the signal. The proposed algorithm can be applied favorably to the problem of denoising multidimensional data. Another projective subspace denoising method using delayed coordinates has been proposed recently with the algorithm dAMUSE. It combines the solution of blind source separation problems with denoising efforts in an elegant way and proves to be very efficient and fast. Finally, KPCA represents a non-linear projective subspace method that is also well suited for denoising. Besides illustrative applications to toy examples and images, we provide an application of all the algorithms considered to the analysis of protein NMR spectra.

Echo cancelers typically employ control mechanisms to prevent adaptive filter updates during double-talk events. By contrast, this paper exploits the information contained in time-varying second order statistics of nonstationary signals to update adaptive filters and learn echo path responses during double-talk. First, a framework is presented for describing mixing and blind separation of independent groups of signals. Then several echo cancellation problems are cast in this framework, including the problem of simultaneous acoustic and line echo cancellation as encountered in speaker phones. A maximum-likelihood approach is taken to estimate both the unknown signal statistics as well as echo canceling filters. When applied to speech signals, the techniques developed in this paper typically achieved between 30 and 40 dB of echo return loss enhancement (ERLE) during continuous double-talking.

Output-only algorithms are needed for modal identification when only structural responses are available. Recent years have witnessed the fast development of blind source separation (BSS) as a promising signal processing technique that aims to recover the sources using only the measured mixtures. As the most popular tool for solving the BSS problem, independent component analysis (ICA) is able to directly extract the time-domain modal responses, which are viewed as virtual sources, from the observed system responses; however, it has been shown that ICA loses accuracy in the presence of higher-level damping. In this study, the modal identification issue, which is incorporated into the BSS formulation, is transformed into a time-frequency framework. The sparse time-frequency representations of the monotone modal responses are proposed as the targeted independent sources hidden in those of the system responses, which have been short-time Fourier-transformed (STFT); they can then be efficiently extracted by ICA, whereby the time-domain modal responses are recovered such that the modal parameters are readily obtained. The simulation results of a multi-degree-of-freedom system illustrate that the proposed output-only STFT-ICA method is capable of accurately identifying modal information of lightly and highly damped structures, even in the presence of heavy noise and nonstationary excitation. The laboratory experiment on a highly damped three-story frame and the analysis of real measured seismic responses of the University of Southern California hospital building demonstrate the capability of the method to perform blind modal identification in practical applications.

Sparsity of signals in the frequency domain is widely used for blind source separation (BSS) when the number of sources exceeds the number of mixtures (underdetermined BSS). In this paper we propose a simple algorithm for detecting points in the time-frequency (TF) plane of instantaneous mixtures where only a single source contributes. Samples at these points in the TF plane can be used for mixing matrix estimation. The proposed algorithm identifies the single-source points (SSPs) by comparing the absolute directions of the real and imaginary parts of the Fourier transform coefficient vectors of the mixed signals. Finally, the SSPs so obtained are clustered using a hierarchical clustering algorithm to estimate the mixing matrix. The proposed SSP identification is simpler than previously reported algorithms.
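The Re/Im parallelism test can be sketched on a toy stereo mixture of three narrowband sources. This is an illustration, not the paper's algorithm: the signal model, thresholds, and variable names are assumptions, and the hierarchical clustering step is omitted — the sketch stops at collecting candidate mixing-column directions. At a single-source TF point, X(t,f) = a_j · c for one complex scalar c, so the real and imaginary parts of the mixture coefficient vector are both scalar multiples of the same real column a_j and hence (anti)parallel.

```python
import numpy as np

n = 8192
t = np.arange(n)
win, hop = 512, 256
# Three narrowband sources on disjoint, FFT-bin-aligned frequency sets
bins = [(26, 56), (86, 118), (150, 180)]
S = np.array([sum(np.sin(2 * np.pi * k * t / win + 0.7 + 0.4 * i)
                  for i, k in enumerate(ks)) for ks in bins])
A = np.array([[0.90, 0.50, 0.10],
              [0.44, 0.87, 1.00]])
A /= np.linalg.norm(A, axis=0)              # unit-norm mixing columns
X = A @ S                                   # 2 mixtures of 3 sources

# Crude STFT: windowed FFTs of both channels, frames stacked side by side
frames = [np.fft.rfft(X[:, i:i + win] * np.hanning(win), axis=1)
          for i in range(0, n - win + 1, hop)]
Xtf = np.concatenate(frames, axis=1)        # shape (2, number of TF points)

# SSP test: real and imaginary parts of the coefficient vector are
# (anti)parallel only where a single source dominates the TF bin
R, I = Xtf.real, Xtf.imag
cosang = np.abs((R * I).sum(axis=0)) / (
    np.linalg.norm(R, axis=0) * np.linalg.norm(I, axis=0) + 1e-12)
energy = np.linalg.norm(Xtf, axis=0)
mask = (cosang > 0.999) & (energy > 0.1 * energy.max())
dirs = R[:, mask] / np.linalg.norm(R[:, mask], axis=0)   # column estimates
```

In the paper the retained directions are then clustered (hierarchically) to yield the mixing matrix; here they simply concentrate around the columns of `A` up to sign.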

A new algorithm for parallel joint diagonalization of symmetric (Hermitian) matrices is introduced. The approach is based on Jacobi diagonalization, utilizes distributed computational power and memory space, and runs on a heterogeneous personal computer system with an arbitrary number of processing units. Its basic performance indices are outlined and its application to the blind source separation problem is discussed.

In this paper, the tasks of speech source localization, source counting and source separation are addressed for an unknown number of sources in a stereo recording scenario. In the first stage, the angles of arrival of the individual source signals are estimated through a peak-finding scheme applied to the angular spectrum derived using non-linear GCC-PHAT. Then, based on the known channel mixture coefficients, we propose an approach for separating the sources based on Maximum Likelihood (ML) estimation. The predominant source in each time-frequency bin is identified through ML assuming a diffuse noise model. The separation performance is improved over a binary time-frequency masking method. Performance is measured using existing metrics for blind source separation evaluation. The experiments are performed on synthetic speech mixtures in both anechoic and reverberant environments.
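The GCC-PHAT building block of the first stage can be sketched for a single source in an anechoic two-microphone setup (an illustrative sketch only — the delay value, signal model, and names are assumptions, and the paper's non-linear weighting and angular-spectrum peak finding are not reproduced). PHAT whitens the cross-spectrum to unit magnitude, so only the inter-channel phase, i.e. the time difference of arrival, shapes the correlation peak.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
s = rng.standard_normal(n)                 # broadband source signal
delay = 7
x1 = s
x2 = np.roll(s, delay)                     # mic 2 hears the source 7 samples later

# GCC-PHAT: normalize the cross-spectrum to unit magnitude, then go
# back to the time domain; the peak location is the TDOA in samples
F1, F2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = F2 * np.conj(F1)
r = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
lag = int(np.argmax(r))
tdoa = lag - n if lag > n // 2 else lag    # wrap circular lag to a signed value
```

With a known microphone spacing and sampling rate, the recovered sample delay maps to an angle of arrival; scanning candidate angles in this way yields the angular spectrum the paper searches for peaks.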

In this paper, we review recent advances in blind source separation (BSS) and independent component analysis (ICA) for nonlinear mixing models. After a general introduction to BSS and ICA, we discuss in more detail uniqueness and separability issues, presenting some new results. A fundamental difficulty in the nonlinear BSS problem and even more so in the nonlinear ICA problem is that they provide non-unique solutions without extra constraints, which are often implemented by using a suitable regularization. In this paper, we explore two possible approaches. The first one is based on structural constraints. Especially, post-nonlinear mixtures are an important special case, where a nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. The second approach uses Bayesian inference methods for estimating the best statistical parameters, under almost unconstrained models in which priors can be easily a...

The world is surrounded by sounds, which can make it difficult or even impossible to obtain a desired speech signal in a noisy environment. Digital signal processing is the discipline concerned with extracting useful information about physical phenomena from generally disturbed measurements. One of its best-known problems is blind source separation, in which several signals have been mixed and the goal is to recover the original component signals from the mixtures without any knowledge about the sources. This work surveys some of the best-known algorithms for solving the blind source separation problem and concludes with examples applied to real-world audio separation tasks using MATLAB. General Terms: Algorithms, formulations and blind signal separation.

Second-order blind identification (SOBI) is a blind source separation (BSS) algorithm that can be used to decompose mixtures of signals into a set of components or putative recovered sources. Previously, SOBI, as well as other BSS algorithms, has been applied to magnetoencephalography (MEG) and electroencephalography (EEG) data. These BSS algorithms have been shown to recover components that appear to be physiologically and neuroanatomically interpretable. While some proponents of these algorithms suggest that fundamental discoveries about the human brain might be made through the application of these techniques, validation of BSS components has not yet received sufficient attention. Here we present two experiments for validating SOBI-recovered components. The first takes advantage of the fact that noise sources associated with individual sensors can be objectively validated independently from the SOBI process. The second utilizes the fact that the time course and location of primary somatosensory (SI) cortex activation by median nerve stimulation have been extensively characterized using converging imaging methods. In this paper, using both known noise sources and highly constrained and well-characterized neuronal sources, we provide validation for SOBI decomposition of high-density EEG data. We show that SOBI is able to (1) recover known noise sources that were either spontaneously occurring or artificially induced; (2) recover neuronal sources activated by median nerve stimulation that were spatially and temporally consistent with estimates obtained from previous EEG, MEG, and fMRI studies;

This research presents an approach for the robust estimation of noise statistics in dental panoramic X-ray images. To achieve maximum image quality after denoising, a new, low-order, locally adaptive Gaussian Scale Mixture model is presented, which accounts for nonlinearities from scattering. State-of-the-art methods use multi-scale filtering of images to reduce the irrelevant part of the information, based on a generic estimate of the noise. The usual assumption of Gaussian and Poisson statistics leads to overestimation of the noise variance in regions of low intensity (small photon counts) but to underestimation in regions of high intensity, and therefore to non-optimal results. The approach is tested on 50 samples from a database of panoramic X-ray images and the results are cross-validated by medical experts.

Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, Blind Source Separation (BSS) by ICA has received considerable attention because of its potential signal-processing applications, such as speech enhancement systems, image processing, telecommunications, medical signal processing and several data mining issues. This book brings together the state of the art of some of the most important current research on ICA related to audio and biomedical signal processing applications. The book is partly a textbook and partly a monograph: a textbook because it gives a detailed introduction to ICA applications, and simultaneously a monograph because it presents several new results, concepts and further developments, which are brought together and published in the book.

Independent component analysis is a lively field of research, utilized for its potential in the statistically independent separation of images. ICA-based algorithms have been used to extract interference and mixed images, and the statistical method has developed very rapidly over the last few years. In this paper, an efficient, result-oriented algorithm for ICA-based blind source separation is presented. In blind source separation, the primary goal is to recover all original images using only the observed mixtures. Independent Component Analysis (ICA) is based on higher-order statistics, aiming to find components in the mixed signals that are as statistically independent from each other as achievable.

Perceptual differences between sound reproduction systems with multiple spatial dimensions have been investigated. Two blind studies were performed using system configurations involving 1-D, 2-D, and 3-D loudspeaker arrays. Various types of source material were used, ranging from urban soundscapes to musical passages. Experiment I consisted of collecting subjects' perceptions in a free-response format to identify relevant criteria for multi-dimensional spatial sound reproduction of complex auditory scenes by means of linguistic analysis. Experiment II utilized both free response and scale judgments for seven parameters derived from Experiment I. Results indicated a strong correlation between the source material (sound scene) and the subjective evaluation of the parameters, making the notion of an "optimal" reproduction method difficult for arbitrary source material.

We propose a method for suppressing twin images in the reconstruction of digital holograms recorded in the in-line configuration. The method is based on the blind separation of convolutive mixtures of sources. The process consists of two steps: a quincunx lifting scheme based on the wavelet transform, whose role is to decorrelate the input holographic images, and a geometric deconvolution algorithm for the separation task. Experimental results confirm the effectiveness of blind source separation for removing the twin image from digital holograms of particles.

This paper reviews recent progress in the diagnosis of Alzheimer's disease (AD) from electroencephalograms (EEG). Three major effects of AD on EEG have been observed: slowing of the EEG, reduced complexity of the EEG signals, and perturbations in EEG synchrony. In recent years, a variety of sophisticated computational approaches has been proposed to detect those subtle perturbations in the EEG of AD patients. The paper first describes methods that try to detect slowing of the EEG. Next the paper deals with several measures for EEG complexity, and explains how those measures have been used to study fluctuations in EEG complexity in AD patients. Then various measures of EEG synchrony are considered in the context of AD diagnosis.

Score-Informed Music Source Separation addresses the separation problem by resorting to a complete source of information about a piece of music: its score. Using specialized music software, the score and all of its relevant information can be converted into a MIDI file. The content of the score is thus transformed into computational data, and the time and frequency information of the music can be used to perform the separation by means of Nonnegative Matrix Factorization (NMF). With constraints on the basis and gain matrices built from the information extracted from the score, an approximation of the sources was obtained and evaluated with the MATLAB toolbox BSS_EVAL.

A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
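As a toy illustration of the linear-representation goal, here is a compact kurtosis-based FastICA iteration on two synthetic non-Gaussian sources. This is a sketch for intuition, not the authors' reference implementation: the source models, the cubic nonlinearity, and all names are choices made here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
# Two non-Gaussian (sub-Gaussian) sources: uniform noise and a binary sequence
S = np.vstack([rng.uniform(-1, 1, n), np.sign(rng.standard_normal(n))])
S = S - S.mean(axis=1, keepdims=True)
A = np.array([[1.0, 0.7],
              [0.5, 1.0]])                  # mixing matrix, unknown in practice
X = A @ S                                   # observed linear mixtures

# Whitening: linearly transform the mixtures to unit covariance
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ Xc

# FastICA fixed-point iterations with the cubic (kurtosis) nonlinearity:
# w <- E[z (w.z)^3] - 3w, followed by symmetric decorrelation of the rows
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    Wp = ((W @ Z) ** 3) @ Z.T / n - 3 * W
    u, _, vt = np.linalg.svd(Wp)
    W = u @ vt
Y = W @ Z                                   # recovered sources, up to sign/order
```

The recovered components match the original sources up to the usual ICA ambiguities of sign, scale, and permutation, which is exactly the "as independent as possible" representation the abstract describes.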

Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, realtime performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.

This paper describes a new method for blind source separation, adapted to the case of sources having different morphologies. We show that such morphological diversity leads to a new and very efficient separation method, even in the presence of noise. The algorithm, coined MMCA (Multichannel Morphological Component Analysis), is an extension of the Morphological Component Analysis method (MCA). The latter takes advantage of the sparse representation of structured data in large overcomplete dictionaries to separate features in the data based on their morphology. MCA has been shown to be an efficient technique in such problems as separating an image into texture and piecewise smooth parts or for inpainting applications. The proposed extension, MMCA, extends the above for multichannel data, achieving a better source separation in those circumstances. Furthermore, the new algorithm can efficiently achieve good separation in a noisy context where standard ICA methods fail. The efficiency of the proposed scheme is confirmed in numerical experiments.

In this paper a new method for muscle artifact removal in EEG is presented, based on Canonical Correlation Analysis (CCA) as a Blind Source Separation (BSS) technique. The method is demonstrated on a synthetic data set, where it outperformed a low-pass filter with different cutoff frequencies and an Independent Component Analysis (ICA) based technique for muscle artifact removal. The first preliminary results of a clinical study on 26 ictal EEGs of patients with refractory epilepsy illustrated that the removal of muscle artifacts results in a better interpretation of the ictal EEG, leading to earlier detection of the seizure onset and better localization of the seizure onset zone. These findings make the current method indispensable for every Epilepsy Monitoring Unit.
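The BSS-by-CCA step can be illustrated with a two-channel toy example: canonical correlation between the recording and a one-sample-delayed copy of itself yields components ordered by autocorrelation, so a broadband "muscle-like" component separates from a smooth one. This is a simplified sketch under assumed data, not the clinical pipeline, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n)
# Source 1 is smooth (strongly autocorrelated); source 2 is broadband noise
# standing in for a muscle artifact with almost no autocorrelation
s1 = np.sin(2 * np.pi * 0.01 * t)
s2 = rng.standard_normal(n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.8],
              [0.6, 1.0]])
X = A @ S                                  # two observed "EEG channels"

# CCA between the recording and its one-sample delay
Y1 = X[:, :-1] - X[:, :-1].mean(axis=1, keepdims=True)
Y2 = X[:, 1:] - X[:, 1:].mean(axis=1, keepdims=True)
C11, C22 = Y1 @ Y1.T / n, Y2 @ Y2.T / n
C12 = Y1 @ Y2.T / n
# Canonical directions solve C11^-1 C12 C22^-1 C21 w = rho^2 w
M = np.linalg.solve(C11, C12) @ np.linalg.solve(C22, C12.T)
rho2, W = np.linalg.eig(M)
order = np.argsort(-rho2.real)             # sort by squared canonical correlation
W = W[:, order].real
Y = W.T @ Y1                               # components, most autocorrelated first
```

In the artifact-removal setting, the trailing (least autocorrelated) components are attributed to muscle activity, zeroed out, and the remaining components projected back to the channel space.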

Blind source separation (BSS) is a technique for separating the original (latent) signals from their mixtures without any knowledge of the mixing process, using only some statistical properties of the original source signals. Independent component analysis is a statistical method expressed in terms of a set of multidimensional observations that are combinations of unknown variables. These underlying unobserved variables are called sources and are assumed to be statistically independent of each other. In this paper we use a nonlinear autocorrelation function as an objective function to separate the source signals from the mixed signals. Maximizing this objective function with the LMS algorithm yields the coefficients of a linear filter that separates the source signals. To evaluate the proposed algorithm, two measures are used: the Performance Index (PI) and the Signal-to-Interference Ratio (SIR). The proposed algorithm is tested on innovation Gaussian signals, speech signals and ECG signals, and is shown to give better results than other methods, such as the Newton method proposed by Shi.

A technique is proposed to reduce additive noise from biomedical signals that have high kurtosis values using a genetic algorithm (GA). The technique is applied to reduce multiple linear additive noises from electrocardiogram (ECG) signals,... more

A technique is proposed to reduce additive noise from biomedical signals that have high kurtosis values using a genetic algorithm (GA). The technique is applied to reduce multiple linear additive noises from electrocardiogram (ECG) signals, which have high kurtosis values due to the presence of R peaks. The GA method uses the basic principles of Independent Component Analysis (ICA) and could also be used to reduce additive noise from other signals with high kurtosis values. The method is simpler than neural learning algorithms and does not require any prior statistical knowledge of the signals. An additional advantage over other ICA methods is that only the ECG signal is extracted, avoiding the extraction of all independent components and the manual inspection needed to identify the ECG signal among them.
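A toy illustration of the idea, not the paper's method: a simple real-coded GA searches for the unmixing direction that maximizes the absolute kurtosis of its output, so only the high-kurtosis (ECG-like) component is extracted. The GA operators (truncation selection, blend crossover, annealed Gaussian mutation) and all parameters are assumptions.

```python
import numpy as np

def kurt(y):
    """Excess kurtosis of a standardized signal."""
    y = (y - y.mean()) / y.std()
    return np.mean(y**4) - 3.0

def ga_extract(X, pop=40, gens=60, sigma=0.3, seed=0):
    """Find the unmixing direction w maximizing |kurtosis| of w @ Z
    with a simple real-coded GA on whitened mixtures Z."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc       # whitened mixtures
    P = rng.standard_normal((pop, Z.shape[0]))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for g in range(gens):
        fit = np.array([abs(kurt(w @ Z)) for w in P])
        elite = P[np.argsort(fit)[-pop // 2:]]         # truncation selection
        a = elite[rng.integers(len(elite), size=pop)]
        b = elite[rng.integers(len(elite), size=pop)]
        u = rng.random((pop, 1))
        C = u * a + (1.0 - u) * b                      # blend crossover
        C += sigma * 0.5 ** (g / 20) * rng.standard_normal(C.shape)  # annealed mutation
        P = C / np.linalg.norm(C, axis=1, keepdims=True)
    fit = np.array([abs(kurt(w @ Z)) for w in P])
    return P[np.argmax(fit)] @ Z
```

Because a spiky ECG-like component has far higher kurtosis than Gaussian noise, the fittest individual converges to its direction and the other components are never extracted.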

I introduce Forecastable Component Analysis (ForeCA), a novel dimension reduction technique for temporally dependent signals. Based on a new forecastability measure, ForeCA finds an optimal transformation to separate a multivariate signal... more

I introduce Forecastable Component Analysis (ForeCA), a novel dimension reduction technique for temporally dependent signals. Based on a new forecastability measure, ForeCA finds an optimal transformation to separate a multivariate signal into a forecastable space and an orthogonal white-noise space. I present a provably converging algorithm with a fast eigenvector solution. Applications to financial and macroeconomic data show that ForeCA can successfully discover informative structure in multivariate time series: structure that can be used for forecasting and classification.
The main methods for ForeCA are implemented in the R package ForeCA (cran.r-project.org/web/packages/ForeCA/index.html), which is publicly available on CRAN.

... the DS array has a simple structure, it requires, however, a large number of microphones to achieve high performance, particularly ... Blind source separation (BSS) is the approach to estimate original source signals using only the... more

... the DS array has a simple structure, it requires, however, a large number of microphones to achieve high performance, particularly ... Blind source separation (BSS) is the approach to estimate original source signals using only the information of the mixed signals observed in ...

This paper addresses the issue of separating multiple speakers from mixtures obtained using multiple microphones in a room. An adaptive blind signal separation algorithm, which is entirely based on second-order... more

This paper addresses the issue of separating multiple speakers from mixtures obtained using multiple microphones in a room. An adaptive blind signal separation algorithm, entirely based on second-order statistics, is derived. One advantage of this algorithm is that no parameters need to be tuned. Moreover, an extension of the algorithm that can simultaneously handle blind signal separation and echo cancellation is derived. Experiments with real recordings have been carried out, showing the effectiveness of the algorithm on real-world signals.
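The abstract's algorithm is adaptive; as a batch stand-in illustrating separation from second-order statistics alone, here is a classical AMUSE-style sketch (not the paper's method): whiten the mixtures, then rotate with the eigenvectors of a symmetrized time-lagged covariance.

```python
import numpy as np

def sos_separate(X, tau=1):
    """Second-order-statistics separation (AMUSE-style sketch): whiten
    the mixtures X (n_mix x n_samples), then diagonalize a time-lagged
    covariance by its eigenvectors. Works when the sources have
    distinct autocorrelations at lag tau."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc   # whitened mixtures
    n = Z.shape[1]
    C = Z[:, tau:] @ Z[:, :-tau].T / (n - tau)
    C = 0.5 * (C + C.T)                            # symmetrize the lagged covariance
    _, U = np.linalg.eigh(C)
    return U.T @ Z                                 # estimated sources (up to order/sign)
```

No higher-order statistics or tuning parameters are needed, which mirrors the advantage the abstract claims for its second-order approach.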

Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands.... more

Recent years have seen an explosion of interest in using neural oscillations to characterize the mechanisms supporting cognition and emotion. Oftentimes, oscillatory activity is indexed by mean power density in predefined frequency bands. Some investigators use broad bands originally defined by prominent surface features of the spectrum. Others rely on narrower bands originally defined by spectral factor analysis (SFA). Presently, the robustness and sensitivity of these competing band definitions remain unclear. Here, a Monte Carlo-based SFA strategy was used to decompose the tonic (“resting” or “spontaneous”) electroencephalogram (EEG) into five bands: delta (1–5 Hz), alpha-low (6–9 Hz), alpha-high (10–11 Hz), beta (12–19 Hz), and gamma (>21 Hz). This pattern was consistent across SFA methods, artifact correction/rejection procedures, scalp regions, and samples. Subsequent analyses revealed that SFA failed to deliver enhanced sensitivity; narrow alpha sub-bands proved no more sensitive than the classical broadband to individual differences in temperament or mean differences in task-induced activation. Other analyses suggested that residual ocular and muscular artifact was the dominant source of activity during quiescence in the delta and gamma bands. This was observed following threshold-based artifact rejection or independent component analysis (ICA)-based artifact correction, indicating that such procedures do not necessarily confer adequate protection. Collectively, these findings highlight the limitations of several commonly used EEG procedures and underscore the necessity of routinely performing exploratory data analyses, particularly data visualization, prior to hypothesis testing. They also suggest the potential benefits of using techniques other than SFA for interrogating high-dimensional EEG datasets in the frequency or time–frequency (event-related spectral perturbation, event-related synchronization/desynchronization) domains.
KEY WORDS: principal components analysis (PCA); exploratory factor analysis (EFA); blind source separation (BSS); resting neural activity; resting EEG; frontal alpha asymmetry; frontal EEG asymmetry.

Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent,... more

Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent, and response to events are of increasing interest. Brain-computer interface (BCI) systems can make use of such knowledge to deliver relevant feedback to the user or to an observer, or within a human-machine system to increase safety and enhance overall performance. Building robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may in the future be increasingly ubiquitous. While the performance of current BCI modeling methods is slowly increasing, current performance levels do not yet support widespread use. Here we discuss the current neuroscientific questions and data processing challenges facing BCI designers and outline some promising current and future directions to address them.

Sparsity of signals in the frequency domain is widely used for blind source separation (BSS) when the number of sources is more than the number of mixtures (underdetermined BSS). In this paper we propose a simple algorithm for detection... more

Sparsity of signals in the frequency domain is widely used for blind source separation (BSS) when the number of sources exceeds the number of mixtures (underdetermined BSS). In this paper we propose a simple algorithm for detecting points in the time-frequency (TF) plane of instantaneous mixtures where only a single source contributes. Samples at these points in the TF plane can be used for mixing matrix estimation. The proposed algorithm identifies the single-source points (SSPs) by comparing the absolute directions of the real and imaginary parts of the Fourier transform coefficient vectors of the mixed signals. Finally, the SSPs so obtained are clustered using a hierarchical clustering algorithm to estimate the mixing matrix. The proposed SSP identification is simpler than previously reported algorithms.
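A minimal sketch of the SSP test described above: at a single-source T-F point the real and imaginary parts of the mixture coefficient vector are (anti)parallel, since both lie along the same mixing-matrix column. The angle tolerance and magnitude threshold are illustrative choices.

```python
import numpy as np

def single_source_points(Xtf, tol_deg=1.0, min_mag=1e-3):
    """Flag T-F points of the mixture spectra Xtf (n_mix x n_points,
    complex) where the real and imaginary parts of the coefficient
    vector are (anti)parallel: there, a single source is active and
    the common direction is a mixing-matrix column (up to sign)."""
    R, I = Xtf.real, Xtf.imag
    nr = np.linalg.norm(R, axis=0)
    ni = np.linalg.norm(I, axis=0)
    ok = (nr > min_mag) & (ni > min_mag)          # ignore near-silent bins
    cosang = np.abs(np.sum(R * I, axis=0)) / np.where(ok, nr * ni, 1.0)
    return ok & (cosang > np.cos(np.radians(tol_deg)))
```

Clustering the directions of the flagged columns (e.g. hierarchically, as in the abstract) then yields the mixing-matrix estimate.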

In this paper an algorithm for real-time adaptive blind source separation (ABSS) is proposed. It is a promising method for separating speech and acoustic source signals in noisy environments such as rooms and offices.... more

In this paper an algorithm for real-time adaptive blind source separation (ABSS) is proposed. It is a promising method for separating speech and acoustic source signals in noisy environments such as rooms and offices. The paper addresses real-time unmixing of speech mixtures in enclosed environments: in simulation, the mixture of the source signals is generated and unmixed simultaneously in real time, based on adaptive NLMS filtering and a neural-network activation function. Separation of the sources in the frequency domain is demonstrated. A time-variant mixing matrix is constructed from a random vector with time-varying elements. Several simulations demonstrate the performance of the implemented algorithm, and an experiment testing its efficacy for real-time source separation on a high-performance computer is presented.

Separating a singing voice from its music accompaniment remains an important challenge in the field of music information retrieval. We present a unique neural network approach inspired by a technique that has revolutionized the field of... more

Separating a singing voice from its music accompaniment remains an important challenge in the field of music information retrieval. We present a unique neural network approach inspired by a technique that has revolutionized the field of vision: pixel-wise image classification, which we combine with cross-entropy loss and pretraining of the CNN as an autoencoder on singing voice spectrograms. The pixel-wise classification technique directly estimates the sound source label for each time-frequency (T-F) bin in our spectrogram image, thus eliminating common pre- and post-processing tasks. The proposed network is trained using the Ideal Binary Mask (IBM) as the target output label. The IBM identifies the dominant sound source in each T-F bin of the magnitude spectrogram of a mixture signal, by considering each T-F bin as a pixel with a multi-label (one per sound source). Cross-entropy is used as the training objective, so as to minimize the average probability error between the target and predicted label for each pixel. By treating singing voice separation as a pixel-wise classification task, we additionally eliminate one of the commonly used, yet not easy to comprehend, post-processing steps: Wiener filter post-processing. The proposed CNN outperforms the first runner-up of the Music Information Retrieval Evaluation eXchange (MIREX) 2016 and the winner of MIREX 2014, with a gain of 2.2702 to 5.9563 dB global normalized source-to-distortion ratio (GNSDR) when applied to the iKala dataset. An experiment with the DSD100 dataset on the full-track song evaluation task also shows that our model is able … This work is supported by the MOE Academic fund AFD 05/15 SL and SUTD SRG ISTD 2017 129.
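The IBM target label is simple to state in code; this sketch (with hypothetical variable names) computes it from the isolated vocal and accompaniment magnitude spectrograms and applies it to a mixture spectrogram.

```python
import numpy as np

def ideal_binary_mask(mag_vocal, mag_accomp):
    """IBM target: 1 where the vocal magnitude dominates a T-F bin, else 0.
    (mag_vocal and mag_accomp are hypothetical isolated-source spectrograms.)"""
    return (mag_vocal > mag_accomp).astype(np.float32)

def apply_mask(mask, mix_spec):
    """Masking the mixture spectrogram yields the binary-masked vocal estimate."""
    return mask * mix_spec
```

At inference time the network's per-bin labels play the role of the mask, so no separate Wiener-filter post-processing step is needed, which is the simplification the abstract highlights.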

In this paper we propose and analyze a set of alternative statistical distances between distributions based on the cumulative distribution function instead of the traditional probability density function. In particular, these new Gaussian... more

In this paper we propose and analyze a set of alternative statistical distances between distributions based on the cumulative distribution function instead of the traditional probability density function. In particular, these new Gaussian distances provide new cost functions whose maximization extracts one independent component at each successive stage of a deflation ICA procedure. The new Gaussianity measures improve ICA performance and also increase robustness against outliers in comparison with traditional ones.
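One concrete instance of a CDF-based (non-)Gaussianity measure, assuming a Kolmogorov-Smirnov-type distance between the empirical CDF of the standardized signal and the standard normal CDF; the paper studies a family of such distances, not necessarily this one.

```python
import numpy as np
from math import erf, sqrt

def cdf_gaussianity_distance(y):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    standardized signal y and the standard normal CDF. Larger values
    mean less Gaussian, so maximizing this over unmixing directions
    could drive one stage of a deflation ICA procedure."""
    y = np.sort((y - np.mean(y)) / np.std(y))
    n = len(y)
    phi = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in y])  # normal CDF
    d_plus = np.max(np.arange(1, n + 1) / n - phi)   # ECDF above the normal CDF
    d_minus = np.max(phi - np.arange(0, n) / n)      # ECDF below the normal CDF
    return max(d_plus, d_minus)
```

Because the CDF is bounded in [0, 1], a single extreme outlier shifts it by at most 1/n, which illustrates the robustness advantage over moment-based (density-related) contrasts such as kurtosis.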

Microphone arrays have been used in various applications to capture conversations, such as in meetings and teleconferences. In many cases, the microphone and likely source locations are known a priori, and calculating beamforming filters... more

Microphone arrays have been used in various applications to capture conversations, such as in meetings and teleconferences. In many cases, the microphone and likely source locations are known a priori, and calculating beamforming filters is therefore straightforward. In ad-hoc situations, however, when the microphones have not been systematically positioned, this information is not available and beamforming must be achieved blindly. In achieving this, a commonly neglected issue is whether it is optimal to use all of the available microphones, or only an advantageous subset of these. This paper commences by reviewing different approaches to blind beamforming, characterising them by the way they estimate the signal propagation vector and the spatial coherence of noise in the absence of prior knowledge of microphone and speaker locations. Following this, a novel clustered approach to blind beamforming is motivated and developed. Without using any prior geometrical information, microphones are first grouped into localised clusters, which are then ranked according to their relative distance from a speaker.

Most watermarking techniques are based on Wide Spread Spectrum (WSS). The security of such schemes is studied here from a cryptanalysis point of view. The security is proportional to the difficulty the opponent has to recover the... more

Most watermarking techniques are based on Wide Spread Spectrum (WSS). The security of such schemes is studied here from a cryptanalysis point of view. Security is proportional to the difficulty the opponent has in recovering the secret parameters, which in a WSS watermarking scheme are the private carriers. Both theoretical and practical points of view are investigated when several pieces of content are watermarked with the same secret key. The opponent's difficulty is measured by the amount of data necessary to estimate the private carriers accurately, and by the complexity of the estimation algorithms. Blind Source Separation algorithms greatly help the opponent exploit the information leakage to disclose the secret carriers. The article ends with experiments comparing blind attacks to these new hacks. Its main goal is to warn watermarkers that embedding hidden messages with the same secret key may be a dangerous security flaw.

The Constant Modulus Algorithm (CMA) is very popular as far as the extraction of a source from a convolutive mixture is concerned. Though originally designed for stationary data, we have noticed in simulations the ability of the CMA to... more

The Constant Modulus Algorithm (CMA) is very popular for extracting a source from a convolutive mixture. Though originally designed for stationary data, we have noticed in simulations the ability of the CMA to achieve separation even in non-stationary contexts, such as mixtures of linearly modulated communication signals having arbitrary symbol periods. In this paper, we address the theoretical aspect of this observation. We provide sufficient conditions on the statistics of the sources under which the use of the CMA is theoretically justified. These conditions are worked out semi-analytically in different digital communication scenarios.
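For reference, here is the standard CMA stochastic-gradient update the abstract builds on, in a minimal single-channel equalizer sketch; the tap count, step size, and center-spike initialization are illustrative, not the paper's settings.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=2e-3, R=1.0):
    """Minimal single-channel CMA equalizer: adapt FIR taps w to reduce
    the constant-modulus cost E[(|y|^2 - R)^2] via the stochastic
    gradient update w <- w - mu * (|y|^2 - R) * y * conj(x_vec)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for k in range(n_taps, len(x)):
        xv = x[k - n_taps:k][::-1]             # most recent sample first
        y[k] = w @ xv
        e = (np.abs(y[k]) ** 2 - R) * y[k]     # constant-modulus error term
        w = w - mu * e * np.conj(xv)
    return y, w
```

On a constant-modulus constellation such as QPSK, minimizing the dispersion of |y|^2 around R drives the equalizer toward a channel inverse without any training sequence, which is the blind property the abstract analyzes for non-stationary sources.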

Blind identification of underdetermined mixtures can be addressed efficiently by using the second ChAracteristic Function (CAF) of the observations. Our contribution is twofold. First, we propose the use of a Levenberg-Marquardt... more

Blind identification of underdetermined mixtures can be addressed efficiently by using the second ChAracteristic Function (CAF) of the observations. Our contribution is twofold. First, we propose the use of a Levenberg-Marquardt algorithm, herein called LEMACAF, as an alternative to an Alternating Least Squares algorithm known as ALESCAF, which has recently been used in the case of real mixtures of real sources. Second, we extend the CAF approach to the case of complex sources, for which the previous algorithms are not suitable. We show that the complex case involves an appropriate tensor stowage, which is linked to a particular tensor decomposition. An extension of the LEMACAF algorithm is then proposed to blindly estimate the mixing matrix by exploiting this tensor decomposition. In our simulation results, we first provide performance comparisons between third- and fourth-order versions of ALESCAF and LEMACAF in various situations involving BPSK sources. Then, a performance study of the extended LEMACAF is carried out considering 4-QAM sources. These results show that the proposed algorithm provides satisfactory estimates, especially in the case of a large underdeterminacy level.

In this paper we propose a method based on blind source separation of acoustic emissions of partial discharges in power equipment. The goal is to build an on-line monitoring system for automatic detection and localization of partial... more

In this paper we propose a method based on blind source separation of acoustic emissions of partial discharges in power equipment. The goal is to build an on-line monitoring system for automatic detection and localization of partial discharges and insulation defects. We have built a test bench to generate acoustic signals similar to those produced by partial discharges inside a propagating medium, together with a data acquisition system with multiple acoustic detectors. We show that the INFOMAX algorithm used for blind source separation is able to decouple at least two different sources of partial discharges from the signals of two acoustic detectors located at fixed positions.

Nonlinear source separation can be performed by inferring the state of a nonlinear state-space model. We study and improve the inference algorithm in the variational Bayesian blind source separation model introduced by Valpola and... more

Nonlinear source separation can be performed by inferring the state of a nonlinear state-space model. We study and improve the inference algorithm in the variational Bayesian blind source separation model introduced by Valpola and Karhunen in 2002. As comparison methods we use extensions of the Kalman filter, which are widely used inference methods in tracking and control theory. The results favour our method in stability, speed, and accuracy, especially in difficult inference problems.

A great challenge in neurophysiology is to assess non-invasively the physiological changes occurring in different parts of the brain. These activations can often be modeled and measured as neuronal brain source signals that indicate the... more

A great challenge in neurophysiology is to assess non-invasively the physiological changes occurring in different parts of the brain. These activations can often be modeled and measured as neuronal brain source signals that indicate the function or malfunction of various physiological subsystems. To extract the relevant information for diagnosis and therapy, expert knowledge is required not only in medicine and neuroscience but also in statistical signal processing. Besides classical signal analysis tools (such as adaptive supervised filtering, parametric or non-parametric spectral estimation, time-frequency analysis, and higher-order statistics), new and emerging blind signal processing (BSP) methods, especially generalized component analysis (GCA) including fusion (integration) of independent component analysis (ICA), sparse component analysis (SCA), time-frequency component analyzer (TFCA), and nonnegative matrix factorization (NMF), can be used for analyzing brain data, especially for noise reduction and artefact elimination, and for enhancement, detection, and estimation of neuronal brain source signals. A recent trend in BSP is to consider such problems in the framework of matrix factorization, or more generally of signal decomposition with probabilistic generative and tree-structured graphical models, and to exploit a priori knowledge about the true nature and structure of latent (hidden) variables or brain sources, such as spatio-temporal decorrelation, statistical independence, sparseness, smoothness, or lowest complexity in the sense, e.g., of best linear predictability. The goal of BSP can be viewed as the estimation of sources and of the parameters of a mixing system, or, more generally, as finding a new reduced, hierarchical, and structured representation of the observed brain data that can be interpreted as physically meaningful coding or blind source estimation.
The key issue is to find a transformation or coding (linear or nonlinear) that has a true neurophysiological and neuroanatomical meaning and interpretation. In this paper, we briefly discuss how novel blind signal processing techniques, such as blind source separation, blind source extraction, and various signal decomposition methods, can be applied to the analysis and processing of EEG data. We also discuss a promising application of BSP to the early detection of Alzheimer's disease (AD) using only EEG recordings.