Aurelio Uncini | Università degli Studi "La Sapienza" di Roma
Papers by Aurelio Uncini
In this paper, multi-layer perceptrons with powers-of-two weights are introduced, and a learning procedure, based on back-propagation, is presented for such neural networks. This learning procedure requires full real arithmetic and therefore must be performed off-line.
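The paper's exact quantization rule is not given in the abstract; the following is a minimal hypothetical sketch of the general idea, mapping real-valued weights onto signed powers of two (so that multiplications reduce to bit shifts in hardware). The exponent range `min_exp`/`max_exp` is an assumption for illustration.

```python
import numpy as np

def quantize_pow2(w, min_exp=-8, max_exp=0):
    """Quantize each weight to the nearest signed power of two.

    Hypothetical sketch: the off-line learned real weights are mapped
    onto values +/- 2^k, with k rounded in the log2 domain and clipped
    to a fixed exponent range.
    """
    w = np.asarray(w, dtype=float)
    sign = np.where(w >= 0, 1.0, -1.0)
    mag = np.abs(w)
    # Avoid log2(0): magnitudes below the smallest power map to 2^min_exp.
    exp = np.round(np.log2(np.maximum(mag, 2.0 ** min_exp)))
    exp = np.clip(exp, min_exp, max_exp)
    return sign * 2.0 ** exp
```

For example, a weight of 0.3 is nearest (in the log domain) to 2^-2 = 0.25.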
1991 IEEE International Symposium on Circuits and Systems
Least squares (LS) algorithms are often used in many spectrum estimation methods. However, when the signals are contaminated by a few strong noise spikes, the standard LS algorithm can easily lead to biased solutions characterized by a strongly reduced dynamic range of the estimated spectra. In order to treat this problem, the classical approach is to weight the prediction errors.
1990 IJCNN International Joint Conference on Neural Networks, 1990
IEEE Transactions on Neural Networks, 1993
Parallel Architectures and Neural Networks III, 1990
In this paper a new method is presented to dynamically adapt the topology of a neural network using only the information in the learning set. The proposed algorithm eliminates connections from an initially fully connected network and exhibits characteristics reminiscent of biological networks, such as the so-called critical period. Preliminary experimental results are presented which prove the effectiveness of the proposed algorithm.
CAAI Transactions on Intelligence Technology, 2020
In recent years, hyper-complex deep networks (e.g., quaternion-based) have received increasing interest, with applications ranging from image reconstruction to 3D audio processing. Like their real-valued counterparts, quaternion neural networks may require custom regularization strategies to avoid overfitting. In addition, many real-world applications and embedded implementations call for sufficiently compact networks, with as few weights and units as possible. However, the problem of how to regularize and/or sparsify quaternion-valued networks has not been properly addressed in the literature so far. In this paper we show how to address both problems by designing targeted regularization strategies able to minimize the number of connections and neurons of the network during training. To this end, we investigate two extensions of ℓ1 and structured regularization to the quaternion domain. In our experimental evaluation, we show that these tailored strategies significantly outperform classical (real-valued) regularization strategies, resulting in small networks especially suitable for low-power and real-time applications.
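One reason quaternion layers yield the compact networks this abstract targets is parameter sharing through the Hamilton product: a single quaternion weight acts on a 4-dimensional signal via a structured 4x4 real matrix, reusing 4 parameters where an unconstrained real layer would need 16. A sketch of that structure (standard quaternion algebra, not the paper's own code):

```python
import numpy as np

def hamilton_matrix(q):
    """Real 4x4 matrix of left-multiplication by quaternion q = (r, i, j, k).

    A quaternion weight acts on a 4-D input through this matrix, so a
    quaternion layer needs only 4 free parameters per 4x4 block.
    Illustration of the algebra only.
    """
    r, i, j, k = q
    return np.array([
        [r, -i, -j, -k],
        [i,  r, -k,  j],
        [j,  k,  r, -i],
        [k, -j,  i,  r],
    ])
```

As a sanity check, the identity quaternion (1, 0, 0, 0) yields the 4x4 identity, and i * i = -1 falls out of the matrix product.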
Algorithms, 2018
The combination of adaptive filters is an effective approach to improving filtering performance. In this paper, we investigate the performance of an adaptive combination scheme between two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. In order to generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive the theoretical steady-state behavior of the proposed adaptive combination scheme. This analysis yields further theoretical insights with respect to the single-channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme.
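The basic mechanism behind such schemes is the classic convex combination of two filter outputs, where a mixing weight lambda = sigmoid(a) is adapted by stochastic gradient descent on the squared error of the combined output. The following is a sketch of that generic single-output mechanism, not the paper's multichannel APA analysis; the step size `mu` is an arbitrary choice.

```python
import numpy as np

def combine_step(y1, y2, d, a, mu=0.5):
    """One adaptation step of a convex combination of two filter outputs.

    y1, y2: outputs of the two component filters; d: desired sample;
    a: auxiliary mixing parameter (lambda = sigmoid(a)).
    Returns the combined output and the updated parameter.
    """
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2                   # combined output
    e = d - y                                         # combination error
    a = a + mu * e * (y1 - y2) * lam * (1.0 - lam)    # gradient step on e^2
    return y, a
```

If filter 1 consistently matches the desired signal better, `a` grows and the combination converges toward selecting filter 1, which is the "at least as good as the best component" behavior these schemes are designed for.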
Neurocomputing, 2017
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons in each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are generally dealt with separately, we present a simple regularized formulation that solves all three of them in parallel, using standard optimization routines. Specifically, we extend the group Lasso penalty (which originated in the linear regression literature) to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, a hidden neuron, or a bias unit, thus performing all the aforementioned tasks simultaneously in order to obtain a compact network. We perform an extensive experimental evaluation, comparing with classical weight decay and Lasso penalties. We show that a sparse version of the group Lasso penalty achieves competitive performance while at the same time resulting in extremely compact networks with a smaller number of input features. We evaluate on both a toy dataset for handwritten digit recognition and multiple realistic large-scale classification problems.
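The group Lasso penalty described above can be applied through its well-known proximal operator: each group (here, a row of the weight matrix holding one unit's outgoing weights) is soft-thresholded by its Euclidean norm, so whole units are driven exactly to zero. A minimal sketch under these assumptions (the paper's exact optimization routine is not specified in the abstract):

```python
import numpy as np

def prune_units(W, lam, step=1.0):
    """One proximal (group soft-threshold) step of the group Lasso on W.

    Row i of W holds the outgoing weights of unit i (an input feature,
    hidden neuron, or bias unit).  Rows whose norm falls below
    step * lam are zeroed exactly, i.e. the unit is removed.
    """
    W = np.asarray(W, dtype=float)
    norms = np.linalg.norm(W, axis=1)
    shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    W_new = W * shrink[:, None]
    kept = np.flatnonzero(norms > step * lam)   # indices of surviving units
    return W_new, kept
```

A unit with small outgoing weights (second row below) is dropped entirely, which is how the formulation performs feature selection and neuron pruning at once.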
Smart Innovation, Systems and Technologies, 2015
Echo State Networks (ESNs) are a family of Recurrent Neural Networks (RNNs) that can be trained efficiently and robustly. Their main characteristic is the partitioning of the recurrent part of the network, the reservoir, from the non-recurrent part, the latter being the only component which is explicitly trained. To ensure good generalization capabilities, the reservoir is generally built from a large number of neurons whose connectivity should be designed in a sparse pattern. Recently, we proposed an unsupervised online criterion for performing this sparsification process, based on the idea of the significance of a synapse, i.e., an approximate measure of its importance in the network. In this paper, we extend our criterion to the direct pruning of neurons inside the reservoir, by defining the significance of a neuron in terms of the significance of its neighboring synapses. Our experimental validation shows that, by combining pruning of neurons and synapses, we are able to obtain an optimally sparse ESN in an efficient way. In addition, we briefly investigate the reservoir topologies resulting from the application of our procedure.
Neural Networks, 2015
Highlights:
• This paper proposes an improved split functional link adaptive filter (SFLAF).
• The proposed model is characterized by the adaptive combination of two APA filters.
• An advanced scheme is also proposed involving the combination of multiple filters.
• The adaptive combinations are performed for all the projections of the APA filters.
• The proposed models are assessed in three different nonlinear modeling problems.
2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), 2013
This paper introduces a new framework for multichannel adaptive filtering, aiming at improving the performance of an overall filtering system. The proposed architecture relies on the properties of the adaptive combination of filters, which exploits the capabilities of different constituents, thus adaptively providing at least the behaviour of the best performing filter. Applying this concept to multichannel filtering systems, we define a scheme for the combination of multiple-input multiple-output (MIMO) filters. More precisely, the proposed structure involves the combination of two different multiple-input single-output (MISO) systems for each MIMO output. We propose this framework with application to multichannel acoustic echo cancellation (MAEC), with the goal of making the system robust against impulsive background noise and thus improving overall cancelling performance. Experimental results show the effectiveness of the proposed combined MAEC in the presence of adverse environmental conditions.
2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), 2013
Over the last years, automatic music classification has become a standard benchmark problem in the machine learning community. This is partly due to its inherent difficulty, and also to the impact that a fully automated classification system can have in a commercial application. In this paper we test the efficiency of a relatively new learning tool, Extreme Learning Machines (ELM), on several classification tasks using publicly available song datasets. ELM is gaining increasing attention, due to its versatility and speed in adapting its internal parameters. Since both of these attributes are fundamental in music classification, ELM provides a good alternative to standard learning models. Our results support this claim, showing a sustained gain of ELM over a feedforward neural network architecture. In particular, ELM provides a great decrease in computational training time, and always achieves higher or comparable results in terms of efficiency.
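The speed advantage the abstract mentions comes from ELM's structure: the hidden layer is random and fixed, and only a linear readout is solved, in closed form by least squares. A generic sketch of that idea (hidden size, activation, and the toy XOR usage below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def train_elm(X, T, n_hidden=50, seed=0):
    """Train a minimal Extreme Learning Machine.

    Hidden weights are drawn at random and never updated; only the
    linear readout beta is fitted, by least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form readout
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Even a non-linearly-separable toy problem like XOR is fitted exactly, since the random feature map makes the readout problem linear.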
17th DSP 2011 International Conference on Digital Signal Processing, Proceedings, 2011
The aim of this paper is the presentation of a comparative analysis of Hammerstein and Wiener systems used for the compensation of the nonlinear distortion due to the non-ideality of amplifiers and loudspeakers in acoustic echo cancellation. The proposed solutions consist in a cascade of a flexible nonlinear function, whose shape can be modified during the learning process, and a linear filter.
Classical adaptive algorithms for acoustic echo cancellation (AEC) are often based on error-driven optimization strategies, such as mean-square error minimization. However, these approaches do not always satisfy the quality requirements demanded by the users of such audio signal processing systems. In order to meet subjective specifications, in this paper we put forward the idea of a user-driven approach to echo cancellation through the inclusion of an interactive evolutionary algorithm (IEA) in the optimization stage. As a consequence, the performance of an AEC system can be adapted to any user's preferences in a principled and systematic way, thus reflecting the desired subjective quality. Experiments in the context of AEC prove the effectiveness of the proposed methodology in enhancing the processed signal quality and show significant statistical advantages of the proposed framework with respect to classical approaches.
Short-term prediction of air pollution is gaining increasing attention in the research community, due to its social and economic impact. In this paper we study the application of a Kernel Adaptive Filtering (KAF) algorithm to the problem of predicting PM10 data in the Italian province of Ancona, and we show how this predictor is able to achieve a significantly low error with the inclusion of chemical data correlated with the PM10, such as NO2.
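The abstract does not say which KAF variant is used; one of the simplest is Kernel Least-Mean-Square (KLMS), where each new sample becomes a Gaussian-kernel center weighted by the step size times the prediction error. A generic sketch of that algorithm, with arbitrary `eta` and `sigma`:

```python
import numpy as np

def klms(inputs, targets, eta=0.5, sigma=1.0):
    """Kernel Least-Mean-Square: an online kernel adaptive filter.

    Prediction is a kernel expansion over all past samples; after each
    prediction, the current input is stored as a new center with
    coefficient eta * (prediction error).
    """
    centers, alphas, preds = [], [], []
    for x, d in zip(inputs, targets):
        x = np.atleast_1d(x).astype(float)
        if centers:
            k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1)
                       / (2.0 * sigma ** 2))
            y = float(np.dot(alphas, k))   # kernel expansion prediction
        else:
            y = 0.0
        preds.append(y)
        centers.append(x)
        alphas.append(eta * (d - y))       # new center's coefficient
    return preds, centers, alphas
```

On a stationary target the prediction error decays geometrically, which is the convergence behavior that makes KAF attractive for this kind of time-series prediction.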
Recent Advances of Neural Network Models and Applications, 2014
2014 International Joint Conference on Neural Networks (IJCNN), 2014
Echo State Networks (ESNs) were introduced to simplify the design and training of Recurrent Neural Networks (RNNs), by explicitly subdividing the recurrent part of the network, the reservoir, from the non-recurrent part. A standard practice in this context is the random initialization of the reservoir, subject to a few loose constraints. Although this results in a simple-to-solve optimization problem, it is in general suboptimal, and several additional criteria have been devised to improve its design. In this paper we provide an effective algorithm for removing redundant connections inside the reservoir during training. The algorithm is based on the correlation of the states of the nodes; hence it depends only on the input signal, is efficient to implement, and is also local. By applying it, we can obtain an optimally sparse reservoir in a robust way. We present the performance of our algorithm on two synthetic datasets, which shows its effectiveness in terms of better generalization and lower computational complexity of the resulting ESN. This behavior is also investigated for increasing levels of memory and non-linearity required by the task.
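A loose sketch of a correlation-based pruning rule of the kind the abstract describes: if two reservoir neurons' state trajectories are almost perfectly correlated, the connection between them carries redundant information and can be removed. The threshold value and the exact rule are assumptions for illustration; the paper's criterion may differ in detail.

```python
import numpy as np

def prune_by_correlation(W_res, states, threshold=0.95):
    """Zero reservoir connections between strongly correlated neurons.

    W_res:  (n, n) reservoir weight matrix.
    states: (time, n) matrix of collected neuron states.
    Connections between neuron pairs whose state correlation exceeds
    `threshold` in magnitude are set to zero.
    """
    C = np.corrcoef(states.T)            # pairwise state correlations
    redundant = np.abs(C) > threshold
    np.fill_diagonal(redundant, False)   # leave self-loops untouched here
    return np.where(redundant, 0.0, W_res)
```

Note the rule needs only the observed states, so (as the abstract stresses) it depends only on the input signal and is local to each neuron pair.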
Functional Link Artificial Neural Networks (FLANNs) have been extensively used for audio and speech classification tasks, due to their combination of universal approximation capabilities and fast training. The performance of a FLANN, however, is known to depend on the specific functional link (FL) expansion that is used. In this paper, we provide an extensive benchmark of multiple FL expansions on several audio classification problems, including speech discrimination, genre classification, and artist recognition. Our experimental results show that a random-vector expansion is well suited for classification tasks, achieving the best accuracy in two out of three tasks.
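The random-vector expansion the benchmark favors augments each input pattern with nonlinear projections through fixed random weights, so a linear model on the expanded pattern acts nonlinearly on the original input. A generic sketch of this expansion family (the number of expansions and the tanh activation are illustrative assumptions, not the benchmark's exact configuration):

```python
import numpy as np

def random_vector_expansion(x, n_expansions=20, seed=0):
    """Random-vector functional-link expansion of one input pattern.

    Returns the original features concatenated with tanh projections
    through fixed random weights; the random weights are never trained,
    only the linear model on top of the expanded pattern is.
    """
    x = np.atleast_1d(x).astype(float)
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x.size, n_expansions))
    b = rng.standard_normal(n_expansions)
    return np.concatenate([x, np.tanh(x @ W + b)])
```

A 3-dimensional pattern thus becomes a 23-dimensional one, with the original features preserved in the first coordinates.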