Concetto Spampinato | Università di Catania


Papers by Concetto Spampinato

Software Agents for Autonomous Robots: the Eurobot 2006 Experience

An integrated computer-controlled system for assisting researchers in cortical excitability studies by using Transcranial Magnetic Stimulation

Computer Methods and Programs in Biomedicine

Enhanced motor cortex facilitation in patients with vascular cognitive impairment-no dementia

Neuroscience Letters, 2011

Variational Method for Image Denoising by Distributed Genetic Algorithms on GRID Environment

The aim of this paper is to present a novel distributed genetic algorithm architecture implemented on grid computing by using the gLite middleware developed in the EGEE project. Genetic algorithms are known for their capability to solve a wide range of optimization problems, and one of their most relevant features is structural parallelism, which fits the intrinsically distributed grid architecture well. The proposed architecture is based on different specialized autonomous entities that interact in order to carry out a global optimization task. The interaction is based on the exchange of knowledge about the problem and its solutions. In this way the main problem can be solved by many cooperating small entities, classified into specialized families that each cover only one aspect of the global problem. The topology is based on archipelagos of islands that interact through chromosomes migrating with a user-definable strategy. Grids have mainly been used in the high-performance computing area, and the proposed GA architecture and its computing properties have great potential for solving large instances of optimization problems. Furthermore, this implementation (distributed genetic algorithms on grid computing) is suitable for time-consuming problems, since different instances can be executed on many virtual organizations (VOs) according to the grid philosophy. The proposed parallel algorithm has been tested on image denoising problems, which are known to be time consuming. The paper reports results on its time performance compared to traditional denoising filter algorithms.
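As a rough illustration of the archipelago/island topology described above, the following minimal Python sketch evolves several sub-populations independently and periodically migrates the best chromosomes along a ring. The toy fitness function, chromosome encoding, and migration parameters are illustrative assumptions and only stand in for the paper's actual gLite/EGEE grid implementation.

```python
import random

# Island-model GA sketch: N_ISLANDS sub-populations evolve independently and
# exchange their best individuals along a ring every MIGRATION_INTERVAL
# generations.  All parameters and the toy objective are illustrative.
POP_SIZE, GENOME_LEN, N_ISLANDS = 20, 8, 4
MIGRATION_INTERVAL, N_MIGRANTS, GENERATIONS = 5, 2, 50

def fitness(genome):
    # Toy objective: genomes closer to the all-ones vector score higher.
    return -sum((g - 1.0) ** 2 for g in genome)

def new_individual():
    return [random.uniform(0.0, 2.0) for _ in range(GENOME_LEN)]

def evolve(population):
    # One generation: tournament selection, uniform crossover, Gaussian mutation.
    def tournament():
        return max(random.sample(population, 3), key=fitness)
    children = []
    for _ in range(len(population)):
        a, b = tournament(), tournament()
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        child = [g + random.gauss(0, 0.1) if random.random() < 0.2 else g
                 for g in child]
        children.append(child)
    return children

islands = [[new_individual() for _ in range(POP_SIZE)] for _ in range(N_ISLANDS)]
for gen in range(GENERATIONS):
    islands = [evolve(pop) for pop in islands]
    if gen % MIGRATION_INTERVAL == 0:
        # Ring migration: each island receives the best individuals of its
        # predecessor, replacing its own worst individuals.
        best = [sorted(pop, key=fitness, reverse=True)[:N_MIGRANTS] for pop in islands]
        for i, pop in enumerate(islands):
            pop.sort(key=fitness)
            pop[:N_MIGRANTS] = best[(i - 1) % N_ISLANDS]

print("best fitness:", max(fitness(ind) for pop in islands for ind in pop))
```

On a grid, each island would presumably run as a separate job (for example one per VO), with migration going through files or a messaging service rather than shared memory.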

Basal Ganglia Activity Measurement by Automatic 3-D Striatum Segmentation in SPECT Images

IEEE Transactions on Instrumentation and Measurement, 2011

A Parallel Edge Preserving Algorithm for Salt and Pepper Image Denoising

Discovering Gene-Disease Associations and Related Neurobiological Functions by Mining Specialized Literature And Microarray Data

Frontiers in Neuroinformatics

Bayesian Networks for Edge Preserving Salt and Pepper Image Denoising

In this paper we propose a two-step filter for removing salt-and-pepper impulse noise. In the first phase, a naive Bayesian network is used to identify the pixels that are likely to be contaminated by noise (noise candidates). In the second phase, the noisy pixels are restored by a regularization method (based on the optimization of a convex functional) applied only to the selected noise candidates. The proposed method shows a significant improvement over other nonlinear filters and regularization methods in terms of image detail preservation and noise reduction. Our algorithm is also able to remove salt-and-pepper noise at high noise levels, from 70% up to 90%.
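A simplified sketch of the two-step structure on an 8-bit grayscale image is given below: an extreme-value test stands in for the naive Bayesian candidate detector, and a median over uncorrupted neighbours stands in for the convex-functional regularization, so it reproduces only the overall scheme, not the paper's accuracy.

```python
import numpy as np

def two_step_sp_filter(img):
    """Detect salt-and-pepper candidates first, then restore only those pixels.
    Both steps are simplified stand-ins for the classifier and the convex
    regularization described in the paper."""
    img = img.astype(np.float64)
    candidates = (img == 0) | (img == 255)      # step 1: noise-candidate map
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    pad_cand = np.pad(candidates, 1, mode="edge")
    for r, c in zip(*np.nonzero(candidates)):   # step 2: restore candidates only
        window = padded[r:r + 3, c:c + 3]
        clean = window[~pad_cand[r:r + 3, c:c + 3]]
        out[r, c] = np.median(clean) if clean.size else np.median(window)
    return out.astype(np.uint8)
```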

Fast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm

Automatic Cephalometric Analysis: A Systematic Review

Angle Orthodontist, 2008

To describe the techniques used for automatic landmarking of cephalograms, highlighting the strengths and weaknesses of each one and reviewing the percentage of success in locating each cephalometric point. The literature survey was performed by searching the Medline, the Institute of Electrical and Electronics Engineers, and the ISI Web of Science Citation Index databases. The survey covered the period from January 1966 to August 2006. Abstracts that appeared to fulfill the initial selection criteria were selected by consensus. The original articles were then retrieved, and their references were also hand-searched for possible missing articles. The search strategy resulted in 118 articles, of which eight met the inclusion criteria. Many articles were rejected for different reasons; among these, the most frequent was that the accuracy of automatic landmark recognition was presented as a percentage of success. A marked difference in results was found between the included studies, consisting of heterogeneity in the performance of the techniques in detecting the same landmarks. All in all, hybrid approaches detected cephalometric points with higher accuracy than the model-based, image filtering plus knowledge-based landmark search, and "soft-computing" approaches. The systems described in the literature are not accurate enough to allow their use for clinical purposes. Errors in landmark detection were greater than those expected with manual tracing and, therefore, the scientific evidence supporting the use of automatic landmarking is low.

SOFT-COMPUTING AGENTS PROCESSING WEBCAM IMAGES TO OPTIMIZE METROPOLITAN TRAFFIC SYSTEMS

The paper proposes a solution for the optimization of traveling times in a metropolitan area that exploits the traffic images collected from webcams located at the crossroads of the traffic network. This is achieved by optimizing the traffic light cycles according to a distributed mathematical model whose solution is obtained by suitable soft-computing agents resident on the nodes of the information network, where they are responsible for processing the webcam images and for managing the traffic light cycles.
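As a toy illustration of one cycle update at a single crossroad, the sketch below splits the green time among the approaches in proportion to the vehicle queues estimated from the webcam images. The proportional rule, the parameters, and the `green_splits` helper are illustrative assumptions, not the paper's distributed mathematical model.

```python
def green_splits(queue_lengths, cycle_time=90.0, min_green=10.0):
    """Allocate the green time of one traffic-light cycle proportionally to the
    queue length measured at each approach (illustrative rule only)."""
    n = len(queue_lengths)
    spare = cycle_time - n * min_green      # green time left after the minimums
    total = sum(queue_lengths)
    if total == 0:
        return [cycle_time / n] * n         # no demand: split the cycle evenly
    return [min_green + spare * q / total for q in queue_lengths]

# Four approaches with 12, 3, 20 and 5 queued vehicles (hypothetical counts).
print(green_splits([12, 3, 20, 5]))
```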

An Automatic System for Skeletal Bone Age Measurement by Robust Processing of Carpal and Epiphysial/Metaphysial Bones

IEEE Transactions on Instrumentation and Measurement, 2010

Detecting, Tracking and Counting Fish in Low Quality Unconstrained Underwater Videos

An Interactive Tool for Customizing Clinical Transcranial Magnetic Stimulation (TMS) Experiments

Transcranial magnetic stimulation (TMS) is a very useful technique for neurophysiological and neuropsychological investigations. In this paper we propose a user-friendly, fully customizable system that allows experimental control and data recording for all the currently used TMS paradigms (single- and paired-pulse TMS). The system consists of two parts: 1) a user interface that allows medical doctors to customize the settings of their experiments and includes post-processing and statistical tools for analyzing the acquired patient data, and 2) a hardware interface that communicates with the existing TMS equipment. New post-processing algorithms and new user settings can be easily added without interfering with the communication with the hardware. The proposed system was used to conduct a clinical experiment estimating patterns of cortical excitability in patients with geriatric depression and subcortical ischemic vascular disease, achieving very interesting results from the medical point of view.

Automatic fish classification for underwater species behavior understanding

Neural Network Combined with Fuzzy Logic to Remove Salt and Pepper Noise in Digital Images

Image denoising is an important step in the pre-processing of images. The aim of the paper is to remove salt-and-pepper noise from images by using a novel filter based on a neural network and fuzzy logic. With this filter it is possible to correct only the pixels that are really affected by noise, thus avoiding the image distortion caused by removing good pixels. A comparison between the proposed filter and the classical median filter shows an increase of about 20% in the peak signal-to-noise ratio and a better capacity for preserving the details of the images. The proposed approach outperforms the existing algorithms and does not depend on the noise level.
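The 20% figure refers to the peak signal-to-noise ratio (PSNR). The snippet below sketches how such a comparison against the classical median-filter baseline can be set up; the `add_salt_pepper` helper, the noise density, and the window size are illustrative assumptions, and the proposed neuro-fuzzy filter itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB, the metric used in the comparison."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def add_salt_pepper(img, density=0.3, seed=0):
    """Corrupt a copy of img with salt-and-pepper noise at the given density."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0
    noisy[mask > 1 - density / 2] = 255
    return noisy

# clean = ...  # an 8-bit grayscale image as a NumPy array
# noisy = add_salt_pepper(clean)
# print("median-filter baseline PSNR:", psnr(clean, median_filter(noisy, size=3)))
```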

Evaluation of the Traffic Parameters in a Metropolitan Area by Fusing Visual Perceptions and CNN Processing of Webcam Images

IEEE Transactions on Neural Networks, 2008

This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system where visual human perceptions sent by people operating in the area and video sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. The paper presents the whole methodology for data collection and analysis and compares the accuracy and processing time of the proposed soft-computing techniques with other existing algorithms. Moreover, it discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time likely needed to reach any destination in the traffic network.

An Automated Tool for Face Recognition using Visual Attention and Active Shape Models Analysis

An entirely automated approach for recognizing a person's face from his/her images is presented. The approach uses a computational attention module to automatically find the most relevant facial features through the focus of attention (FOA). These features are used to build the model of a face during the learning phase and to recognize it during the testing phase. The landmarking of the features is performed by applying the active contour model (ACM) technique, whereas the active shape model (ASM) is adopted for constructing a flexible model of the selected facial features. The advantages of this approach and opportunities for further improvements are discussed.

Automatic skeletal bone age assessment by integrating EMROI and CROI processing

In this work we propose a fully automatic system for bone age evaluation, according to the Tanner and Whitehouse method (TW2), based on the integration of EMROI and CROI analysis, which ensures accurate bone age assessment for the entire age range (0-10 years). Novel segmentation techniques are proposed for both approaches. In detail, for the CROI analysis the bone extraction is carried out by integrating anatomical knowledge of the hand with trigonometric concepts, whereas the TW2 stage assignment is implemented by combining active contour models (ACM) and a derivative difference-of-Gaussian (DrDoG) filter. For the EMROI analysis, image processing techniques and geometrical feature analysis based on the difference of Gaussians (DoG) are proposed. The experiments were conducted on a set of 30 X-rays, reaching an accuracy of about 87%. The performance of the proposed method is affected by the detection and extraction of the trapezium and trapezoid bones (50%); without considering these bones, the success rate rises to 91%.
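Both ROI analyses rely on difference-of-Gaussian style filtering to enhance bone contours before the geometrical analysis. The sketch below shows a plain DoG band-pass response as the underlying building block; the sigma values are illustrative assumptions, and the derivative variant (DrDoG) used in the paper is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(img, sigma_low=1.0, sigma_high=2.0):
    """Band-pass response that highlights edges and contours such as bone
    outlines; the two sigmas are illustrative values, not those of the paper."""
    img = img.astype(np.float64)
    return gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)

# edges = difference_of_gaussians(xray)   # xray: 2-D grayscale NumPy array
```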

An interactive interface for remote administration of clinical tests based on eye tracking
