Hamid Pilevar - Academia.edu
Papers by Hamid Pilevar
Tehran University Medical Journal TUMS Publications, 2015
Background: Magnetic resonance imaging (MRI) is widely applied for the examination and diagnosis of brain tumors because of its high resolution in soft tissue and, in particular, because it does not expose the body to harmful ionizing radiation. The goal of the image processing is the automatic segmentation of brain edema and tumors in the different dimensions of the magnetic resonance images. Methods: The proposed method is unsupervised: it discovers the tumor region, if there is one, by analyzing the similarity between the two hemispheres, computing a goal function based on the Bhattacharyya coefficient that is used in the next stage to detect the tumor region or part of it. In this stage, to reduce intensity variation, the gray-level brain image is segmented and then mapped back to gray levels. A self-organizing map (SOM) neural network is then used to color the segmented brain image, and finally the tumor is detected by matching the detected region against the colored image. This method is proposed for the analysis of MRI images.
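The hemisphere-comparison step can be sketched as follows. This is a minimal illustration, not the authors' code: the bin count, the intensity range, and the helper names are assumptions, and the paper's goal function may use the coefficient differently.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two histograms.
    1.0 means identical distributions; lower means more dissimilar."""
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    return float(np.sum(np.sqrt(p * q)))

def hemisphere_similarity(slice_img, bins=32):
    """Split an axial slice at its vertical midline and compare the
    intensity histograms of the two hemispheres (hypothetical helper)."""
    h, w = slice_img.shape
    left, right = slice_img[:, : w // 2], slice_img[:, w // 2 :]
    right = right[:, ::-1]  # mirror so both halves share orientation
    hl, _ = np.histogram(left, bins=bins, range=(0, 255))
    hr, _ = np.histogram(right, bins=bins, range=(0, 255))
    # Small epsilon avoids division by zero on empty histograms.
    return bhattacharyya(hl + 1e-9, hr + 1e-9)
```

A healthy, roughly symmetric slice yields a coefficient near 1; a tumor in one hemisphere skews one histogram and lowers the score, flagging the slice for the later detection stage.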
Proceedings of the 7th International Conference on Software Paradigm Trends, 2012
Proceedings of the 7th International Conference on Software Paradigm Trends, 2012
Journal of Medical Signals & Sensors, 2011
Journal of Medical Signals & Sensors, 2012
Journal of Medical Signals and Sensors, May 28, 2011
Proceedings of the 11th Joint Conference on Information Sciences (JCIS), 2008
With the daily increase of documents on the internet, automatic language detection is becoming more important. In this paper we use a language detection system to classify and filter immoral web pages based on their contents. The system can detect the 10 languages most used in immoral web pages, including Farsi. We introduce a new combined method that consists of three parts: a URL processor, a page-encoding processor, and a text processor. To produce the final result, the system uses a voter that combines the outputs of these three parts. We used immoral web pages and labeled web pages as the input data set to build a linguistic model for each language and to evaluate the system. Our experiments show 95% accuracy.
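The voting stage that combines the three processors might look like the sketch below. The weighting scheme is an assumption (the abstract does not specify how the votes are combined); here the text processor simply gets a double vote and processors may abstain.

```python
from collections import Counter

def vote(url_lang, encoding_lang, text_lang, weights=(1, 1, 2)):
    """Combine the verdicts of the URL, page-encoding, and text
    processors into one language label (illustrative weights)."""
    ballots = Counter()
    for lang, w in zip((url_lang, encoding_lang, text_lang), weights):
        if lang is not None:  # a processor may abstain
            ballots[lang] += w
    if not ballots:
        return "unknown"
    best, _count = ballots.most_common(1)[0]
    return best
```

For example, if the URL and encoding processors disagree but the text processor says Farsi, its heavier vote decides the page's language.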
Computers in Biology and Medicine, 2014
This work focuses on the application of a new technique in the first steps of computer-aided detection (CAD) of lung nodules. The scheme segments the lung volume and detects most of the nodules with a low number of false-positive (FP) objects. In the lung segmentation, juxtapleural nodules are properly included and the airways are excluded. Among the suspicious regions obtained from a multiscale dot-enhancement filter, those containing the centers of nodule candidates are determined. These center points are obtained from a 3D blob detector based on Laplacian-of-Gaussian kernels. Then the volumetric shape index (SI), which encodes local 3D shape information, is calculated for the voxels in the determined regions. The performance of the scheme was evaluated on 42 CT images from the Lung Image Database Consortium (LIDC). The results show an average of 38.8 FPs per scan at a sensitivity of 95.9% in the initial detections. The scheme adapts to nodules with wide variations in size, shape, intensity, and location. Comparison with previously reported results indicates that the proposed scheme can be satisfactorily applied for the initial detection of lung nodules in chest CT images.
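The candidate-center step (multiscale 3D Laplacian-of-Gaussian blob detection) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the sigma set and threshold are assumptions, and the dot-enhancement filter and shape-index stages are omitted.

```python
import numpy as np
from scipy import ndimage

def log_blob_centers(volume, sigmas=(1.5, 2.5, 4.0), threshold=0.1):
    """Multiscale 3D LoG blob detector returning candidate-center
    voxel coordinates. Bright blobs give a strongly negative LoG
    response, so the response is negated before thresholding."""
    best = np.full(volume.shape, -np.inf)
    for s in sigmas:
        # Scale-normalized response so different sigmas are comparable.
        resp = -(s ** 2) * ndimage.gaussian_laplace(volume.astype(float), s)
        best = np.maximum(best, resp)
    # Local maxima above threshold are nodule-center candidates.
    maxima = (best == ndimage.maximum_filter(best, size=3)) & (best > threshold)
    return np.argwhere(maxima)
```

Running this on a synthetic volume containing a small Gaussian blob recovers the blob's center; in the paper's pipeline these centers then select which dot-enhanced regions are kept for the shape-index analysis.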
Signal, Image and Video Processing, 2013
In this paper, a new method is presented for reducing salt-and-pepper noise in color images. The method consists of three steps. In the first step, Laplacian operators and threshold values are used to identify pixels that are likely to have been corrupted by noise; in the second step, these noise candidates are judged using each pixel's neighborhood; in the third step, a vector median filter replaces the pixels confirmed as noisy. The proposed algorithm is tested on different color images and gives a higher peak signal-to-noise ratio and a lower normalized mean square error. The results have also been checked and analyzed visually. The performance of the proposed method is compared with common existing vector filters at different noise densities. Extensive experiments show that the proposed algorithm outperforms many of the existing vector filters in noise suppression while preserving thin lines, fine details, and image edges.
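The three-step scheme can be sketched as follows. This is a simplified stand-in, not the paper's method: the extreme-value test replaces the Laplacian-based detection, and the thresholds are illustrative.

```python
import numpy as np

def vector_median(vectors):
    """Vector median of an (n, 3) set of RGB vectors: the member
    whose total L1 distance to all others is smallest."""
    d = np.abs(vectors[:, None, :] - vectors[None, :, :]).sum(axis=(1, 2))
    return vectors[np.argmin(d)]

def denoise(img, low=10, high=245):
    """Flag extreme-valued pixels as salt/pepper candidates, confirm
    them against the 3x3 neighborhood, and replace confirmed pixels
    with the vector median of their uncorrupted neighbors."""
    out = img.copy()
    h, w, _ = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            px = img[i, j]
            if (px <= low).all() or (px >= high).all():  # candidate
                win = img[i - 1:i + 2, j - 1:j + 2].reshape(-1, 3)
                # Confirm: keep only neighbors that look uncorrupted.
                clean = win[((win > low) & (win < high)).all(axis=1)]
                if len(clean):
                    out[i, j] = vector_median(clean)
    return out
```

The vector median operates on whole RGB vectors rather than on each channel separately, which is what preserves color edges better than scalar median filtering.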
Computational Linguistics and Intelligent Text Processing, 2011
Pattern Recognition Letters, 2005
Proceedings of the 9th Joint Conference on Information Sciences, JCIS 2006, 2006
Information Sciences 2007, 2007
Information Sciences 2007, 2007
Proc. 2006 Joint Conference on Information Sciences (JCIS 2006)
OCR makes it possible for the user to edit or search a document's contents. In this paper, we describe a special water-fill technique for detecting upside-down text documents. Each character has certain upside and downside filling capacities: a character may have filling capacity on both sides, on one side, or on neither. The total upside and downside capacities for the scanned page are calculated, and the page with the larger downside capacity is decided to be upright. The merit of the algorithm is that it requires only simple arithmetic operations per image pixel. Our experimental results, based on detecting inversion in 100 documents, demonstrate a detection performance of more than 98%, indicating the validity of the proposed method.
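A per-column sketch of the water-fill idea is given below. This is a simplified reading of the technique, assuming the capacity of a glyph is the water trapped on its top surface (upside) and on its bottom surface after a vertical flip (downside); the paper's exact fill rule may differ.

```python
import numpy as np

def trapped(heights):
    """Classic 1-D rain-water trapping on a surface profile."""
    if len(heights) == 0:
        return 0
    left = np.maximum.accumulate(heights)
    right = np.maximum.accumulate(heights[::-1])[::-1]
    return int(np.sum(np.minimum(left, right) - heights))

def fill_capacities(binary):
    """Upside and downside water-fill capacities of a binary glyph
    image (2-D array, nonzero = ink), computed column by column."""
    H, _ = binary.shape
    ink = binary > 0
    cols = np.where(ink.any(axis=0))[0]
    # Height of the topmost ink pixel in each occupied column.
    top = np.array([H - np.argmax(ink[:, j]) for j in cols])
    # Same measurement on the vertically flipped glyph.
    flipped = ink[::-1, :]
    bottom = np.array([H - np.argmax(flipped[:, j]) for j in cols])
    return trapped(top), trapped(bottom)  # (upside, downside)
```

An 'n'-shaped glyph, for instance, holds no water from above but a large amount from below; summing these capacities over a page and comparing the totals gives the orientation decision, using only simple per-pixel arithmetic as the abstract claims.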