Shanto Rahman - Academia.edu
Papers by Shanto Rahman
Functional Plant Biology, 2022
The density and guard cell length of stomata regulate physiological processes in plants. Yet, the variation of stomatal characteristics among different functional groups of trees is not well understood. In particular, a comprehensive understanding of stomatal behaviour in Bangladeshi moist forest trees is lacking. This study investigated how abaxial stomatal density (SD) and guard cell length (GCL) vary among tree functional types and leaf phenological groups in a moist tropical forest of Bangladesh. A cluster dendrogram revealed three groups of species based on SD and GCL. An independent-sample t-test showed a significant difference in SD between evergreen and deciduous tree species (t = 4.18, P < 0.001) but no significant difference in GCL between the two phenological groups. ANOVA revealed no significant difference in SD among the light-demanding, intermediate shade-tolerant and shade-tolerant species (F = 0.76, P = 0.47). However, GCL significantly differ...
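A minimal sketch of the two statistical comparisons reported above, using SciPy; the group arrays and values below are placeholders rather than the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder stomatal density (SD) measurements per group; not the study's data.
sd_evergreen = np.array([310.2, 295.4, 330.1, 288.7, 305.9])
sd_deciduous = np.array([250.3, 240.8, 262.5, 255.1, 248.6])

# Independent-sample t-test: SD of evergreen vs. deciduous species.
t_stat, p_val = stats.ttest_ind(sd_evergreen, sd_deciduous)
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")

# One-way ANOVA: SD across light-demanding, intermediate, and shade-tolerant species.
sd_light = np.array([300.1, 280.5, 310.7])
sd_intermediate = np.array([290.2, 305.3, 285.8])
sd_shade = np.array([295.6, 275.4, 288.9])
f_stat, p_anova = stats.f_oneway(sd_light, sd_intermediate, sd_shade)
print(f"F = {f_stat:.2f}, P = {p_anova:.2f}")
```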
Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering, 2017
Existing bug localization techniques suggest source code methods or classes as buggy, which requires manual investigation to find the buggy statements. Considering that issue, this paper proposes Statement level Bug Localization (SBL), which can effectively identify buggy statements from the source code. In SBL, relevant buggy methods are ranked using dynamic analysis followed by static analysis of the source code. For each ranked buggy method, a Method Statement Dependency Graph (MSDG) is constructed where each statement acts as a node of the graph. Since each statement contains little information on its own, its content is enriched by combining the contents of each node and its predecessor nodes in the MSDG, resulting in a Node Predecessor-node Dependency Graph (NPDG). To identify relevant statements for a bug, similarity is measured between the bug report and each node of the NPDG using the Vector Space Model (VSM). Finally, the buggy statements are ranked based on the similarity scores. Rigorous experiments on three open source projects, namely Eclipse, SWT and PasswordProtector, show that SBL localizes the buggy statements with reasonable accuracy.
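The VSM ranking step described above can be sketched as follows with scikit-learn; the node texts and the bug report are toy strings, and this is not the authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy NPDG node texts (each node = a statement merged with its predecessors) and a bug report.
node_texts = [
    "file stream open read password buffer",
    "encrypt password buffer write file stream close",
    "render dialog button label text",
]
bug_report = "password not encrypted before writing to file"

# Vector Space Model: tf-idf vectors over the shared vocabulary.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(node_texts + [bug_report])
node_vecs, report_vec = matrix[:-1], matrix[-1]

# Rank nodes (statements) by cosine similarity to the bug report.
scores = cosine_similarity(node_vecs, report_vec).ravel()
ranking = sorted(enumerate(scores), key=lambda kv: kv[1], reverse=True)
for idx, score in ranking:
    print(f"node {idx}: similarity = {score:.3f}")
```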
Environment, Development and Sustainability, 2020
We performed a global analysis with data from 149 countries to test whether temperature can explain the spatial variability of the spread rate and mortality of COVID-19 at the global scale. We performed partial correlation analysis and linear mixed effect modelling to evaluate the association of the spread rate and mortality of COVID-19 with maximum, minimum and average temperatures, diurnal temperature variation (the difference between daytime maximum and night-time minimum temperature) and other environmental and socioeconomic parameters. After controlling for the effect of the duration since the first positive case, partial correlation analysis revealed that temperature was not related to the spatial variability of the spread rate of COVID-19 at the global scale. Mortality was negatively related to temperature in countries with high-income economies. In contrast, diurnal temperature variation was significantly and positively correlated with mortality in the low- and middle-income countries. Taking country heterogeneity into account, mixed effect modelling revealed that including temperature as a fixed factor in the model significantly improved model skill in predicting mortality in the low- and middle-income countries. Our analysis suggests that a warm climate may reduce the mortality rate in high-income economies, but in low- and middle-income countries, high diurnal temperature variation may increase the mortality risk.
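A rough sketch of the two analyses named above (a partial correlation controlling for epidemic duration, and a linear mixed-effects model), using SciPy and statsmodels; the data frame, column names, and grouping variable below are synthetic placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Synthetic country-level table; columns and values are placeholders, not the study's data.
rng = np.random.default_rng(0)
n = 149
df = pd.DataFrame({
    "mortality": rng.gamma(2.0, 1.5, n),
    "avg_temp": rng.normal(18, 8, n),
    "diurnal_range": rng.normal(9, 3, n),
    "days_since_first_case": rng.integers(20, 120, n),
    "region": rng.choice(["Africa", "Asia", "Europe", "Americas", "Oceania"], n),
})

# Partial correlation of mortality and temperature, controlling for epidemic duration:
# correlate the residuals after regressing each variable on the control.
def residuals(y, x):
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (slope * x + intercept)

r, p = stats.pearsonr(
    residuals(df["mortality"], df["days_since_first_case"]),
    residuals(df["avg_temp"], df["days_since_first_case"]),
)
print(f"partial r = {r:.3f}, P = {p:.3f}")

# Linear mixed-effects model with a random intercept per region,
# temperature terms entering as fixed effects.
model = smf.mixedlm(
    "mortality ~ avg_temp + diurnal_range + days_since_first_case",
    df, groups=df["region"],
)
print(model.fit().summary())
```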
IPSJ Transactions on Computer Vision and Applications, 2018
Traditional image enhancement techniques produce different types of noise such as unnatural effects, over-enhancement, and artifacts, and these drawbacks become more prominent when enhancing dark images. To overcome these drawbacks, we propose a dark image enhancement technique in which a local transformation of the pixels is performed. Here, we apply a transformation method to different parts of the histogram of an input image to obtain a desired histogram. Afterwards, a histogram specification technique is applied to the input image using this transformed histogram. The performance of the proposed technique has been evaluated both qualitatively and quantitatively, which shows that the proposed method improves the quality of the image with minimal unexpected artifacts compared to other techniques.
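A compact sketch of the histogram specification step, assuming a grayscale image and a 256-bin target histogram; the flat target used below merely stands in for the transformed histogram the paper constructs.

```python
import numpy as np

def specify_histogram(image, target_hist):
    """Map a grayscale image so its histogram approximates target_hist (256 bins)."""
    src_hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / image.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    # For each source level, find the target level with the closest CDF value.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[image]

# Toy usage: push a dark image toward a flatter (brighter) target histogram.
dark = np.clip(np.random.default_rng(1).normal(40, 15, (64, 64)), 0, 255).astype(np.uint8)
flat_target = np.ones(256)          # stand-in for the transformed histogram of the paper
enhanced = specify_histogram(dark, flat_target)
print(dark.mean(), enhanced.mean())
```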
Signal, Image and Video Processing, 2017
The census transform and its variants have gained popularity in image classification for their simplicity and strong performance. To describe a texture pattern, these approaches generally use only sign information when comparing neighbouring pixels. However, our observation is that sign and magnitude, in a single color channel as well as across different color channels, hold complementary information: the sign component captures texture in an image, while the saliency of that texture can be captured by the magnitude component. Considering these issues, a multi-channel complementary census transform (MCCT) is proposed in this paper that combines all of this information to capture more discriminative features. Rigorous experiments on nine different datasets belonging to six different applications, namely flower, gender, aerial orthoimagery, event, leaf, and indoor and outdoor scene classification, demonstrate that MCCT outperforms existing state-of-the-art techniques.
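The following sketch illustrates the general idea of pairing a census-style sign code with a magnitude cue on each colour channel; the neighbourhood, the magnitude measure, and the channel fusion shown here are illustrative assumptions, not the exact MCCT formulation.

```python
import numpy as np

def census_sign_magnitude(channel):
    """8-neighbour census code per pixel: a sign code (binary comparisons) plus
    the mean absolute difference as a magnitude cue. Illustrative only."""
    padded = np.pad(channel.astype(np.int16), 1, mode="edge")
    h, w = channel.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    sign_code = np.zeros((h, w), dtype=np.uint8)
    magnitude = np.zeros((h, w), dtype=np.float32)
    center = padded[1:h + 1, 1:w + 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        diff = neighbour - center
        sign_code |= ((diff >= 0).astype(np.uint8) << bit)
        magnitude += np.abs(diff)
    return sign_code, magnitude / len(offsets)

# Per-channel histograms of the sign code, weighted by magnitude, could then be
# concatenated across R, G, B to form a multi-channel descriptor.
rgb = np.random.default_rng(2).integers(0, 256, (32, 32, 3), dtype=np.uint8)
feats = [census_sign_magnitude(rgb[..., c]) for c in range(3)]
print(feats[0][0].shape, feats[0][1].shape)
```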
EURASIP Journal on Image and Video Processing, 2017
Despite the considerable effort devoted to designing feature descriptors, it is still challenging to find generalized feature descriptors, with acceptable discrimination ability, that can capture prominent features across various image processing applications. To address this issue, we propose a computationally feasible discriminative ternary census transform histogram (DTCTH) for image representation, which uses dynamic thresholds to capture the key properties of a good feature descriptor. The code produced by DTCTH is more stable against intensity fluctuation, and it mainly captures the discriminative structural properties of an image while suppressing unnecessary background information. Thus, DTCTH generalizes better to different applications with reasonable accuracy. To validate the generalizability of DTCTH, we have conducted rigorous experiments on five different applications over nine benchmark datasets. The experimental results demonstrate that DTCTH performs up to 28.08% better than existing state-of-the-art feature descriptors such as GIST, SIFT, HOG, LBP, CLBP, OC-LBP, LGP, LTP, LAID, and CENTRIST.
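A sketch in the spirit of a ternary census code with a dynamic, intensity-dependent threshold; the threshold rule (a fixed fraction of the centre pixel) and the histogram layout are assumptions for illustration, not the exact DTCTH definition.

```python
import numpy as np

def ternary_census_histogram(gray, alpha=0.1):
    """Local ternary coding with a dynamic, intensity-dependent threshold
    (a sketch in the spirit of DTCTH, not the authors' exact formulation)."""
    padded = np.pad(gray.astype(np.float32), 1, mode="edge")
    h, w = gray.shape
    center = padded[1:h + 1, 1:w + 1]
    threshold = alpha * center            # dynamic threshold scales with the pixel itself
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    upper = np.zeros((h, w), dtype=np.uint8)
    lower = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        upper |= ((neighbour >= center + threshold).astype(np.uint8) << bit)
        lower |= ((neighbour <= center - threshold).astype(np.uint8) << bit)
    # Two 256-bin histograms (upper / lower codes) concatenated as the descriptor.
    hist_u, _ = np.histogram(upper, bins=256, range=(0, 256))
    hist_l, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hist_u, hist_l])

gray = np.random.default_rng(3).integers(0, 256, (64, 64))
print(ternary_census_histogram(gray).shape)   # (512,)
```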
Communications in Computer and Information Science, 2016
In automatic software bug localization, source code classes and methods are commonly used as the unit of suggestion. However, existing techniques consider the whole source code when finding buggy locations, which degrades the accuracy of bug localization. In this paper, Method level Bug localization using Minimized code space (MBuM) is proposed, which improves accuracy by considering only bug-specific source code. This source code is then used to measure similarity to the bug report. The similarity scores are measured using a modified Vector Space Model (mVSM), and based on those scores MBuM produces a ranked list of source code methods. The validity of MBuM has been checked by providing theoretical proofs using formal methods. Case studies have been performed on two large-scale open source projects, namely Eclipse and Mozilla, and the results show that MBuM outperforms existing bug localization techniques.
Proceedings of the 11th International Conference on Evaluation of Novel Software Approaches to Software Engineering, 2016
In automatic software bug localization, source code analysis is usually used to localize the buggy code without manual intervention. However, considering irrelevant source code can bias localization accuracy. In this paper, Method level Bug localization using Minimized search space (MBuM) is proposed to improve accuracy by considering only the source code responsible for a bug. The relevant search space for a bug is extracted using the execution trace of the source code. By processing this relevant source code and the bug report, code and bug corpora are generated. Afterwards, MBuM ranks the source code methods based on the textual similarity between the bug and code corpora. To do so, a modified Vector Space Model (mVSM) is used, which incorporates the size of a method into the Vector Space Model. Rigorous experimental analysis using different case studies is conducted on two large-scale open source projects, namely Eclipse and Mozilla. Experiments show that MBuM outperforms existing bug localization techniques.
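A toy sketch of ranking methods by textual similarity to a bug report, with the similarity scaled by a method-length factor in the spirit of mVSM; the corpora, the logistic length weighting, and the method names in the example are assumptions, not the authors' implementation.

```python
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy method corpora extracted from an execution trace; not the authors' data.
methods = {
    "Editor.save":   "write buffer file stream flush close",
    "Editor.open":   "read file stream buffer decode",
    "Menu.render":   "draw label icon layout",
}
bug_report = "saved file is empty buffer never flushed"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(methods.values()) + [bug_report])
method_vecs, report_vec = matrix[:-1], matrix[-1]
base_scores = cosine_similarity(method_vecs, report_vec).ravel()

# Scale textual similarity by a length factor so larger methods are not unduly
# penalised; the logistic weighting here is an assumption for illustration.
def length_factor(text):
    return 1.0 / (1.0 + math.exp(-len(text.split()) / 10.0))

scores = {
    name: base * length_factor(text)
    for (name, text), base in zip(methods.items(), base_scores)
}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```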
2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), 2016
Deep learning is a new era of machine learning research, in which many layers of information processing stages are exploited for unsupervised feature learning. Using multiple levels of representation and abstraction, it helps a machine understand data (e.g., images, sound and text) more accurately. Many deep learning models have been proposed for solving problems in different application areas. Therefore, comprehensive knowledge of these models is needed to select the appropriate one for a specific application area in signal or data processing. This paper reviews several deep learning models proposed for different application areas in the field of computer vision, and makes a comprehensive evaluation of two well-known models, namely AlexNet and VGG_S, on nine different benchmark datasets. The experimental results show that these two models perform better than existing state-of-the-art deep learning models on one dataset.
EURASIP Journal on Image and Video Processing, 2016
Due to the limitations of image-capturing devices or the presence of a non-ideal environment, the quality of digital images may become degraded. In spite of much advancement in imaging science, captured images do not always fulfill users' expectations of clear and soothing views. Most existing methods mainly focus on either global or local enhancement, which might not be suitable for all types of images. These methods do not consider the nature of the image, whereas different types of degraded images may demand different types of treatment. Hence, we classify images into several classes based on the statistical information of the respective images. Afterwards, an adaptive gamma correction (AGC) is proposed to appropriately enhance the contrast of the image, where the parameters of AGC are set dynamically based on the image information. Extensive experiments along with qualitative and quantitative evaluations show that the performance of AGC is better than other state-of-the-art techniques.
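A minimal sketch of gamma correction with the exponent derived from image statistics; the specific rule below (mapping the mean intensity to mid-grey) is an illustrative simplification, since the paper sets the AGC parameters according to the statistical class of the image.

```python
import numpy as np

def adaptive_gamma_correction(gray):
    """Gamma correction with the exponent chosen from image statistics.
    This rule (map the mean intensity to mid-grey) is an illustrative assumption,
    not the exact parameterisation used in the paper."""
    norm = gray.astype(np.float32) / 255.0
    mean = float(norm.mean())
    gamma = np.log(0.5) / np.log(mean + 1e-6)   # mean < 0.5 gives gamma < 1 (brightening)
    corrected = np.power(norm, gamma)
    return (corrected * 255).astype(np.uint8), gamma

dark = np.clip(np.random.default_rng(4).normal(60, 12, (64, 64)), 0, 255).astype(np.uint8)
out, g = adaptive_gamma_correction(dark)
print(f"gamma = {g:.2f}: mean {dark.mean():.1f} -> {out.mean():.1f}")
```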
International Journal of Software Innovation, 2016
As image enhancement is a well-discussed issue, various methods have already been proposed to date. Some of these methods perform well for specific applications, but most of the techniques suffer from artifacts due to over- or under-enhancement. To mitigate this problem, a new technique named Bilateral Histogram Equalization for contrast enhancement (BHE) is introduced, which uses the harmonic mean of the image to divide the histogram. BHE is evaluated both qualitatively and quantitatively, and the results show that BHE creates fewer artifacts on several standard images than other existing state-of-the-art image enhancement techniques.
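A rough sketch of the BHE idea: split the intensity range at the image's harmonic mean and equalize each side within its own sub-range. The splitting and per-side equalization details are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def bilateral_histogram_equalization(gray):
    """Split the intensity range at the image's harmonic mean and equalize each
    side independently within its own sub-range (a sketch of the BHE idea)."""
    flat = gray.ravel().astype(np.float64)
    split = int(np.clip(stats.hmean(flat + 1.0) - 1.0, 1, 254))  # +1 avoids zeros in hmean
    out = np.empty_like(gray)

    def equalize(values, lo, hi):
        hist, _ = np.histogram(values, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / max(values.size, 1)
        return (lo + cdf[(values - lo).astype(int)] * (hi - lo)).astype(np.uint8)

    lower_mask = gray <= split
    out[lower_mask] = equalize(gray[lower_mask], 0, split)
    out[~lower_mask] = equalize(gray[~lower_mask], split + 1, 255)
    return out

gray = np.clip(np.random.default_rng(5).normal(120, 30, (64, 64)), 0, 255).astype(np.uint8)
print(bilateral_histogram_equalization(gray).std(), gray.std())
```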
2016 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2016
Various methods have been proposed for enhancing images. Some of them perform well in specific application areas, but most of the techniques suffer from artifacts due to over-enhancement. To overcome this problem, we have introduced a new image enhancement technique named Bilateral Histogram Equalization with Pre-processing (BHEP), which uses the harmonic mean to divide the histogram of the image. We have performed both qualitative and quantitative measurements in our experiments, and the results show that BHEP creates fewer artifacts on several standard images than existing state-of-the-art image enhancement techniques.
2015 18th International Conference on Computer and Information Technology (ICCIT), 2015
Locating buggy files is a time-consuming and challenging task because defects can originate from a large variety of sources. Researchers have therefore proposed several automated bug localization techniques, whose accuracy can still be improved. In this paper, an information retrieval based bug localization technique is proposed, in which buggy files are identified by measuring the similarity between the bug report and the source code. Besides this, source code structure and frequently changed files are also incorporated to produce a better ranking of buggy files. To evaluate the proposed approach, a large-scale experiment on three open source projects, namely SWT, ZXing and Guava, has been conducted. The results show that the proposed approach improves Mean Reciprocal Rank (MRR) by 7% and Mean Average Precision (MAP) by about 8% compared to existing techniques.
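The two evaluation metrics quoted above can be computed as follows; the ranked lists and ground-truth buggy files in the example are toy data, not the paper's results.

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over bug reports: reciprocal rank of the first relevant (buggy) file."""
    total = 0.0
    for bug_id, ranking in ranked_lists.items():
        for rank, f in enumerate(ranking, start=1):
            if f in relevant[bug_id]:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def mean_average_precision(ranked_lists, relevant):
    """MAP over bug reports: precision at each relevant file, averaged per report."""
    total = 0.0
    for bug_id, ranking in ranked_lists.items():
        hits, precisions = 0, []
        for rank, f in enumerate(ranking, start=1):
            if f in relevant[bug_id]:
                hits += 1
                precisions.append(hits / rank)
        total += sum(precisions) / max(len(relevant[bug_id]), 1)
    return total / len(ranked_lists)

# Toy example: two bug reports and their ranked file suggestions.
ranked = {"bug-1": ["A.java", "B.java", "C.java"], "bug-2": ["C.java", "A.java", "B.java"]}
truth = {"bug-1": {"B.java"}, "bug-2": {"C.java", "B.java"}}
print(mean_reciprocal_rank(ranked, truth), mean_average_precision(ranked, truth))
```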
2015 18th International Conference on Computer and Information Technology (ICCIT), 2015
This paper proposes a Noise Adaptive Binary Pattern (NABP) for facial image analysis tasks such as face recognition, expression recognition and gender classification. NABP encodes facial microstructures using an adaptive threshold and generates more discriminative patterns than other existing local feature descriptors. Rigorous experiments on two well-known datasets, LFW and CK+, for the three aforementioned applications demonstrate the excellence of NABP compared to current state-of-the-art methods.
International Journal of Information Technology and Computer Science, 2015
Gender recognition from facial images has become an important problem in the present world. It is one of the main problems in computer vision, and research on it is ongoing. Though several techniques have been proposed, most of them focus on facial images captured in controlled conditions. The problem arises when classification is performed in uncontrolled conditions with, for example, high noise or poor illumination. To overcome these problems, we propose a new gender recognition framework that first preprocesses and enhances the input images using Adaptive Gamma Correction with Weighting Distribution. We used the Labeled Faces in the Wild (LFW) database for our experiments, which contains real-life images captured in uncontrolled conditions. For measuring the performance of our proposed method, we have used the confusion matrix, precision, recall, F-measure, True Positive Rate (TPR), and False Positive Rate (FPR). In every case, our proposed framework performs better than other existing state-of-the-art techniques.
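A short sketch of the evaluation metrics listed above using scikit-learn; the labels and predictions are toy values, not the LFW results.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Toy gender labels (1 = male, 0 = female) and predictions; not the LFW experiment.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)          # True Positive Rate
fpr = fp / (fp + tn)          # False Positive Rate
print(f"precision={precision_score(y_true, y_pred):.2f} "
      f"recall={recall_score(y_true, y_pred):.2f} "
      f"F1={f1_score(y_true, y_pred):.2f} TPR={tpr:.2f} FPR={fpr:.2f}")
```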
The 8th International Conference on Software, Knowledge, Information Management and Applications (SKIMA 2014), 2014
Image enhancement processes an image to increase its visual information. Image quality can be degraded for several reasons, such as lack of operator expertise or the quality of image-capturing devices. The process of enhancing images may produce different types of noise, such as unnatural effects, over-enhancement and artifacts, and these drawbacks are more prominent in dark images. Over the years, many image enhancement techniques have been proposed; however, there have been few works specifically for dark image enhancement, and the available methods might not produce the desired output for dark images. To overcome these drawbacks, we propose a method for dark image enhancement. In this paper, we enhance images by applying a local transformation technique to the input image histogram. We smooth the input image histogram to locate its peaks and valleys. Several segments are identified using the valley-to-valley distance. A transformation method is then applied to each segment of the image histogram. Finally, histogram specification is applied to the input image using this transformed histogram. This method improves the quality of the image with minimal unexpected artifacts. Experimental results show that our method outperforms other methods in the majority of cases.
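The segmentation step (smooth the histogram, locate peaks and valleys, cut at the valleys) might look roughly like the following; the smoothing sigma and the synthetic dark image are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import gaussian_filter1d

# Sketch of the segmentation step: smooth the histogram, find peaks and valleys,
# and cut the intensity range at the valleys.
gray = np.clip(np.random.default_rng(6).normal(50, 20, (128, 128)), 0, 255).astype(np.uint8)
hist, _ = np.histogram(gray, bins=256, range=(0, 256))
smoothed = gaussian_filter1d(hist.astype(float), sigma=3)

peaks, _ = find_peaks(smoothed)            # local maxima of the smoothed histogram
valleys, _ = find_peaks(-smoothed)         # local minima = valleys between peaks

# Segment boundaries: start of the range, every valley, end of the range.
boundaries = [0, *valleys.tolist(), 255]
segments = list(zip(boundaries[:-1], boundaries[1:]))
print("peaks:", peaks[:5], "segments:", segments[:5])
```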
2014 17th International Conference on Computer and Information Technology (ICCIT), 2014
With the advancement of imaging science, image enhancement has become an important aspect of the image processing domain. It is necessary to gather comprehensive knowledge of existing enhancement technologies in order to identify and solve their problems and thus advance current image enhancement methodologies. This paper presents the underlying concepts of contrast enhancement, brightness preservation and brightness enhancement techniques. Besides this, we provide a short description of well-known existing enhancement methods with their mathematical descriptions and application areas. Moreover, experimental results are provided for a comparative analysis in which both qualitative and quantitative measurements are performed. Different enhancement methods are run on the same images to examine qualitative performance. Peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), execution time (ET) and discrete entropy (DE) are the metrics used for quantitative assessment. In most cases, it is found that Histogram Equalization has the highest degree of deviation from the input image, which generates more visual artifacts. The Contextual and Variational Contrast enhancement technique takes a long time to execute compared to other enhancement techniques. From our quantitative and qualitative evaluation, we find that Layered Difference Representation produces comparatively better enhancement results in all aspects than the other existing methods.
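For reference, the quantitative metrics mentioned above can be computed as in the sketch below; the NCC definition used here (zero-mean normalized correlation) is one common variant and may differ from the survey's exact formula, and the images are synthetic.

```python
import numpy as np

def psnr(original, enhanced):
    mse = np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def normalized_cross_correlation(original, enhanced):
    a = original.astype(np.float64) - original.mean()
    b = enhanced.astype(np.float64) - enhanced.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def discrete_entropy(image):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
enhanced = np.clip(img.astype(np.int16) + 10, 0, 255).astype(np.uint8)
print(psnr(img, enhanced), normalized_cross_correlation(img, enhanced), discrete_entropy(enhanced))
```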