Madhabananda Das | KIIT University
Papers by Madhabananda Das
Materials Today: Proceedings, 2021
Abstract In modern clinical diagnosis, image processing plays a significant role. Digital imaging of vital organs, along with their processing and analysis, is one of the most important areas of research and development for achieving maximum diagnostic accuracy, and it has become an integral part of medical diagnosis. The techniques applied range from radiology to ultrasonography, Magnetic Resonance Imaging (MRI), etc.; in many preliminary clinical diagnoses, an image of the affected organ is one of the most effective aids to the diagnostic process. State-of-the-art digital imaging equipment has been introduced to capture and render images of a patient's internal organs so that the diagnostic process can be carried out more accurately. To gather more relevant diagnostic data, the electronically captured digitized images need to be fine-tuned digitally. The methods and techniques involved in fine-tuning the images include cleaning the image by removing noise, separating different shades by segmenting the image, finding the threshold point, etc. Segmenting an image separates a marked area from the rest of the image so that it stands out more conspicuously. Many techniques have evolved to perform segmentation; one of them is image thresholding, and many tools have been introduced to execute its steps. Here we use the Particle Swarm Optimization (PSO) technique to segment brain MRI scans and detect brain lesions using image thresholding.
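The abstract above combines PSO with image thresholding. A minimal sketch of the idea, not the paper's exact formulation: particles search the grey-level axis for the threshold maximizing Otsu's between-class variance on a histogram (all parameter values here are illustrative choices).

```python
import random

def between_class_variance(hist, t):
    """Otsu's between-class variance for threshold t on a 256-bin histogram."""
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * hist[i] for i in range(t)) / w0
    mu1 = sum(i * hist[i] for i in range(t, 256)) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=40, seed=1):
    """PSO over the grey-level axis: each particle is a candidate threshold."""
    rng = random.Random(seed)
    pos = [rng.uniform(1, 255) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [between_class_variance(hist, int(p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) \
                     + 1.5 * r2 * (gbest - pos[i])
            pos[i] = min(255.0, max(1.0, pos[i] + vel[i]))
            f = between_class_variance(hist, int(pos[i]))
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i], f
    return int(gbest)

# Synthetic bimodal histogram: dark background around 60, bright lesion around 200.
hist = [0] * 256
for v in range(40, 81):
    hist[v] = 100
for v in range(180, 221):
    hist[v] = 30
t = pso_threshold(hist)
```

The found threshold should fall in the gap between the two intensity modes, separating lesion from background.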
Advances in Intelligent Systems and Computing, 2018
Advances in Intelligent Systems and Computing, 2019
This paper provides a framework for non-linearly classifying a microarray dataset to determine the existence of malignant neoplasm in the patient's sample set. An m × n microarray representation is used for the patients' sample data. Our model aims at predicting the class label of an unknown new sample that enters the system at runtime. With the help of microarray data, we have applied game theory approaches together with computational methods to solve the problem. An apodictic approach for constructing an optimized RFE (Recursive Feature Elimination) feature selection model using the Dualist Algorithm is incorporated. Furthermore, the optimized features are subjected to non-linear classification using decision tree, k-nearest neighbor and logistic regression on the Wisconsin Breast Cancer dataset. The simulations carried out using the above techniques show the Dualist algorithm with RFE, combined with four different classifier models (logistic regression, k-nearest neighbor, decision tree and random forest), to be a better choice for classification.
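The RFE core of the pipeline above can be sketched without the Dualist optimizer: train a linear scorer, drop the feature with the smallest weight magnitude, repeat. This is a toy version on synthetic data, not the paper's tuned model; the logistic trainer and data generator are illustrative.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=200):
    """Plain full-batch gradient-descent logistic regression; returns weights."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for _ in range(epochs):
        grad = [0.0] * n_feat
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(n_feat):
                grad[j] += (p - yi) * xi[j]
        for j in range(n_feat):
            w[j] -= lr * grad[j] / len(X)
    return w

def rfe(X, y, n_keep):
    """Recursively drop the feature with the smallest |weight| until n_keep remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        Xa = [[row[j] for j in active] for row in X]
        w = train_logistic(Xa, y)
        worst = min(range(len(active)), key=lambda k: abs(w[k]))
        del active[worst]
    return active

# Synthetic data: feature 0 determines the class, features 1-3 are noise.
rng = random.Random(0)
X, y = [], []
for _ in range(80):
    label = rng.randint(0, 1)
    X.append([(2.0 if label else -2.0) + rng.gauss(0, 0.5),
              rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)])
    y.append(label)
selected = rfe(X, y, n_keep=1)
```

RFE should retain only the informative feature, since the noise features receive near-zero weights at every elimination round.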
Advances in Intelligent Systems and Computing, 2018
We propose a morphological approach based on evolutionary learning for software development cost estimation (SDCE). The dilation-erosion perceptron (DEP), a hybrid artificial neuron, is built on the mathematical morphology (MM) framework and has its roots in complete lattice theory. The proposed work also presents an evolutionary learning procedure, a chaotic modified genetic algorithm (CMGA), to construct the DEP(CMGA) model, overcoming the drawbacks that arise in estimating the morphological operators' gradient in the classical DEP learning procedure. The experimental analysis was conducted on five different SDCE problems and evaluated using three performance measurement metrics.
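A common formulation of the dilation-erosion perceptron outputs a convex combination of a morphological dilation and erosion of the input; the forward pass can be sketched as below (weights and the mixing factor here are arbitrary example values, and the CMGA training procedure is not shown).

```python
def dep_forward(x, a, b, lam):
    """Dilation-erosion perceptron forward pass.
    dilation(x) = max_j (x_j + a_j); erosion(x) = min_j (x_j + b_j);
    output = lam * dilation + (1 - lam) * erosion, with lam in [0, 1]."""
    dilation = max(xj + aj for xj, aj in zip(x, a))
    erosion = min(xj + bj for xj, bj in zip(x, b))
    return lam * dilation + (1.0 - lam) * erosion

y = dep_forward([1.0, 3.0, 2.0], a=[0.5, 0.0, -1.0], b=[0.0, -0.5, 0.2], lam=0.6)
```

Because max and min are not differentiable everywhere, gradient estimation for a and b is awkward, which is the motivation the abstract gives for training the parameters with an evolutionary search instead.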
2016 International Conference on Inventive Computation Technologies (ICICT), 2016
Data-mining-based classification plays an important role in the field of healthcare. Diagnosing health conditions is a very important and challenging task in medical science, where various types of diseases are diagnosed. Thyroid disease is one of the critical diseases: it is a very serious problem that affects human health. Thyroid disease classification is an important problem in medical science because it is directly related to the health condition of the human body; this type of disease can be addressed through proper identification and careful treatment. This paper presents a survey of thyroid diagnosis. Various authors have worked in the field of thyroid disease classification and reported classification accuracy with robust models. This survey also focuses on the various techniques applied to the classification of thyroid data.
This paper puts forward a fresh approach, a modification of the original fuzzy kNN, for dealing with categorical missing values in categorical and mixed-attribute datasets. We first remove the irrelevant missing samples through list-wise deletion. The remaining missing samples are then estimated using a kernel-based fuzzy kNN technique and a partial distance strategy. We have calculated the errors at different percentages of missing values. Results highlight that the mixture kernel gives the minimum average MAE, MAPE and RMSE across missing percentages when implemented on the lenses, SPECT heart and abalone datasets.
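The partial distance strategy mentioned above compares two samples only on attributes observed in both and rescales by the fraction observed. A minimal sketch for categorical attributes (the mismatch metric and `None`-for-missing convention are illustrative, not the paper's exact definitions):

```python
def partial_distance(x, y):
    """Partial distance: compare only attributes observed in both samples,
    then rescale by total/observed count (None marks a missing value)."""
    diffs, observed = 0.0, 0
    for xi, yi in zip(x, y):
        if xi is None or yi is None:
            continue  # skip attributes missing in either sample
        observed += 1
        diffs += 0.0 if xi == yi else 1.0  # simple mismatch for categoricals
    if observed == 0:
        return float("inf")  # nothing comparable
    return diffs * len(x) / observed

d = partial_distance(["a", None, "c", "d"], ["a", "b", "x", None])
```

Here two of the four attributes are comparable and one of them mismatches, so the raw distance 1 is rescaled by 4/2. Such distances can then feed the fuzzy kNN vote that estimates the missing value.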
Literature has proved the individual performance of ABC and PSO in solving various optimization problems. However, as PSO searches for the solution by updating particles and ABC searches through the bees' wandering behavior, drawbacks persist in their individual performance. Hence, in our previous work, we proposed a hybrid swarm optimization technique to outperform the individual performance of ABC and PSO. The experimentation was done using standard benchmark test functions, and comparisons were made against the individual performance of PSO and ABC. This work is an extension of our previous one, in which we take an image processing problem, Content-Based Image Retrieval (CBIR), to evaluate the performance of the proposed hybrid algorithm. CBIR systems are popular image processing systems in which relevant images are retrieved from a huge database when a query image is given. In such CBIR systems, multiple features are used to determine the relevance of...
In swarm intelligence, ABC is a recently developed and very popular algorithm, while PSO is less popular; PSO lags in finding global solutions, whereas ABC's neighborhood search is not sufficient to accelerate the convergence rate. The hybrid technique is developed in such a way that it can solve the issues that arise in PSO and ABC individually. As ABC outperforms in most problems, it is selected as the primary algorithm, and the swarming behavior of PSO's particles is incorporated into the bees. A compromise neighborhood search model is developed for ABC to support accelerated neighborhood search, combining the particle-update behavior of PSO's particles with ABC's neighborhood search. The introduction of this neighborhood search model fine-tunes the neighborhood search of the employed and onlooker bees, which helps the hybrid converge faster than conventional ABC and PSO. The tests will be carried out using standard benchmark test function models and the performa...
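One way to realize the hybrid idea described above is to add a PSO-style pull toward the global best inside ABC's employed-bee move. The sketch below is an assumption about how such a hybrid could look, not the authors' exact model; it minimizes the sphere benchmark function with arbitrary example parameters.

```python
import random

def sphere(x):
    """Classic benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def hybrid_abc_pso(dim=2, n_bees=15, iters=150, seed=3):
    """ABC skeleton whose employed-bee move also pulls toward the global best,
    mimicking PSO's social term (an illustrative hybrid, parameters arbitrary)."""
    rng = random.Random(seed)
    foods = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bees)]
    fits = [sphere(f) for f in foods]
    best = min(foods, key=sphere)[:]
    for _ in range(iters):
        for i in range(n_bees):
            k = rng.randrange(n_bees)       # random neighbour food source
            j = rng.randrange(dim)          # ABC perturbs one dimension
            cand = foods[i][:]
            phi = rng.uniform(-1, 1)
            # ABC neighbour step plus a PSO-style pull toward the global best.
            cand[j] += phi * (foods[i][j] - foods[k][j]) \
                       + rng.uniform(0, 1.5) * (best[j] - foods[i][j])
            if sphere(cand) < fits[i]:      # greedy selection, as in ABC
                foods[i], fits[i] = cand, sphere(cand)
                if fits[i] < sphere(best):
                    best = foods[i][:]
    return best

best = hybrid_abc_pso()
```

On the sphere function the added social term drags the whole colony toward the incumbent best, so the hybrid should settle near the origin well within the given budget.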
Automatic text summarization (ATS) is a widely used approach. Through the years, various techniques have been implemented to produce summaries. An extractive summary is a traditional mechanism for information extraction, in which the important sentences that reflect the basic concepts of the article are selected. In this paper, extractive summarization is treated as a classification problem. Machine learning techniques have been applied to classification problems in various domains. To solve the summarization problem in this paper, machine learning is taken into consideration, and the KNN, random forest, support vector machine, multilayer perceptron, decision tree and logistic regression algorithms have been implemented on the Newsroom dataset.
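Treating extraction as classification means turning each sentence into a feature vector and labelling it keep/skip. The feature set below (relative position, length, title overlap) is an illustrative assumption, not the paper's actual features:

```python
def sentence_features(sentence, position, total, title_words):
    """Three simple features for extractive summarisation as classification:
    relative position in the document, word count, and overlap with the title."""
    words = set(sentence.lower().split())
    overlap = len(words & title_words) / max(1, len(title_words))
    return [position / total, len(words), overlap]

title = set("pso based brain mri segmentation".split())
feats = sentence_features("PSO segments the brain MRI scans.", position=0, total=4,
                          title_words=title)
```

Vectors like `feats`, paired with keep/skip labels from reference summaries, form the training set for any of the listed classifiers (KNN, random forest, SVM, etc.).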
Object recognition is a prominent research area in computer science. It is used to solve a variety of problems such as image processing, medical diagnostics, compression and surveillance. The primary goal of object recognition is to recognize the different objects present in an image, even when the objects' size, shape and other features change. The challenge in object recognition is to recognize different objects using features invariant to rotation, scaling and translation. In this work we apply the Cat Swarm Optimization technique to recognize objects. We then compare our results with those obtained from particle swarm optimization and genetic algorithm techniques on the same task, and achieve better results in a more efficient manner. Keywords: Object Recognition, Image Processing, Cat Swarm Optimization, Particle Swarm Optimization
One of the most commonly used techniques in the recommendation framework is collaborative filtering (CF). It performs well with sufficient records of user ratings but poorly on sparse data. Content-based filtering works well on sparse datasets, as it finds the similarity between movies using the movies' attributes. RBM is an energy-based model serving as a backbone of deep learning and performs well in rating prediction. However, rating prediction by a single model is not preferable; a hybrid model achieves better results by integrating the results of more than one model. This paper analyses a weighted hybrid CF system that integrates content K-nearest neighbors (KNN) with a restricted Boltzmann machine (RBM). In the proposed system, movies are recommended to the active user by integrating the effects of both content-based and collaborative filtering. Model efficacy was tested with the MovieLens benchmark datasets.
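A weighted hybrid of the kind described above typically blends the two models' predicted ratings with a tunable mixing weight; the one-liner below is a sketch of that combination step only (the KNN and RBM predictors themselves, and the weight value, are assumed):

```python
def hybrid_rating(knn_pred, rbm_pred, alpha=0.5):
    """Weighted hybrid: blend a content-KNN prediction with an RBM prediction.
    alpha is a tunable mixing weight (0 = RBM only, 1 = KNN only)."""
    return alpha * knn_pred + (1.0 - alpha) * rbm_pred

r = hybrid_rating(knn_pred=4.0, rbm_pred=3.0, alpha=0.7)
```

In practice alpha would be chosen by validation on held-out ratings, letting the content-based side dominate for sparse users and the RBM side for well-rated ones.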
Computational Intelligence in Pattern Recognition, 2019
This paper is based on the optimization of the linear weights in a radial basis function neural network that connect the hidden layer and the output layer. A new optimization algorithm, the dualist algorithm, is applied to choose an optimal learning parameter. A conventional strategy of randomly selecting the radial basis function (RBF) centers and widths is used, while the weights are estimated by the gradient descent and least squares methods, respectively. The idea behind this study is to predict the occurrence of metastatic carcinoma in human cells by computational approaches. Our simulation compares the predictive accuracy of a harmony search-radial basis function network (RBFN) and a dualist-RBFN by optimizing the weight factor. The Wisconsin breast cancer dataset is used as a benchmark for our training pattern. The learning rate (weight factor) is taken as the optimized parameter to obtain the best possible solution.
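The least-squares step for the hidden-to-output weights of an RBFN can be sketched directly: with fixed Gaussian centers and width, the output weights solve the normal equations of the linear system H w = y. This toy 1-D regression (centers, width, and target function all illustrative) omits the dualist/harmony-search tuning entirely.

```python
import math

def rbf_design(xs, centers, width):
    """Gaussian RBF activations of the hidden layer for each input."""
    return [[math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
            for x in xs]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_output_weights(xs, ys, centers, width):
    """Least-squares output weights: solve (H^T H) w = H^T y."""
    H = rbf_design(xs, centers, width)
    n = len(centers)
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(xs)))
            for j in range(n)] for i in range(n)]
    Hty = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    return solve(HtH, Hty)

centers, width = [0.0, 0.5, 1.0, 1.5, 2.0], 0.4
xs = [i / 10 for i in range(21)]          # inputs on [0, 2]
ys = [math.sin(math.pi * x) for x in xs]  # target curve
w = fit_output_weights(xs, ys, centers, width)
H = rbf_design(xs, centers, width)
pred = [sum(wi * hi for wi, hi in zip(w, row)) for row in H]
err = max(abs(p - t) for p, t in zip(pred, ys))
```

With centers spanning the input range and a compatible width, the fitted network should track the sine curve closely; a metaheuristic would then tune the learning parameter on top of this.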
Wireless Personal Communications, 2021
2016 International Conference on Computing, Analytics and Security Trends (CAST), 2016
Web applications hosted on the Internet are naturally exposed to a variety of attacks and constantly probed by hackers for vulnerabilities. The SQL Injection Attack (SQLIA) has been a major security threat to web applications for over 15 years. Detecting SQLIA at runtime is a challenging problem because of the extreme heterogeneity of the attack vectors. This paper explores the application of node centrality metrics to train a Support Vector Machine (SVM) for identifying malicious queries containing SQL injection attacks. The WHERE clause portion of each SQL query is first normalized into a sequence of tokens and then modeled as an interaction network, from which the centrality of the nodes is computed. After feature selection by the information gain method, the centrality scores of high-ranking nodes are used to train the SVM classifier. We experiment with four centrality measures popularly used in Social Network Analysis (SNA). The results on five sample web applications built with PHP/MySQL show that this technique can effectively detect SQLIA with minimal performance overhead. Designed for the database firewall layer, the approach can protect multiple websites on a shared server, which is another advantage.
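The normalize-then-graph pipeline above can be sketched in a few lines: collapse literals and identifiers into generic tokens, link adjacent tokens into an interaction network, and compute degree centrality. The token scheme, regex, and adjacency-based edges are simplifying assumptions, not the paper's exact construction.

```python
import re
from collections import defaultdict

def normalize(where_clause):
    """Normalise a WHERE clause into generic tokens: strings -> STR,
    numbers -> NUM, identifiers -> ID, selected SQL keywords kept."""
    keywords = {"or", "and", "not", "union", "select"}
    tokens = []
    for tok in re.findall(r"'[^']*'|\d+|\w+|[=<>!]+|--|[(),;]",
                          where_clause.lower()):
        if tok.startswith("'"):
            tokens.append("STR")
        elif tok.isdigit():
            tokens.append("NUM")
        elif tok in keywords:
            tokens.append(tok.upper())
        elif tok[0].isalpha() or tok[0] == "_":
            tokens.append("ID")
        else:
            tokens.append(tok)          # operators and comment markers as-is
    return tokens

def degree_centrality(tokens):
    """Interaction network over adjacent tokens; degree centrality per node."""
    neighbours = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        if a != b:
            neighbours[a].add(b)
            neighbours[b].add(a)
    n = len(neighbours)
    return {node: len(adj) / (n - 1) for node, adj in neighbours.items()}

toks = normalize("username = 'x' OR 1 = 1 --")
cent = degree_centrality(toks)
```

On the classic tautology payload, the `=` and `NUM` nodes end up with the highest centrality; feature vectors built from such scores are what the SVM is trained on.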
International Journal of Intelligent Engineering and Systems, 2021
International Journal of Engineering & Technology, 2018
Image processing is a vital area of research and application in the field of medical imaging, and a major component of medical science. From radiology to ultrasound (sonography), MRI, etc., in many areas the image is the only source for the diagnosis process. Nowadays, different types of devices are being introduced in medical science to capture the internal body parts so that the diagnosis process can be carried out correctly. However, for various reasons, the captured images need to be tuned digitally to gain more information. These processes involve noise reduction, segmentation, thresholding, etc. Image segmentation is a process of segmenting the target area of an image to identify that area more prominently. Different processes have evolved to perform segmentation, one of which is image thresholding, and different tools have also been introduced to perform this step. The recently introduced PSO tool is being used here to segment the...
International Journal of Information Technology, 2018
Abstract Frequent requirement changes are a major point of concern in today's scenario. As a solution to such issues, agile software development (ASD) has efficiently replaced traditional methods of software development in industry. Because of the dynamics of the different aspects of ASD, it is very difficult to track, maintain and estimate the overall product. So, in order to solve the effort estimation problem (EEP) in ASD, different types of artificial neural networks (ANNs) have been applied. This work focuses on two types of ANN: the feedforward back-propagation neural network and the Elman neural network. These two networks have been applied to a dataset containing information on 21 ASD-based projects from 6 different software houses to analyze and solve the EEP. The proposed work also uses three performance metrics, i.e., mean magnitude of relative error (MMRE), mean square error (MSE) and prediction (PRED(x)), to examine the performance of the model. The results of the proposed models are compared to existing models in the literature.
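Two of the metrics named above have compact standard definitions, sketched here on made-up example data (the sample effort values are purely illustrative):

```python
def mmre(actual, estimated):
    """Mean magnitude of relative error: mean of |a - e| / a over all projects."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

def pred(actual, estimated, x=0.25):
    """PRED(x): fraction of estimates whose relative error is at most x."""
    hits = sum(1 for a, e in zip(actual, estimated) if abs(a - e) / a <= x)
    return hits / len(actual)

actual = [100.0, 80.0, 120.0]     # illustrative actual efforts
estimated = [90.0, 120.0, 126.0]  # illustrative model outputs
m = mmre(actual, estimated)
p = pred(actual, estimated)
```

Lower MMRE and higher PRED(0.25) indicate a better estimator; here the 50% miss on the second project inflates MMRE and costs one PRED hit.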
International Journal of Intelligent Systems and Applications, 2019
JOURNAL OF ENGINEERING SCIENCE AND TECHNOLOGY REVIEW, 2017
In the last few years, the size and functionality of software have experienced massive growth. Along with this, cost estimation plays a major role in the whole software development cycle; hence, it is a necessary task that should be done before the development cycle begins and may run throughout the software life cycle. It helps in making an accurate estimate for any project so that appropriate charges and a delivery date can be obtained. It also helps in identifying the effort required for developing the application, which determines the project's acceptance or denial. Since the late 90's, Agile Software Development (ASD) methodologies have shown high success rates for projects due to their capability of coping with customers' changing requirements. Commencing product development using agile methods is a challenging task due to the live and dynamic nature of ASD. So, accurate cost estimation is a must for such development models in order to fine-tune the delivery date and estimate, while keeping software quality as the highest priority. This paper presents a systematic survey of cost estimation in ASD, which will be useful for agile users to understand current trends in cost estimation in ASD.