Saima Farhan | Lahore College for Women University, Lahore, Pakistan
Papers by Saima Farhan
Muhammad Abuzar Fahiem (Lahore College for Women University, Lahore, Pakistan; University of Engineering and Technology, Lahore, Pakistan) and Saima Farhan (Lahore College for Women University, Lahore, Pakistan)
Saima Farhan, Huma Tauseef, and Muhammad Abuzar Fahiem (Lahore College for Women University, Lahore, Pakistan; University of Engineering and Technology, Lahore, Pakistan)
Computational and Mathematical Methods in Medicine, 2014
Structural brain imaging is playing a vital role in the identification of changes that occur in the brain in association with Alzheimer's disease. This paper proposes an automated image-processing-based approach for the identification of AD from MRI of the brain. The proposed approach is novel in the sense that it achieves higher specificity/accuracy values despite using a smaller feature set than existing approaches. Moreover, the proposed approach is capable of identifying AD patients in early stages. The selected dataset consists of 85 age- and gender-matched individuals from the OASIS database. The selected features are the volumes of GM, WM, and CSF and the size of the hippocampus. Three different classification models (SVM, MLP, and J48) are used for the identification of patients and controls. In addition, an ensemble of classifiers, based on majority voting, is adopted to overcome the error caused by an independent base classifier. A tenfold cross-validation strategy is applied for the evaluation of our scheme. Moreover, to evaluate the performance of the proposed approach, individual features and combinations of features are fed to the individual classifiers and the ensemble-based classifier. Using the size of the left hippocampus as the feature, the accuracy achieved with the ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity.
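The majority-voting ensemble described above can be sketched in a few lines of plain Python. This is not the authors' code; the base-classifier predictions below are hypothetical stand-ins for the outputs of SVM-, MLP-, and J48-style models.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority voting.

    predictions: a list of label lists, one per base classifier.
    Returns the majority label for each sample, which damps the
    error of any single mispredicting base classifier.
    """
    combined = []
    for sample_votes in zip(*predictions):  # votes for one sample
        label, _ = Counter(sample_votes).most_common(1)[0]
        combined.append(label)
    return combined

# Hypothetical predictions: the three models disagree on samples 2
# and 3, and the ensemble follows the two-vote majority each time.
svm_pred = ["AD", "control", "AD"]
mlp_pred = ["AD", "control", "control"]
j48_pred = ["AD", "AD", "control"]
print(majority_vote([svm_pred, mlp_pred, j48_pred]))
# ['AD', 'control', 'control']
```

With an odd number of base classifiers and two classes (patient vs. control), ties cannot occur, which is one reason three-model ensembles are a common choice for binary medical classification.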
Advances in Electrical and Computer Engineering, 2017
The main objective of our research is to introduce an approach that uses noninvasive MRI images to predict the conversion from mild cognitive impairment to Alzheimer's disease at an early ...
Technical Journal, Aug 30, 2018
Software defect prediction has been an active area of research because predicting defects at early stages improves software quality with reduced cost and effective software management. Researchers from different domains are contributing their efforts to propose approaches that help effectively and efficiently in this regard. Different machine learning techniques have been applied to remove unnecessary and faulty data from defect-prone modules, and many approaches, frameworks, methods, and models have been proposed using different datasets, metrics, and evaluation strategies. In this paper, 30 Clarivate Analytics indexed journal papers from 2009-2017 are reviewed for upcoming practitioners of software defect prediction. The review reflects some of the work that has been done in software defect prediction so far. A detailed classification taxonomy of the machine learning techniques used for software defect prediction is presented. Defective and non-defective datasets, along with a classification of the metrics used, are part of the review. Despite all the work and effort done in this research domain, many ambiguities still exist, because no single technique or method dominates, owing to the imbalanced nature of the different datasets and methods. A lot of research work is needed to overcome the existing issues.
Nucleus, Jun 21, 2019
With the abundance of multimedia content on the World Wide Web, research into effective feature representations and similarity measures has become crucial. Image searching poses several challenges, and lately many researchers have been exploring the field. Automatic annotation of images based on digital content processing is proving to be an encouraging direction, and content-based image retrieval system development is an emerging field. The accuracy of semantic search results depends on understanding the searcher's purpose, the meaning of the conditions imposed in the search query, and their mapping into the searchable data space. A visual content semantic search engine is proposed in this paper. The search engine employs digital image features for searching the image database. The presented algorithm produces promising results. The performance of our algorithm is tested on an extensive set of tags and queries, yielding accurate and efficient results.
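The core retrieval loop of a feature-based image search engine of this kind can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes each image is summarized by a feature vector (the 3-bin colour histograms and filenames here are made up) and ranks the database by cosine similarity to the query features.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, database):
    """Rank database images by feature similarity to the query."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Hypothetical 3-bin colour histograms for three images.
db = {"sunset.jpg": [0.9, 0.1, 0.0],
      "forest.jpg": [0.1, 0.8, 0.1],
      "ocean.jpg":  [0.2, 0.1, 0.7]}
print(search([0.8, 0.2, 0.0], db))  # sunset.jpg ranks first
```

A real system would replace the toy histograms with richer descriptors and index them for sub-linear lookup, but the rank-by-similarity structure stays the same.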
International Journal for Innovative Engineering and Management Research, Sep 27, 2022
Handwritten characters are seen everywhere in our day-to-day life. Almost all the things we do involve letters, from writing cheques to taking notes manually. Handwritten character recognition is considered core to a diversity of emerging applications built on the concepts of machine learning, and it is widely used in practical applications such as reading computerized bank cheques. However, building a computerized system to carry out such duties is not easy; it is a challenging matter. There is huge variability and ambiguity of strokes from person to person, and the handwriting style of an individual also varies over time and is inconsistent. There are many challenges to deal with when understanding handwritten text; poor quality of the source document or image due to degradation over time can affect how the characters are understood. Machine learning offers a way to address these challenges. This paper illustrates a model that interprets handwritten characters accurately with the help of the dataset used to train it. The main objective of this paper is to ensure effective and reliable approaches for the recognition of handwritten characters.
Journal of Testing and Evaluation, Nov 14, 2017
Lung cancer is one of the most frequent and leading causes of death worldwide. It is the second most severe disease in men and women, with the smallest survival rate after diagnosis. Therefore, prediction and diagnosis of lung cancer at an early stage are crucial. This comparative study was conducted on various existing techniques for lung cancer nodule detection along three dimensions: imaging modalities, image processing techniques, and genetics evaluation. Imaging modalities are used to acquire a lung image to examine the lung field, and the existing modalities are illustrated in detail. A comparison has been made of different existing image processing techniques for lung nodule detection in medical images. Moreover, genetics plays a vital role in the development of lung cancer; an overview of genetic structure with potential genes is also described for detection, prediction, and risk evaluation of lung cancer.
Journal of Testing and Evaluation, Jan 16, 2017
In this paper, we propose an automated approach to segment the common carotid artery from longitudinal carotid B-mode ultrasound images and calculate the carotid intima-media thickness. The proposed approach extracts the contours in the ultrasound images using gradient vector flow snakes with an improved snake initialization process. The seed points for this improved initialization are extracted using selected edges returned by the candidate line-selection algorithm. Intima-media thickness is calculated from the extracted contours. The proposed approach is tested and clinically validated on a dataset of 100 longitudinal ultrasound images of the carotid artery. There is no significant difference between the intima-media thickness measurements obtained using the proposed approach and the manual measurements. An intra-observer error of 0.088, a coefficient of variation of 12.99%, Bland–Altman plots with small differences between experts (0.01 and 0.03 for Expert1 and Expert2, respectively), and a figure of merit of 98.5% are obtained. The proposed approach can be used successfully for carotid intima-media thickness measurement, which in turn can be used for risk estimation of cardiovascular diseases and ischemic stroke. The proposed approach automates the segmentation process and reduces the risk of subjectivity and operator variability in intima-media thickness measurement.
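The final measurement step, going from two extracted wall contours to an intima-media thickness value, reduces to averaging the gap between them. The sketch below is an assumption about that step, not the paper's implementation: it supposes the lumen-intima and media-adventitia contours have been sampled at the same x-positions (the millimetre values are invented for illustration).

```python
def mean_imt(lumen_intima, media_adventitia):
    """Mean intima-media thickness from two extracted contours.

    Each contour is a list of y-coordinates (mm) sampled at the same
    x-positions along the artery wall; IMT is taken as the mean
    vertical distance between the two contours.
    """
    if len(lumen_intima) != len(media_adventitia):
        raise ValueError("contours must be sampled at the same x-positions")
    gaps = [abs(ma - li) for li, ma in zip(lumen_intima, media_adventitia)]
    return sum(gaps) / len(gaps)

li = [1.0, 1.0, 1.0, 1.0]  # hypothetical lumen-intima contour (mm)
ma = [1.6, 1.7, 1.6, 1.7]  # hypothetical media-adventitia contour (mm)
print(round(mean_imt(li, ma), 2))  # 0.65
```

Reporting the mean (rather than, say, the maximum) gap is one common convention; either statistic can be computed from the same contour pair.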
Software cost estimation is an essential key to the acceptance or rejection of any software project. Various software cost estimation techniques are in practice, each with its own advantages and limitations, and accurate cost estimation helps complete a project within time and budget. For this work, we surveyed the available techniques, methods, and tools. This research provides a comparison of the software cost estimation methods and models that are frequently used for SCE projects. Currently, the object-oriented approach to software cost estimation is based on classes, the function point method, lines of code, etc. Comparatively less attention has been paid to software cost estimation in component-based software engineering. The main aim of this research is to provide a model for cost estimation of component-based software in an object-oriented environment, together with a comparative analysis of various techniques and tools.
Software Engineering Research and Practice, 2009
Intelligent Automation & Soft Computing
The use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automating the diagnosis of retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification but were restricted either to hand-crafted feature selection or to binary classification. This paper presents a deep-learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data have been collected and combined from three distinct sources, and the images are preprocessed to enhance their details. A six-layer convolutional neural network (CNN) is used for automated feature extraction and classification of 20 retinal diseases. It is observed that the results depend on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. With 16 classes (treating the stages of a disease as a single class), 93.3% accuracy, 92% sensitivity, and 93% specificity have been obtained. With 20 classes (treating the stages of a disease as separate classes), the accuracy, sensitivity, and specificity drop to 92.4%, 92%, and 92%, respectively.
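The accuracy, sensitivity, and specificity figures quoted above all derive from the same confusion-matrix counts. As a reminder of how they relate, here is a small sketch; the counts are illustrative and not taken from the paper.

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: unhealthy correctly flagged
    specificity = tn / (tn + fp)   # true negative rate: healthy correctly cleared
    return accuracy, sensitivity, specificity

# Illustrative counts for a healthy-vs-unhealthy fundus classifier.
acc, sens, spec = binary_metrics(tp=46, tn=47, fp=3, fn=4)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f}")
# acc=0.930 sens=0.920 spec=0.940
```

Reporting all three together matters because accuracy alone can hide a class imbalance, which is exactly the situation when one disease class is rare among 20.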
Advances in Electrical and Computer Engineering, 2018
Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neurons, resulting in memory loss, impaired thinking ability, and certain behavioral changes. ...
Abstract: Migration of legacy systems to web applications is a pressing need for the future of the software industry. The main idea is to develop a web application either by replacing the existing legacy system or by connecting only to the legacy system or its components. Legacy applications are migrating towards Service-Oriented Architecture (SOA) and rich internet applications. Legacy systems are based on old technology and methods and run on old platforms that continue to be used because they perform important day-to-day business functions of the company. Our research focuses on the analysis of two widely used approaches, wrapping and re-engineering, and presents a method that integrates the two. First, legacy systems are decomposed into components; then both re-engineering and wrapping techniques are applied to these components. The scope of this paper also covers the benefits achieved with our approach. The main objective of our proposed technique is to ...
Requirement elicitation is a crucial step in the requirement engineering process, which itself plays a vital role in the software engineering process. Smart requirements lead to the success of software projects, while ambiguous and faulty requirements may result in the failure of a software development project. It is essential to identify correct requirements for a software project in order to make it a success. Requirements are clarified through the requirement elicitation process carried out by a requirement analyst, and there is an immense need to aid analysts in this activity so that correct requirements are captured. In this paper, we present a detailed comparison of elicitation techniques, along with their characteristics as well as situational characteristics. The comparative analysis will help analysts select the correct requirement elicitation technique for different situational characteristics. In the end, we present a model that will be helpful in automating the selection of a requirement elicitation technique.
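One way such an automated selection model could work is as a weighted match between situational characteristics and per-technique profiles. The sketch below is purely hypothetical; the characteristic names, the two technique profiles, and the scoring weights are invented to show the shape of the idea, not the paper's model.

```python
def select_technique(situation, profiles):
    """Pick the elicitation technique whose profile best fits the situation.

    situation: dict of situational characteristics scored 0-1
    (e.g. stakeholder availability, analyst domain familiarity).
    profiles: dict mapping technique name -> per-characteristic weights.
    Returns the technique with the highest weighted-match score.
    """
    def score(weights):
        return sum(weights.get(k, 0.0) * v for k, v in situation.items())
    return max(profiles, key=lambda name: score(profiles[name]))

# Hypothetical profiles: interviews need available stakeholders,
# document analysis copes when stakeholders are scarce.
profiles = {
    "interview": {"stakeholder_availability": 0.9, "domain_familiarity": 0.3},
    "document_analysis": {"stakeholder_availability": 0.1, "domain_familiarity": 0.8},
}
situation = {"stakeholder_availability": 0.2, "domain_familiarity": 0.9}
print(select_technique(situation, profiles))  # document_analysis
```

A fuller model would cover many more characteristics and techniques, but the selection step remains a lookup over precomputed profiles.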
Alzheimer's disease is known to be a major cause of death around the globe, and according to the Alzheimer's Association report (2013), deaths from the disease have increased by 68% since the year 2000. Early detection of the disease is crucial in order to help patients, relatives, and caregivers cope with the situation and to help practitioners discover new drugs. For this reason, there is an imperative need for automated techniques to detect the disease well before irreversible loss occurs. In recent years, neuroimaging combined with machine learning techniques has been studied for the detection of Alzheimer's disease. The diagnosis process may be strengthened by incorporating genetic information, as genetics also plays a key role in the onset and progression of this disease. A comparative study of different neuroimaging techniques is reported in this paper. In addition, the contribution of the research community in this domain is s...
... Muhammad Abuzar Fahiem Lahore College for Women University, Lahore, Pakistan; University of E... more ... Muhammad Abuzar Fahiem Lahore College for Women University, Lahore, Pakistan; University of Engineering and Technology, Lahore, Pakistan abuzar@uet.edu.pk Saima Farhan Lahore College for Women University, Lahore, Pakistan Abstract ... Alif Yes ...
... Crystal Saima Farhan 1, Huma Tauseef 1, Muhammad Abuzar Fahiem 1, 2 1 Lahore College for Wome... more ... Crystal Saima Farhan 1, Huma Tauseef 1, Muhammad Abuzar Fahiem 1, 2 1 Lahore College for Women University, Lahore, Pakistan 2 University of Engineering and Technology, Lahore, Pakistan abuzar@uet.edu.pk Abstract ... Res pond to Change ov er Pla n Clos e ...
Computational and Mathematical Methods in Medicine, 2014
Structural brain imaging is playing a vital role in identification of changes that occur in brain... more Structural brain imaging is playing a vital role in identification of changes that occur in brain associated with Alzheimer's disease. This paper proposes an automated image processing based approach for the identification of AD from MRI of the brain. The proposed approach is novel in a sense that it has higher specificity/accuracy values despite the use of smaller feature set as compared to existing approaches. Moreover, the proposed approach is capable of identifying AD patients in early stages. The dataset selected consists of 85 age and gender matched individuals from OASIS database. The features selected are volume of GM, WM, and CSF and size of hippocampus. Three different classification models (SVM, MLP, and J48) are used for identification of patients and controls. In addition, an ensemble of classifiers, based on majority voting, is adopted to overcome the error caused by an independent base classifier. Tenfold cross validation strategy is applied for the evaluation of our scheme. Moreover, to evaluate the performance of proposed approach, individual features and combination of features are fed to individual classifiers and ensemble based classifier. Using size of left hippocampus as feature, the accuracy achieved with ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity.
Advances in Electrical and Computer Engineering, 2017
The main objective of our research is to introduce an approach that uses noninvasive MRI images t... more The main objective of our research is to introduce an approach that uses noninvasive MRI images to predict the conversion from mild cognitive impairment to Alzheimer's disease at an ea ...
Technical journal, Aug 30, 2018
Software defect prediction has been an interest of research era because predicting defects on ear... more Software defect prediction has been an interest of research era because predicting defects on early stages improves software quality with reduced cost and effective software management. Researchers from different domains are contributing their efforts to propose an approach that effectively and efficiently helps in this regard. Different machine learning techniques have been applied to remove unnecessary and fault data from defect prone modules and many approaches, frameworks, methods and models have been proposed using different datasets, metrics, and evaluation strategies. In this paper, 30 Clarivate Analytics indexed journal papers from 2009-2017 are reviewed for the upcoming practitioners of software defect prediction. Review in this paper reflects some of the work that has been done in software defect prediction so far. Detailed classification taxonomy of the machine learning techniques used for software defect prediction has been presented. Defective, non-defective datasets along with the classification of the metrics used are part of the review. Despite of all works and efforts done in this research domain, there still exist many ambiguities because no single technique and method dominates due to the imbalance nature of different datasets and methods. A lot of research work is needed to overcome the existing issues.
Nucleus, Jun 21, 2019
With the abundance of multimedia content on the World Wide Web, research and learning of effectiv... more With the abundance of multimedia content on the World Wide Web, research and learning of effective feature representation and similarity measures have become crucial. Image searching poses several challenges. Lately, many researchers have been exploring the field. Automatic annotation of images based on digital content processing proves to be an encouraging direction in the field. Content based image retrieval system development is an emerging field. Accuracy of the results of semantic search depends on the understanding of searcher's purpose, the meaning of conditions imposed in the search query and their mapping in the searchable data space. A visual content semantic search engine is proposed in this paper. The search engine employs digital image features for searching the image database. The presented algorithm produces promising results. The performance of our algorithm is tested on an extensive set of tags and queries resulting in accurate and efficient results.
International journal for innovative engineering and management research, Sep 27, 2022
Handwritten characters are seen everywhere in our day-today life. Almost all the things we do inv... more Handwritten characters are seen everywhere in our day-today life. Almost all the things we do involve letters, from writing cheques to writing notes manually. Handwritten character recognition is considered as a core to a diversity of emerging application by using the concepts of machine learning. It is used widely for performing practical applications such as reading computerized bank cheques. However, executing a computerized system to carry out certain types of duties is not easy and it is a challenging matter. There is huge variability and ambiguity of strokes from person to person. Handwriting style of an individual person also varies from time to time and is inconsistent. There are many challenges which we have to deal with while understanding handwritten text. Poor quality of the source document/image due to degradation over time can affect the way of understanding the characters. However, we can find a solution for this using machine learning. This paper illustrates a model which interprets handwritten character accurately with the help of data set which we used to train the data model. The main objective of this paper is to ensure effective and reliable approaches for recognition of handwritten characters.
Journal of Testing and Evaluation, Nov 14, 2017
Lung cancer is one of the most frequent and leading cause of death worldwide. It is the second mo... more Lung cancer is one of the most frequent and leading cause of death worldwide. It is the second most severe disease in men and women with the smallest survival rate after diagnosis. Therefore, prediction and diagnosis at an early stage of lung cancer is crucial. This comparative study was conducted for various existing techniques for lung cancer nodule detection along three dimensions including imaging modalities, image processing techniques, and genetics evaluation. Imaging modalities were used to acquire a lung image to examine the lung field. Existing imaging modalities are illustrated in detail. A comparison has been made regarding different existing image processing techniques for lung nodule cancer detection through medical images. Moreover, genetics plays a vital role in the development of lung cancer. An overview of genetics structure with potential genes was also described for detection and prediction of risk evaluation of lung cancer.
Journal of Testing and Evaluation, Jan 16, 2017
In this paper, we have proposed an automated approach to segment common carotid artery from longi... more In this paper, we have proposed an automated approach to segment common carotid artery from longitudinal carotid B-mode ultrasound images and calculate the carotid intima media thickness. The proposed approach extracts the contours in the ultrasound images using gradient vector flow snakes with an improved snake initialization process. The seed points for this improved snake initialization process are extracted using selected edges returned by the candidate line-selection algorithm. Intima media thickness is calculated from the extracted contours. The proposed approach is tested and clinically validated on a data set of 100 longitudinal ultrasound images of the carotid artery. There is no significant difference between intima media thickness measurements obtained using the proposed approach and the manual measurements. The intra-observer error of 0.088, a coefficient of variation of 12.99 %, Bland–Altman plots with small differences between experts (0.01 and 0.03 for Expert1 and Expert2, respectively) and figure of merit of 98.5 % are obtained. The proposed approach can be used successfully for carotid intima media thickness measurement, which can further be used for risk estimation of cardiovascular diseases and ischemic stroke. The proposed approach makes the segmentation process automated and yet reduces the risk of subjectivity and operator variability for intima media thickness measurement.
... 189 Page 6. [3] SN Nawaz, M. Sarfraz, A. Zidouri, WG Al-Khatib, An approach to offline Arabi... more ... 189 Page 6. [3] SN Nawaz, M. Sarfraz, A. Zidouri, WG Al-Khatib, An approach to offline Arabiccharacter recognition using neural networks, Proceedings of the 10th International Conference on Electronics, Circuits and Systems, 2003 [4] E. Kavallieratou, N. Fakotakis, G ...
Software cost estimation of a project is an essential key to acceptance or rejection of any softw... more Software cost estimation of a project is an essential key to acceptance or rejection of any software project. Various software cost estimation techniques have been in practice with their own advantages and limitations. Accurate cost estimation helps us complete the project within time and budget. For this work we have knowledge of all available techniques methods and tools. This research work provides comparison of various software cost estimation methods and models that are frequently used for the SCE projects. Currently object-oriented approach for software cost estimation is based on classes function point method and Line of Code etc. Comparatively less attention has been paid to the software cost estimation in component-based software engineering. The main aim of this research is to provide a model for cost estimation in component based software in object oriented environment and comparative analysis of various techniques and tools.
Software Engineering Research and Practice, 2009
Intelligent Automation & Soft Computing
Use of deep learning algorithms for the investigation and analysis of medical images has emerged ... more Use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming as it may lead to permanent blindness if left untreated. Automation of the diagnosis process of retinal diseases not only assists ophthalmologists in correct decision-making but saves time also. Several researchers have worked on automated retinal disease classification but restricted either to hand-crafted feature selection or binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data has been collected and combined from three distinct sources. The images are preprocessed for enhancing the details. Six layers of the convolutional neural network (CNN) are used for the automated feature extraction and classification of 20 retinal diseases. It is observed that the results are reliant on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. When 16 classes are used (treating stages of a disease as a single class), 93.3% accuracy, 92% sensitivity and 93% specificity have been obtained respectively. For 20 classes (treating stages of the disease as separate classes), the accuracy, sensitivity and specificity have dropped to 92.4%, 92% and 92% respectively.
Advances in Electrical and Computer Engineering, 2018
Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neuro... more Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neurons resulting in memory loss, impaired thinking ability, and in certain behavioral changes. ...
Abstract: Migration of legacy systems to the web applications is the need in the future of softwa... more Abstract: Migration of legacy systems to the web applications is the need in the future of software industry. The main idea is to develop a web application either by replacing the existing legacy system or only connecting to legacy system/components. Legacy applications are migrating towards Service-Oriented Architecture (SOA) and rich internet applications. Legacy systems are based on old technology and methods, running on old platform that continues to be used, because it performs important, day to day business functions of the company. Our research would be focused on the analysis of two widely used approaches, Wrapping and Re-engineering and a method is presented which is the integration of these two approaches. Firstly legacy systems will be decomposed into components then both reengineering and wrapping techniques are applied on these components. The scope of this paper also covers the benefits accomplished from our approach. The main objective of our proposed technique is to ...
Software defect prediction has been an active area of research because predicting defects at early stages improves software quality at reduced cost and enables effective software management. Researchers from different domains are contributing their efforts to propose approaches that effectively and efficiently help in this regard. Different machine learning techniques have been applied to remove unnecessary and faulty data from defect-prone modules, and many approaches, frameworks, methods and models have been proposed using different datasets, metrics, and evaluation strategies. In this paper, 30 Clarivate Analytics indexed journal papers from 2009-2017 are reviewed for upcoming practitioners of software defect prediction. The review reflects some of the work that has been done in software defect prediction so far. A detailed classification taxonomy of the machine learning techniques used for software defect prediction is presented. Defective, non-defective datasets al...
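The core idea the surveyed papers share can be shown in miniature: train a classifier on static code metrics and predict whether a module is defect-prone. The metrics, values and labels below are synthetic examples, not taken from any of the reviewed datasets.

```python
# Minimal sketch of metric-based defect prediction on synthetic data.
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_of_code, cyclomatic_complexity]; label 1 = defective.
# Values are made up purely to illustrate the workflow.
X = [[120, 4], [450, 18], [80, 2], [900, 35], [200, 7], [600, 25]]
y = [0, 1, 0, 1, 0, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict([[750, 30]])[0]  # a large, complex module
```

Real studies in the review differ mainly in which metrics, datasets and learners fill these three slots, and in how the resulting models are evaluated (e.g. cross-validation, precision/recall, AUC).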
Requirement elicitation is a crucial step in the process of requirement engineering, which itself plays a vital role in the software engineering process. Smart requirements lead to the success of software projects, while ambiguous and faulty requirements may result in the failure of a software development project. It is essential to identify correct requirements for a software project in order to make it a success. Requirements are clarified through the requirement elicitation process carried out by a requirement analyst. There is an immense need to aid analysts in the requirement elicitation activity so that a correct requirements design can be produced. In this paper we present a detailed comparison of elicitation techniques, along with their characteristics as well as situational characteristics. The comparative analysis will help analysts select the correct requirement elicitation technique based on different situational characteristics. Finally, we present a model that will be helpful in automating the selection of a requirement elicitation technique.
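The kind of automation the abstract proposes can be sketched as a rule-based mapping from situational characteristics to a suggested technique. The characteristics and rules below are illustrative assumptions for the sketch, not the paper's actual selection model.

```python
# Toy sketch of automated elicitation-technique selection: map a few
# situational characteristics (hypothetical ones) to a technique.

def suggest_technique(stakeholders: int, domain_known: bool, remote: bool) -> str:
    """Return an elicitation technique for the given project situation."""
    if remote:
        return "questionnaire"   # stakeholders cannot meet face to face
    if stakeholders > 10:
        return "workshop"        # many voices: elicit in a group session
    if not domain_known:
        return "interview"       # analyst must first learn the domain
    return "prototyping"         # small, well-understood problem

choice = suggest_technique(stakeholders=3, domain_known=False, remote=False)
```

A fuller model would weigh many more situational characteristics (time pressure, stakeholder availability, requirement volatility) rather than a handful of hard rules.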
Alzheimer's disease is known to be a major cause of death around the globe, and according to the Alzheimer's Association report (2013), deaths from the disease have increased by 68% since the year 2000. Early detection of the disease is crucial in order to help patients, relatives and caregivers cope with the situation and to help practitioners discover new drugs. For this reason, there is an imperative need for automated techniques to be developed in order to detect the disease well before irreversible damage is done. In recent years, neuroimaging combined with machine learning techniques has been studied for the detection of Alzheimer's disease. The diagnosis process may be strengthened by incorporating genetic information, as genetics also plays a key role in the onset and progression of this disease. A comparative study of different neuroimaging techniques is reported in this paper. In addition, the contribution of the research community in this domain is s...