Bharat Bhushan Sagar | Birla Institute of Technology, Mesra (Ranchi) India
Papers by Bharat Bhushan Sagar
Journal of Information and Optimization Sciences
This study intends to investigate how machine learning methods may be used to predict lung cancer. Early detection can considerably improve patient outcomes because lung cancer is the leading cause of cancer-related deaths worldwide. The study focuses on the investigation of several risk variables and biomarkers, including smoking history, age, and family history, that can affect the development of lung cancer. The research analyses the performance of the most recent machine learning algorithms for lung cancer prediction using a considerable dataset of patient records. The findings show that machine learning algorithms can accurately and precisely forecast the likelihood of developing lung cancer. The study sheds light on the potential of machine learning in enhancing lung cancer screening and preventive methods and offers information on the creation of patient-specific tailored treatment plans.
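A minimal sketch of the kind of pipeline the abstract describes: a classifier trained on tabular risk factors and evaluated on held-out patient records. The file name and column names below are hypothetical, and the paper does not specify which algorithm performed best.

```python
# Illustrative sketch only: a classifier on tabular lung-cancer risk factors.
# The dataset path and column names are assumptions, not from the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("patient_records.csv")                # hypothetical file
X = df[["age", "smoking_years", "family_history"]]     # hypothetical features
y = df["lung_cancer"]                                   # 0/1 diagnosis label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```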
Wireless sensor networks (WSN) have dependability, integrity, and confidentiality issues because of their extensive use. An efficient defensive line for WSN is provided by intrusion detection, a crucial active defence technology. Given the uniqueness of WSN, it is necessary to strike a balance between precise data transmission and constrained sensor energy, as well as between the detection effect and a lack of network resources. In this study, we propose a fog computing-based intrusion detection system (IDS) for a smart power grid. This article's main objective is to explain how to use IDS in a smart grid setting. We introduce a stacked model based on the ensemble learning algorithm, which can accurately portray the connections among fog nodes that are vulnerable to cyber-attacks. We conclude by conducting a series of comparative experiments using the KDD CUP 99 dataset with cross-validation, and determine that the Fog-IDS and Stacking-based approaches are more effective in en...
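A sketch of a stacking-based IDS evaluated with cross-validation, in the spirit of the abstract. The choice of base learners and meta-learner is an assumption (the abstract does not name them), and the KDD CUP 99 preprocessing step is omitted.

```python
# Stacking-based intrusion detector evaluated with cross-validation.
# Base learners and the meta-learner are assumptions; KDD CUP 99 loading
# and feature encoding are not shown here.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def build_stacked_ids():
    base_learners = [
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("dt", DecisionTreeClassifier(max_depth=10, random_state=0)),
    ]
    # A logistic-regression meta-learner combines the base predictions.
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

# X, y = load_kdd_cup_99_features_and_labels()   # hypothetical loader
# scores = cross_val_score(build_stacked_ids(), X, y, cv=5)
# print("mean CV accuracy:", scores.mean())
```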
Indian journal of science and technology, Jul 27, 2016
The advent of the data mining approach has brought many fascinating situations and several challenges to the database community. The objective of data mining is to explore the unseen patterns in data, which are valid, novel, potentially subsidiary and ultimately understandable. Authorized and real-time transactional databases often show temporal features and a time-validity life-span. Utilizing temporal relationship rule mining, one may determine unusual relationship rules regarding different time-intervals. Some relationship rules may hold through some intervals but not others, and this may lead to subsidiary information. Using calendar mined patterns has already been projected by researchers to confine the time-validity relationships. However, when we consider a weight factor such as the utility of an item in transactions and incorporate this weight factor into the mining model, fascinating relationship results emerge on time-variant data. This manuscript proposes a procedure to find relationship rules on time-variant, weighted data utilizing frequent pattern tree hierarchical structures, which gives a consequential benefit in terms of time and memory utilization while including the time and weight factors.
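A simplified illustration of the core idea of weighted, time-restricted support: the frequency of an itemset inside a calendar window is scaled by the average utility weight of its items. The FP-tree construction described in the abstract is not reproduced here, and the transactions, weights, and window below are made-up examples.

```python
# Toy illustration of weighted support restricted to a calendar window.
# The paper's FP-tree mining structure is not reproduced; data are made up.
from datetime import date

transactions = [
    (date(2016, 1, 5),  {"milk", "bread"}),
    (date(2016, 1, 20), {"milk", "butter"}),
    (date(2016, 2, 2),  {"bread", "butter"}),
]
item_weight = {"milk": 0.9, "bread": 0.4, "butter": 0.7}   # hypothetical utilities

def weighted_support(itemset, window_start, window_end):
    """Fraction of in-window transactions containing the itemset, scaled by average item weight."""
    in_window = [t for d, t in transactions if window_start <= d <= window_end]
    if not in_window:
        return 0.0
    freq = sum(1 for t in in_window if itemset <= t) / len(in_window)
    avg_w = sum(item_weight[i] for i in itemset) / len(itemset)
    return freq * avg_w

print(weighted_support({"milk", "bread"}, date(2016, 1, 1), date(2016, 1, 31)))
```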
The immense use of software is changing our society profoundly. Customers rely on the accuracy and functionality that is provided by the software. Agile development is a framework that achieves project goals sooner by involving every stakeholder in the decision process from the very beginning. This research paper applies the Analytical Hierarchical Process (AHP) to improving software quality in the organization. The research thus provides a framework to calculate the critical attributes, based on a survey of experts working on agile project development, to improve the quality of the software in the organization. Overall utility is calculated to assess the perspective of the developers as well as the stakeholders, described as Two Way Assessment, which then decides the best attribute from both perspectives to improve the efficiency of the software in the organization.
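For readers unfamiliar with AHP, a minimal sketch of its central step: deriving priority weights for quality attributes from a pairwise comparison matrix via the principal eigenvector. The attribute names and judgement values are hypothetical, not taken from the paper's survey.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix.
# Attribute names and judgement values are hypothetical examples.
import numpy as np

attributes = ["maintainability", "testability", "usability"]
# pairwise[i][j] = how strongly attribute i is preferred over attribute j
pairwise = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()            # normalised priority vector

for name, w in zip(attributes, weights):
    print(f"{name}: {w:.3f}")
```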
Parallel computing operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently to save time (wall-clock time) by taking advantage of non-local resources and overcoming memory constraints. The main aim is to form a common cluster-based parallel computing architecture for both MPI and PVM, which demonstrates the performance gains and losses achieved through parallel processing using MPI and PVM as separate cases. This can be realized by implementing parallel applications, such as matrix multiplication, using MPI and PVM separately. The common architecture for MPI and PVM is based on the master-slave computing paradigm. The master monitors the progress and reports the time taken to solve the problem, taking into account the time spent in breaking the problem into sub-tasks and combining the results, along with the communication delays. The slaves accept sub-problems from the master, find the solutions and send them back to the master. We aim to evaluate and compare these statistics for both cases to decide which of MPI and PVM gives faster performance, and also compare against the time taken to solve the same problem in serial execution to demonstrate the communication overhead involved in parallel computation. Results from runs on different numbers of nodes are compared to evaluate the efficiency of both MPI and PVM. We also show the dependency of parallel and serial computation performance on RAM.
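A simplified mpi4py sketch of the master-slave matrix multiplication described above: the master splits A into row blocks, broadcasts B, gathers partial products, and times the whole exchange. This is an illustration under assumptions (matrix size, mpi4py rather than C MPI), not the paper's implementation, and the PVM counterpart is not shown.

```python
# Master-slave matrix multiplication sketch with mpi4py.
# Run with, e.g.: mpiexec -n 4 python matmul_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N = 512                                         # hypothetical problem size

if rank == 0:
    A = np.random.rand(N, N)
    B = np.random.rand(N, N)
    row_blocks = np.array_split(A, size, axis=0)   # sub-tasks for the slaves
    t0 = MPI.Wtime()
else:
    B, row_blocks = None, None

B = comm.bcast(B, root=0)                   # every process needs B
my_rows = comm.scatter(row_blocks, root=0)  # each process gets a block of A
partial = my_rows @ B                       # local computation
gathered = comm.gather(partial, root=0)     # master combines the results

if rank == 0:
    C = np.vstack(gathered)
    print("elapsed (s):", MPI.Wtime() - t0, "result shape:", C.shape)
```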
CRC Press eBooks, Oct 22, 2019
CRC Press eBooks, Jun 22, 2023
CRC Press eBooks, Jun 22, 2023
Journal of Discrete Mathematical Sciences and Cryptography, Nov 17, 2019
Understanding an ancient script can provide rich details of a civilization, such as its cultural, political and social scenarios. Brahmi, an ancient mother script, has been key to the development of many modern Indian scripts like Gurmukhi, Devanagari and Bangla. Inferring ancient writings can be a tedious job and, moreover, if executed manually, it requires several language experts. The current paper presents a recognition system for Brahmi characters using a linear Support Vector Machine (SVM) classifier. Gradient information of the character image pixels is extracted, and a histogram of the gradients is stored as a feature vector for each character image. The character dataset includes both handwritten character images and images from the internet. The linear SVM classifier is trained on the feature set of 24 images of each character. The proposed recognition system achieves an accuracy of 91.6% in recognizing Brahmi characters from the test images.
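A sketch of the HOG-plus-linear-SVM pipeline described above, using scikit-image and scikit-learn. The image size, HOG cell parameters, and the dataset-loading step are assumptions; the paper does not publish its exact configuration.

```python
# HOG feature extraction + linear SVM classifier, as a rough sketch of the
# described pipeline. Image size and HOG parameters are assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def extract_features(gray_image):
    """Resize to a fixed size and return the histogram-of-oriented-gradients vector."""
    img = resize(gray_image, (64, 64))
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_classifier(images, labels):
    """Train a linear SVM on HOG features of grayscale character images."""
    # Loading the handwritten and internet-sourced character images is not shown.
    X = np.array([extract_features(im) for im in images])
    clf = LinearSVC()
    clf.fit(X, labels)
    return clf
```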
Journal of Discrete Mathematical Sciences and Cryptography, Nov 17, 2019
With the increasing demand for agile methodology in software development, almost every software organization is trying to move away from previous methods and adopt agile methodology. Rather than being predictive, agile is adaptive and focuses on customer satisfaction. It supports self-organized groups that work for technical excellence to increase agility. However, the number of team members is an issue that is in turn controlled by team perspective. The technique used in this research focuses on three diverse-sized agile groups building software using the same technologies. Both objective and subjective measures are used and the outcomes are reinforced by a study. The result clearly demonstrates that, for better results in agile software development, it is critical to select the right persons to form a good team. This paper focuses on different software development techniques, their benefits and demerits, and how to choose the appropriate strategy for a specific circumstance. In addition, it examines agile methodologies with the help of a case study that focuses on the advantages of agile over the conventional methods used in the software industry. This paper introduces a concept named "Two Way Assessment", an approach to boost the process and make it more effective by identifying and removing defects.
To ensure quality, reliability and scalability of the product, the software industry is swiftly moving towards Agile, as this approach helps business developers to address the problem of unpredictability. Customers rely on the functionality and accuracy that is provided by the software. The last decades of software development focused on sequential and phased models like the Waterfall model, Spiral model, etc.; today the focus is on iterative and incremental methods in the form of Agile methods. The world's CMM level 5 companies are also using Agile methods to give a new edge to software development. Therefore, the main objective of this research is to illustrate a two way assessment technique that can be adopted to infuse continuous improvement in the software testing lifecycle. The two way assessment takes into consideration two things, including the perspective of Management/Developers, which is to highlight the importance of various testing attributes and to work on each of the attributes. Thus the 'Two-Fold Approach' discussed in this paper is an attempt to go beyond the traditional methodology of measuring system effectiveness. It clearly emerges that agile methodology encourages better planning due to customer involvement and accommodates the desired changes easily.
Modern Physics Letters B, Jul 21, 2020
In the Big Data domain, platform dependency can alter the behavior of the business. This is because of the different kinds (structured, semi-structured and unstructured) and characteristics of the data. With traditional infrastructure, different kinds of data cannot be processed simultaneously due to their platform dependency for a particular task. Therefore, the responsibility of selecting suitable tools lies with the user. The variety of data generated by different sources requires the selection of suitable tools without human intervention. Further, these tools also face a limitation of resources in dealing with a large volume of data, and this limitation affects their performance in terms of execution time. Therefore, in this work, we propose a model in which different data analytics tools share a common infrastructure to provide a data-independence and resource-sharing environment; i.e. the proposed model shares a common (Hybrid) Hadoop Distributed File System (HDFS) between three Name-Nodes (Master Nodes), three Data-Nodes and one Client-Node, which works under the DeMilitarized Zone (DMZ). To realize this model, we have implemented Mahout, R-Hadoop and Splunk sharing a common HDFS. Further, using our model, we run k-means clustering, Naïve Bayes and recommender algorithms on three different datasets (movie rating, newsgroup, and Spam SMS), representing structured, semi-structured and unstructured data, respectively. Our model selected the appropriate tool, e.g. Mahout to run on the newsgroup dataset, as other tools cannot run on this data. This shows that our model provides data independence. Further, the results of our proposed model are compared with the legacy (individual) model in terms of execution time and scalability. The improved performance of the proposed model establishes the hypothesis that our model overcomes the resource limitations of the legacy model.
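A toy illustration of the tool-selection idea in the proposed model: infer the kind of data and dispatch to an appropriate analytics tool. The mapping and function names below are hypothetical; the paper's actual setup runs Mahout, R-Hadoop and Splunk over a shared HDFS and only the newsgroup-to-Mahout assignment is stated explicitly in the abstract.

```python
# Hypothetical tool-selection dispatcher illustrating the "model picks the
# right tool for the data kind" idea; the mapping is an assumption except
# for newsgroup -> Mahout, which the abstract mentions.
def classify_data_kind(dataset_name):
    kinds = {
        "movie_rating": "structured",
        "newsgroup": "semi-structured",
        "spam_sms": "unstructured",
    }
    return kinds.get(dataset_name, "unknown")

def select_tool(data_kind):
    mapping = {
        "structured": "R-Hadoop",       # assumed
        "semi-structured": "Mahout",    # per the abstract's example
        "unstructured": "Splunk",       # assumed
    }
    return mapping.get(data_kind)

print(select_tool(classify_data_kind("newsgroup")))   # -> Mahout
```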
International Journal of Enterprise Network Management, 2017
Changing trends and globalization have given rise to various challenges for the software industry. In today's scenario, software engineering is concerned with quality and the timely delivery of the product, to keep pace with new expertise and changing market conditions. Interpretive Structural Modelling (ISM) is a methodology for an interactive learning process that is well established and has been increasingly used by researchers to represent the interrelationships among various attributes and their related issues. The ISM approach starts with an identification of attributes that are relevant to the problem or issue. Having decided the contextual relation, a structural self-interaction matrix (SSIM) is developed based on pairwise comparison of variables, which is then converted into a reachability matrix (RM) and checked for transitivity. It is necessary to analyze the performance of the various attributes for their operative application in the execution of the software development process. The main objective of this paper is to identify the interaction of these attributes and to identify the dependent attributes. This paper focuses on the ISM model to recognize significant attributes and their organizational consequences from a software release time point of view.
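A small sketch of the transitivity-checking step mentioned above: starting from an initial reachability matrix (derived from the SSIM), a transitive closure (Warshall's algorithm) yields the final reachability matrix. The example matrix is hypothetical, not one of the paper's attribute sets.

```python
# ISM step sketch: enforce transitivity on an initial reachability matrix
# via Warshall's algorithm. The example matrix is hypothetical.
import numpy as np

def final_reachability(initial):
    """Return the transitively closed reachability matrix with 0/1 entries."""
    m = initial.astype(bool).copy()
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i, j] = m[i, j] or (m[i, k] and m[k, j])
    return m.astype(int)

initial_rm = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])
print(final_reachability(initial_rm))   # entry (0, 2) becomes 1 by transitivity
```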
Advances in intelligent systems and computing, Jul 11, 2019
Business intelligence is an arrangement of strategies, designs, and technologies that transform raw information into significant and helpful data, used to enable more effective strategic and operational insights and decision making. Decision support systems (DSSs) assist in translating raw information into more understandable forms to be used by senior-level executives. Business intelligence tools are used to build DSSs that extract the required information from an extensive database to produce easy-to-use charts for decision making. To create such user charts, we use an open-source business intelligence tool, Fusion Charts, which has the capacity to use the available data, to gain a better understanding of the past, and to predict or influence the future through better decision making. Broadly characterized, data mining draws on statistics, artificial intelligence and machine learning, or knowledge discovery in databases. A DSS uses the available data and data mining techniques (DMT) to provide a decision-making instrument, usually depending on human-computer interaction. Together, DMT and DSS span the range of investigative data technologies and give us information-directed, human-driven goals. Here, we present a case study of a DSS for BI that applies data mining procedures to the calculation of energy produced by a wind power plant; remarkable outcomes were accomplished by placing the cutoff. Hence, the data mining procedures could be trained to ascertain enhanced dependence among variables and came much nearer to the actually measured values.
Advances in intelligent systems and computing, 2016
Software Reliability Engineering is an area that developed from ancestry in the reliability disciplines of electrical, structural, and hardware engineering. Reliability models are the most prevalent tools in Software Reliability Engineering for estimating, predicting, measuring, and assessing the reliability of software. In order to attain solutions to issues accurately, speedily and reasonably, a huge number of soft computing approaches have been established. However, it is extremely difficult to discover which of these capabilities is the best one that can be exploited everywhere. These various soft computing approaches can give better prediction, dynamic behavior, and extraordinary modelling performance. In this paper, we present a wide survey of existing soft computing methodologies and then systematically examine the work done by various researchers in the area of software reliability.
Computers, Materials & Continua
2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom), 2015
Estimating software reliability has been a keen interest of researchers for the last three decades, owing to the steady growth of the software industry. Various Software Reliability Growth Models (SRGMs) have been proposed to estimate the software failure rate, the number of faults remaining, and software reliability. This paper reviews various software faults, performance testing, detection of fault tolerance, and evaluation of the reliability of software systems.
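As one concrete example of the SRGMs the review covers, the Goel-Okumoto NHPP model describes the expected cumulative number of faults as m(t) = a(1 - e^(-bt)). The sketch below fits that curve to failure data; the failure counts used here are made up for illustration and do not come from the paper.

```python
# Fitting the Goel-Okumoto SRGM mean value function m(t) = a * (1 - exp(-b t))
# to cumulative failure counts. The data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cumulative_failures = np.array([5, 9, 12, 14, 15, 16, 16.5, 17])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cumulative_failures,
                              p0=(20.0, 0.3))
print(f"estimated total faults a={a_hat:.1f}, detection rate b={b_hat:.3f}")
```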