Dr. K. Jairam Naik, Assistant Professor | NIT RAIPUR
Papers by Dr. K. Jairam Naik, Assistant Professor
International Journal of Communication Networks and Distributed Systems
2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI)
Improvement in the Internet of Things (IoT) has amplified the use of smart devices such as cameras, sensors, drones, and vehicles by individuals and organizations. The volume of data created and used by these devices is increasing proportionally, and the storage space and processor speed available on the individual devices are not enough to manage the generated data; a large amount of storage and processing capability is therefore required. Remotely available global cloud resources are currently used to fulfill these needs, but cloud resources are not always recommended for execution because of network delay and service cost. To utilize the existing IoT devices optimally, they are formed into logical clusters of devices called fog nodes, whose operations are managed by fog agents. Instead of immediately forwarding every IoT application request to the cloud, requests are accommodated, based on suitability, on the IoT devices of the same or a different fog node for execution. Finding a suitable fog node for executing an IoT application is a challenging task. In this article, a co-scheduling system that recommends a suitable fog node for executing an IoT application and manages load in the cloud-fog environment (CoS_FRLM) is proposed. The CoS_FRLM calculates a Pheromone Indicator (PI) value for each application on each fog node and recommends the fog node with the greatest PI value for executing the IoT application. Simulation results show that the proposed CoS_FRLM outperforms existing scheduling and load-management methodologies through effective resource utilization, reduced makespan time, and reduced service costs.
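The abstract does not give the exact Pheromone Indicator formula, so the selection step it describes can only be sketched: a score computed per fog node, with the highest-scoring node recommended for the application. The weights and node attributes below are illustrative assumptions, not values from the paper.

```python
def pheromone_indicator(node, w_cpu=0.4, w_mem=0.4, w_lat=0.2):
    """Hypothetical PI: reward free capacity, penalise latency."""
    return w_cpu * node["free_cpu"] + w_mem * node["free_mem"] - w_lat * node["latency"]

def recommend_fog_node(fog_nodes):
    # Recommend the fog node with the greatest PI value for this application.
    return max(fog_nodes, key=pheromone_indicator)["name"]

fog_nodes = [
    {"name": "fog-1", "free_cpu": 0.6, "free_mem": 0.5, "latency": 12.0},
    {"name": "fog-2", "free_cpu": 0.9, "free_mem": 0.8, "latency": 5.0},
]
```

With these illustrative numbers, fog-2 wins on both free capacity and latency, so it would be recommended.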
2021 International Conference on Decision Aid Sciences and Application (DASA)
Water is essential for the survival and wellness of humans and other ecosystems. The mix of physical and chemical characteristics in a water sample is referred to as water quality, and water quality is critical to achieving a long-term sustainable aquaculture system. Evaluating water-quality parameters is crucial for carrying out assessments and developing a more effective water-resources management and planning strategy in aquaculture; the cumulative impact of these parameters may devastate the whole system if not managed properly. In particular, early diagnosis of fish illnesses and identification of their underlying causes are critical for farmers to take precautions against an epidemic. Typically, fish illnesses are caused by viruses and bacteria, and the presence of these pathogens may alter the pH, DO, BOD, COD, EC, PO43-, NO3-N, and NH3-N levels in the water. Previously, water quality was measured using handheld devices and mathematical functions, but these models fail to provide practical water-quality analysis. Advances in IoT and deep-learning technology applied to aquaculture give insightful analysis of the water. Most researchers work with water-quality data based on a single parameter and analyze a limited number of samples. This paper uses convolutional neural networks (CNNs) to classify water quality effectively for fish growth and survival, using all the water-quality parameters and a good number of samples. The proposed model is compared with existing models, and the performance results show its impact.
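As a rough illustration of the classification idea (not the paper's trained network), a 1-D convolution over the eight listed water-quality parameters, followed by global max pooling and a threshold, gives a minimal two-class decision. The kernel weights and sample readings below are invented for the sketch.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def classify(sample, kernel, bias=0.0):
    """Global-max-pool the feature map, then threshold into two classes."""
    feature = conv1d(sample, kernel).max()
    return "suitable" if feature + bias > 0 else "unsuitable"

# Eight illustrative readings: pH, DO, BOD, COD, EC, PO4, NO3-N, NH3-N.
sample = np.array([7.2, 6.5, 2.0, 4.0, 0.8, 0.3, 1.1, 0.2])
kernel = np.array([0.5, 0.5, -0.5])   # untrained, illustrative weights
```

A real model would learn the kernel weights from labelled samples; this only shows the shape of the computation.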
2020 14th International Conference on Innovations in Information Technology (IIT), 2020
Growth of the Internet of Things (IoT) has exponentially raised the use of smart devices by individuals and business organizations. Fog computing was introduced to serve the rising needs of IoT applications locally with minimal delay and cost. Based on Quality of Service (QoS) requirements such as data requirements, rate of data updating, and access authority, the requests of IoT applications may be processed on locally available fog nodes at low cost or forwarded to globally available rented cloud nodes at higher cost. Hence, there is a key need to optimize the information-centric IoT architecture so that the tasks of IoT applications are classified and scheduled onto the most suitable fog or cloud nodes for processing. The proposed CS_IcIoTA identifies the application needs and classifies them into diverse categories. The scheduler assigns the tasks from these categories either to local fog nodes or to remotely available rented cloud nodes based on the tasks' current resource requirements: if the computing or storage resources a task demands are large and unattainable at the fogs, cloud nodes are preferred; otherwise, local fog nodes are used. Three cloud nodes, four fog nodes, and three IoT application domains with a total of 1,500 tasks are considered for the experimental analysis and performance evaluation. Simulation results show that the proposed CS_IcIoTA reduces the average makespan time and service cost by up to 11.45% and 10.60% respectively, and raises the average fog-node utilization to 77.83%.
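The fog-or-cloud placement rule stated in the abstract (use the fog unless the task's demand is unattainable there) reduces to a simple check; the two-resource model below is a minimal sketch, not the paper's scheduler.

```python
def place_task(cpu_demand, mem_demand, fog_free_cpu, fog_free_mem):
    """Prefer a local fog node; use a rented cloud node only when the task's
    demand is not attainable at the fog, as the abstract describes."""
    if cpu_demand <= fog_free_cpu and mem_demand <= fog_free_mem:
        return "fog"
    return "cloud"
```

A full scheduler would additionally weigh delay and service cost across candidate nodes; this captures only the attainability test.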
International Journal of Intelligent Information and Database Systems, 2022
The conventional method of taking attendance by calling names or signing is a tedious and insecure process with loopholes such as missing records and paper consumption. To overcome these difficulties, an echo-based attendance system is proposed in this paper, which provides an effective and efficient way of recording and maintaining student attendance on a daily basis. Attendance is recorded by an echo-receiving device (microphone) from an echo-sending device (speaker), and all records are saved in a real-time cloud database. Data is encoded at unique frequencies at the transmitter's end and sent over sound, then decoded at the receiver's end, and attendance is marked in the database. The process is one-shot: each receiver's attendance is marked simultaneously, unlike existing autonomous attendance systems. The proposed system uses liquid software-defined radio digital signal processing (DSP) to transmit data from the transmitter's end and to process received data at the receiver's end. This eliminates the need to capture student specimens (such as fingerprints or facial images) for the dataset used to train a model, as other autonomous attendance systems require, and hence saves time and effort by removing human intervention entirely while ensuring high reliability.
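The encode-over-sound and decode steps described above resemble frequency-shift keying. A toy numpy sketch follows; the tone frequencies, sample rate, and bit duration are all assumptions, and the real system's SDR pipeline is far more involved.

```python
import numpy as np

FS = 8000            # sample rate in Hz (assumed)
F0, F1 = 1000, 2000  # tone for bit 0 / bit 1 (assumed)
BIT_SAMPLES = 400    # samples per bit, chosen so each tone has whole cycles

def encode(bits):
    """Map each bit to a pure tone and concatenate the tones."""
    t = np.arange(BIT_SAMPLES) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def decode(signal):
    """Recover bits by locating the dominant frequency in each bit-long chunk."""
    bits = []
    for i in range(0, len(signal), BIT_SAMPLES):
        spectrum = np.abs(np.fft.rfft(signal[i:i + BIT_SAMPLES]))
        freq = np.argmax(spectrum) * FS / BIT_SAMPLES
        bits.append(1 if abs(freq - F1) < abs(freq - F0) else 0)
    return bits
```

Over a clean channel the round trip is lossless; real acoustic channels add noise, echo, and timing drift that the sketch ignores.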
International Journal of Cloud Computing, 2021
International Journal of Communication Networks and Distributed Systems, 2021
2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), 2021
Efficient allocation of available virtual resources to diverse users is a key challenge in a controlled and collaborative cloud environment. Balancing the load among the resources and mapping these virtual resources to physical machines is an even bigger challenge in the present distributed-computing arena. Numerous approaches, including the Genetic Algorithm, have been introduced to deal with these challenges, but their scope was limited to certain specific performance elements. Hence, there is a need to optimize the existing implementations for efficient allocation of virtualized resources in a cloud-computing environment. Usually, in a typical distributed-computing environment such as cloud computing, allocation of virtual resources and balancing of workload among them is realized by means of virtual-machine live migration. This article introduces an optimization of the existing Genetic Algorithm (GA) intended mainly for VM resource provisioning and load balancing. The proposed OGA_EAVRC considers population size, fitness function, mutation probability, and resource success rate to optimize performance through efficient resource allocation. The key objective of this work is to utilize each physical resource effectively and allocate them to end users efficiently. To study the operational performance of OGA_EAVRC, the event-based CloudSim simulator was chosen. Simulation results show that the proposed OGA_EAVRC can efficiently allocate the workload among virtualized resources while reducing VM migrations among the physical machines.
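The GA ingredients the abstract names (population size, fitness function, mutation probability) can be sketched for the VM-to-physical-machine mapping problem. The fitness below (negative variance of machine loads, so balanced placements score higher) and all parameters are illustrative assumptions, not the paper's definitions.

```python
import random

def fitness(assignment, vm_loads, n_pms):
    """Higher is better: negative variance of physical-machine loads."""
    pm_load = [0.0] * n_pms
    for vm, pm in enumerate(assignment):
        pm_load[pm] += vm_loads[vm]
    mean = sum(pm_load) / n_pms
    return -sum((l - mean) ** 2 for l in pm_load)

def mutate(assignment, n_pms, p=0.1, rng=random):
    """Reassign each VM to a random machine with probability p."""
    return [rng.randrange(n_pms) if rng.random() < p else pm for pm in assignment]

def evolve(vm_loads, n_pms, pop_size=20, gens=50, seed=0):
    """Keep the fitter half each generation and refill with mutants."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_pms) for _ in vm_loads] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: fitness(a, vm_loads, n_pms), reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), n_pms, rng=rng) for _ in survivors]
    return max(pop, key=lambda a: fitness(a, vm_loads, n_pms))
```

A production GA would also use crossover and migration-cost terms in the fitness; this shows only the selection-mutation loop.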
2020 International Conference on Decision Aid Sciences and Application (DASA), 2020
During the past few years, fog computing has become a strong counterpart to cloud computing, with resources that deal comfortably with the computational, communication, and data needs of recently advanced cloud-based Internet of Things (IoT) applications. Studying the interoperability and collaboration between the fog (edge) and the cloud (core) for collective service provisioning has recently gained considerable research attention, and delay-tolerant management of IoT workflows, with distribution of their tasks among fog and cloud nodes, has become an important issue. As a possible solution, a successive association between native fog nodes and hired cloud nodes is exploited. In this work, a heuristic-based delay-tolerant task scheduling and workflow management system for cloud-fog computing (DTWM_CfS) is formulated and proposed for distributing the workflow tasks of IoT applications to the most suitable fog or cloud resource nodes for execution. The proposed DTWM_CfS ensures efficient utilization of the underlying resources, diminishing the average makespan time and total cost by up to 11.23% and 10.6% respectively when compared with other existing methods. Simulation results of DTWM_CfS also show a better cost-makespan trade-off gain of up to 42.6% compared with other existing methods.
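The core delay-tolerant placement decision can be caricatured as a two-step rule: take the cheap local fog node whenever it meets the deadline, otherwise pay for the faster cloud path. All parameters are hypothetical and the paper's heuristic is richer than this.

```python
def schedule_task(fog_exec_time, cloud_exec_time, cloud_net_delay, deadline,
                  fog_cost, cloud_cost):
    """Hypothetical delay-tolerant rule: fog if it meets the deadline,
    otherwise cloud (execution plus network delay), otherwise reject."""
    if fog_exec_time <= deadline:
        return ("fog", fog_cost)
    if cloud_exec_time + cloud_net_delay <= deadline:
        return ("cloud", cloud_cost)
    return ("reject", 0.0)
```

Applying this per workflow task is what trades makespan against cost: loose deadlines keep tasks on cheap fogs, tight ones push them to the cloud.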
2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 2021
Users of a cloud-fog environment can access elastic clusters of available virtual machines (VMs) for their data-processing needs. The Fog-IoT set-up and the employment of cloud resources, in addition to VM technologies, let individuals consume less hardware. Recovering failed services requires a VM-based infrastructure, and service accuracy for a provider's virtual domain controller (DC) can be achieved through a dedicated routing resolution. In case of node failure, it is very difficult to decide which, and how many, VMs should be chosen for migration to keep up the accuracy assurance at the failed node; choosing one or several promising VMs to migrate so as to decrease the load on the given cloud-fog resources is an open issue. This work proposes Alternate Switch Identification and Fault-Tolerant Load Administration (AsI_FTLA) for the cloud-fog data-center infrastructure, performing VM migration via an improved virtual-network (VN) recovery method. A linear integer-programming model is imposed to study the path traffic, examining every associated numerical factor to select the most favorable VM through the best route. The new VM migration is then established by the alternate-switch-identification algorithm and routing is achieved. CloudSim was employed to study the performance of the proposed AsI_FTLA system. Simulation results show considerable improvements in average resource and storage utilization and throughput, and a reduction in total execution time, against existing strategies.
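The abstract's linear integer-programming model is not reproduced here, but a greedy stand-in conveys the selection question it answers: migrate the fewest VMs needed to cover a failed node's capacity deficit. Names and numbers are illustrative.

```python
def select_vms_for_migration(vm_loads, capacity_deficit):
    """Greedy stand-in for the paper's integer program: take the largest
    VMs first until the failed node's capacity deficit is covered."""
    chosen = []
    for name, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        if capacity_deficit <= 0:
            break
        chosen.append(name)
        capacity_deficit -= load
    return chosen
```

An exact integer program would additionally weigh path traffic and routing cost per candidate VM, which the greedy version ignores.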
International Journal of Computational Science and Engineering, 2021
The aim of this study was to evaluate the role of alloying and rapid-solidification processing in direct structural control of lead-base batteries. A detailed investigation of the rapid quenching of liquid lead-base alloys for high-performance storage-battery applications was made in order to choose suitable lead grid alloys for lead-acid batteries as melt-spun ribbons. This paper therefore provides a comprehensive review of the physical metallurgy and mechanical properties of melt-spun ordered alloys based on Pb, the Egyptian grid battery, the German grid battery, Pb-13.1wt.%Sb, Pb-13.1wt.%Sb-6.9wt.%Sn, Pb-13.1wt.%Sb-5.9wt.%Sn-1wt.%Ca, Pb-13.1wt.%Sb-4.9wt.%Sn-1wt.%Ca-1wt.%Al, Pb-1wt.%Ca, and Pb-0.5wt.%Sn-0.1wt.%Ca for storage-battery applications. The results indicate that the composition of the alloys plays an important role in how the grid batteries perform; it is found that Pb-0.5wt.%Sn-0.1wt.%Ca can be used to make the grids for ribbon-grid lead-acid batteries.
2021 International Conference on Data Analytics for Business and Industry (ICDABI), 2021
Crop yield depends on parameters such as soil type, irrigation water, environmental conditions, and the nutritional content of the plants. In India, farmers face many problems in obtaining a better groundnut yield because of improper nutrition management. This work deals with nutrition management in the groundnut crop through proper detection of nutrient deficiency and, based on the identified deficiency, prediction of crop yield. Nowadays, most research considers only primary nutrients, while a few works consider micronutrient deficiency; no specific method is currently available for finding both primary and micronutrient deficiencies, and even where they are considered, no exclusive measurement exists for knowing the deficiency percentage, nor is the accuracy at an acceptable level. The proposed work focuses on finding primary (nitrogen, potassium, phosphorus) and micro (zinc) nutrient deficiencies. For experimentation, ESP32 camera images captured from a crop field are used. The Visual Geometry Group (VGG16) convolutional-neural-network architecture is used to classify the images, and the K-Means clustering algorithm is used to estimate the nutrient-deficiency percentage.
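The percentage-estimation step can be illustrated with a tiny 1-D two-means clustering over per-pixel greenness, treating the low-green cluster as deficient tissue. This is a sketch of the idea only; the paper's pipeline, feature choice, and cluster count are not given in the abstract.

```python
import numpy as np

def two_means_1d(values, iters=20):
    """1-D k-means with k=2 and deterministic min/max initialisation."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(2)])
    return labels, centers

def deficiency_percent(greenness):
    """Share of pixels falling in the low-greenness (assumed deficient) cluster."""
    labels, centers = two_means_1d(greenness)
    deficient_cluster = int(np.argmin(centers))
    return 100.0 * float(np.mean(labels == deficient_cluster))
```

On a leaf image one would feed the green-channel intensities (or a vegetation index) of the segmented leaf pixels into `deficiency_percent`.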
InderScience, 2022
The cloud servicing environment allows parallelized virtual resources to be shared among tasks based on quality-of-service requirements. Proficient resource management and scheduling of tasks to execute within a deadline is a challenging issue. Several approaches are currently available for scheduling and workload balancing among the virtual machines (VMs) in a cloud, but most do not conform to emerging features such as elasticity for dynamic provisioning or deprovisioning of VMs while allocating workload. The proposed deadline-based elastic approach for load balancing and scheduling (DL_ELBalTSch) considers the percentage of VMs overloaded or underloaded at that moment as a supporting threshold and decides either to raise or to cut the number of VMs. This approach is competent enough to meet an established deadline by raising the successful execution rate of tasks on a variable number of resources. Extensive simulations performed on CloudSim obtained a higher task execution ratio, and lower makespan time and execution cost, compared with existing approaches.
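The threshold decision described above can be sketched as follows; the 0.8 and 0.2 utilisation cut-offs and the majority rule are assumptions for illustration, not values from the paper.

```python
def scaling_decision(vm_utilisations, high=0.8, low=0.2):
    """Hypothetical elastic rule: if most VMs are overloaded, provision one
    more; if most are underloaded, deprovision one; otherwise hold."""
    overloaded = sum(u > high for u in vm_utilisations) / len(vm_utilisations)
    underloaded = sum(u < low for u in vm_utilisations) / len(vm_utilisations)
    if overloaded > 0.5:
        return "scale_up"
    if underloaded > 0.5:
        return "scale_down"
    return "hold"
```

Re-evaluating this rule at each scheduling interval gives the dynamic provisioning/deprovisioning behaviour the abstract refers to.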
InderScience, 2022
A workflow is a series of jobs executed to complete a specific activity, where the jobs are often dependent on each other. The data transfer that may occur between such jobs results in a workflow that aims to serve its tasks by optimizing the use of cloud resources. A few of the existing single-objective workflow-scheduling solutions linearly combine multiple objectives to obtain a multi-objective solution, but this may not model the real-world problem efficiently when the environment is dynamic. Hence, a neural-network-based multi-objective evolutionary algorithm (NN-MOHEFT) that solves the multi-objective workflow-scheduling problem in a dynamic environment is proposed in this article. NN-MOHEFT learns the pattern behind the changing Pareto-optimal front across successive environments and tries to predict the Pareto-optimal front of the next environment from the Pareto-optimal set of the current one. The proposed NN-MOHEFT algorithm is on par with the original constructs in the hypervolume of the objectives generated, and it produces 10% more non-dominated solutions than the original construct.
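For reference, the "non-dominated solutions" counted above are, on a two-objective minimisation problem such as (makespan, cost), the points no other point beats on both objectives. A minimal filter, purely illustrative and unrelated to the paper's implementation:

```python
def dominates(q, p):
    """q dominates p when q is no worse on both objectives and differs from p."""
    return q[0] <= p[0] and q[1] <= p[1] and q != p

def non_dominated(points):
    """Keep the Pareto-optimal subset for minimisation of (makespan, cost)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The hypervolume metric mentioned in the abstract then measures the objective-space volume this front dominates relative to a reference point.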
InderScience, 2022
Cloud users submit their applications for processing on the provider's infrastructure, expecting a diverse set of computational, storage, or communication resources. Classifying the tasks of these applications based on their needs and scheduling them onto the most suitable resources is a big obstacle, so a system that supports classifying tasks and scheduling them onto the most efficient resources for execution is essential. This article proposes an information-centric (IC) Internet of Things (IoT) system for the cloud-fog computing environment that supports these two key functionalities with ensured QoS. The experimental simulation and performance analysis of the proposed approach considers a global cloud with three servers, four fogs, and four types of IoT applications with a total of 3,000 tasks. Experimental outcomes show that the proposed cloud-fog computing architecture reduces the average makespan time and service cost by up to 11.8% and 12.63% respectively when compared with other existing approaches. The cost-makespan trade-off (CMT) gain with the proposed CSIc_IoTA is up to 28.5%, and the system can guarantee better QoS for real-time IoT applications.
InderScience, 2022
With the growing volume of information and the multifaceted nature of information processing, workflow scheduling in a distributed environment is a prominent component of computing operations: it diminishes the amount of information transferred, allocates the computation load to resources, and reduces tasks' waiting and execution times. The basic objective of this article is to find an optimal schedule (Sopt) that reduces the makespan of a workflow. Artificial-intelligence and neural-network (NN) systems are mainstream, but they have not yet been effectively employed for workflow scheduling. Hence, we enhance scheduling by applying artificial neural networks and reinforcement Q-learning standards. This article provides an optimized NN-based scheduling algorithm (WfSo_ANRL) in which an agent effectively schedules tasks among computational nodes. The agent interacts with the external computing environment and collects the current load status encoded as a state vector; it then predicts an action and allocates the tasks onto the attainable resources. The external environment awards incentives to the agent, which learns to produce optimal schedules that reduce the makespan. In this way, WfSo_ANRL produces an optimal solution for the workload.
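The agent-environment loop described above can be miniaturised into tabular Q-learning on a toy two-node environment: the state is which node is currently idle, the action is the node picked for the next task, and the reward is the negative load of the chosen node. Everything here is an illustrative stand-in for the paper's WfSo_ANRL agent, which uses a neural network rather than a table.

```python
import random

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a toy two-node scheduler: the agent should
    learn to place the next task on the currently idle node."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]           # Q[state][action]
    for _ in range(episodes):
        state = rng.randrange(2)            # which node is idle (the "state vector")
        loads = [1.0, 1.0]
        loads[state] = 0.0
        if rng.random() < eps:              # epsilon-greedy action selection
            action = rng.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        reward = -loads[action]             # incentive: free on the idle node
        next_state = rng.randrange(2)       # a fresh task arrives
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    return Q
```

After training, the greedy policy in each state picks that state's idle node, which is the makespan-minimising choice in this toy setting.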
International Journal of Communication Networks and Distributed Systems
2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI)
Improvement in Internet of Technology (IoT) has amplified the usage of smart devices like cameras... more Improvement in Internet of Technology (IoT) has amplified the usage of smart devices like cameras, sensors, drones, vehicles, and etc, by the individuals and domain organizations. The day-to-day data created and usage by these devices also increasing proportionally. The storage-space and processor-speed available at the individual devices is not enough for managing the generated data. Hence, a huge amount of storage space and processing capabilities is required. The remotely available global cloud resources are in use at present for fulfilling these needs. But, cloud resources are not recommended always for execution due to network delay and service cost. With the aim of optimally utilizing the existing IoT devices, it was formed as a logical cluster of devices called fog-nodes and their operations are managed by fog-agents. Instead of immediately forwarding every IoT application requests to the cloud, based on suitability they are accommodated on the IoT devices of the same or different fog-nodes for execution. Finding a suitable fog-node for executing IoT application is a challenging task. In this article, A Co-scheduling system that recommends suitable fog-node for executing IoT application and managing load in the cloud-fog environment (CoS_FRLM) was proposed. The CoS_FRLM calculates Pheramone Indicator (PI) value for each application on each Fog-node and recommends a fog-node having greater PI value for executing the IoT application. Simulation results states that the proposed CoS_FRLM outperforms the existing scheduling and load management methodologies by effective resource utilization, reduced makespan-time and service-costs.
2021 International Conference on Decision Aid Sciences and Application (DASA)
Water is essential for the survival and wellness of humans and other ecosystems. The mix of physi... more Water is essential for the survival and wellness of humans and other ecosystems. The mix of physical and chemical characteristics in a water sample is referred to as water quality. Mainly, water quality is critical in achieving a long-term sustainable aquaculture system. Conducting water quality parameter evaluations is crucial for executing an assessment operation and developing a more effective water resources management and planning strategy in aquaculture. The cumulative impact of water quality parameters may devastate the whole system if not managed properly. Predominantly, early diagnosis of fish illnesses and identifying the underlying causes are critical for farmers to take precautions to prevent an epidemic. Typically, fish illnesses are caused by viruses and bacteria. The presence of these pathogens may alter the pH, D.O., BOD, COD, E.C., PO43-, NO3-N, and NH3-N levels in the water. Previously, water quality was measured using handheld devices and mathematical functions. But these models fail in getting practical water quality analysis. The advances of IoT and deep learning technology adopted to aquaculture give insight analysis on the water for aquaculture. Most of the researchers working with water quality data based on one parameter and analysis with limited samples. This paper used the convolution neural networks (CNN)to classify water quality data to classify the water quality effectively for fish growth and survival with all the water quality parameters and with a good number of samples. The proposed model is compared with the existing models. Performance results show the impact of the proposed model.
2020 14th International Conference on Innovations in Information Technology (IIT), 2020
Materialization in Internet-of-Things (IoT) has exponentially raised the usage of smart devices b... more Materialization in Internet-of-Things (IoT) has exponentially raised the usage of smart devices by the individuals or business organizations. Fog computing was introduced for serving the raising needs of IoT applications locally with minimal delay and cost. Based on the Quality of Service (QoS) requirements like data requirements, rate of data updating, and accessing authority of IoT applications, their requests may be processed on the locally available fog nodes at low cost or forwarded to the globally available rentedcloud-nodes for processing at higher cost. Hence, there is a key need of optimizing the information-centric IoT architecture for classifying the tasks of IoT applications and scheduling them on to the most suitable fog or cloud nodes for processing. The proposed CS_IcIoTA identifies the application needs and classifies them into diverse categories. The scheduler assigns the tasks from these categories either to the local fog nodes or to the remotely available rented-cloud-nodes for execution based on the current resource requirements of tasks. If the demanded computing or storage resources by the tasks is huge and if that is not attainable at fogs then cloud nodes are preferred otherwise local fog nodes are used. Three cloud nodes, four fog nodes and three IoT application domains with a sum of 1500 tasks are considered for the experimental analysis and performance evaluation. Simulation results states that the proposed CS_IcIoTA minimizes the average makespan time and service-cost up to 11.45%, and 10.60% respectively. Proposed CS_IcIoTA also maximizes the average fog node utilization up to 77.83%.
International Journal of Intelligent Information and Database Systems, 2022
The conventional method of taking attendance by calling names or signing is a very tedious and in... more The conventional method of taking attendance by calling names or signing is a very tedious and insecure process with loopholes, like missing records, paper consumption, etc. To overcome such difficulties, an echo-based attendance system is proposed in this paper, which provides an effective and efficient way of recording and maintaining the attendance of students on a daily basis. The attendance is recorded for an echo-receiving device (microphone) from an echo-sending device (speaker) and all the records are saved in a real-time cloud database. Data is encoded in unique frequency at the transmitter's end and sent over sound, which is then decoded at the receiver's end and attendance is marked in the database. The process is one-shot, where each receiver's attendance is marked simultaneously, unlike the existing autonomous attendance systems. The proposed system uses liquid software-defined radio digital signal processing (DSR) to transmit data from the transmitter's end and to process received data at the receiver's end. This abolishes capturing the specimen of students (like fingerprints, facial images), required for the dataset to train the model in case of other autonomous attendance systems and hence, saves time and effort by completely eliminating human intervention and ensures high reliability as well.
International Journal of Cloud Computing, 2021
International Journal of Communication Networks and Distributed Systems, 2021
2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), 2021
Efficient allocation of achievable virtual resources to diverse users is a key challenge in a controlled and collaborative cloud environment. Balancing the load among resources and mapping these virtual resources to physical machines is an even bigger challenge of the present distributed computing arena. Numerous approaches, including the Genetic Algorithm, were introduced by different researchers to deal with these challenges, but their scope was limited to certain specific performance elements. Hence, there is a need to optimize the existing research implementations for efficient allocation of virtualized resources in cloud computing environments. Usually, in a typical distributed computing environment like cloud computing, allocation of virtual resources and balancing of workload among them is realized by means of virtual machine live migration. This article introduces an optimization of the existing Genetic Algorithm (GA), mainly intended for VM resource provisioning and load balancing. The proposed OGA_EAVRC considers population size, fitness function, mutation probability, and resource success rate for optimizing performance through efficient resource allocation. The key objective of this work is to utilize each physical resource effectively and allocate them to end users efficiently. For studying the operational performance of OGA_EAVRC, the event-based CloudSim was chosen. Simulation results state that the proposed OGA_EAVRC can efficiently allocate the workload among virtualized resources while reducing VM migrations among the physical machines.
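A minimal GA in the spirit of the approach above, evolving a VM-to-physical-machine assignment whose fitness rewards balanced load. The population size, mutation probability, fitness definition, and all names are illustrative assumptions, not OGA_EAVRC's actual parameters:

```python
import random

def fitness(assignment, vm_loads, n_pms):
    """Lower load variance across physical machines => higher fitness."""
    pm_load = [0.0] * n_pms
    for vm, pm in enumerate(assignment):
        pm_load[pm] += vm_loads[vm]
    mean = sum(pm_load) / n_pms
    return -sum((l - mean) ** 2 for l in pm_load) / n_pms

def evolve(vm_loads, n_pms, pop_size=20, generations=50, mutation_p=0.1, seed=7):
    """Evolve VM->PM assignments: keep the fitter half, mutate it into children."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_pms) for _ in vm_loads] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, vm_loads, n_pms), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            if rng.random() < mutation_p:
                child[rng.randrange(len(child))] = rng.randrange(n_pms)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda a: fitness(a, vm_loads, n_pms))
```

The returned assignment spreads the VM loads across machines, which is the property that reduces the need for later live migrations.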
2020 International Conference on Decision Aid Sciences and Application (DASA), 2020
During the past few years, fog computing has become a perfect counterpart to cloud computing, with resources comfortably handling the computational, communicational, and data needs of recently advanced cloud-based Internet of Things (IoT) applications. Studying the interoperability and collaboration between the Fog (Edge) and Cloud (Core) for collective service provisioning has recently gained considerable attention from researchers, and delay-tolerant management of IoT workflows, with distribution of their tasks among fog and cloud nodes, has become an important issue. As a possible solution, a successive association between native fog nodes and hired cloud nodes is exploited. In this work, a heuristic-based delay-tolerant task scheduling and workflow management system for cloud-fog computing (DTWM_CfS) is formulated and proposed for distributing the workflow tasks of IoT applications to the most suitable fog or cloud resource nodes for execution. The proposed DTWM_CfS ensures efficient utilization of the underlying resources by diminishing the average makespan time and total cost by up to 11.23% and 10.6% respectively when compared with other existing methods. Simulation results of DTWM_CfS also show a better cost-makespan trade-off gain of up to 42.6% when compared with other existing methods.
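One plausible reading of a delay-tolerant fog-or-cloud heuristic is: pick the cheapest node that still meets the task's deadline, falling back to the fastest-finishing node when none does. The node tuples, cost model, and names below are assumptions for illustration, not the DTWM_CfS algorithm itself:

```python
def pick_node(task_len, deadline, nodes):
    """Choose the cheapest node that finishes within the deadline.

    nodes: list of (name, speed_mips, cost_per_sec, network_delay_sec)
    """
    feasible = []
    for name, speed, cost, delay in nodes:
        finish = delay + task_len / speed
        if finish <= deadline:
            feasible.append((cost * task_len / speed, finish, name))
    if not feasible:
        # no node meets the deadline: minimise lateness instead
        return min(nodes, key=lambda n: n[3] + task_len / n[1])[0]
    return min(feasible)[2]
```

With a loose deadline the cheap local fog node wins; with a tight one the faster (but costlier, higher-latency) cloud node is chosen, which is the cost-makespan trade-off the abstract refers to.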
2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 2021
Users of cloud-fog can access elastic clusters of available virtual machines (VMs) for their data processing needs. The introduction of Fog-IoT setups and the employment of cloud resources, in addition to VM technologies, has reduced the hardware individuals need to consume. A better recapturing of failure renovation services requires VM-based infrastructure, and service accuracy for the provider's virtual domain controller (DC) can be achieved through a dedicated routing resolution. In case of node failure, it is very difficult to decide which, and how many, VMs should be chosen for migration to keep up the accuracy assurance at the failed node; the selection strategy can restrict the number of VMs that must be migrated. Choosing single or multiple promising VMs to migrate so as to decrease the load on the given cloud-fog resources is itself an issue. An Alternate switch Identification and Fault Tolerant Load Administration (AsI_FTLA) scheme for the cloud-fog data center infrastructure, performing VM migration via an improved Virtual Network (VN) recovery method, is proposed in this work. A linear-integer programming model is imposed to study the path traffic, examining every associated numerical factor for selecting the most favorable VM through the best route. The new VM migration is then established by the alternative switch identification algorithm and routing is achieved. CloudSim was employed to study the performance of the proposed AsI_FTLA system. Simulation results state that there are considerable improvements in average resource and storage utilization along with throughput, and a reduction in total execution time, against existing strategies.
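The "which VM goes where after a node failure" decision can be sketched greedily: place each VM of the failed node on the host with enough spare capacity that is reachable over the fewest switch hops. Host tuples, the hop metric, and all names are illustrative assumptions, not the paper's linear-integer programming model:

```python
def select_target(vm_load, hosts):
    """Pick the host with enough spare capacity over the fewest hops.

    hosts: list of (name, spare_capacity, hops_via_alternate_switch)
    """
    candidates = [(hops, -spare, name) for name, spare, hops in hosts if spare >= vm_load]
    if not candidates:
        return None
    return min(candidates)[2]

def recover(failed_vms, hosts):
    """Greedily re-place every VM of a failed node, largest first."""
    placement = {}
    spare = {name: cap for name, cap, _ in hosts}
    hops = {name: h for name, _, h in hosts}
    for vm, load in sorted(failed_vms.items(), key=lambda kv: -kv[1]):
        target = select_target(load, [(n, spare[n], hops[n]) for n in spare])
        if target is None:
            placement[vm] = None  # no feasible host; VM stays unrecovered
        else:
            placement[vm] = target
            spare[target] -= load
    return placement
```

Placing the largest VM first mirrors the intuition that the hardest-to-fit VM should get first pick of the remaining capacity.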
International Journal of Computational Science and Engineering, 2021
The aim of this study was to evaluate the role of alloying and rapid solidification processing in direct structural control in lead-base batteries. A detailed investigation on rapid quenching of liquid lead-base alloys for high-performance storage battery applications was made in order to choose suitable lead grid alloys for lead-acid batteries as melt-spun ribbons. So this paper provides a comprehensive review of the physical metallurgy and mechanical properties of the melt-spun ordered alloys based on Pb, the Egyptian grid battery, the Germany grid battery, Pb-13.1wt.%Sb, Pb-13.1wt.%Sb-6.9wt.%Sn, Pb-13.1wt.%Sb-5.9wt.%Sn-1wt.%Ca, Pb-13.1wt.%Sb-4.9wt.%Sn-1wt.%Ca-1wt.%Al, Pb-1wt.%Ca, and Pb-0.5wt.%Sn-0.1wt.%Ca for storage battery applications. The results indicate that the composition of the alloys plays an important role in grid battery performance. It is found that Pb-0.5wt.%Sn-0.1wt.%Ca can be used to make the grids used for ribbon grid lead-acid batteries.
2021 International Conference on Data Analytics for Business and Industry (ICDABI), 2021
Crop yield depends on various parameters such as the type of soil, irrigation water, environmental conditions, and the nutritional content of the plants. In India, farmers face many problems due to improper nutrition management when aiming for a better yield in the groundnut crop. This work deals with nutrition management in the groundnut crop through proper detection of nutrient deficiency and, based on the identified deficiency, prediction of crop yields. Nowadays, most of the research in this area considers only primary nutrients, while a few works consider micro-nutrient deficiency; at present there is no specific method for finding both primary and micro-nutrient deficiencies, no exclusive measurement for knowing the deficiency percentage, and the reported accuracy is not at an acceptable level. The proposed work focuses on finding primary (nitrogen, potassium, phosphorus) and micro (zinc) nutrient deficiencies. For experimentation, ESP32 camera images captured from the crop field are used. The Visual Geometry Group (VGG16) convolutional neural network architecture was used for classification of the images, and the K-Means clustering algorithm is used for estimating the nutrient deficiency percentage.
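The deficiency-percentage step can be sketched with a tiny 2-means clustering over a per-pixel "green-ness" score: cluster pixels into healthy and deficient groups, then report the deficient fraction. The score, pixel values, and function names are illustrative assumptions; the paper pairs this step with a VGG16 classifier on real images:

```python
def two_means(values, iters=20):
    """Tiny 1-D k-means with k=2; returns cluster centers and labels."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for c in (0, 1):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

def deficiency_percent(green_scores):
    """Percentage of pixels falling in the low-green ('deficient') cluster."""
    centers, labels = two_means(green_scores)
    deficient = 0 if centers[0] < centers[1] else 1
    return 100.0 * sum(1 for l in labels if l == deficient) / len(labels)
```

On a real image the scores would come from leaf-region pixels after segmentation; here a synthetic list stands in for them.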
InderScience, 2022
The cloud servicing environment allows sharing of parallelized virtual resources among tasks based on quality-of-service requirements. Proficient resource management and scheduling of tasks to execute within a deadline is a challenging issue. Several approaches are available at present for scheduling and workload balancing among the virtual machines (VMs) in the cloud, but most of them do not conform to emerging features like elasticity for dynamic provisioning or deprovisioning of VMs while allocating workload. The proposed deadline-based elastic approach for load balancing and scheduling (DL_ELBalTSch) considers the percentage of VMs overloaded or underloaded at that moment as a supporting threshold and decides either to raise or cut the number of VMs. This approach is competent enough to meet established deadlines by raising the successful execution rate of tasks on a variable number of resources. Extensive simulations performed on CloudSim obtained a higher task execution ratio, and lower makespan time and execution cost, when compared with existing approaches.
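The elastic decision described above, scaling out when the overloaded-VM percentage crosses a threshold and scaling in when the underloaded percentage does, can be sketched as a small rule. The threshold values and names are assumptions, not DL_ELBalTSch's actual parameters:

```python
def scaling_decision(vm_utilisations, over_thr=0.8, under_thr=0.2, trigger=0.5):
    """Return +1 (provision a VM), -1 (deprovision one), or 0 (hold).

    Scales out when at least `trigger` of the VMs exceed `over_thr`
    utilisation, and scales in when at least `trigger` sit below `under_thr`.
    """
    n = len(vm_utilisations)
    overloaded = sum(1 for u in vm_utilisations if u > over_thr) / n
    underloaded = sum(1 for u in vm_utilisations if u < under_thr) / n
    if overloaded >= trigger:
        return +1
    if underloaded >= trigger and n > 1:
        return -1
    return 0
```

Tying the decision to the *percentage* of over/underloaded VMs, rather than a single machine's load, is what keeps the pool size stable under brief spikes.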
InderScience, 2022
A workflow is a series of jobs executed in order to complete a specific activity, where the jobs are often dependent on each other. Data transfer that might occur between such jobs results in the creation of a workflow that aims at serving workflow tasks by optimising the use of cloud resources. A few of the existing single-objective workflow scheduling solutions linearly combine multiple objectives to get a multi-objective solution, but this might not model the real-world problem efficiently under conditions where the environment is dynamic in nature. Hence, a Neural Network based Multi-Objective Evolutionary Algorithm (NN-MOHEFT) that solves the multi-objective workflow scheduling issue in a dynamic environment is proposed in this article. NN-MOHEFT learns the pattern behind the changing Pareto-optimal front across successive environments and tries to predict the Pareto-optimal front for the next environment from the Pareto-optimal set of the current environment. The proposed NN-MOHEFT algorithm is on par with the original constructs in terms of the hypervolume of the objectives generated, and it generates 10% more non-dominated solutions compared to the original construct.
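The "non-dominated solutions" the comparison counts can be made concrete: with both objectives minimised (say makespan and cost), a schedule is non-dominated when no other schedule is at least as good in both objectives and strictly better in one. A small sketch with assumed names:

```python
def dominates(a, b):
    """True when point a is no worse than b in every objective and
    strictly better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a set of (makespan, cost) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A larger Pareto front gives the scheduler more distinct makespan/cost trade-offs to offer, which is why the 10% figure is a meaningful gain.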
InderScience, 2022
Cloud users submit their applications for processing on the provider's infrastructure, expecting a diverse set of computational, storage, or communicational resources. Classifying the tasks of these applications based on their needs and scheduling them onto the most suitable resources is a big obstacle. To manage such issues, a system that supports classifying tasks and scheduling them onto the most efficient resources for execution is essential. An information-centric (IC) Internet of Things (IoT) system for the cloud-fog computing environment, which supports these two key functionalities with ensured QoS, is proposed in this article. The experimental simulations and performance analysis of the proposed approach were performed by considering a global cloud with three servers, four fogs, and four types of IoT applications with a sum of 3,000 tasks collectively. Experimental outcomes state that the anticipated cloud-fog computing architecture reduces the average makespan time and service cost by up to 11.8% and 12.63% respectively when compared with other existing approaches. Also, the cost-makespan trade-off (CMT) gain with the proposed CSIc_IoTA is up to 28.5%, which can guarantee better QoS for real-time IoT applications.
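The cost-makespan trade-off (CMT) gain reported above can be illustrated with a simple weighted score; the equal weighting and function names are assumptions, since the article's exact CMT definition isn't given here:

```python
def cmt(makespan, cost, w=0.5):
    """Weighted cost-makespan trade-off score (lower is better).
    The 50/50 weighting is an assumed, illustrative choice."""
    return w * makespan + (1 - w) * cost

def cmt_gain(baseline, proposed, w=0.5):
    """Percentage improvement of the proposed schedule's CMT over a baseline.

    baseline, proposed: (makespan, cost) pairs.
    """
    b = cmt(*baseline, w)
    p = cmt(*proposed, w)
    return 100.0 * (b - p) / b
```

A schedule that trims both makespan and cost by 10% relative to the baseline yields a 10% CMT gain under this weighting.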
InderScience, 2022
With the growing volumes of information and the multifaceted nature of information processing, workflow scheduling in distributed environments has become a prominent component of computing operations for diminishing the amount of information transferred, allocating computational load to resources, and reducing tasks' waiting and execution times. The basic objective of this article is to find an optimal schedule (Sopt) which can reduce the makespan of a workflow. Artificial intelligence and neural network (NN) systems are mainstream, but they have nevertheless not been effectively employed for workflow scheduling. Hence, we enhance scheduling by realising artificial neural networks and reinforcement Q-learning standards. An optimised NN-based scheduling algorithm (WfSo_ANRL), representing an agent which can effectively schedule tasks among computational nodes, is provided in this article. The agent interacts with the external environment, i.e., the computing environment, and collects the current load status encoded in the form of a state vector. The agent then predicts an action and efficiently allocates the tasks onto the attainable resources, and the external computing environment awards incentives to the agent. In this way, the agent learns to produce optimal schedules that reduce the makespan, and WfSo_ANRL produces an optimal solution for the workload.
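The agent loop described above can be sketched with a tiny tabular Q-learning stand-in: the agent picks a node, receives a reward inversely proportional to the task's run time there, and updates its value estimates. The single shared state, cyclic exploration, node speeds, and hyper-parameters are illustrative assumptions; the paper uses a neural approximator over richer state vectors:

```python
def train(n_nodes=3, episodes=300, alpha=0.5, gamma=0.9):
    """Learn a single-state Q-table over 'which node runs the next task'."""
    node_speed = [1.0 + i for i in range(n_nodes)]  # assumed relative speeds
    q = [0.0] * n_nodes
    for t in range(episodes):
        action = t % n_nodes                 # simple cyclic exploration
        reward = -1.0 / node_speed[action]   # faster node => less negative reward
        # standard Q-learning update toward reward + discounted best value
        q[action] += alpha * (reward + gamma * max(q) - q[action])
    return q
```

After training, the greedy policy (pick the node with the highest Q-value) routes tasks to the fastest node, which is exactly the makespan-reducing behaviour the incentive signal encodes.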