Mohammad Samadi | Instituto Politecnico do Porto

Papers by Mohammad Samadi

Intelligent Velocity Control of Mobile Robots Using Fuzzy and Supervised Machine Learning

Proceedings of the GIS Ostrava 2021 Conference: Advances in Localization and Navigation, 2021

This paper proposes an intelligent technique for velocity control of a wheeled mobile robot by simultaneously using a fuzzy controller and a supervised machine learning (SML) algorithm. The technique is suitable for flexible leader-follower formation control on straight paths, where a follower robot maintains a safe but flexible distance from a leader robot. The fuzzy controller determines the ultimate distance of the follower with respect to the leader from the measurements of two ultrasonic sensors. The SML algorithm calculates an appropriate velocity for the follower based on the ultimate distance. Simulations showed the effectiveness of the proposed technique in adjusting the follower robot's velocity in order to maintain a flexible formation with the leader robot.
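
As a minimal sketch of this two-stage idea, the Python snippet below pairs a toy fuzzy stage (two ultrasonic readings blended into a target following distance) with a simple supervised stage (a least-squares line mapping distance to velocity). The membership functions, rule centres, training data, and velocity bounds are illustrative assumptions, not the controller published in the paper.

```python
# Minimal sketch: a fuzzy stage turns two ultrasonic readings into a target
# following distance, and a supervised stage maps that distance to a velocity.
# Membership functions, rule centres, and training data are assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_target_distance(left_cm, right_cm):
    """Blend two ultrasonic readings into one target distance (cm)."""
    d = 0.5 * (left_cm + right_cm)                        # fused reading
    mu = {"near": tri(d, 0, 20, 50),                      # membership degrees
          "safe": tri(d, 30, 60, 90),
          "far": tri(d, 70, 110, 150)}
    centres = {"near": 40.0, "safe": 60.0, "far": 80.0}   # rule consequents
    weight = sum(mu.values()) or 1.0
    return sum(mu[k] * centres[k] for k in mu) / weight   # centroid defuzzification

# Supervised stage: fit velocity = w * distance + b from labelled examples.
train_d = np.array([30.0, 45.0, 60.0, 75.0, 90.0])        # distances (cm)
train_v = np.array([0.1, 0.2, 0.3, 0.4, 0.5])             # velocities (m/s)
w, b = np.polyfit(train_d, train_v, 1)                    # least-squares line

def follower_velocity(left_cm, right_cm):
    d_target = fuzzy_target_distance(left_cm, right_cm)
    return float(np.clip(w * d_target + b, 0.0, 0.6))

print(follower_velocity(55.0, 65.0))                      # ~0.3 m/s
```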

A Double-controller Fuzzy Scheme for Intelligent Resource Discovery on IaaS Cloud Systems

International Journal of Networking and Virtual Organisations, 2021

The resource discovery process selects appropriate computing resources (e.g., shared memory) in infrastructure as a service (IaaS) cloud systems according to user requirements. Since fuzzy systems apply intelligent procedures based on human experience, they can be used in cloud computing to increase the number of completed jobs and improve the success rate. This paper proposes a double-controller fuzzy scheme for intelligent resource discovery in IaaS cloud systems, called DOCFIR. The scheme applies two fuzzy controllers to perform intelligent resource discovery across the network. The first controller determines the number of virtual machines in the deployment phase based on the most important characteristics of the physical machines. The second controller discovers the appropriate computing resources for the user's job in the service phase based on the characteristics of the physical machines and the user requirements. The simulation results show that the proposed scheme surpasses some of the existing related works in terms of the number of completed jobs and success rate.

A Knowledge and Intelligent-based Strategy for Resource Discovery on IaaS Cloud Systems

International Journal of Grid and Utility Computing, 2021

Resource discovery selects appropriate computing resources in cloud systems to accomplish the users' jobs. This paper proposes a knowledge- and intelligence-based strategy for resource discovery in IaaS cloud systems, called KINRED. It uses a fuzzy system, a multi-criteria decision making (MCDM) controller and an artificial neural node to discover suitable resources under varying network metrics. The suggested fuzzy system works on the hardware specifications of the computing resources, with CPU speed, CPU cores, memory, disk, the number of virtual machines and utilisation rate as inputs and hardware type as output. The suggested MCDM controller makes decisions based on the users' requirements, with CPU speed, CPU cores, memory and disk as inputs and job type as output. Furthermore, the artificial neural node selects the computing resource with the highest success rate based on the outputs of both the fuzzy system and the MCDM controller. Simulation results show that the proposed strategy surpasses some of the existing related works in terms of the number of successful jobs, system throughput and service price.

Novel Reversible CLA, Optimized RCA and Parallel Adder/Subtractor Circuits

Serbian Journal of Electrical Engineering, Oct 1, 2020

This paper proposes reversible circuit designs for the three most commonly used adders: the carry look-ahead adder (CLA), the ripple carry adder (RCA), and the parallel adder/subtractor. The n-bit reversible CLA, called CLA-GH, is designed using the Peres and Fredkin gates. The n-bit optimized reversible RCA, called ORCA-GH, is designed using the reversible circuit of a parity-preserving reversible full adder. Both circuits reduce the quantum cost; the ORCA-GH circuit also reduces the number of constant inputs. Furthermore, the n-bit reversible parallel adder/subtractor, called PAS-GH, is designed using the Feynman, Peres, and Fredkin gates and decreases both the number of garbage outputs and the quantum cost. Transistor realizations of the CLA-GH and PAS-GH circuits are provided accordingly. The evaluation results indicate that the proposed circuits surpass the existing works in all figures of merit.
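
The exact CLA-GH, ORCA-GH, and PAS-GH wirings are not reproduced here; as a minimal sketch of the kind of building block involved, the snippet below checks the classical behaviour of a full adder built from two cascaded Peres gates (a standard construction) and tallies its conventional quantum cost of 4 per Peres gate.

```python
# Sketch of a reversible building block: a full adder from two Peres gates.
# This is a standard construction used for illustration, not the paper's
# exact CLA-GH / ORCA-GH / PAS-GH circuits.
from itertools import product

def peres(a, b, c):
    """Peres gate: (A, B, C) -> (A, A xor B, AB xor C). Quantum cost 4."""
    return a, a ^ b, (a & b) ^ c

def peres_full_adder(a, b, cin):
    """Two cascaded Peres gates produce sum and carry (plus garbage lines)."""
    _, q, r = peres(a, b, 0)         # q = a^b, r = ab
    _, s, cout = peres(q, cin, r)    # s = a^b^cin, cout = ab ^ (a^b)cin
    return s, cout

QUANTUM_COST = 2 * 4                 # two Peres gates at a cost of 4 each

for a, b, cin in product((0, 1), repeat=3):
    s, cout = peres_full_adder(a, b, cin)
    assert s == (a ^ b ^ cin) and cout == int(a + b + cin >= 2)

print("full adder verified, quantum cost of this sketch =", QUANTUM_COST)
```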

Towards Designing Quantum Reversible 32-bit MIPS Register File

International Journal of High Performance Systems Architecture, May 1, 2020

Reversible circuit design can be applied in various emerging technologies such as quantum computing. Since researchers have already proposed many building blocks and designed small circuits (e.g., reversible full adders), it is time to design large-scale reversible circuits. This paper proposes a novel quantum reversible 32-bit MIPS register file for quantum computer processors. It also presents a reversible 5-to-32 decoder, thirty-two reversible buffer registers, and two reversible 32-to-1 multiplexers. The proposed reversible decoder block, named GH-DEC, and the proposed reversible multiplexer block, named GH-MUX, use the Feynman, Toffoli, and Fredkin gates. They are designed with a minimal number of constant inputs, number of garbage outputs, and quantum cost. In addition, the output expressions of all circuits are simplified to considerably enhance the performance of the proposed quantum design. Comparison results show that the proposed reversible design surpasses the existing works in terms of the number of constant inputs, number of garbage outputs, and quantum cost.

Waterative Model: an Integration of the Waterfall and Iterative Software Development Paradigms

Database Systems Journal, Aug 21, 2019

Software development paradigms help a software developer select appropriate strategies for developing software projects. They include various methods, procedures, and tools to describe and define the software development life cycle (SDLC). The waterfall and iterative models are two useful development paradigms that have been used by software developers over the last decades. This paper proposes a new software development methodology, called the waterative model, which integrates the waterfall and iterative development paradigms. In this model, the iterative model is embedded into the waterfall model to combine the advantages of both models in a single integrated one. In most cases, it is appropriate for large software products that require a long development period. Experimental results demonstrate that using the proposed model in various software projects can yield high customer satisfaction scores.

A Dynamic Replication Mechanism in Data Grid Based on a Weighted Priority-based Scheme

i-manager's Journal on Cloud Computing, Aug 2019

Replication is one of the popular tools for determining the availability degree of resources (e.g., data files) in data grids. Since data grids involve limited file storage and high computing costs, the replication process is essential in these networks. This paper proposes a dynamic replication mechanism for data grids that uses a weighted priority-based replication scheme, called WPRS. It assigns a value to each file existing in local storage based on three parameters: price, number of accesses, and present time. When a resource is not available for a desired job, it is hired from other sites in the network. The proposed mechanism removes the file with the least value to increase the free space of the data storage. Simulation results show that the proposed replication mechanism surpasses some of the existing replication methods in terms of the number of successful jobs, the number of non-successful jobs, and buy price.
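
The abstract does not give the exact value function, so the sketch below assumes a weighted combination of price, access count, and recency, and evicts the minimum-value replica when local storage is full; the weights and the combining rule are illustrative only, not the published WPRS formula.

```python
# Sketch of a weighted priority-based eviction in the spirit of WPRS.
# Price, number of accesses, and last-access time are the named inputs;
# the weights and combining function below are assumptions.
import time

class ReplicaStore:
    def __init__(self, capacity, w_price=0.4, w_access=0.4, w_recency=0.2):
        self.capacity = capacity
        self.w = (w_price, w_access, w_recency)
        self.files = {}                        # name -> (price, accesses, last_access)

    def _value(self, price, accesses, last_access):
        age = time.time() - last_access        # seconds since last use
        recency = 1.0 / (1.0 + age)            # newer files score higher
        return self.w[0] * price + self.w[1] * accesses + self.w[2] * recency

    def access(self, name, price=1.0):
        if name not in self.files and len(self.files) >= self.capacity:
            # Remove the replica with the least value to free local storage.
            victim = min(self.files, key=lambda n: self._value(*self.files[n]))
            del self.files[victim]
        price0, hits, _ = self.files.get(name, (price, 0, time.time()))
        self.files[name] = (price0, hits + 1, time.time())

store = ReplicaStore(capacity=2)
for f in ["a", "b", "a", "c"]:                 # "b" is evicted when "c" arrives
    store.access(f)
print(sorted(store.files))
```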

Implementation of an Autonomous Intelligent Mobile Robot for Climate Purposes

International Journal of Ad Hoc and Ubiquitous Computing, Jun 30, 2019

This paper proposes an autonomous intelligent mobile robot for climate purposes, called ClimateRobo, which reports the weather condition based on environmental data. An ATmega32 microcontroller is used to measure temperature, gas, light intensity, and distance to obstacles using the LM35DZ, MQ-2, photocell, and infrared (IR) sensors. A utility function is proposed to calculate the weather condition according to the temperature and gas data. The weather condition is then shown on a liquid crystal display (LCD), an appropriate light-emitting diode (LED) is illuminated, and an audio alarm is enabled when the weather condition is an emergency and the ambient brightness is high. The ambient brightness is estimated by a proposed supervised machine learning algorithm using data sensed by the photocell sensor. A fuzzy decision system is proposed to adjust the speed of the DC motors based on the weather condition and light intensity. The robot can detect and pass stationary obstacles using six reflective sensors installed on its left, front, and right sides under six detection scenarios. Simulation results show the performance of the proposed supervised machine learning, fuzzy decision system, and obstacle detection mechanism under various simulation parameters. The robot is first simulated in the Proteus simulator and then implemented with electronic circuits and mechanical devices.

FSB-System: A Detection System for Fire, Suffocation, and Burn Based on Fuzzy Decision Making, MCDM, and RGB Model in Wireless Sensor Networks

Wireless Personal Communications, Mar 25, 2019

Wireless sensor networks (WSNs) are composed of low-power, large-scale, low-cost sensor nodes that sense environmental conditions (e.g., temperature). Fire is one of the most common hazards in the world, and detecting fires early can prevent a great deal of damage and loss of life. The fire detection process can be improved by using knowledge-based systems such as fuzzy decision making and multi-criteria decision making (MCDM). This paper proposes a detection system, called FSB-System, to predict the fire, suffocation, and burn probabilities over areas using fuzzy theory, MCDM, and an RGB model. The system uses the sensing data of temperature, smoke, and light sensors to make appropriate decisions under different conditions. Three fuzzy controllers are suggested in FSB-System: a fire fuzzy controller (FFC), a suffocation fuzzy controller (SFC), and a burn fuzzy controller (BFC). FFC determines the fire probability, SFC measures the suffocation probability, and BFC calculates the burn probability. Sensor nodes are randomly scattered over the areas so that they form multiple clusters. Non-cluster heads (NCHs) transmit their sensing data to cluster heads (CHs), and CHs transmit the gathered data to their native sink, which reports the environmental conditions to a base station (e.g., a fire department). The number of sinks is determined by a suggested MCDM controller based on the network size and the number of clusters. Simulation results demonstrate that the proposed system surpasses threshold-based methods in terms of remaining energy, the number of alive nodes, network lifetime, the number of wrong alerts, and financial losses. The system can be applied in various environments, including forests and buildings.

T*: A Weighted Double-heuristic Search Algorithm to Find the Shortest Path

International Journal of Computing Science and Mathematics, Jan 30, 2019

This paper proposes a weighted double-heuristic search algorithm to find the shortest path between two points. It can be used in numerous fields such as graph theory, game theory, and networking. This algorithm, called T*, uses a weighted heuristic function f(x) = α × t(x) + β × h1(x) + γ × h2(x). It selects the path which minimises f(x), where x is the current node on the path, t(x) is the cost of the path from the start to x, h1(x) is a heuristic that estimates the cost from x to the straight line passing through the start and the target, and h2(x) is a heuristic that estimates the cost of the cheapest path from x to the target. Furthermore, α, β, and γ are the effective weights of each sub-function in f(x). The T* algorithm is compared with the greedy and A* algorithms in terms of hit rate and the number of processed nodes. The comparison results show that the proposed algorithm is highly efficient compared with the other algorithms.
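
A minimal grid-based sketch of the stated evaluation function is given below, assuming unit step costs, 4-connectivity, a Euclidean h2, and perpendicular point-to-line distance for h1; the grid, the weight values, and the tie-breaking are illustrative choices, not the paper's exact setup.

```python
# Sketch of the T* evaluation function f(x) = alpha*t(x) + beta*h1(x) + gamma*h2(x),
# where t is the cost so far, h1 the distance from x to the start-target line,
# and h2 the straight-line distance to the target. Grid and weights are examples.
import heapq, math

def t_star(grid, start, target, alpha=1.0, beta=0.5, gamma=1.0):
    (sx, sy), (tx, ty) = start, target
    line_len = math.hypot(tx - sx, ty - sy) or 1.0

    def h1(x, y):   # perpendicular distance to the start-target line
        return abs((tx - sx) * (sy - y) - (sx - x) * (ty - sy)) / line_len

    def h2(x, y):   # Euclidean distance to the target
        return math.hypot(tx - x, ty - y)

    rows, cols = len(grid), len(grid[0])
    open_set = [(0.0, 0.0, start, [start])]         # (f, t, node, path)
    best_t = {start: 0.0}
    while open_set:
        _, t, (x, y), path = heapq.heappop(open_set)
        if (x, y) == target:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                nt = t + 1.0                         # unit step cost
                if nt < best_t.get((nx, ny), float("inf")):
                    best_t[(nx, ny)] = nt
                    f = alpha * nt + beta * h1(nx, ny) + gamma * h2(nx, ny)
                    heapq.heappush(open_set, (f, nt, (nx, ny), path + [(nx, ny)]))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]                                # 1 = obstacle
print(t_star(grid, (0, 0), (2, 3)))
```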

Behavior-Based Decision Making: A Tutorial

Springer Berlin Heidelberg, Dec 1, 2018

Humanistic behaviors have ingenious characteristics that developers can utilize to design smart and complex systems. This paper proposes a novel knowledge- and learning-based method for control and system engineering called behavior-based decision making (BBDM). It is an expert decision support system with the ability to learn and to work based on humanistic behavioral reasoning. The knowledge base is built by the system from various behavioral styles (e.g., safe) associated with other systems and humans. BBDM uses this knowledge-based information to make appropriate decisions whenever a desired behavioral style is requested from the system. It specifies a success rate for any desired style based on the obtained knowledge base with the aid of a behavioral inference system. This procedure can be used to select a proper system or human to accomplish a requested job. All operations of the BBDM method are performed by a proposed behavioral decision system, called BDS, which consists of three main units: decomposition, behavioral inference, and composition. The decomposition unit splits any behavioral style into several optional features (e.g., safety). The behavioral aggregation sub-unit aggregates all behavioral styles obtained by the system to define the total behavior. The behavioral inference unit produces a success set for any desired behavioral style. Finally, the composition unit converts the success set into a success rate to specify the success probability of the desired style. Simulation results show that the proposed method is highly efficient compared with some of the existing decision-making methods.

A Neural-MCDM-Based Routing Protocol for Packet Transmission in Mobile Ad Hoc Networks

Inderscience Publishers, Sep 6, 2018

This paper proposes a novel neural-MCDM-based routing protocol, called NMRouting, to transmit data packets in mobile ad hoc networks (MANETs). Data packets are transmitted through neighbouring nodes that have high success rates among a list of available neighbours. A perceptron-based neural controller specifies a success rate for every neighbour based on its remaining energy, occupied buffer, and mobility speed. An MCDM controller determines the number of paths based on the packet priority rate, distance, and average mobility speed. An energy management model is suggested to manage the nodes' energy consumption efficiently, and three queue types (critical, reliability, and ordinary) are considered to guarantee network reliability. The paper also presents UML diagrams for developing NMRouting in real applications. Simulation results show that the proposed protocol performs well in terms of packet delivery ratio, network lifetime, and network traffic compared with DSR, AODV, DPG and QMRPRNS.
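
A minimal sketch of the perceptron-style neighbour scoring is given below; the weights, bias, sigmoid activation, and example values are assumptions rather than the published controller.

```python
# Sketch of perceptron-style neighbour scoring: remaining energy, occupied
# buffer, and mobility speed are combined into a success rate, and packets
# go to the highest scorer. Weights and activation are assumptions.
import math

WEIGHTS = (0.5, -0.3, -0.2)          # energy helps; a full buffer and speed hurt
BIAS = 0.0

def success_rate(energy, buffer_occupancy, speed):
    """All inputs normalised to [0, 1]; output is a score in (0, 1)."""
    z = (WEIGHTS[0] * energy
         + WEIGHTS[1] * buffer_occupancy
         + WEIGHTS[2] * speed
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

neighbours = {                        # node id -> (energy, buffer, speed)
    "n1": (0.9, 0.2, 0.1),
    "n2": (0.5, 0.7, 0.4),
    "n3": (0.8, 0.1, 0.9),
}
next_hop = max(neighbours, key=lambda n: success_rate(*neighbours[n]))
print(next_hop, {n: round(success_rate(*v), 3) for n, v in neighbours.items()})
```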

An Intelligent and Knowledge-based Overlapping Clustering Protocol for Wireless Sensor Networks

Overlapping is a topic in wireless sensor networks that has been considered by researchers over the last decades. An appropriate overlapping management system can prolong network lifetime and decrease network recovery time. This paper proposes an intelligent and knowledge-based overlapping clustering protocol for wireless sensor networks, called IKOCP. The protocol uses intelligent and knowledge-based systems to construct a robust overlapping strategy for sensor networks. The overall network is partitioned into several regions by a proposed multi-criteria decision-making controller to monitor both small-scale and large-scale areas. Each region is managed by a sink, while the whole network is managed by a base station. The sensor nodes are organized into clusters using the low-energy adaptive clustering hierarchy (LEACH)-improved protocol, where the value of the cluster-head election probability p is defined by a proposed support vector machine-based mechanism. A proposed fuzzy system determines whether non-cluster heads associate with several clusters in order to manage overlapping conditions across the network. Cluster heads are changed within clusters periodically by a suggested utility function. Since the network lifetime should be prolonged and the network traffic alleviated, a data aggregation mechanism is proposed to transmit only crucial data packets from cluster heads to sinks. Cluster heads apply a weighted criteria matrix to perform inner-cluster routing for transmitting data packets to the sinks. Simulation results demonstrate that the proposed protocol surpasses the existing methods in terms of the number of alive nodes, network lifetime, average time to recover, dead time of the first node, and dead time of the last node.
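
The SVM mechanism that sets p and the LEACH-improved details are not reproduced here; as a point of reference, the sketch below only shows how a given cluster-head probability p would drive the classic LEACH election threshold.

```python
# Sketch: a cluster-head probability p (a fixed placeholder here for the
# SVM-derived value in the protocol) drives the classic LEACH election
# threshold; the LEACH-improved variant and the SVM are not reproduced.
import random

def leach_threshold(p, round_no):
    """Classic LEACH threshold T(n) = p / (1 - p * (r mod 1/p))."""
    return p / (1.0 - p * (round_no % int(round(1.0 / p))))

def elect_cluster_heads(node_ids, p, round_no, seed=0):
    rng = random.Random(seed)
    t = leach_threshold(p, round_no)
    return [n for n in node_ids if rng.random() < t]

nodes = list(range(20))
print(elect_cluster_heads(nodes, p=0.1, round_no=3))   # roughly 2-3 heads expected
```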

To Measure the Perimeter of an Ellipse Using Image Processing and Mathematical Reasoning

Image processing and mathematical reasoning are two powerful techniques for solving complex problems. Since the perimeter of an ellipse has no simple closed-form expression and the existing perimeter equations are complex, researchers attempt to find practical approximations for this issue. This paper proposes a novel approach to measure the perimeter of an ellipse using image processing and mathematical reasoning. The approach consists of two stages. In the first stage, the value of a pixel is calculated via pixel-by-pixel image processing based on the perimeters of several circles with different radii. In the second stage, the perimeter of an ellipse is derived, through mathematical reasoning, from the pixels of various ellipses with different diameters and the calculated value of a pixel. Simulation results suggest P = 1.14167p(a + b) as the perimeter of an ellipse under the considered simulation scenarios.
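
A rough sketch of the two-stage procedure is given below: a per-pixel length is calibrated from rasterised circles of known perimeter and then reused to estimate an ellipse's perimeter from its boundary-pixel count, with Ramanujan's approximation printed for comparison. The grid size, boundary test, and shape parameters are illustrative choices, not the paper's exact procedure.

```python
# Sketch of pixel-based perimeter estimation: calibrate a per-pixel length
# on circles (perimeter = 2*pi*r), then apply it to an ellipse. Grid size,
# boundary test, and shapes are illustrative assumptions.
import math

def boundary_pixel_count(a, b, size=256):
    """Count inside pixels of x^2/a^2 + y^2/b^2 <= 1 that touch the outside."""
    c = size // 2
    inside = lambda x, y: (x / a) ** 2 + (y / b) ** 2 <= 1.0
    count = 0
    for i in range(size):
        for j in range(size):
            x, y = i - c, j - c
            if inside(x, y) and not all(
                inside(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                count += 1
    return count

# Stage 1: per-pixel length value, calibrated on circles of known perimeter.
radii = [60, 80, 100]
pixel_value = sum(2 * math.pi * r / boundary_pixel_count(r, r) for r in radii) / len(radii)

# Stage 2: estimate an ellipse's perimeter and compare with Ramanujan's formula.
a, b = 100, 60
estimate = pixel_value * boundary_pixel_count(a, b)
ramanujan = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
print(round(estimate, 1), round(ramanujan, 1))
```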

A Routing Protocol for Data Transferring in Wireless Sensor Networks Using Predictive Fuzzy Inference System and Neural Node

Wireless sensor networks (WSNs) are communities of large-scale, low-power, low-cost wireless sensor nodes. This paper proposes a new fuzzy-neural routing protocol, called Routing Protocol using a Fuzzy system and a Neural node (RPFN). Data packets are transferred from sensor nodes to a desired base station by hop-to-hop delivery. When a sensor node has newly sensed data or has received a data packet from its neighbors, it selects an appropriate neighbor, called the candidate node, using a fuzzy inference system and a neural node. The proposed perceptron-based neural node uses four essential parameters, namely remaining energy, distance to the base station, available buffer, and link quality, to choose the best candidate node according to local information. Moreover, the “link quality” parameter is determined by the proposed fuzzy system based on the distance to the neighbor node and its response rate. Simulation results demonstrate that RPFN surpasses some existing routing protocols in terms of packet delivery ratio and network lifetime.

SMIER: An SVM and MCDA Based, Intelligent Approach for Enhanced Reliability in Wireless Sensor Networks

i-manager Publications, 2017

Reliability is one of the big challenges in Wireless Sensor Networks (WSNs). It can be improved by using intelligent and knowledge-based mechanisms such as the Support Vector Machine (SVM) and Multiple Criteria Decision Analysis (MCDA). This paper proposes an SVM- and MCDA-based intelligent approach for enhanced reliability in WSNs, called SMIER. It is designed for cluster-based sensor networks in which every Cluster-Head (CH) periodically selects one of its Non-Cluster-Head (NCH) nodes as a backup node. Initially, the suggested SVM algorithm determines the failure probability of each NCH node based on the number of events and the average distance to the events. Afterward, the suggested MCDA controller calculates the success rate of each NCH node using three parameters: remaining energy, distance, and failure probability. Simulation results show that the proposed approach surpasses some of the existing works in terms of packet delivery ratio, number of alive nodes, and average remaining energy.

Sensory Life in Sensory World

i-manager's Journal on Wireless Communication Networks, 2017

In recent decades, sensors have affected our lives in various fields, including research activities, standardization procedures, and industrial investments. Various types of sensors (e.g., pressure, temperature, and humidity) can be used in Wireless Sensor Networks (WSNs) to design and implement important applications such as environmental monitoring, healthcare systems, and military systems. WSNs consist of low-power, large-scale, and low-cost sensor nodes. The nodes measure phenomenon data (e.g., smoke density) and transmit the measured data to a center (e.g., a sink or base station). Routing, security, and fault tolerance are some of the crucial challenges in sensor networks. This paper describes various physical features and key usages of popular sensors. Furthermore, three WSN applications, in monitoring, healthcare, and the military, are considered. Since sensor localization and data mining are two important topics in WSNs, their categories and characteristics are addressed as well. Evaluation results show the performance of sensor networks compared to radio-frequency identification (RFID) in terms of utilization, purpose, components, mobility, power supply, programmability, and deployment. In addition, some WSN simulators are compared with each other in terms of computation time and memory usage.

Improving the Fault Tolerance of Wireless Sensor Networks by a Weighted Criteria Matrix

Wireless sensor networks (WSNs) are used in various settings, such as official buildings, industrial factories, and military applications, to monitor environmental conditions (e.g., humidity and temperature). They can be affected by various faults caused by natural events (e.g., internal conflicts). Such faults prevent sensed data from being transmitted to the given stations and considerably diminish network efficiency. Hence, fault tolerance is one of the main challenges in WSNs and has been considered by researchers over the last decades. The sensor nodes must discover faulty data and, thereby, transmit only faultless data to the stations (e.g., a sink). This paper proposes a new fault tolerance approach for WSNs based on a weighted criteria matrix, called FTW-WCM. Each cluster node uses this approach to select an appropriate successor node on transmission paths in order to enhance the fault tolerance of the network. The FTW-WCM approach applies four parameters (remaining energy, hop count, selection count, and node rate) via a weighted multi-criteria procedure. The neighbor node with the highest node rate is selected as the successor node from among the list of neighboring nodes. Simulation results show that the proposed approach surpasses some of the existing methods in terms of packet delivery ratio, network lifetime, and average remaining energy.
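
A minimal sketch of a weighted criteria matrix in this spirit is shown below: each neighbour is scored on remaining energy, hop count, and selection count, and the highest node rate wins. The weights, min-max normalisation, and criterion directions are assumptions, not the published procedure.

```python
# Sketch of a weighted criteria matrix: score neighbours on remaining energy,
# hop count, and selection count, and pick the highest node rate as successor.
# Weights, normalisation, and criterion directions are assumptions.
def node_rates(neighbours, weights=(0.5, 0.3, 0.2)):
    """neighbours: id -> (remaining_energy, hop_count, selection_count)."""
    cols = list(zip(*neighbours.values()))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    benefit = (True, False, False)        # energy: higher is better; hops/uses: lower
    rates = {}
    for node, values in neighbours.items():
        score = 0.0
        for w, v, l, h, b in zip(weights, values, lo, hi, benefit):
            norm = 0.5 if h == l else (v - l) / (h - l)   # min-max normalisation
            score += w * (norm if b else 1.0 - norm)
        rates[node] = round(score, 3)
    return rates

neighbours = {"n1": (0.9, 3, 5), "n2": (0.6, 2, 1), "n3": (0.4, 4, 2)}
rates = node_rates(neighbours)
successor = max(rates, key=rates.get)
print(rates, "->", successor)
```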

DFRTP: Dynamic 3D Fuzzy Routing Based on Traffic Probability in Wireless Sensor Networks

IET, Dec 1, 2016

Routing protocols are used in wireless sensor networks (WSNs) to transmit data to a centre (e.g., a base station). In this study, the authors propose a routing protocol called dynamic three-dimensional fuzzy routing based on traffic probability to enhance network lifetime and increase packet delivery ratio. It uses a fuzzy-based procedure to transmit packets by hop-to-hop delivery from source nodes toward destination nodes. The proposed fuzzy system uses two input parameters, ‘distance’ and ‘number of neighbours’, and one output parameter, ‘traffic probability’. When a node has sensed data or a buffered data packet, it selects one of its neighbours, called the chosen node, from a list of candidate nodes (CNs). Candidates are the neighbours whose remaining energy is higher than the average remaining energy and whose free buffer exceeds the average available buffer size. The distance and number of neighbours of each CN are fed into the fuzzy system to calculate its traffic probability. The CN with the lowest traffic probability is selected as the chosen node to transmit packets toward the destination. Simulation results show that the proposed protocol surpasses the greedy and A* heuristic routing for wireless sensor networks in home automation, dynamic optimal progress routing, and A-star & Fuzzy methods in terms of network lifetime and packet delivery ratio.
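
A minimal sketch of the candidate filtering and selection step is shown below; the crisp scoring function standing in for the fuzzy system, and all numeric values, are illustrative assumptions rather than the published rule base.

```python
# Sketch of candidate filtering and selection: neighbours with above-average
# energy and free buffer are candidates, and the candidate with the lowest
# traffic probability is chosen. The crisp stand-in for the fuzzy system is
# an assumption.
def choose_next_hop(neighbours):
    """neighbours: id -> dict(energy, free_buffer, distance, degree)."""
    avg_e = sum(n["energy"] for n in neighbours.values()) / len(neighbours)
    avg_b = sum(n["free_buffer"] for n in neighbours.values()) / len(neighbours)
    candidates = {k: n for k, n in neighbours.items()
                  if n["energy"] > avg_e and n["free_buffer"] > avg_b}
    if not candidates:
        return None
    # Stand-in for the fuzzy system: traffic probability grows with node
    # degree and with distance (assumed normalisation constants).
    traffic = lambda n: 0.6 * (n["degree"] / 10.0) + 0.4 * (n["distance"] / 100.0)
    return min(candidates, key=lambda k: traffic(candidates[k]))

neighbours = {
    "a": {"energy": 0.9, "free_buffer": 40, "distance": 30, "degree": 6},
    "b": {"energy": 0.8, "free_buffer": 35, "distance": 60, "degree": 3},
    "c": {"energy": 0.3, "free_buffer": 10, "distance": 20, "degree": 2},
}
print(choose_next_hop(neighbours))   # "b" has the lowest assumed traffic probability
```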

OPCA: Optimized Prioritized Congestion Avoidance and Control for Wireless Body Sensor Networks

Background: Wireless sensor networks are commonly used in healthcare applications. Congestion is one of the main challenges in these networks and considerably decreases network efficiency. Most existing congestion control methods are presented for wireless sensor networks, and few of them address wireless body sensor networks. Moreover, they usually use only one of the basic controlling factors instead of considering a group of such factors. Objective: Congestion is one of the main problems in wireless body sensor networks and can be carefully avoided and controlled by a group of basic controlling parameters. Methods: The goal of this work is to propose an optimized prioritized congestion avoidance and control protocol for wireless body sensor networks. The proposed protocol uses hop-to-hop delivery to transmit data packets from sensor nodes to a base station. Data packets are prioritized by source nodes based on node priority and data volatility, and they are also prioritized by intermediate nodes through three prioritized queues. Each node selects one of its neighbors using a dynamic congestion-aware routing strategy to transmit data packets to the base station. Increasing data reliability and decreasing network traffic on transmission paths are among the main goals of the proposed protocol. Results: Simulation results show that the proposed protocol surpasses one of the existing congestion protocols in terms of system throughput, network lifetime, number of lost packets, traffic load, and number of delivered packets. Conclusion: The proposed protocol can avoid and control the congestion problem in wireless body sensor networks.
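
A minimal sketch of the three-queue prioritisation is shown below; the scoring rule, thresholds, and example packets are illustrative assumptions rather than the published protocol.

```python
# Sketch of three-queue prioritisation: packets carry a priority derived from
# node priority and data volatility, intermediate nodes keep critical,
# reliability, and ordinary queues, and transmission drains the most urgent
# non-empty queue first. Scoring rule and thresholds are assumptions.
from collections import deque

QUEUES = {"critical": deque(), "reliability": deque(), "ordinary": deque()}

def classify(node_priority, data_volatility):
    """Both inputs in [0, 1]; higher means more urgent."""
    score = 0.6 * node_priority + 0.4 * data_volatility
    if score > 0.7:
        return "critical"
    return "reliability" if score > 0.4 else "ordinary"

def enqueue(packet, node_priority, data_volatility):
    QUEUES[classify(node_priority, data_volatility)].append(packet)

def next_packet():
    for name in ("critical", "reliability", "ordinary"):
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None

enqueue("ecg-sample", 0.9, 0.8)     # vital sign from a high-priority node
enqueue("temp-log", 0.2, 0.3)       # routine reading
print(next_packet())                # -> ecg-sample
```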

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Intelligent Velocity Control of Mobile Robots Using Fuzzy and Supervised Machine Learning

Proceedings of conference GIS Ostrava 2021 Advances in Localization and Navigation, 2021

This paper proposes an intelligent technique for velocity control of a wheeled mobile robot by si... more This paper proposes an intelligent technique for velocity control of a wheeled mobile robot by simultaneously using a fuzzy controller and a supervised machine learning (SML) algorithm. The technique is suitable for flexible leader-follower formation control on straight paths where a follower robot maintains a safe but flexible distance from a leader robot. The fuzzy controller determines the ultimate distance of the follower with respect to the leader from the measurements of two ultrasonic sensors. The SML algorithm calculates an appropriate velocity for the follower based on the ultimate distance. Simulations showed the effectiveness of the proposed technique in adjusting the follower robot's velocity in order to maintain a flexible formation with the leader robot.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of A Double-controller Fuzzy Scheme for Intelligent Resource Discovery on IaaS Cloud Systems

International Journal of Networking and Virtual Organisations, 2021

The resource discovery process discovers appropriate computing resources (e.g., shared memory) in... more The resource discovery process discovers appropriate computing resources (e.g., shared memory) in infrastructure as a service (IaaS) cloud systems according to user requirements. Since fuzzy systems use intelligent procedures based on humanistic experiences, they can be used in cloud computing to increase the number of completed jobs and improves the success rate. This paper proposes a double-controller fuzzy scheme for intelligent resource discovery in IaaS cloud systems, called DOCFIR. This scheme applies two fuzzy controllers to perform the intelligent resource discovery across the network. The first controller determines the number of virtual machines in the deployment phase based on the most important characteristics of the physical machines. The second controller discovers the appropriate computing resources for the user's job in the service phase based on characteristics of the physical machines and user requirements. The simulation results show that the proposed scheme surpasses some of the existing related works in terms of the number of completed jobs and success rate.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of A Knowledge and Intelligent-based Strategy for Resource Discovery on IaaS Cloud Systems

International Journal of Grid and Utility Computing, 2021

Resource discovery selects appropriate computing resources in cloud systems to accomplish the use... more Resource discovery selects appropriate computing resources in cloud systems to accomplish the users' jobs. This paper proposes a knowledge and intelligent-based strategy for resource discovery in IaaS cloud systems, called KINRED. It uses a fuzzy system, a Multi-Criteria Decision Making (MCDM) controller and an artificial neural node to discover suitable resources under various changes on network metrics. The suggested fuzzy system uses hardware specifications of the computing resources in which CPU speed, CPU core, memory, disk, the number of virtual machines and utilisation rate are considered as inputs, and hardware type is considered as output of the system. The suggested MCDM controller makes proper decisions based on users' requirements in which CPU speed, CPU core, memory, and disk are assumed as inputs, and job type is assumed as output of the controller. Furthermore, the artificial neural node selects the computing resource having the highest success rate based on both outputs of the fuzzy system and MCDM controller. Simulation results show that the proposed strategy surpasses some of the existing related works in terms of the number of successful jobs, system throughput and service price.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Novel Reversible CLA, Optimized RCA and Parallel Adder/Subtractor Circuits

Serbian Journal of Electrical Engineering, Oct 1, 2020

This paper proposes reversible circuit designs of the three most commonly used adders: carry look... more This paper proposes reversible circuit designs of the three most commonly used adders: carry look-ahead adder (CLA adder), ripple carry adder (RCA adder), and parallel adder/subtractor. The n-bit reversible CLA adder, called CLA-GH, is designed using the Peres and Fredkin gates. The n-bit optimized reversible RCA adder, called ORCA-GH, is designed using the reversible circuit of a parity-preserving reversible full adder. Both circuits reduce the quantum cost. However, the ORCA-GH circuit also improves the number of constant inputs. Furthermore, the n-bit reversible parallel adder/subtractor, called PAS-GH, is designed using the Feynman, Peres, and Fredkin gates. It decreases the number of garbage outputs and quantum cost. The transistor realizations of the CLA-GH and PAS-GH circuits are provided accordingly. The evaluation results indicate that the proposed circuits surpass the existing works in all figures of merit.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Towards Designing Quantum Reversible 32-bit MIPS Register File

International Journal of High Performance Systems Architecture, May 1, 2020

Reversible circuit design can be applied in various emerging technologies such as quantum computi... more Reversible circuit design can be applied in various emerging technologies such as quantum computing. Since researchers have proposed many building blocks and designed small circuits (e.g., reversible full adder) already, it is the time to design large-scale reversible circuits. This paper proposes a novel quantum reversible 32-bit MIPS register file for quantum computer processors. It presents a reversible 5-to-32 decoder, thirty-two reversible buffer registers, and two reversible 32-to-1 multiplexers, too. The proposed reversible decoder block, namely GH-DEC, and the proposed reversible multiplexer block, namely GH-MUX, use the Feynman, Toffoli, and Fredkin gates. They have been designed by a minimum number of constant inputs, number of garbage outputs, and quantum cost. Besides, output expressions of all the circuits are simplified to enhance the performance of proposed quantum design, considerably. Comparison results show that the proposed reversible design surpasses the existing works in terms of the number of constant inputs, number of garbage outputs, and quantum cost.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Waterative Model: an Integration of the Waterfall and Iterative Software Development Paradigms

Database Systems Journal, Aug 21, 2019

Software development paradigms help a software developer to select appropriate strategies to deve... more Software development paradigms help a software developer to select appropriate strategies to develop software projects. They include various methods, procedures, and tools to describe and define the software development life cycle (SDLC). The waterfall and iterative models are two useful development paradigms, which have been used by various software developers in the last decades. This paper proposes a new software development methodology, called waterative model, which applies an integration of the waterfall and iterative development paradigms. In this model, the iterative model is embedded into the waterfall model to use the advantages of both models as an integrated one. It, in the most cases, is appropriate for large software products that need a long-term period of time for the development process. Experimental results demonstrate that the customer satisfaction score could be high by using the proposed model in various software projects.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of A Dynamic Replication Mechanism in Data Grid Based on a Weighted Priority-based Scheme

i-manager's Journal on Cloud Computing, Aug 2019

Replication is one of the popular tools to determine the availability degree of resources (e.g., ... more Replication is one of the popular tools to determine the availability degree of resources (e.g., data files) in data grids. Since data grids involve limited file storages and high computing costs, replication process is very essential in these networks. This paper proposes a dynamic replication mechanism in a data grid that uses a weighted priority-based replication scheme, called WPRS. It specifies a value for each existing in a local storage based on three parameters including price, number of access time, and present time. When a resource is not available for a desired job, it is hired from other sites in the network. The proposed mechanism removes the file having the least value to increase the free space of data storage. Simulation results show that the proposed replication mechanism surpasses some of the existing replication methods in terms of the number of successful jobs, number of non-successful jobs, and buy price.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Implementation of an Autonomous Intelligent Mobile Robot for Climate Purposes

International Journal of Ad Hoc and Ubiquitous Computing, Jun 30, 2019

This paper proposes an autonomous intelligent mobile robot for climate purposes, called ClimateRo... more This paper proposes an autonomous intelligent mobile robot for climate purposes, called ClimateRobo, to notify the weather condition based on environmental data. An ATmega32 microcontroller is used to measure temperature, gas, light intensity, and distance to obstacles using the LM35DZ, MQ-2, photocell, and infrared (IR) sensors. A utility function is proposed to calculate the weather condition according to the temperature and gas data. Afterwards, the weather condition will be monitored on a liquid crystal display (LCD), an appropriate light-emitting diode (LED) will be illuminated, and an audio alarm would be enabled when weather condition is emergency as well as ambient brightness is high. The ambient brightness is calculated by a proposed supervised machine learning using sensed data of the photocell sensor. A fuzzy decision system is proposed to adjust the speed of DC motors based on weather condition and light intensity. The robot can detect and pass stationary obstacles with the six reflective sensors installed in the left, front, and right sides under six detection scenarios. Simulation results show performance of the proposed supervised machine learning, fuzzy decision system, and obstacle detection mechanism under various simulation parameters. The robot, initially, is simulated in the Proteus simulator and, then, is implemented by electronic circuits and mechanical devices.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of FSB-System: A Detection System for Fire, Suffocation, and Burn Based on Fuzzy Decision Making, MCDM, and RGB Model in Wireless Sensor Networks

Wireless Personal Communications, Mar 25, 2019

Wireless sensor networks (WSNs) are composed of low-power, large-scale, low-cost sensor nodes to ... more Wireless sensor networks (WSNs) are composed of low-power, large-scale, low-cost sensor nodes to sense environmental conditions (e.g., temperature). Fire is one of the most common hazards in the world so that detection of the fires can prevent a lot of damages to the lives. Fire detection process can be improved by using knowledge-based systems such as fuzzy decision making and multi-criteria decision making (MCDM). This paper proposes a detection system, called FSB-System, to predict the fire, suffocation, and burn probabilities over areas using fuzzy theory, MCDM, and an RGB model. The system uses sensing data of the temperature, smoke, and light sensors to determine appropriate, assorted decisions under different conditions. Three fuzzy controllers are suggested in FSB-System: fire fuzzy controller (namely FFC), suffocation fuzzy controller (namely SFC), and burn fuzzy controller (namely BFC). FFC determines the fire probability, SFC measures the suffocation probability, and BFC calculates the burn probability. Sensor nodes are randomly scattered over areas in a way that they form multiple clusters. Non-cluster heads (NCHs) transmit their sensing data to cluster heads (CHs). Furthermore, CHs transmit the gathered data to the native sink to report environmental conditions toward a base station (e.g., a fire department). The number of sinks is determined by a suggested MCDM controller based on network size and the number of clusters. Simulation results demonstrate that the proposed system surpasses the threshold methods in terms of remaining energy, the number of alive nodes, network lifetime, the number of wrong alerts, and financial losses. This system can be applied in various environments including forests, buildings, etc.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of T*: A Weighted Double-heuristic Search Algorithm to Find the Shortest Path

International Journal of Computing Science and Mathematics, Jan 30, 2019

This paper proposes a weighted double-heuristic search algorithm to find the shortest path betwee... more This paper proposes a weighted double-heuristic search algorithm to find the shortest path between two points. It can be used in numerous fields such as graph theory, game theory, and network. This algorithm, called T*, uses a weighted and heuristic function as f(x) = α × t(x) + β × h1(x) + γ × h2(x). It selects the path which minimises f(x) where x is a current node on the path, t(x) is cost of the path from start to x, h1(x) is a heuristic to estimate the cost from x to the straight line passing through start and target, and h2(x) is a heuristic to estimate cost of the cheapest path from x to target. Furthermore, α, β, and γ indicate effective weights of each sub-function on f(x). T* algorithm is compared to the Greedy and A* algorithms in terms of hit rate and the number of processed nodes. Comparison results show that the proposed algorithm has a high efficiency compared to other algorithms.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Behavior-Based Decision Making: A Tutorial

Springer Berlin Heidelberg, Dec 1, 2018

There are ingenious characteristics in humanistic behaviors so that they can be utilized by the m... more There are ingenious characteristics in humanistic behaviors so that they can be utilized by the most developers to design smart and complex systems. This paper proposes a novel, knowledge and learning based method called behavior-based decision making, BBDM, in control and system engineering. It is an expert decision support system containing the learning ability to work based on humanistic behavioral reasoning. The knowledge base is built by the system based on various behavioral styles (e.g., safe) associated to other systems and humans. BBDM uses the knowledge-based information to make appropriate decisions when any desired behavioral style is requested from the system. It specifies a success rate for any desired style based on the obtained knowledge base with the aid of a behavioral inference system. This procedure can be used to select a proper system or human to accomplish a requested job. All operations of the BBDM method are performed by a proposed behavioral decision system, called BDS, which consists of three main units: decomposition, behavioral inference, and composition. The decomposition unit splits any behavioral style into several optional features (e.g., safety). The behavioral aggregation sub-unit aggregates all behavioral styles obtained by the system to define the total behavior. The behavioral inference unit produces a success set for any desired behavioral style. Finally, the composition unit converts success set to success rate to specify the success probability of the desired style. Simulation results show that the proposed method has a high efficiency compared to some of the existing decision-making methods.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of A Neural-MCDM-Based Routing Protocol for Packet Transmission in Mobile Ad Hoc Networks

Inderscience Publishers, Sep 6, 2018

This paper proposes a novel neural-MCDM-based routing protocol, called NMRouting, to transmit dat... more This paper proposes a novel neural-MCDM-based routing protocol, called NMRouting, to transmit data packets in mobile ad hoc network (MANET). Data packets are transmitted through neighbouring nodes that have high success rates from among a list of available neighbours. A perceptron-based neural controller specifies a success rate for every neighbour based on remaining energy, occupied buffer, and mobility speed. A MCDM controller determines the number of paths based on packet priority rate, distance, and average mobility speed. An energy management model is suggested to conduct energy consumption of the nodes efficiently as well as three queue types - critical, reliability, and ordinary - are considered to guarantee network reliability. Besides, the paper presents UML diagrams to develop NMRouting in real applications. Simulation results show that the proposed protocol has a high performance in terms of packet delivery ratio, network lifetime, and network traffic, compared to DSR, AODV, DPG and QMRPRNS.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of An Intelligent and Knowledge-based Overlapping Clustering Protocol for Wireless Sensor Networks

Overlapping is one of the topics in wireless sensor networks that is considered by researchers in... more Overlapping is one of the topics in wireless sensor networks that is considered by researchers in the last decades. An appropriate overlapping management system can prolong network lifetime and decrease network recovery time. This paper proposes an intelligent and knowledge‐based overlapping clustering protocol for wireless sensor networks, called IKOCP. This protocol uses some of the intelligent and knowledge‐based systems to construct a robust overlapping strategy for sensor networks. The overall network is partitioned to several regions by a proposed multicriteria decision‐making controller to monitor both small‐scale and large‐scale areas. Each region is managed by a sink, where the whole network is managed by a base station. The sensor nodes are categorized by various clusters using the low‐energy adaptive clustering hierarchy (LEACH)‐improved protocol in a way that the value of p is defined by a proposed support vector machine–based mechanism. A proposed fuzzy system determines that noncluster heads associate with several clusters in order to manage overlapping conditions over the network. Cluster heads are changed into clusters in a period by a suggested utility function. Since network lifetime should be prolonged and network traffic should be alleviated, a data aggregation mechanism is proposed to transmit only crucial data packets from cluster heads to sinks. Cluster heads apply a weighted criteria matrix to perform an inner‐cluster routing for transmitting data packets to sinks. Simulation results demonstrate that the proposed protocol surpasses the existing methods in terms of the number of alive nodes, network lifetime, average time to recover, dead time of first node, and dead time of last node.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of To Measure the Perimeter of an Ellipse Using Image Processing and Mathematical Reasoning

Image processing and mathematical reasoning are two powerful techniques to solve some of the comp... more Image processing and mathematical reasoning are two powerful techniques to solve some of the complex problems. Since the perimeter of ellipse is not determined accurately as well as the existing perimeter equations are too complex, researchers attempt to find proper solutions for this issue. This paper proposes a novel approach to measure the perimeter of an ellipse by using image processing and mathematical reasoning. This approach consists of two stages. In the first stage, value of a pixel is calculated via a pixel-by-pixel image processing based on the perimeter of several circles having different radii. In the second stage, the perimeter of an ellipse is defined by the pixels of various ellipses having different diameters and the value of a pixel through a mathematical reasoning. Simulation results show that P = 1.14167p (a + b) is the suggested perimeter of an ellipse according to the considered simulation scenarios.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of A Routing Protocol for Data Transferring in Wireless Sensor Networks Using Predictive Fuzzy Inference System and Neural Node

Wireless sensor networks (WSNs) are a community of large-scale, low-power, low-cost wireless sens... more Wireless sensor networks (WSNs) are a community of large-scale, low-power, low-cost wireless sensor nodes. This paper proposes a new fuzzy-neural based routing protocol, called Routing Protocol using Fuzzy system and Neural node, RPFN. Data packets are transferred from sensor nodes to a desired base station by hop-to-hop delivery. When a sensor node has a new sensed data or a data packet has been received from its neighbors, it selects an appropriate neighbor called candidate node by a fuzzy inference system and a neural node. The proposed Perceptron-based neural node uses four essential parameters including remaining energy, distance to the base station, available buffer, and link quality to choose the best candidate node according to local information. Moreover, parameter “link quality” is determined by the proposed fuzzy system based on distance to neighbor node and response rate. Simulation results demonstrate that RPFN surpasses some existing routing protocols in terms of packet delivery ratio and network lifetime.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of SMIER: An SVM and MCDA Based, Intelligent Approach for Enhanced Reliability in Wireless Sensor Networks

i-manager Publications, 2017

Reliability is one of the big challenges in Wireless Sensor Networks (WSNs). It can be improved b... more Reliability is one of the big challenges in Wireless Sensor Networks (WSNs). It can be improved by using some of the intelligent and knowledge-based mechanisms, such as Support Vector Machine (SVM) and Multiple Criteria Decision Analysis (MCDA). This paper proposes an SVM and MCDA based, intelligent approach for enhanced reliability in WSNs, called SMIER. It is considered for a cluster-based sensor network in a way that every Cluster-Head (CH) selects one of its Non Cluster-Head (NCH) nodes as a backup node in a period of time. Initially, the suggested SVM algorithm determines failure probability of each NCH node based on number of events and average distance to events. Afterward, the suggested MCDA controller calculates success rate of the NCH node by using three parameters, including remaining energy, distance, and failure probability. Simulation results show that the proposed approach surpasses some of the existing works in terms of packet delivery ratio, number of alive nodes, and average remaining energy.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Sensory Life in Sensory World

i-manager's Journal on Wireless Communication Networks, 2017

In the recent decades, sensors have affected on our life in various fields, including research ac... more In the recent decades, sensors have affected on our life in various fields, including research activities, standardization procedures, and industrial investments. Various types of sensors (e.g., pressure, temperature, and humidity) can be used in Wireless Sensor Networks (WSNs) to design and implement some of the important applications, such as environmental monitoring, healthcare systems, and military issues. WSNs consist of the low-power, large-scale, and low-cost sensor nodes. The nodes measure phenomena data (e.g., smoke density) to transmit the measured data to a center (e.g., sink or base station). Routing, security, and fault tolerance are some of the crucial challenges in sensor networks. This paper describes various physical features and key usages of the popular sensors. Furthermore, three WSNs applications in monitoring, healthcare, and military are considered subsequently. Since sensor localization and data mining are two important topics in WSNs, their categories and characteristics are addressed too. Evaluation results show the performance of sensor networks compared to radio-frequency identification (RFID) in terms of utilization, purpose, components, mobility, power supply, programmability, and deployment. Besides, some of the WSNs simulators are compared to each other in terms of computation time and memory usage.

Bookmarks Related papers MentionsView impact

Research paper thumbnail of Improving the Fault Tolerance of Wireless Sensor Networks by a Weighted Criteria Matrix

Wireless sensor networks (WSNs) are used in various fields such as official buildings, industrial... more Wireless sensor networks (WSNs) are used in various fields such as official buildings, industrial factories, and military applications to monitor environmental conditions (e.g., humidity and temperature). They can be affected by various faults because of some natural occasions (e.g., internal conflicts). Such faults cause the sensed data not to be transmitted to given stations and the network efficiency to be diminished, considerably. Hence, fault tolerance is one of the main challenges in WSNs that has been considered by researches in the last decades. The sensor nodes must discover faulty data and, thereby, transmit only faultless data to the stations (e.g., sink). This paper proposes a new fault tolerance approach for WSNs designed by a weighted criteria matrix, called FTW-WCM. Any cluster node uses this approach to select an appropriate successor node on transmission paths in order to enhance the fault tolerance of network. The FTW-WCM approach applies four parameters including remaining energy, hop count, selection count, and node rate via a weighted multi-criteria procedure. The neighbor node with the highest node rate is selected as an appropriate successor node from among a list of neighboring nodes. Simulation results show that the proposed approach surpasses some of the existing methods in terms of packet delivery ratio, network lifetime, and average remaining energy.


Research paper thumbnail of DFRTP: Dynamic 3D Fuzzy Routing Based on Traffic Probability in Wireless Sensor Networks

IET, Dec 1, 2016

Routing protocols are used in wireless sensor networks (WSNs) to transmit data to a centre (e.g. a base station). In this study, the authors propose a routing protocol called dynamic three-dimensional fuzzy routing based on traffic probability (DFRTP) to enhance network lifetime and increase packet delivery ratio. It uses a fuzzy-based procedure to transmit packets by hop-to-hop delivery from source nodes toward destination nodes. The proposed fuzzy system uses two input parameters, ‘distance’ and ‘number of neighbours’, and one output parameter denoted ‘traffic probability’. When a node has sensed data or a buffered data packet, it selects one of its neighbours, called the chosen node, from a list of candidate nodes (CNs). Candidates are the neighbours whose remaining energy is higher than the average remaining energy and whose free buffer is larger than the average available buffer size. The distance and number of neighbours of each CN are fed into the fuzzy system to calculate its traffic probability. The CN with the lowest traffic probability is selected as the chosen node to transmit packets toward the destination. Simulation results show that the proposed protocol surpasses the greedy and A* heuristic routing for wireless sensor networks in home automation, dynamic optimal progress routing, and A-star and fuzzy methods in terms of network lifetime and packet delivery ratio.
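The abstract specifies the candidate-filtering rule and the fuzzy inputs/output but not the rule base itself, so the sketch below is only a compact reading of that description: the membership ranges, the two toy rules, and the weighted-average defuzzification are simplified assumptions.

```python
# Hypothetical sketch of DFRTP-style candidate filtering and fuzzy scoring.

def falling(x, lo, hi):
    """1 at or below lo, 0 at or above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def rising(x, lo, hi):
    """0 at or below lo, 1 at or above hi, linear in between."""
    return 1.0 - falling(x, lo, hi)

def traffic_probability(distance, n_neighbours):
    # Fuzzify the two inputs (ranges chosen arbitrarily for illustration).
    near, far = falling(distance, 10, 40), rising(distance, 10, 40)
    few, many = falling(n_neighbours, 3, 10), rising(n_neighbours, 3, 10)
    # Toy rule base: far AND many -> high traffic; near AND few -> low traffic.
    high = min(far, many)
    low = min(near, few)
    # Weighted-average defuzzification onto [0, 1].
    return (0.9 * high + 0.1 * low) / (high + low + 1e-9)

def choose_next_hop(neighbours, avg_energy, avg_buffer):
    # Candidates: above-average remaining energy and above-average free buffer.
    candidates = [n for n in neighbours
                  if n["energy"] > avg_energy and n["free_buffer"] > avg_buffer]
    # Chosen node: the candidate with the lowest traffic probability.
    return min(candidates,
               key=lambda n: traffic_probability(n["distance"], n["n_neighbours"]),
               default=None)
```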


Research paper thumbnail of OPCA: Optimized Prioritized Congestion Avoidance and Control for Wireless Body Sensor Networks

Background: Nowadays, wireless sensor networks are commonly used in healthcare applications. Congestion is one of the main challenges in these networks and causes network efficiency to decrease considerably. Most existing congestion control methods are presented for general wireless sensor networks, and only a few address wireless body sensor networks. Moreover, they usually rely on a single controlling factor instead of considering a group of such factors. Objective: Congestion is one of the main problems in wireless body sensor networks and can be carefully avoided and controlled by a group of basic controlling parameters. Methods: The goal of this work is to propose an optimized prioritized congestion avoidance and control protocol for wireless body sensor networks. The proposed protocol uses hop-to-hop delivery to transmit data packets from sensor nodes to a base station. Data packets are prioritized by source nodes based on node priority and data volatility. They are also prioritized by intermediate nodes through three prioritized queues. Each node selects one of its neighbors by a dynamic congestion-aware routing strategy to transmit data packets to the base station. Increasing data reliability and decreasing the network traffic on transmission paths are among the main goals of the proposed protocol. Results: Simulation results show that the proposed protocol surpasses one of the existing congestion protocols in terms of system throughput, network lifetime, number of lost packets, traffic load, and number of delivered packets. Conclusion: The proposed protocol can avoid and control congestion in wireless body sensor networks.
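The following sketch shows one way source-side prioritization into three queues might look. The weighting of node priority and data volatility, the queue thresholds, and the queue names are assumptions for illustration, not the protocol's actual formulation.

```python
# Hypothetical source-side packet prioritisation with three prioritised queues.
import heapq
from itertools import count

_seq = count()                 # tie-breaker so heap comparisons never reach payloads
queues = [[], [], []]          # assumed: 0 = emergency, 1 = on-demand, 2 = normal

def classify(node_priority, data_volatility):
    """Map node priority and data volatility (assumed in [0, 1]) to a queue index."""
    score = 0.6 * node_priority + 0.4 * data_volatility   # assumed weighting
    if score >= 0.7:
        return 0
    if score >= 0.4:
        return 1
    return 2

def enqueue(payload, node_priority, data_volatility):
    q = classify(node_priority, data_volatility)
    # Negate the priority so higher-priority packets pop first within a queue.
    heapq.heappush(queues[q], (-node_priority, next(_seq), payload))

def dequeue():
    """Serve the highest-priority non-empty queue first."""
    for q in queues:
        if q:
            return heapq.heappop(q)[2]
    return None

enqueue(b"ecg-sample", node_priority=0.9, data_volatility=0.8)
enqueue(b"room-temp", node_priority=0.2, data_volatility=0.1)
print(dequeue())   # the emergency-queue packet is served first
```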


Research paper thumbnail of The Significant Concepts of Cloud Computing: Technology, Architecture, Applications, and Security

Everybody has a notion of what Cloud computing is. Cloud computing is a modern field that emerged from distributed computing and offers many powerful benefits to organizations. It makes it possible to rent a single server or thousands of servers through information technology (IT) services all over the world. The capabilities of Cloud computing can be seen, for example, in running a geophysical modeling application on highly powerful systems. Organizations can improve their efficiency and respond quickly and reliably to the needs of their customers by contracting for various Cloud services such as applications, software, data storage, and processing capabilities. There are also risks in Cloud-based servers, such as maintaining system security, preserving the privacy of information, and ensuring the wise expenditure of IT resources. In 2004, Cloud computing rose to public awareness. Companies such as Amazon, Google, and Facebook use Cloud technology as a model for running their own infrastructures. As a result, dedicated services are always on, running systems can be upgraded in place, and capacity can be scaled considerably in accordance with demand. Cloud computing involves a range of underlying technologies and configuration options rather than being a single system. Organizations should consider the strengths and weaknesses of Cloud technology, service models, and deployment methods by evaluating services against their requirements.
A Cloud system can store and secure extremely large amounts of data that are accessible only to authorized users and applications. It is supported by a Cloud service provider that installs a powerful platform on the Cloud systems. The platform provides the required capabilities, such as the operating system, Apache, a MySQL database, Perl, Python, and PHP, with the ability to scale automatically in response to changing workloads. Cloud computing can host applications such as sales automation, email, and forum management on the Internet. The Internet can protect data while delivering a consumer's service, and Cloud storage can be used to hold applications, personal data, and business data. With the aid of the Internet, a Cloud system can use a small number of Web services to integrate maps, photos, and GPS information.
This book discusses the main concepts of Cloud computing. It is an appropriate tutorial for both general readers and professionals to acquire the required information about Cloud technology. Chapter 1 introduces the fundamental characteristics of Cloud systems, such as Web services, Grid computing, and hardware virtualization. Chapter 2 carefully describes the architectures of Cloud computing, including the deployment models and service models defined for Cloud-based servers. Chapter 3 explains various applications of Cloud computing, such as file storage, Cloud databases, and email. Chapter 4 presents popular consumer applications built on Cloud-based systems, such as Evernote, iCloud, and Spotify. Chapter 5 discusses different uses of Cloud servers, such as Cloud monitoring, healthcare, and banking. Chapter 6 presents the security issues of Cloud computing, such as privacy, reliability, and compliance. Chapter 7 points out well-known simulation tools designed for Cloud-based work, such as CloudSim, the Xen hypervisor, and UEC. Finally, Chapter 8 introduces some well-known companies established for Cloud-based services, such as CloudLock, CloudMunch, and CloudPhysics.
I hope that this book helps both general readers and professional researchers to design and implement various applications with Cloud technology. Undoubtedly, this book, like any human product, is not free of shortcomings. Hence, the next edition can be made better than the current one based on the valuable suggestions of its readers. I hope this book assists computer scientists in designing complex systems and solving some of the existing problems. Finally, I am grateful to everyone who helped me publish this book.


Research paper thumbnail of SFRRP: 3D Fuzzy Routing for Wireless Sensor Networks, Advances in Control and Mechatronic Systems, Volume: I

Wireless sensor networks (WSNs) are composed of low-cost, low-energy, large-scale wireless sensor nodes. They are used to design and implement various applications such as environmental monitoring, transportation systems, and healthcare organizations. These networks face several challenges, including fault tolerance, routing mechanisms, and collaborative signal processing. Routing is one of the main topics in WSNs and concerns data transmission strategies. A sensor node should transmit its data packets to a destination node (e.g., a sink or a base station) via appropriate transmission paths. Most existing routing protocols use global information about the network, which increases network traffic, decreases packet delivery ratio, and increases packet delivery time. The efficiency of WSNs can be improved by knowledge-based (e.g., fuzzy logic) routing protocols. This chapter discusses a 3D fuzzy-based routing protocol called SFRRP (Static three-dimensional Fuzzy Routing based on the Receiving Probability). This protocol transmits data packets from sensor nodes to a base station using a fuzzy controller built on humanistic experience, and it improves network performance under various environmental conditions. Data packets are transmitted from sensor nodes to the base station by hop-to-hop delivery. A sensor with newly sensed data or a received data packet selects one of its neighbor nodes (called the selected node) using the fuzzy controller to forward the data packet toward the base station. The parameters “distance” and “number of neighbors” are the inputs and the parameter “receiving probability” is the output of the controller. SFRRP uses only neighborhood information of sensor nodes to select appropriate transmission paths and does not produce any redundant packets in the route discovery phase. Simulation results show that SFRRP surpasses some of the existing routing protocols in terms of packet delivery ratio, packet delivery time, and network lifetime.
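The chapter abstract names the controller's two inputs and its output but does not spell out the rule base, so the zero-order Sugeno-style sketch below is only one plausible reading: the membership ranges, the four rule consequents, and the assumption that the neighbor with the highest receiving probability is selected are all illustrative.

```python
# Hypothetical two-input fuzzy controller for a receiving-probability output.

def trap(x, a, b, c, d):
    """Trapezoidal membership function with plateau [b, c]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def receiving_probability(distance, n_neighbours):
    dist = {"near": trap(distance, 0, 0, 10, 25),
            "far":  trap(distance, 15, 30, 50, 50)}
    nbr  = {"few":  trap(n_neighbours, 0, 0, 3, 6),
            "many": trap(n_neighbours, 4, 8, 20, 20)}
    # Zero-order Sugeno rules: (distance term, neighbour term) -> crisp output.
    rules = {("near", "many"): 0.9, ("near", "few"): 0.7,
             ("far", "many"): 0.4, ("far", "few"): 0.2}
    num = den = 0.0
    for (d_term, n_term), out in rules.items():
        w = min(dist[d_term], nbr[n_term])   # AND = min
        num += w * out
        den += w
    return num / den if den else 0.0

def select_node(neighbours):
    # Assumed: the neighbour with the highest receiving probability is selected.
    return max(neighbours,
               key=lambda n: receiving_probability(n["distance"], n["n_neighbours"]),
               default=None)
```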


Research paper thumbnail of A Learning Analytics Approach for Job Scheduling on Cloud Servers

Learning analytics: Fundaments, applications, and trends, 2017


Research paper thumbnail of Security Issues and Privacy Challenges of NoSQL Databases

NoSQL: Database for Storage and Retrieval of Data in Cloud, 2017


Research paper thumbnail of Applications of Virtualization Technology in Grid Systems and Cloud Servers

Design and Use of Virtualization Technology in Cloud Computing, Aug 11, 2017

Grid systems and cloud servers are two distributed networks that deliver computing resources (e.g., file storage) to users' services via a large and often global network of computers. Virtualization technology can enhance the efficiency of these networks by dedicating the available resources to multiple execution environments. This chapter describes applications of virtualization technology in grid systems and cloud servers and presents different aspects of virtualized networks for both systematic and teaching purposes. Virtual machine abstraction virtualizes high-performance computing environments to increase service quality. In addition, the grid virtualization engine and virtual clusters are used in grid systems to accomplish users' services efficiently in virtualized environments. The chapter also explains various virtualization technologies in cloud servers. The evaluation results analyze the performance of high-performance computing and virtualized grid systems in terms of bandwidth, latency, number of nodes, and throughput.


Research paper thumbnail of Big Data Analytics for Connected Intelligence with the Internet of Things

Big Data Analytics: Tools and Technology for Effective Planning, Oct 26, 2017


Research paper thumbnail of Biological Big Data Analytics

A Deep Dive into NoSQL Databases: The Use Cases and Applications, May 2, 2018

Big data analytics uses efficient analytic techniques to discover hidden patterns, correlations, and other insights from big data. It brings significant cost advantages, enhances the performance of decision making, and creates new products to meet customers' needs. It has various applications in plant science, bioinformatics, healthcare, and other fields, and it can be improved with techniques such as machine learning, intelligent tools, and network analysis. This chapter describes applications of big data analytics in biological systems. These applications can be conducted in systems biology by using cloud-based databases (e.g., NoSQL). The chapter explains how big data technology in the plant-science community is improved with machine learning. Furthermore, it presents various tools for applying big data analytics in bioinformatics systems. Medical signals and genomics are two major fields in healthcare environments that would benefit from this type of analytical method. Finally, the chapter discusses several use cases of healthcare information systems.


Research paper thumbnail of The Fundamentals and Empirical Design of a Smart Fire Detection System

Cambridge Scholars Publishing, Nov 9, 2020

This book introduces a smart fire detection system designed using a wireless sensor network and fuzzy methods. The system predicts, controls, and provides alerts for various events based on intelligent techniques. Routing protocols are built on intelligent procedures and are classified into two main groups: static and dynamic. Static protocols are used to transmit data packets between stationary nodes, while dynamic protocols are applied to transmit messages between rescue teams and fire departments. Active and passive states are specified for sensor nodes to balance the remaining energy of the nodes and prolong the network lifetime. The probability of explosion, fire, burn, and suffocation is determined by fuzzy procedures. Affected people can be guided to the exits of event places by an intelligent method. In addition, the members and dispatch routes of rescue and support teams are selected using intelligent methods to reduce financial losses and human casualties. The book will be useful for professors, researchers, and engineers in computer and electrical engineering.


Research paper thumbnail of Navigation Control of Agent Automobiles Using Wireless Sensor Network

Islamic Azad University, Tabriz Branch, 2013

Wireless sensor networks are an interesting research area that has been extensively discussed because of their importance in applications such as environmental monitoring, healthcare, traffic control, and military systems. A sensor network consists of a large number of sensor nodes that are widely distributed in the environment to collect phenomenon data. In this thesis, a smart fire system is proposed to predict, control, and issue alerts for fire occurrences using multiple fuzzy-based methods. This system helps reduce the energy consumed in transmitting messages between wireless nodes, lowers network traffic, and consequently prolongs the network lifetime. The proposed routing protocols are generally categorized into two groups: static and dynamic. The static protocols are used to transmit data packets between the stationary nodes placed in different locations, while the dynamic protocols direct, control, and transmit messages between vehicles and rescue team members. In addition, several fuzzy systems are offered to detect explosion possibility, determine fire probability, measure the intensity and volume of the fire, estimate fire progress, detect burn possibility, and determine suffocation probability. The system also determines the active and passive nodes and detects failed nodes throughout the network. Rescue teams are dispatched to events along the best path between the fire department and the event place, which is selected by another fuzzy-based procedure; this allows the rescue and support teams to reach events in a short time. Simulation and evaluation results show that the proposed fire system performs well compared with most existing fire systems.
