Mohsen Sharifi | Iran University of Science and Technology (IUST)

Papers by Mohsen Sharifi

A Survey on Task Scheduling Algorithms in Cloud Computing for Fast Big Data Processing

International Journal of Information and Communication Technology Research

The recent explosion of data of all kinds (persistent and short-lived) has imposed processing speed constraints on big data processing systems (BDPSs). One such constraint on running these systems in Cloud computing environments is to utilize as many parallel processors as required to process data fast. Consequently, the nodes in a Cloud environment encounter highly crowded clusters of computational units. To properly cater for a high degree of parallelism, efficient task and resource allocation schemes are required. These schemes must distribute tasks over the nodes so as to yield the highest possible resource utilization. Such scheduling has proved even more complex in the case of processing short-lived data. Task scheduling is vital not only for handling big data but also for providing fast processing of data that satisfies modern data processing constraints. To this end, this paper reviews the most recently published (2020-2021) task scheduling schemes and their deployed algorithms from the fast data processing perspective.

Scalable Complex Event Processing Using Rule Distribution

Azerbaijan Journal of High Performance Computing, 2018

Complex event processing (CEP) systems are currently widely used in large-scale enterprises to process high and dynamically changing rates of input events against large numbers of complex rules. Given the hardware limitations of vertically scaled CEP solutions, horizontal scalability has become an essential requirement for modern CEP systems. In this paper, we propose an adaptive load-balancing technique via rule distribution (called ARD) for a cluster of CEP engines that provides horizontal scalability for CEP systems. Our experiments show that the proposed technique provides higher scalability and yields higher throughput than two previously proposed non-adaptive load-balancing techniques, namely VISIRI and SCTXPF, when the system faces a variable workload. In addition, ARD keeps the system balanced more often.
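
The abstract does not detail ARD's balancing rule, but the core idea of distributing rules across a cluster of CEP engines can be illustrated with a greedy load-aware placement. The sketch below is a reconstruction in Python; the per-rule cost estimates and the heaviest-first heuristic are assumptions, not the paper's algorithm.

```python
import heapq

def distribute_rules(rule_costs, num_engines):
    """Greedy rule distribution: assign each rule (heaviest first)
    to the CEP engine with the least accumulated load."""
    engines = [(0.0, e) for e in range(num_engines)]  # (load, engine_id)
    heapq.heapify(engines)
    placement = {}
    for rule, cost in sorted(rule_costs.items(), key=lambda kv: -kv[1]):
        load, engine = heapq.heappop(engines)
        placement[rule] = engine
        heapq.heappush(engines, (load + cost, engine))
    return placement

# Example: costs estimated from observed event-matching rates (hypothetical).
print(distribute_rules({"r1": 5.0, "r2": 3.0, "r3": 2.0, "r4": 2.0}, 2))
```

An adaptive variant would periodically re-run the placement with fresh cost estimates and migrate only the rules whose assignment changed, which is one plausible way to keep the cluster balanced under a variable workload.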

Avoiding Register Overflow in the Bakery Algorithm

49th International Conference on Parallel Processing - ICPP : Workshops, 2020

Computer systems are designed to make resources available to users, and users may be interested in some resources more than others; therefore, a coordination scheme is required to satisfy the users' requirements. This scheme may implement certain policies such as "never allocate more than X units of resource Z". One policy of particular interest is that users must not access a single resource at the same time, which is called the problem of mutual exclusion. Resource management concerns the coordination and collaboration of users, and it is usually based on making a decision; in the case of mutual exclusion, that decision is about granting access to a resource. Therefore, mutual exclusion is useful for supporting resource access management. The first true solution to the mutual exclusion problem is the Bakery algorithm, which does not rely on any lower-level mutual exclusion. We examine the problem of register overflow in real-world implementations of the Bakery algorithm and present a variant algorithm named Bakery++ that prevents overflows from ever happening. Bakery++ avoids overflows without allowing a process to write into other processes' memory, without using additional memory or complex arithmetic, and without redefining the operations and functions used in Bakery. Bakery++ is almost as simple as Bakery, and it is straightforward to implement in real systems. With Bakery++, there is no reason to keep implementing Bakery in real computers, because Bakery++ eliminates the possibility of overflows and hence is more practical than Bakery. Previous approaches to circumventing register overflow introduced new variables or redefined the operations or functions used in the original Bakery algorithm, whereas Bakery++ avoids overflows using simple conditional statements. The result is a new mutual exclusion algorithm that is guaranteed never to allow an overflow and that is simple, correct, and easy to implement. Bakery++ has the same temporal and spatial complexities as the original Bakery. We have specified Bakery++ in PlusCal and used the TLC model checker to assert that Bakery++ maintains the mutual exclusion property and never allows an overflow. CCS Concepts: Software and its engineering → Software organization and properties → Contextual software domains → Operating systems → Process management → Mutual exclusion.
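
For context, Lamport's Bakery protocol itself is short enough to sketch. The version below runs under Python's `threading` and shows the classic doorway plus one illustrative conditional guard that retries when the next ticket would exceed a register bound; the abstract confirms Bakery++ uses simple conditional statements, but the exact guard shown here is an assumption, not the published Bakery++ rule.

```python
import threading

N = 4                       # number of competing threads
MAX_TICKET = 2**15 - 1      # assumed bounded-register limit (illustrative)
choosing = [False] * N
number = [0] * N

def lock(i):
    while True:
        # Doorway: draw a ticket larger than every outstanding ticket.
        choosing[i] = True
        ticket = 1 + max(number)
        if ticket <= MAX_TICKET:        # illustrative overflow guard
            number[i] = ticket
            choosing[i] = False
            break
        choosing[i] = False             # guard tripped: retry after tickets drain
    for j in range(N):
        while choosing[j]:              # wait until thread j finishes choosing
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                        # lower (ticket, id) pair goes first

def unlock(i):
    number[i] = 0                       # return the ticket
```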

Process Patterns for Service Oriented Development

ArXiv, 2020

Software systems development nowadays has moved towards dynamic composition of services that run on distributed infrastructures, aligned with continuous changes in system requirements. Consequently, software developers need to tailor project-specific methodologies to fit their methodology requirements. Process patterns present a suitable solution by providing reusable method chunks of software development methodologies for constructing methodologies that fit specific requirements. In this paper, we propose a set of high-level service-oriented process patterns that can be used for constructing and enhancing situational service-oriented methodologies. We show how these patterns are used to construct a specific service-oriented methodology for the development of a sample system. Keywords: Service-Oriented Software Development Methodologies, Process Patterns, Process Meta-Model, Situational Method Engineering.

Process Framework with service-oriented method fragments

Service orientation is a promising paradigm that enables the engineering of large-scale distributed software systems using rigorous software development processes. The existing problem is that every service-oriented software development project often requires a customized development process that provides specific service-oriented software engineering tasks in support of requirements unique to that project. To resolve this problem and allow situational method engineering, we have defined a set of method fragments in support of the engineering of project-specific service-oriented software development processes. We have derived the proposed method fragments from the recurring features of 11 prominent service-oriented software development methodologies using a systematic mining approach. Communicated by Prof. Brian Henderson-Sellers. Preliminary contributions of the authors on the subject matter of this paper were presented at conferences, including: (1) Fifth IEEE Internati...

Yawn

Proceedings of the 10th ACM SIGOPS Asia-Pacific Workshop on Systems - APSys '19, 2019

Idle-state governors partially turn off idle CPUs, allowing them to go to states known as idle-states to save power. Exiting from these idle-states, however, imposes delays on the execution of tasks and aggravates tail latency. Menu, the default idle-state governor of Linux, predicts periods of idleness based on historical data and disk I/O information to choose proper idle-states. Our experiments show that Menu can save power, but at the cost of sacrificing tail latency, making Menu an inappropriate governor for data centers that host latency-sensitive applications. In this paper, we present the initial design of Yawn, an idle-state governor that aims to mitigate tail latency without sacrificing power. Yawn leverages online machine learning techniques to predict idle periods based on information gathered from all parameters affecting idleness, including network I/O, resulting in more accurate predictions, which in turn lead to reduced response times. Preliminary benchmarking results demonstrate that Yawn reduces the 99th percentile latency of Memcached requests by up to 40%.
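
Yawn's model is not spelled out in the abstract; as a flavor of "online learning over idleness features", here is a minimal sketch: an SGD-trained linear predictor of the next idle period, mapped to an idle state by a break-even heuristic. The feature choice, state table, and thresholds are all illustrative assumptions, not Yawn's actual design.

```python
class OnlineIdlePredictor:
    """Minimal online linear regressor (SGD) over idleness features,
    e.g., recent idle durations and network/disk I/O rates."""
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return self.b + sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, observed_idle_us):
        # One gradient step on squared error against the observed idle period.
        err = self.predict(x) - observed_idle_us
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Hypothetical idle-state table: (exit latency in microseconds, name).
IDLE_STATES = [(0, "C0"), (50, "C1"), (500, "C3"), (5000, "C6")]

def choose_state(predicted_idle_us):
    """Pick the deepest state whose wake-up cost is amortized by the
    predicted idle period (simple break-even heuristic)."""
    best = "C0"
    for exit_cost, name in IDLE_STATES:
        if predicted_idle_us > 2 * exit_cost:
            best = name
    return best
```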

ETAS: predictive scheduling of functions on worker nodes of Apache OpenWhisk platform

The Journal of Supercomputing, 2021

Fast execution of functions is an inevitable challenge in the serverless computing landscape. Inefficient dispatching, fluctuations in invocation rates, burstiness of workloads, and the wide range of execution times of serverless functions result in load imbalances and overloading of worker nodes of serverless platforms, which impose high latency on invocations. This paper concentrates on function scheduling within worker nodes of the Apache OpenWhisk serverless platform and presents ETAS, a predictive scheduling scheme that reduces the response times of invocations while increasing workers' throughput and resource utilization. ETAS schedules functions using their execution times (estimated from their previous execution history), their arrival times, and the status of containers. We have implemented ETAS in Apache OpenWhisk and show that, compared to the OpenWhisk worker scheduler and several queue scheduling schemes, ETAS reduces the average waiting time by 30% and increases throughput by 40%.
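
The abstract implies scheduling by estimated execution times; a minimal way to realize that on one worker is a priority queue keyed by a per-function running estimate (an EWMA here). The sketch below is an illustrative reconstruction, not ETAS's or OpenWhisk's actual scheduler; the EWMA smoothing and the unknown-functions-last rule are assumptions.

```python
import heapq, itertools

class EtasLikeScheduler:
    """Sketch of predictive function scheduling on one worker node:
    pending invocations are ordered by an EWMA estimate of execution
    time, so short functions are not stuck behind long ones."""
    def __init__(self, alpha=0.3):
        self.est = {}            # function name -> EWMA runtime estimate
        self.alpha = alpha
        self.queue = []          # (estimate, seq, fn, invocation)
        self.seq = itertools.count()   # FIFO tie-breaker among equal estimates

    def submit(self, fn, invocation):
        est = self.est.get(fn, float("inf"))   # unknown functions go last
        heapq.heappush(self.queue, (est, next(self.seq), fn, invocation))

    def next_invocation(self):
        return heapq.heappop(self.queue) if self.queue else None

    def record_runtime(self, fn, runtime):
        # Update the estimate from the observed execution history.
        prev = self.est.get(fn, runtime)
        self.est[fn] = self.alpha * runtime + (1 - self.alpha) * prev
```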

CTS: An operating system CPU scheduler to mitigate tail latency for latency-sensitive multi-threaded applications

Journal of Parallel and Distributed Computing, 2019

Highlights:
• It has been proven that FCFS scheduling of threads leads to lower tail latency.
• Experiments show that CFS policies lead to LCFS scheduling, aggravating tail latency.
• CTS policies ensure FCFS thread scheduling, which yields lower tail latency.
• CTS enforces our policies while maintaining the key features of the Linux scheduler.
• Experimental results show that CTS significantly outperforms the Linux scheduler.
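
The first two highlights (FCFS lowers tail latency, LCFS aggravates it) can be checked with a toy single-server queue simulation. The sketch below, under assumed M/M/1 arrivals and service at 90% utilization, compares the 99th percentile sojourn time of FCFS against non-preemptive LCFS; it illustrates the claim, not the paper's experiments.

```python
import random

def p99_sojourn(discipline, n=100_000, rho=0.9, seed=1):
    """Toy M/M/1 queue (service rate 1, arrival rate rho): returns the
    99th-percentile sojourn time under FCFS or non-preemptive LCFS."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(rho)
        arrivals.append(t)
    queue, server_free, sojourns, i = [], 0.0, [], 0
    while i < n or queue:
        if queue and (i == n or server_free <= arrivals[i]):
            # Server frees up before the next arrival: pick a queued job.
            job = queue.pop(0) if discipline == "FCFS" else queue.pop()
            server_free += rng.expovariate(1.0)
            sojourns.append(server_free - job)
        else:
            a = arrivals[i]; i += 1
            if not queue and server_free <= a:
                server_free = a + rng.expovariate(1.0)   # served immediately
                sojourns.append(server_free - a)
            else:
                queue.append(a)                          # wait in the queue
    sojourns.sort()
    return sojourns[int(0.99 * len(sojourns))]

print("FCFS p99:", round(p99_sojourn("FCFS"), 1))   # markedly lower tail
print("LCFS p99:", round(p99_sojourn("LCFS"), 1))
```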

Sequence Similarity Parallelization over Heterogeneous Computer Clusters Using Data Parallel Programming Model

Scalable Computing: Practice and Experience, 2017

Sequence similarity, as a special case of data-intensive applications, is one of the applications most in need of parallelization. Clustered commodity computers, as a cost-effective platform for distributed and parallel processing, can be leveraged to parallelize sequence similarity. However, manually designing and developing parallel programs on commodity computers is a time-consuming, complex, and error-prone process. In this paper, we present a sequence similarity parallelization technique using Apache Storm as a stream processing framework with a data parallel programming model. Storm automatically parallelizes computations via a special user-defined topology that is represented as a directed acyclic graph. The proposed technique collects streams of data from a disk and sends them sequence by sequence to clustered computers for parallel processing. We also present a dispatching policy for balancing the cluster workload and managing cluster heterogeneity to achieve more than 99 percent parallelism. An alignment-free method, known as n-gram modeling, is used to calculate similarities between the sequences. To show the cost-performance superiority of our method on clustered commodity computers over serial processing on powerful computers, we use the UniProtKB/SwissProt dataset to evaluate the performance of sequence similarity as an interesting large-scale Bioinformatics application.
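
As an illustration of the alignment-free n-gram idea, one common formulation compares the n-gram count profiles of two sequences with cosine similarity. The paper's exact similarity formula is not given in the abstract, so this is an assumed variant.

```python
from collections import Counter
from math import sqrt

def ngram_profile(seq, n=3):
    """Count overlapping n-grams of a sequence (alignment-free)."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def ngram_similarity(a, b, n=3):
    """Cosine similarity between n-gram profiles of two sequences."""
    pa, pb = ngram_profile(a, n), ngram_profile(b, n)
    dot = sum(pa[g] * pb[g] for g in pa.keys() & pb.keys())
    norm = sqrt(sum(v * v for v in pa.values())) * \
           sqrt(sum(v * v for v in pb.values()))
    return dot / norm if norm else 0.0

# Example on two similar protein fragments (hypothetical data).
print(ngram_similarity("MKTAYIAKQR", "MKTAYIAKQL"))
```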

ECCO Mnemonic Authentication—Two-Factor Authentication Method with Ease-of-Use

International Journal of Computer Network and Information Security, 2014

Not very long ago, organizations used to identify their customers by means of one-factor authentication mechanisms. In today's world, however, these mechanisms cannot overcome the new security threats, at least in high-risk situations. Hence, identity providers have introduced varieties of two-factor authentication mechanisms. It may be argued that users experience difficulties at the time of authentication in systems that use two-factor authentication mechanisms, for example because they may be forced to carry extra devices to be authenticated more accurately. This, however, is the tradeoff between ease-of-use and having a secure system, and it should be decided by the users, not the security providers. In this paper we present a new two-factor authentication mechanism that secures systems and at the same time is easier to use. We have used mnemonic features and the cache concept to achieve ease-of-use and security, respectively. We have also tested our method with almost 6500 users in the real world using the Mechanical Turk Developer Sandbox.

Simulative study of target tracking accuracy based on time synchronization error in wireless sensor networks

2008 IEEE Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2008

Target tracking is one of the popular applications of wireless sensor networks (WSNs). Randomly distributed sensor nodes (motes) gather spatio-temporal information about targets in the environment and send it to a sink node for further processing. Motes, like personal computers, use low-cost crystal-based clocks; to maintain synchronization accuracy at acceptable levels, they must be synchronized quite often. Time synchronization tries to control time accuracy in a WSN, but its accuracy is still limited. Motes partly process their sensed data using local fusion before sending them to the sink. Target tracking has two major error sources: first, the system of equations often yields two different solutions, either of which could be the target's position; second, low time accuracy causes large errors in the results. Choosing proper fusion methods and an appropriate sensing coverage degree helps increase the accuracy of target tracking results.
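
The two-solution ambiguity mentioned above arises when a target position is computed from two range measurements: the two circles generally intersect at two points. A minimal sketch of that geometry (standard circle-circle intersection, not taken from the paper):

```python
from math import sqrt

def circle_intersections(p1, r1, p2, r2):
    """Two range measurements from sensors at p1 and p2 generally
    yield two candidate target positions -- the ambiguity above."""
    (x1, y1), (x2, y2) = p1, p2
    d = sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                              # circles do not intersect
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # distance to the chord midpoint
    h = sqrt(max(r1**2 - a**2, 0.0))           # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

# Two answers: a third sensor (higher coverage degree) disambiguates.
print(circle_intersections((0, 0), 5, (6, 0), 5))   # [(3.0, -4.0), (3.0, 4.0)]
```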

Using XCS as a Prediction Engine in Data Compression

Proceedings on Intelligent Systems and Knowledge Engineering (ISKE2007), 2007

XCS has been used in several fields as a prediction engine before; its population-based nature enables XCS to generate sets of properly pruned classification rules. Further, unlike other population-based algorithms, it learns using rewards. These properties encouraged us to use it as a prediction engine for lossless data compression. In the compression context, XCS can be used to find the hidden relations in files of the same type. We therefore used it as a preprocessor before an entropy encoder, to remove the existing correlations between a file's bits or symbols. Removing correlations enables entropy encoders to achieve higher compression rates. The results support this conclusion.

A low-overhead structure maintenance approach for building robust structured P2P systems

6th International Symposium on Telecommunications (IST), 2012

Structured peer-to-peer (P2P) systems have been recognized as an efficient approach to solving the resource discovery problem in large-scale dynamic distributed systems. The efficiency of structured P2P resource discovery approaches is attributed to their structured property. However, system dynamism caused by changes in system membership, i.e., nodes that join or leave the system or simply fail, perturbs the structure of the system and endangers the expected correctness and efficiency of the resource discovery mechanism. In this paper we propose an event-oriented, low-overhead approach to maintaining the structure of such systems in the face of node perturbations: upon detection of a node membership change, only those parts of the system state that are affected by the perturbation are updated. This also improves system robustness, because the system structure is kept up-to-date upon each perturbation. The proposed approach is general and can be applied to any structured P2P system; we show how it can be applied to the Chord system to demonstrate its applicability. We show experimentally that our proposed approach has less communication overhead than Chord and that it keeps the system up-to-date and consistent throughout its lifetime, rather than only periodically as in Chord.
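
A toy illustration of event-oriented maintenance on a ring overlay: membership events update only the pointers they affect, instead of a periodic stabilization scan. Real Chord also maintains finger tables and successor lists; this sketch shows only the idea, with hypothetical `Node`, `join`, and `on_leave` helpers.

```python
class Node:
    """Ring node with event-driven repair of successor/predecessor links."""
    def __init__(self, nid):
        self.id = nid
        self.successor = self
        self.predecessor = self

def join(ring_node, new):
    # Membership event: splice `new` between ring_node and its successor.
    succ = ring_node.successor
    new.predecessor, new.successor = ring_node, succ
    ring_node.successor = new
    succ.predecessor = new

def on_leave(node):
    # Membership event: only the two affected pointers change.
    node.predecessor.successor = node.successor
    node.successor.predecessor = node.predecessor

# Usage: build a 3-node ring, then remove one node via a single event.
a, b, c = Node(10), Node(20), Node(30)
join(a, b); join(b, c)
on_leave(b)
assert a.successor is c and c.predecessor is a
```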

Task allocation to actors in wireless sensor actor networks: an energy and time aware technique

Procedia Computer Science, 2011

Task allocation is a critical issue in the proper engineering of cooperative applications in embedded systems with latency and energy constraints, such as wireless sensor and actor networks (WSANs). Existing task allocation algorithms are mostly concerned with energy savings and ignore time constraints, and thus increase the makespan of tasks in the network as well as the probability of network malfunction. In this paper we take into account both energy awareness and the reduction of actor tasks' times to completion in WSANs, and propose a two-phase task allocation technique based on queuing theory. In the first phase, tasks are assigned equally to actors just to measure the capability of each actor to perform the assigned tasks. Tasks are then allocated to actors according to their measured capabilities in such a way as to reduce the total completion times of all tasks in the network. The results of simulations on typical scenarios show a 45% improvement in the makespan of tasks compared to the well-known opportunistic load balancing (OLB) task allocation algorithm that is generally used in distributed systems. It is shown that our algorithms provide better tradeoffs between load balancing and the completion times of all tasks in a WSAN compared to OLB.
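
A minimal sketch of the two-phase idea: probe actors with equal slices of tasks to measure their service rates, then split the remaining tasks in proportion to the measured rates. The probe share, the proportional rule, and the `run_all` actor API are assumptions for illustration, not the paper's queuing-theoretic formulation.

```python
def two_phase_allocate(tasks, actors, probe_share=0.1):
    """Phase 1: equal probe assignment to measure each actor's rate.
    Phase 2: allocate the rest proportionally to the measured rates."""
    k = max(1, int(len(tasks) * probe_share) // len(actors))
    rates, i = {}, 0
    for a in actors:
        probe = tasks[i:i + k]; i += k
        elapsed = a.run_all(probe)          # hypothetical actor API: run
        rates[a] = len(probe) / elapsed     # tasks, return elapsed seconds
    remaining = tasks[i:]
    total = sum(rates.values())
    plan, j = {}, 0
    for a in actors:
        share = round(len(remaining) * rates[a] / total)
        plan[a] = remaining[j:j + share]; j += share
    plan[actors[-1]] += remaining[j:]       # rounding leftovers to last actor
    return plan
```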

Formal Specification and Verification of Concurrent Agents in Event-B

2013 19th International Conference on Control Systems and Computer Science, 2013

This paper presents a formal model and proof of a multi-agent system for requesting services, in which agents perform operations concurrently. The concurrent operations made by agents are specified and validated using the formal specification method Event-B.

A mathematical approach to reduce the mean number of waiting tasks in wireless sensor actor networks

A new fractal-based approach for 3D visualization of mountains in VRML standard

Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia - GRAPHITE '04, 2004

Several factors currently limit the size of Virtual Reality Modeling Language (VRML) models that can be effectively visualized over the Web. The main factors include network bandwidth limitations and inefficient encoding schemes for geometry and its associated properties. The delays caused by these factors reduce the attractiveness of VRML for a large range of virtual reality models, CAD data, and scientific visualizations. To solve this problem, we have tried to decrease the size of the data by deploying fractal geometry in the VRML standard. A novel approach is proposed for generating a "fractal mountain" using a random midpoint-displacement method in the VRML standard. Our VRML 2.0 implementation, which is based on two newly defined nodes, TriangleGrid and FractMountain, and uses the PROTO mechanism and Java in Script nodes for the logic, is presented as well. It is shown that our approach is more flexible and memory-efficient than other approaches for computing mountain structures. Besides, mountains visualized by this approach look much more natural than those generated by other approaches.
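
The random midpoint-displacement method named above is easy to illustrate in one dimension: repeatedly insert midpoints perturbed by a noise term whose amplitude shrinks at each level (the paper applies the same idea on its TriangleGrid node to grow a mountain surface). Parameters here are illustrative.

```python
import random

def midpoint_displacement(n_levels, roughness=0.5, seed=42):
    """1-D random midpoint displacement producing a fractal profile
    of 2**n_levels + 1 height samples."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]
    scale = 1.0
    for _ in range(n_levels):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            nxt += [a, mid]
        nxt.append(heights[-1])
        heights = nxt
        scale *= roughness     # shrink the displacement each level
    return heights

profile = midpoint_displacement(8)
print(len(profile), max(profile))   # 257 samples, jagged mountain ridge
```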

Enhancing the OPEN Process Framework with service-oriented method fragments

Software & Systems Modeling, 2014

Service orientation is a promising paradigm that enables the engineering of large-scale distributed software systems using rigorous software development processes. The existing problem is that every service-oriented software development project often requires a customized development process that provides specific service-oriented software engineering tasks in support of requirements unique to that project. To resolve this problem and allow situational method engineering, we have defined a set of method fragments in support of the engineering of project-specific service-oriented software development processes. We have derived the proposed method fragments from the recurring features of 11 prominent service-oriented software development methodologies using a systematic mining approach. Communicated by Prof. Brian Henderson-Sellers.

Coverage rate calculation in wireless sensor networks

Computing, 2012

The deployment of sensors without enough coverage can result in unreliable outputs in wireless sensor networks (WSNs). Sensing coverage is thus one of the most important quality-of-service factors in WSNs. A useful metric for quantifying coverage reliability is the coverage rate, i.e., the area covered by sensor nodes in a region of interest. The network sink can be informed about the locations of all nodes and calculate the coverage rate centrally. However, this approach puts a huge load on the network nodes, which have to send their location information to the sink. Thus, a distributed approach is required to calculate the coverage rate. This paper is among the very first to provide a localized approach to calculating the coverage rate. We provide two coverage rate calculation (CRC) protocols, namely distributed exact coverage rate calculation (DECRC) and distributed probabilistic coverage rate calculation (DPCRC). DECRC calculates the coverage rate precisely using the idealized disk graph model; precise calculation of the coverage rate is a unique property of DECRC compared to similar works that have used the disk graph model. In contrast, DPCRC uses a more realistic probabilistic coverage model to determine an approximate coverage rate. DPCRC is in fact an extended version of DECRC that uses a set of localized techniques to make it a low-cost protocol. Simulation results show significant overall performance improvement of the CRC protocols compared to related works.
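
For intuition about the metric itself, the coverage rate under the idealized disk model can be estimated centrally by Monte Carlo sampling: the fraction of random points in the region that fall within the sensing radius of some node. This reference sketch is not DECRC/DPCRC, which compute the rate exactly and in a localized, distributed way.

```python
import random

def coverage_rate(sensors, radius, region=(0, 0, 100, 100),
                  samples=100_000, seed=7):
    """Monte Carlo estimate of the coverage rate under the disk model:
    fraction of the region within `radius` of at least one sensor."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = region
    r2 = radius * radius
    hits = 0
    for _ in range(samples):
        px, py = rng.uniform(x0, x1), rng.uniform(y0, y1)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= r2 for sx, sy in sensors):
            hits += 1
    return hits / samples

# Example: three sensors with a 15 m sensing radius (hypothetical layout).
print(coverage_rate([(20, 20), (50, 60), (80, 30)], radius=15))
```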

SLDRM: A self local license digital rights management system

Digital Rights Management (DRM) systems try to protect copyrights and digital content by limiting users' access to content. They provide facilities for electronic publishers to distribute their valuable content while preventing illegal distribution and usage. Existing DRM ...
