Fatma Omara - Academia.edu

IJCSIS Volumes by Fatma Omara

Research paper thumbnail of Journal of Computer Science and Information Security May 2010

by Fatma Omara, Sadiq Altaweel, Srini Sir, Priya Prabhu, PORKUMARAN KARANTHARAJ, Ba-prateek Dhawan, Kasiganesan Ranganath, Shanthi Mother Teresa Guide, Yasir Safeer, Nitin Bhatia, Journal of Computer Science IJCSIS, and Gunaseelan Devaraj

We thank all the authors who contributed papers to the May 2010 issue, and the reviewers, all of whom provided valuable feedback. We hope that you will find this IJCSIS edition a useful state-of-the-art literature reference for your research projects. We look forward to receiving your submissions and feedback.

https://sites.google.com/site/ijcsis/Home

Papers by Fatma Omara

Research paper thumbnail of ICSD: Integrated Cloud Services Dataset

Services – SERVICES 2018, 2018

The service composition problem in Cloud computing is formulated as a multiple criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. Using a proper dataset is one of the main challenges in evaluating the efficiency of developed service composition algorithms. In this paper, a new dataset, called the Integrated Cloud Services Dataset (ICSD), has been introduced. This dataset is constructed by amalgamating the Google cluster-usage traces and a real QoS dataset. To evaluate the efficiency of the ICSD dataset, a proof of concept has been done by implementing and evaluating an existing Cloud service composition approach (a PSO algorithm with a skyline operator) using the ICSD dataset. According to the implementation results, it is found that the ICSD dataset achieved a high degree of optimality with low time complexity, which significantly increases the ICSD dataset's accuracy in the Cloud services composition environment. Keywords: Cloud computing · Cloud services composition · Non-functional attributes · QoS dataset · Quality of services · Service selection

Research paper thumbnail of Location-aware deep learning-based framework for optimizing cloud consumer quality of service-based service composition

International Journal of Electrical and Computer Engineering (IJECE)

The expanding propensity of organization users to utilize cloud services urges providers to deliver services in a service pool with a variety of functional and non-functional attributes. Brokers of cloud services face intense rivalry, competing with one another to provide quality of service (QoS) enhancements. Such rivalry makes providing composite services on the cloud using a simple service selection and composition approach troublesome. Therefore, cloud composition is considered a non-deterministic polynomial (NP-hard) and economically motivated problem. Hence, developing a reliable economic model for composition is of tremendous interest and importance for the cloud consumer. This paper provides a location-aware deep learning framework for improving QoS-based service composition for cloud consumers. The proposed framework firstly reduces the dimensions of the data. Secondly, it applies a combination of the deep learning long short-term memory (LSTM) network...

Research paper thumbnail of Enhancing highly-collaborative access control system using a new role-mapping algorithm

International Journal of Electrical and Computer Engineering (IJECE), 2022

The collaboration among different organizations is considered one of the main benefits of moving applications and services to a cloud computing environment. Unfortunately, this collaboration raises many challenges, such as the access of sensitive resources by unauthorized people. Usually, the role-based access control (RBAC) model is deployed in large organizations. The work in this paper mainly considers the authorization scalability problem, which arises as the shared resources and/or the number of collaborating organizations in the same cloud environment increase. Therefore, this paper proposes replacing the cross-domain RBAC rules with role-to-role (RTR) mapping rules among all organizations. The RTR mapping rules are generated using a newly proposed role-mapping algorithm. A comparative study has been performed to evaluate the performance of the proposed algorithm with respect to the rule-store size and the authorization response time. According to the results, it is ...
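
The RTR idea can be illustrated with a minimal sketch (all names hypothetical, not the paper's algorithm): rather than storing one cross-domain rule per external role and resource, each organization keeps a single role-to-role mapping per partner role and reuses its existing local RBAC rules for every authorization check.

```python
# Minimal RTR illustration (hypothetical names and data).
# Local RBAC rules of organization B: role -> set of permitted resources.
local_rbac_b = {
    "engineer": {"design-docs", "build-server"},
    "auditor": {"audit-logs"},
}

# RTR mapping: (external org, external role) -> local role in B.
# One entry per partner role, regardless of how many resources exist.
rtr_map_b = {
    ("org_a", "developer"): "engineer",
    ("org_a", "compliance"): "auditor",
}

def is_authorized(org: str, role: str, resource: str) -> bool:
    """Authorize a cross-domain request via the RTR mapping."""
    local_role = rtr_map_b.get((org, role))
    if local_role is None:
        return False  # no mapping: deny by default
    return resource in local_rbac_b.get(local_role, set())
```

Because the rule store grows with the number of partner roles rather than with roles × resources, this shape suggests why RTR mapping can improve authorization scalability as shared resources increase.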

Research paper thumbnail of A Hybrid Hashing Security Algorithm for Data Storage on Cloud Computing

In today's IT, everything is possible on the web through cloud computing, which allows us to create, configure, use, and customize applications, services, and storage online. Cloud computing is a kind of Internet-based computing where shared data, information, and resources are provided to computers and other devices on demand. Cloud computing offers several advantages to organizations, such as scalability, low cost, and flexibility. In spite of these advantages, a major problem of cloud computing is the security of cloud storage. Many mechanisms are used to realize the security of data in cloud storage; cryptography is the most used mechanism. The science of designing ciphers, block ciphers, stream ciphers, and hash functions is called cryptography. Cryptographic techniques in the cloud must enable security services such as authorization, availability, confidentiality, integrity, and non-repudiation. To ensure these services ...
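
The role hashing plays in such integrity schemes can be sketched in a few lines (a generic illustration with SHA-256, not the paper's specific hybrid algorithm): a digest is computed before upload and recomputed after retrieval, so any tampering with the stored data changes the digest and is detected.

```python
import hashlib

def store_with_digest(blob: bytes) -> tuple[bytes, str]:
    """Compute a SHA-256 digest to keep alongside data uploaded to cloud storage."""
    return blob, hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, digest: str) -> bool:
    """Re-hash the retrieved data and compare digests to detect tampering."""
    return hashlib.sha256(blob).hexdigest() == digest
```

A real scheme would combine this with encryption for confidentiality; the hash alone provides only integrity.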

Research paper thumbnail of A new algorithm for static task scheduling for heterogeneous distributed computing systems

ISSN 2006-9731 ©2011 Academic Journals

Research paper thumbnail of Enhancing Pixel Value Difference (PVD) Image Steganography by Using Mobile Phone Keypad (MPK) Coding

Research paper thumbnail of GeoLocalitySim: Geographical Cloud Simulator with Data Locality

Internet of Things—Applications and Future, 2020

A cloud simulator is a framework which supports cloud modelling, testing functionality (e.g. allocating, provisioning, scheduling), analysing and evaluating performance, and reporting on a cloud computing environment. Cloud simulators save the cost and time of building real experiments on real environments. Current simulators (e.g. CloudSim, NetworkCloudSim, GreenCloud) deal with data as a workflow. In our previous work, the LocalitySim simulator was proposed, considering data locality and its effect on task execution time. This simulator deals with splitting and allocating data based on network topology. In the work in this paper, the LocalitySim simulator has been modified and extended to support extra features (e.g. geographically distributed data centre(s), geographical file allocation, a MapReduce task execution model) with a friendly graphical user interface (GUI). This modified simulator is called GeoLocalitySim. A key feature of the proposed GeoLocalitySim simulator is that it can easily be extended to support more features to meet any future module(s). To validate the accuracy of the proposed GeoLocalitySim simulator, a comparative study has been done between our proposed GeoLocalitySim simulator and the Purlieus simulator.

Research paper thumbnail of A Comparative Study of HDFS Replication Approaches

International Journal in IT & Engineering, 2015

The Hadoop Distributed File System (HDFS) is designed to store, analyze, and transfer large-scale data sets and stream them at high bandwidth to user applications. It handles fault tolerance by using data replication, where each data block is replicated and stored in multiple DataNodes. Therefore, HDFS supports reliability and availability. The data replication of HDFS in Hadoop is implemented in a pipelined manner, which takes much time for replication. Other approaches have been proposed to improve the performance of data replication in the Hadoop HDFS. This paper provides a comprehensive and theoretical analysis of three existing HDFS replication approaches: the default pipeline approach, the parallel (Broadcast) approach, and the parallel (Master/Slave) approach. The study describes the technical specification, features, and specialization of each approach along with its applications. A comparative study has been performed to evaluate the performance of these approaches using the TestDFSIO benchmark. According to the experimental results, it is found that the performance (i.e., the execution time and throughput) of the parallel (Broadcast) and parallel (Master/Slave) replication approaches outperforms the default pipelined replication. Also, it is noticed that the throughput decreases with increasing file size in all three approaches.
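
The gap between pipelined and parallel replication can be illustrated with a toy timing model (hypothetical numbers; real HDFS also pipelines packets within a block, so actual gains are smaller than this worst-case model suggests):

```python
def pipeline_time(block_mb: float, bw_mbps: float, replicas: int) -> float:
    """Toy model of pipelined replication: each replica is written only
    after the previous node has finished forwarding the whole block,
    so the transfers happen one after another."""
    return replicas * (block_mb / bw_mbps)

def broadcast_time(block_mb: float, bw_mbps: float, replicas: int) -> float:
    """Toy model of parallel (Broadcast) replication: the sender pushes the
    block to all replicas over independent links, so transfers overlap."""
    return block_mb / bw_mbps
```

Under this model a 3-replica pipelined write takes three times as long as a broadcast write of the same block, which matches the qualitative result reported above.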

Research paper thumbnail of Enhanced QoS-Based Service Composition Approach in Multi-Cloud Environment

2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), 2020

The service composition problem in Cloud computing is formulated as a multiple criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. In addition, it is long-term based and economically driven. Composing accurate services is of great interest and importance for the Cloud consumer in the multi-Cloud environment. Therefore, an enhanced QoS-based service composition approach in the multi-Cloud environment has been proposed to accurately compose the best Cloud providers to contract with for composing the needed services, so as to minimize the Cloud consumer cost function. In this paper, a modified Particle Swarm Optimization (PSO) has been employed to compose the best services based on the uncertainty of QoS attributes. The proposed approach has been implemented using a real QoS dataset. According to the comparative results, it is found that the proposed approach has achieved a high degree of optima...

Research paper thumbnail of Exploiting coarse-grained reused-based opportunities in Big Data multi-query optimization

Multi-query optimization in Big Data has become a promising research direction due to the popularity of massive data analytical systems (e.g., MapReduce and Flink). The multi-query is translated into jobs. These jobs are routinely submitted with similar tasks to the underlying Big Data analytical systems. These similar tasks are complicated and introduce computation overhead. Therefore, some existing techniques have been proposed for exploiting shared tasks in Big Data multi-query optimization (e.g., MRShare and Relaxed MRShare). These techniques are heavily tailored to relaxed optimizing factors of fine-grained reuse-based opportunities. In Big Data multi-query optimization, the existing fine-grained techniques are only concerned with equal tuple sizes and uniform data distribution. These assumptions are not applicable to real-world distributed applications, which depend on coarse-grained reuse-based opportunities, such as non-equal tuple sizes and non-uni...

Research paper thumbnail of Towards standard PaaS implementation APIs

International Journal of Cloud Computing, 2017

Platform as a service (PaaS) provides application developers with the ability to implement and deploy their applications in the cloud. Several heterogeneous PaaS platforms are available, such as Google App Engine (GAE), Windows Azure, Cloud Foundry, and OpenShift. Each PaaS provider has its own proprietary implementation and deployment APIs. The heterogeneity of these APIs makes developers worry about their application portability and interoperability. The work in this paper addresses the heterogeneity of different PaaS implementation APIs. Standard PaaS implementation APIs, called Std-PaaS APIs, have been proposed to solve the application portability problem. Std-PaaS APIs allow developers to develop generic cloud applications by writing their applications once and deploying them many times on heterogeneous PaaS providers. Std-PaaS APIs have been evaluated using two case studies, in which generic APIs for a cloud persistent storage service and a NoSQL datastore service have been developed and used to develop applications deployed onto GAE and Windows Azure.
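
The "write once, deploy many times" idea behind Std-PaaS APIs is essentially an adapter pattern; a minimal sketch (all names hypothetical, not the actual Std-PaaS API) shows application code programmed against a generic storage interface, with one adapter per provider:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Hypothetical generic persistent-storage interface in the Std-PaaS style."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in adapter; real adapters would wrap the GAE or Azure SDK calls."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def save_report(store: BlobStore) -> bytes:
    # Application code depends only on the generic interface, so it can be
    # deployed against any provider adapter without source changes.
    store.put("report.txt", b"quarterly numbers")
    return store.get("report.txt")
```

Swapping `InMemoryStore` for a GAE or Azure adapter would change deployment, not the application logic, which is the portability property the paper evaluates.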

Research paper thumbnail of A deep learning based framework for optimizing cloud consumer QoS-based service composition

Computing, 2020

The service composition problem in Cloud computing is formulated as a multiple criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. In addition, it is long-term based and economically driven. Building an accurate economic model for service composition is of great interest and importance for the Cloud consumer. A deep learning based service composition (DLSC) framework has been proposed in this paper. The proposed DLSC framework is an amalgamation of the deep learning long short-term memory (LSTM) network and the particle swarm optimization (PSO) algorithm. The LSTM network is applied to accurately predict the provisioned Cloud QoS values, and the output of the LSTM network is fed to the PSO algorithm to compose the best Cloud providers to contract with for composing the needed services, so as to minimize the consumer cost function. The proposed DLSC framework has been implemented using a real QoS dataset. According to the comparative results, it is found that the performance of the proposed framework outperforms the existing models with respect to predictive accuracy and composition accuracy.

Research paper thumbnail of Pso Optimization algorithm for Task Scheduling on The Cloud Computing Environment

INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY, 2014

Cloud computing is a recent computing paradigm where IT services are provided and delivered over the Internet on demand. The scheduling problem for the cloud computing environment has received much attention, as the application tasks must be mapped to the available resources to achieve better results. One of the main existing algorithms for task scheduling on the available resources in the cloud environment is based on Particle Swarm Optimization (PSO). According to this PSO algorithm, the application's tasks are allocated to the available resources to minimize the computation cost only. In this paper, a modified PSO algorithm has been introduced and implemented for solving the task scheduling problem in the cloud. The main idea of the modified PSO is that the tasks are allocated to the available resources to minimize the execution time in addition to the computation cost. This modified PSO algorithm is called Modified Particle Swarm Optimization (MPOS). The MPOS evaluations have been...
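
The objective described above, assigning tasks to resources to minimize execution time plus computation cost, can be sketched with a toy PSO-style search (hypothetical numbers and a simplified discrete update rule, not the paper's MPOS algorithm):

```python
import random

# Toy instance (hypothetical numbers): task lengths, VM speeds, VM prices.
tasks = [40, 10, 30, 20]   # instructions per task
vm_speed = [10, 20]        # instructions per second per VM
vm_price = [1.0, 3.0]      # cost per second per VM

def fitness(assign):
    """Makespan + total cost for a task->VM assignment (lower is better)."""
    busy = [0.0] * len(vm_speed)
    cost = 0.0
    for length, vm in zip(tasks, assign):
        runtime = length / vm_speed[vm]
        busy[vm] += runtime
        cost += runtime * vm_price[vm]
    return max(busy) + cost

def pso_schedule(particles=20, iters=50, seed=0):
    """Discrete PSO-style search: positions are task->VM assignments."""
    rng = random.Random(seed)
    n, m = len(tasks), len(vm_speed)
    swarm = [[rng.randrange(m) for _ in range(n)] for _ in range(particles)]
    best = min(swarm, key=fitness)[:]
    for _ in range(iters):
        for p in swarm:
            # Simplified velocity update: each dimension moves toward the
            # global best with some probability, else mutates randomly.
            for d in range(n):
                r = rng.random()
                if r < 0.5:
                    p[d] = best[d]
                elif r < 0.6:
                    p[d] = rng.randrange(m)
            if fitness(p) < fitness(best):
                best = p[:]
    return best, fitness(best)
```

Optimizing cost alone would push everything onto the cheap slow VM; including makespan in the fitness, as MPOS does, trades cost against execution time.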

Research paper thumbnail of A Low Latency Proxy Prefetching Caching Algorithm

International Conference on Aerospace Sciences and Aviation Technology, 2003

The Web proxy cache system was deployed to save network bandwidth, balance server load, and reduce network latency by storing copies of popular documents in the client and proxy caches for Uniform Resource Locator (URL) requests. To solve the problem of the Web's slow end-user response time, a Web proxy caching and prefetching strategy has been developed and implemented by the author to provide users with the information they most likely want to browse, based on user profiles. This developed strategy uses the Reverse Aggressive technique for prefetching, which had previously been proposed only theoretically. The strategy has been implemented with different cache sizes using a Web caching simulator. Traditional cache replacement policies such as the Least-Recently-Used (LRU), Hybrid, and Size policies already existed in this simulator. The simulator has been modified by the work in this paper so that the most recent replacement policies, Last-In-First-Out (LIFO), First-Try, Swapping, and Place-Holder, have been implemented under an infinite-sized cache. The performance of the developed strategy has been studied using both the traditional and the most recent replacement policies. Also, a comparative study has been done to clarify the benefits of the Reverse Aggressive caching prefetching algorithm relative to the Fixed-Horizon caching prefetching algorithm with respect to the Reduced Latency (RL). According to the implementation results, it has been found that the average latency is reduced to a higher degree by using the Reverse Aggressive cache prefetching strategy.
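
The LRU replacement policy mentioned above can be sketched in a few lines (a minimal illustration, not the simulator's implementation): on a hit the document is marked most recently used; on insertion past capacity the least recently used document is evicted.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal Least-Recently-Used proxy cache keyed by URL."""
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, url: str):
        if url not in self._store:
            return None               # miss: the proxy must fetch upstream
        self._store.move_to_end(url)  # mark as most recently used
        return self._store[url]

    def put(self, url: str, doc) -> None:
        if url in self._store:
            self._store.move_to_end(url)
        self._store[url] = doc
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

Policies like Size or Hybrid differ only in the eviction rule; the hit/miss bookkeeping stays the same, which is why a simulator can swap replacement policies behind one cache interface.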

Research paper thumbnail of An Enhanced Task Scheduling Algorithm on Cloud Computing Environment

International Journal of Grid and Distributed Computing, 2016

Cloud computing is the technology that moves information technology (IT) services out of the office. Unfortunately, cloud computing faces some challenges. The task scheduling problem is considered one of the main challenges because a good mapping between the available resources and the users' tasks is needed to reduce the execution time of the users' tasks (i.e., reduce the makespan) and increase resource utilization. The objective of this paper is to introduce and implement an enhanced task scheduling algorithm to assign the users' tasks to multiple computing resources. The aim of the proposed algorithm is to reduce the execution time and cost, as well as increase resource utilization. The proposed algorithm is an amalgamation of the Particle Swarm Optimization (PSO), Best-Fit (BF), and Tabu-Search (TS) algorithms, and is called BFPSOTS. In the proposed BFPSOTS algorithm, the BF algorithm is used to generate the initial population of the standard PSO algorithm instead of generating it randomly. The Tabu-Search (TS) algorithm is used to improve the local search by avoiding the trap of local optimality, which could occur with the standard PSO algorithm. The proposed hybrid algorithm (i.e., BFPSOTS) has been implemented using CloudSim. A comparative study has been done to evaluate the performance of the proposed algorithm relative to the standard PSO algorithm using five problems with different numbers of independent tasks and virtual machines (VMs). The performance parameters considered are the execution time (makespan), cost, and resource utilization. The implementation results prove that the proposed hybrid algorithm (i.e., BFPSOTS) outperforms the standard PSO algorithm.

Research paper thumbnail of Comparative Study of Multi-query Optimization Techniques using Shared Predicate-based for Big Data

International Journal of Grid and Distributed Computing, 2016

Big data analytical systems, such as MapReduce, have become a main concern for many enterprises and research groups. Currently, multi-queries, which are translated into MapReduce jobs, are submitted repeatedly with similar tasks. Exploiting these similar tasks can offer possibilities to avoid repeated computations of MapReduce jobs. Therefore, much research has addressed the sharing opportunity to optimize multi-query processing. Consequently, the main goal of this work is to comprehensively study and compare two existing sharing-opportunity techniques using predicate-based filters: MRShare and Relaxed MRShare. The comparative study has been performed over the TPC-H benchmark and confirmed that the Relaxed MRShare technique significantly outperforms MRShare for shared data in terms of predicate-based filters among multi-queries.

Research paper thumbnail of Developing SLA Documents for e-Learning System Based on Cloud

International Review on Computers and Software (IRECOS), 2016

The guarantee of delivering the service from the service provider is a vital requirement for the Cloud's users. This guarantee can be achieved through the Service Level Agreement (SLA), which is a contract between the user and the provider. In this paper, an SLA document is defined for an e-learning system called "go learn cloud system" (glcs) to clarify the rights, terms, and conditions for the users and providers. The SLA in this system has two types: a document between the Cloud service provider and the coordinator of the e-learning system, and a document between the students and the instructor on one side and the coordinator on the other side. On the other hand, this paper presents how the coordinator of the e-learning system can decide on the suitable Cloud computing platform that can serve the system's users with minimum cost.

Research paper thumbnail of Finding the pin in the haystack: A Bot Traceback service for public clouds

2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), 2015

Cloud computing permits customers to host their data and applications on the cloud with an interesting economic cost-benefit tradeoff. However, the low price of cloud computing resources encourages attackers to rent a bulk of their botnets on the cloud and launch their attacks from there, which makes customers worry about using cloud computing. Therefore, in this paper, we propose a Bot Traceback (BTB) service for reporting and tracing back the presence of a bot inside an IaaS cloud provider. BTB aims to identify the virtual machine on which a bot runs, either inside the same provider or inside a federated provider. The BTB service has been implemented as part of the security tools in the EASI-CLOUDS project and has been deployed online. We present the implementation details of the BTB service and its main components (the BTB reporting service and the BTB detection service). The BTB detection service starts running after a BTB report is received, either from the same provider or from another federated provider.

Research paper thumbnail of A generalized architecture of quantum secure direct communication for N disjointed users with authentication

Scientific reports, Jan 18, 2015

In this paper, we generalize a secured direct communication process between N users with partial and full cooperation of a quantum server. Thus, N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by EPR entangled pairs and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them through the full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proved that the attacker cannot gain any info...

Research paper thumbnail of Journal of Computer Science and Information Security May 2010

by Fatma Omara, Sadiq Altaweel, Srini Sir, Priya Prabhu, PORKUMARAN KARANTHARAJ, Ba-prateek Dhawan, Kasiganesan Ranganath, Shanthi Mother Teresa Guide, Yasir Safeer, Nitin Bhatia, Journal of Computer Science IJCSIS, and Gunaseelan Devaraj

We thank all those authors who contributed papers to the May 2010 issue and the reviewers, all of... more We thank all those authors who contributed papers to the May 2010 issue and the reviewers, all of whom provided valuable feedback comments. We hope that you will find this IJCSIS edition a useful state-of-the-art literature reference for
your research projects. We look forward to receiving your submissions and to receiving feedback.

https://sites.google.com/site/ijcsis/Home

Research paper thumbnail of ICSD: Integrated Cloud Services Dataset

Services – SERVICES 2018, 2018

The service composition problem in Cloud computing is formulated as a multiple criteria decision ... more The service composition problem in Cloud computing is formulated as a multiple criteria decision making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. Using a proper dataset is considered one of the main challenges to evaluate the efficiency of the developed service composition algorithms. According to the work in this paper, a new dataset has been introduced, called Integrated Cloud Services Dataset (ICSD). This dataset is constructed by amalgamating the Google cluster-usage traces, and a real QoS dataset. To evaluate the efficiency of the ICSD dataset, a proof of concept has been done by implementing and evaluating an existing Cloud service compositing approach; PSO algorithm with skyline operator using ICSD dataset. According to the implementation results, it is found that the ICSD dataset achieved a high degree of optimality with low time complexity, which significantly increases the ICSD dataset accuracy in Cloud services composition environment. Keywords: Cloud computing Á Cloud services composition Non-functional attributes Á QoS dataset Á Quality of services Á Service selection

Research paper thumbnail of Location-aware deep learning-based framework for optimizing cloud consumer quality of service-based service composition

International Journal of Electrical and Computer Engineering (IJECE)

The expanding propensity of organization users to utilize cloud services urges to deliver service... more The expanding propensity of organization users to utilize cloud services urges to deliver services in a service pool with a variety of functional and non-functional attributes from online service providers. brokers of cloud services must intense rivalry competing with one another to provide quality of service (QoS) enhancements. Such rivalry prompts a troublesome and muddled providing composite services on the cloud using a simple service selection and composition approach. Therefore, cloud composition is considered a non-deterministic polynomial (NP-hard) and economically motivated problem. Hence, developing a reliable economic model for composition is of tremendous interest and to have importance for the cloud consumer. This paper provides “A location-aware deep learning framework for improving the QoS-based service composition for cloud consumers”. The proposed framework is firstly reducing the dimensions of data. Secondly, it applies a combination of the deep learning long short...

Research paper thumbnail of Enhancing highly-collaborative access control system using a new role-mapping algorithm

International Journal of Electrical and Computer Engineering (IJECE), 2022

The collaboration among different organizations is considered one of the main benefits of moving ... more The collaboration among different organizations is considered one of the main benefits of moving applications and services to a cloud computing environment. Unfortunately, this collaboration raises many challenges such as the access of sensitive resources by unauthorized people. Usually, role based access-control (RBAC) Model is deployed in large organizations. The work in this paper is mainly considering the authorization scalability problem, which comes out due to the increase of shared resources and/or the number of collaborating organizations in the same cloud environment. Therefore, this paper proposes replacing the cross-domain RBAC rules with role-to-role (RTR) mapping rules among all organizations. The RTR mapping rules are generated using a newly proposed role-mapping algorithm. A comparative study has been performed to evaluate the performance of the proposed algorithm with concerning the rule-store size and the authorization response time. According to the results, it is ...

Research paper thumbnail of A Hybrid Hashing Security Algorithm for Data Storage on Cloud Computing

— In today's modern IT everything is possible on the web by cloud computing, it allows us to ... more — In today's modern IT everything is possible on the web by cloud computing, it allows us to create, configure, use and customize the applications, services, and storage online. The Cloud Computing is a kind of Internet-based computing, where shared data, information and resources are provided with computers and other devices on-demand. The Cloud Computing offers several advantages to the organizations such as scalability, low cost, and flexibility. In spite of these advantages, there is a major problem of cloud computing, which is the security of cloud storage. There are a lot of mechanisms that is used to realize the security of data in the cloud storage. Cryptography is the most used mechanism. The science of designing ciphers, block ciphers, stream ciphers and hash functions is called cryptography. Cryptographic techniques in the cloud must enable security services such as authorization, availability, confidentiality, integrity, and non-repudiation. To ensure these services ...

Research paper thumbnail of ISSN 2006-9731 ©2011 Academic Journals

A new algorithm for static task scheduling for heterogeneous distributed computing systems

Research paper thumbnail of Enhancing Pixel Value Difference (PVD) Image Steganography by Using Mobile Phone Keypad (MPK) Coding

Research paper thumbnail of GeoLocalitySim: Geographical Cloud Simulator with Data Locality

Internet of Things—Applications and Future, 2020

Cloud simulator is a framework which supports cloud modelling, testing functionality (e.g. alloca... more Cloud simulator is a framework which supports cloud modelling, testing functionality (e.g. allocating, provisioning, scheduling, etc.), analysing and evaluating performance, and reporting cloud computing environment. Cloud simulators save cost and time of building real experiments on real environment. The current simulators (e.g. CloudSim, NetworkCloudSim, GreenCloud, etc.) deal with data as a workflow. According to our previous work, LocalitySim simulator has been proposed with considering data locality and its effect on the task execution time. This simulator deals with splitting and allocating data based on network topology. According to the work in this paper, LocalitySim simulator has been modified and extended to support extra feature (e.g. geographical distributed data centre(s), geographical file allocation, MapReduce task execution model, etc.) with friendly graphical user interface (GUI). This modified simulator is called GeoLocalitySim. The main issue of the proposed GeoLocalitySim simulator is that it could be extended easily to support more features to meet any future module(s). To validate the accuracy of the proposed GeoLocalitySim simulator, a comparative study has been done between our proposed GeoLocalitySim simulator and Purlieus simulator.

Research paper thumbnail of A Comparative Study of HDFS Replication Approaches

International Journal in IT & Engineering, 2015

The Hadoop Distributed File System (HDFS) is designed to store, analyze, and transfer large-scale data sets, and to stream them at high bandwidth to user applications. It handles fault tolerance by using data replication, where each data block is replicated and stored in multiple DataNodes; the HDFS therefore supports reliability and availability. The data replication of the HDFS in Hadoop is implemented in a pipelined manner, which takes much time for replication. Other approaches have been proposed to improve the performance of data replication in the Hadoop HDFS. This paper provides a comprehensive theoretical analysis of three existing HDFS replication approaches: the default pipeline approach, the parallel (Broadcast) approach, and the parallel (Master/Slave) approach. The study describes the technical specification, features, and specialization of each approach along with its applications. A comparative study has been performed to evaluate the performance of these approaches using the TestDFSIO benchmark. According to the experimental results, it is found that the performance (i.e., execution time and throughput) of the parallel (Broadcast) and parallel (Master/Slave) replication approaches outperforms the default pipelined replication. It is also noticed that throughput decreases as the file size increases in all three approaches.
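The difference between the replication strategies above can be illustrated with a toy timing model (this is a sketch under simplifying assumptions, not the Hadoop API: uniform link bandwidth, a simple store-and-forward pipeline, and fully independent links for the broadcast case):

```python
# Toy timing model comparing pipelined replication, where a block hops
# DataNode-to-DataNode, against broadcast replication, where the writer
# sends the block to all replicas in parallel over independent links.

def pipeline_time(block_mb, bandwidth_mbps, replicas):
    # Store-and-forward pipeline: each hop in the replica chain costs
    # one full block transfer before forwarding to the next DataNode.
    return replicas * (block_mb / bandwidth_mbps)

def broadcast_time(block_mb, bandwidth_mbps, replicas):
    # All replicas receive the block concurrently on independent links,
    # so the elapsed time is a single transfer regardless of replica count.
    return block_mb / bandwidth_mbps

# A 128 MB block, 100 MB/s links, replication factor 3 (assumed values):
print(pipeline_time(128, 100, 3))   # 3 sequential transfers
print(broadcast_time(128, 100, 3))  # 1 parallel transfer
```

The model makes the experimental finding plausible: the pipelined path scales linearly with the replication factor, while the broadcast path does not, at the price of higher fan-out load on the writer.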

Research paper thumbnail of Enhanced QoS-Based Service Composition Approach in Multi-Cloud Environment

2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), 2020

The service composition problem in Cloud computing is formulated as a multiple criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. In addition, it is long-term based and economically driven. Composing accurate services is of great interest and importance for the Cloud consumer in the multi-Cloud environment. Therefore, an enhanced QoS-based service composition approach in the multi-Cloud environment has been proposed to accurately select the best Cloud providers to contract with for composing the needed services, minimizing the Cloud consumer's cost function. In this paper, a modified Particle Swarm Optimization (PSO) has been employed to compose the best services based on the uncertainty of QoS attributes. The proposed approach has been implemented using a real QoS dataset. According to the comparative results, it is found that the proposed approach has achieved a high degree of optimality ...

Research paper thumbnail of Exploiting Coarse-Grained Reused-Based Opportunities in Big Data Multi-Query Optimization

Multi-query optimization in Big Data has become a promising research direction due to the popularity of massive data analytical systems (e.g., MapReduce and Flink). The multi-query is translated into jobs, and these jobs are routinely submitted with similar tasks to the underlying Big Data analytical systems. These similar tasks are a source of complexity and computation overhead. Therefore, some existing techniques have been proposed for exploiting shared tasks in Big Data multi-query optimization (e.g., MRShare and Relaxed MRShare). These techniques are heavily tailored to relaxed optimizing factors of fine-grained reused-based opportunities. In Big Data multi-query optimization, the existing fine-grained techniques are only concerned with equal tuple sizes and uniform data distribution. These assumptions are not applicable to real-world distributed applications, which depend on coarse-grained reused-based opportunities such as non-equal tuple sizes and non-uniform data distribution ...

Research paper thumbnail of Towards standard PaaS implementation APIs

International Journal of Cloud Computing, 2017

Platform as a service (PaaS) provides application developers with the ability to implement and deploy their applications in the cloud. Several heterogeneous PaaS platforms are available, such as Google App Engine (GAE), Windows Azure, Cloud Foundry, and OpenShift. Each PaaS provider has its own proprietary implementation and deployment APIs. The heterogeneity of these APIs makes developers worry about their application portability and interoperability. The work in this paper addresses the heterogeneity of different PaaS implementation APIs. Standard PaaS implementation APIs, called Std-PaaS APIs, have been proposed to solve the application portability problem. Std-PaaS APIs allow developers to develop generic cloud applications by writing their applications once and deploying them many times on heterogeneous PaaS providers. Std-PaaS APIs have been evaluated using two case studies, in which generic APIs for a cloud persistent storage service and a NoSQL datastore service were developed and used to build applications deployed onto GAE and Windows Azure.
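The "write once, deploy many" idea behind standard PaaS APIs can be sketched as an adapter pattern (all names here are hypothetical illustrations, not the actual Std-PaaS interfaces): the application codes against one generic datastore interface, and a thin adapter maps it to each provider's proprietary API.

```python
# Adapter sketch: application logic depends only on a provider-neutral
# interface; swapping providers means swapping the adapter, not the app.
from abc import ABC, abstractmethod

class GenericDatastore(ABC):
    """Provider-neutral datastore interface the application codes against."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryAdapter(GenericDatastore):
    """Stand-in adapter; a real one would wrap, e.g., the GAE Datastore
    or Windows Azure table APIs behind the same two methods."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

def app_logic(store: GenericDatastore) -> str:
    # Written once against the generic interface; deployable anywhere
    # an adapter exists.
    store.put("greeting", "hello cloud")
    return store.get("greeting")

print(app_logic(InMemoryAdapter()))
```

The deploy-time choice of adapter is the only provider-specific decision, which is what makes the application portable across heterogeneous PaaS platforms.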

Research paper thumbnail of A deep learning based framework for optimizing cloud consumer QoS-based service composition

Computing, 2020

The service composition problem in Cloud computing is formulated as a multiple criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. In addition, it is long-term based and economically driven. Building an accurate economic model for service composition is of great interest and importance for the Cloud consumer. A deep learning based service composition (DLSC) framework has been proposed in this paper. The proposed DLSC framework is an amalgamation of a deep learning long short-term memory (LSTM) network and the particle swarm optimization (PSO) algorithm. The LSTM network is applied to accurately predict the Cloud QoS provisioned values, and its output is fed to the PSO algorithm to select the best Cloud providers to contract with for composing the needed services, minimizing the consumer's cost function. The proposed DLSC framework has been implemented using a real QoS dataset. According to the comparative results, it is found that the performance of the proposed framework outperforms the existing models with respect to predictive accuracy and composition accuracy.

Research paper thumbnail of Pso Optimization algorithm for Task Scheduling on The Cloud Computing Environment

INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY, 2014

Cloud computing is a recent computing paradigm in which IT services are provided and delivered over the Internet on demand. The scheduling problem in the cloud computing environment has attracted a lot of attention, since the applications' tasks must be mapped to the available resources to achieve better results. One of the main existing algorithms for task scheduling on the available resources in the cloud environment is based on Particle Swarm Optimization (PSO). In that PSO algorithm, the application's tasks are allocated to the available resources to minimize the computation cost only. In this paper, a modified PSO algorithm has been introduced and implemented for solving the task scheduling problem in the cloud. The main idea of the modified PSO is that the tasks are allocated on the available resources to minimize the execution time in addition to the computation cost. This modified algorithm is called Modified Particle Swarm Optimization (MPOS). The MPOS evaluations have been ...
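The idea of PSO-based task-to-VM scheduling with a combined time-and-cost objective can be sketched as follows (an illustration of the general technique, not the paper's MPOS algorithm; task lengths, VM speeds, costs, and the weighting factor are all assumed values):

```python
# Minimal PSO sketch for task-to-VM assignment. Each particle encodes one
# VM index per task; fitness combines makespan (max per-VM load) with a
# weighted per-VM execution cost.
import random

random.seed(1)
TASKS = [4, 8, 2, 6, 3]          # task lengths (assumed units of work)
VM_SPEED = [1.0, 2.0]            # VM processing speeds (assumed)
VM_COST = [1.0, 3.0]             # cost per unit of work on each VM (assumed)

def fitness(assign):
    load = [0.0] * len(VM_SPEED)
    cost = 0.0
    for task, vm in zip(TASKS, assign):
        load[vm] += task / VM_SPEED[vm]
        cost += task * VM_COST[vm]
    return max(load) + 0.1 * cost    # makespan + weighted cost

def pso(particles=20, iters=60):
    dim, n_vm = len(TASKS), len(VM_SPEED)
    pos = [[random.random() * n_vm for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    decode = lambda p: [min(int(x), n_vm - 1) for x in p]  # continuous -> VM index
    pbest = [list(p) for p in pos]
    gbest = min(pbest, key=lambda p: fitness(decode(p)))
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_vm - 1e-9)
            if fitness(decode(pos[i])) < fitness(decode(pbest[i])):
                pbest[i] = list(pos[i])
                if fitness(decode(pbest[i])) < fitness(decode(gbest)):
                    gbest = list(pbest[i])
    return decode(gbest), fitness(decode(gbest))

schedule, score = pso()
print(schedule, round(score, 2))
```

Changing the `0.1` weight shifts the search between cost-only optimization (the baseline PSO described above) and the time-plus-cost objective.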

Research paper thumbnail of A Low Latency Proxy Prefetching Caching Algorithm

International Conference on Aerospace Sciences and Aviation Technology, 2003

The Web proxy cache system was deployed to save network bandwidth, balance server load, and reduce network latency by storing copies of popular documents in client and proxy caches for Uniform Resource Locator (URL) requests. To address the Web's slow end-user response time, a Web proxy caching and prefetching strategy has been developed and implemented by the author to provide users with the information they most likely want to browse, based on user profiles. The developed strategy uses the Reverse Aggressive technique for prefetching, which had previously been proposed only theoretically. The strategy has been implemented with different cache sizes using a Web caching simulator. Traditional cache replacement policies such as Least-Recently-Used (LRU), Hybrid, and Size already existed in this simulator. The simulator has been modified in this work so that more recent replacement policies, Last-In-First-Out (LIFO), First-Try, Swapping, and Place-Holder, are implemented under an infinite-sized cache. The performance of the developed strategy has been studied using both the traditional and the more recent replacement policies. A comparative study has also been carried out to clarify the benefits of the Reverse Aggressive caching prefetching algorithm relative to the Fixed-Horizon caching prefetching algorithm with respect to Reduced Latency (RL). According to the implementation results, the average latency is reduced to a greater degree by the Reverse Aggressive cache prefetching strategy.
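As a concrete reference point for the replacement policies mentioned above, an LRU policy of the kind the simulator's traditional set includes can be sketched in a few lines (an illustrative sketch, not the paper's simulator code):

```python
# Minimal LRU replacement policy for a URL cache: on a hit the entry is
# marked most-recently-used; on a miss, if the cache is full, the least
# recently used entry is evicted before the new URL is cached.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()      # iteration order tracks recency

    def request(self, url):
        """Return True on a cache hit, False on a miss (then cache the URL)."""
        if url in self._store:
            self._store.move_to_end(url)     # mark as most recently used
            return True
        if len(self._store) >= self.capacity:
            self._store.popitem(last=False)  # evict least recently used
        self._store[url] = True
        return False

cache = LRUCache(capacity=2)
hits = [cache.request(u) for u in ["a", "b", "a", "c", "b"]]
print(hits)  # second "a" hits; "c" evicts "b", so the final "b" misses
```

Policies such as LIFO or Size differ only in the eviction-victim rule, which is why a simulator can swap them behind the same `request` interface.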

Research paper thumbnail of An Enhanced Task Scheduling Algorithm on Cloud Computing Environment

International Journal of Grid and Distributed Computing, 2016

Cloud computing is the technology that moves information technology (IT) services out of the office. Unfortunately, Cloud computing faces some challenges. The task scheduling problem is considered one of the main challenges, because a good mapping between the available resources and the users' tasks is needed to reduce the execution time of the users' tasks (i.e., reduce the makespan) and increase resource utilization. The objective of this paper is to introduce and implement an enhanced task scheduling algorithm to assign users' tasks to multiple computing resources. The aim of the proposed algorithm is to reduce the execution time and cost, as well as increase resource utilization. The proposed algorithm is an amalgamation of the Particle Swarm Optimization (PSO), Best-Fit (BF), and Tabu-Search (TS) algorithms, called BFPSOTS. In the proposed BFPSOTS algorithm, the BF algorithm is used to generate the initial population of the standard PSO algorithm instead of generating it randomly. The Tabu-Search (TS) algorithm is used to improve the local search by avoiding the trap of local optimality that can occur with the standard PSO algorithm. The proposed hybrid algorithm (i.e., BFPSOTS) has been implemented using CloudSim. A comparative study has been carried out to evaluate the performance of the proposed algorithm relative to the standard PSO algorithm using five problems with different numbers of independent tasks and Virtual Machines (VMs). The performance parameters considered are the execution time (makespan), cost, and resource utilization. The implementation results prove that the proposed hybrid algorithm (i.e., BFPSOTS) outperforms the standard PSO algorithm.

Research paper thumbnail of Comparative Study of Multi-query Optimization Techniques using Shared Predicate-based for Big Data

International Journal of Grid and Distributed Computing, 2016

Big Data analytical systems, such as MapReduce, have become a main concern for many enterprises and research groups. Currently, multi-query workloads translated into MapReduce jobs are submitted repeatedly with similar tasks, so exploiting these similar tasks offers possibilities to avoid repeated computations of MapReduce jobs. Therefore, much research has addressed the sharing opportunity to optimize multi-query processing. The main goal of this work is to study and comprehensively compare two existing sharing-opportunity techniques that use predicate-based filters: MRShare and Relaxed MRShare. The comparative study has been performed over the TPC-H benchmark and confirms that the Relaxed MRShare technique significantly outperforms MRShare for data shared through predicate-based filters among multi-query workloads.
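The core sharing idea behind MRShare-style optimization can be sketched in miniature (an illustration of shared scans with predicate-based filters, not the MRShare implementation; the record layout and predicates are assumed): instead of scanning the input once per query, a single shared scan evaluates every query's predicate per record.

```python
# Shared-scan sketch: N queries' filters are evaluated during ONE pass over
# the input, so the (expensive) read of the dataset is paid once rather
# than once per query.

def shared_scan(records, predicates):
    """One pass over `records`; returns one result list per predicate."""
    results = [[] for _ in predicates]
    for rec in records:                  # the input is read exactly once
        for i, pred in enumerate(predicates):
            if pred(rec):
                results[i].append(rec)
    return results

# Toy input and two queries' predicate filters (assumed for illustration):
orders = [{"price": 10, "qty": 5}, {"price": 99, "qty": 1}, {"price": 40, "qty": 9}]
q1 = lambda r: r["price"] > 30           # query 1's filter
q2 = lambda r: r["qty"] >= 5             # query 2's filter
r1, r2 = shared_scan(orders, [q1, q2])
print(len(r1), len(r2))  # 2 records satisfy q1, 2 satisfy q2
```

The techniques compared in the paper differ in *when* merging such jobs pays off; the relaxed variant also shares jobs whose predicates only partially overlap.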

Research paper thumbnail of Developing SLA Documents for e-Learning System Based on Cloud

International Review on Computers and Software (IRECOS), 2016

The guarantee of service delivery from the service provider is a vital requirement for the Cloud's users. This guarantee can be achieved through the Service Level Agreement (SLA), which is a contract between the user and the provider. In this paper, an SLA document is defined for an E-learning system called "go learn cloud system (glcs)" to clarify the rights, terms, and conditions for users and providers. The SLA in this system has two types: a document between the Cloud service provider and the coordinator of the E-learning system, and a document between the students and the instructor on one side and the coordinator on the other. In addition, this paper presents how the coordinator of the E-learning system can make the decision to choose the suitable Cloud computing platform that can serve the system's users with minimum cost.

Research paper thumbnail of Finding the pin in the haystack: A Bot Traceback service for public clouds

2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), 2015

Cloud computing permits customers to host their data and applications in the cloud with an interesting economic cost-benefit tradeoff. However, the low price of cloud computing resources encourages attackers to rent a bulk of their botnets on the cloud and launch their attacks from there, which makes customers worry about using cloud computing. Therefore, in this paper, we propose a Bot Traceback (BTB) service for reporting and tracing back the presence of a bot inside an IaaS cloud provider. BTB aims to identify the virtual machine on which a bot runs, either inside the same provider or inside a federated provider. The BTB service has been implemented as part of the security tools in the EASI-CLOUDS project and has been deployed online. We present the implementation details of the BTB service and its main components (the BTB reporting service and the BTB detection service). The BTB detection service starts running after a BTB report is received, either from the same provider or from another federated provider.

Research paper thumbnail of A generalized architecture of quantum secure direct communication for N disjointed users with authentication

Scientific reports, Jan 18, 2015

In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. Thus, N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them through a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proved that the attacker cannot gain any information ...

Research paper thumbnail of N. A. Ismail, F. A. Omara, C. R. Jesshope, and M. A. R. Ghonaimy, "An Execution Model for Paral-lel Implementation for PROLOG," Proc. Of the 15th International Congress for Statistics, Computer Science, Social and Demographic Research, Cairo, Egypt, March 1990, pp. 112-119
