Tara Salman - Academia.edu

Papers by Tara Salman

Research paper thumbnail of Probabilistic Blockchains: A Blockchain Paradigm for Collaborative Decision-Making

IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2018

A blockchain provides a secure paradigm for achieving consensus over a distributed, peer-to-peer network in which no trusted central party is required. As a result, it has the potential to resolve many challenges faced by centralized controllers in globally distributed applications. To date, blockchain technology has been used for recording transactions and tracking objects, with multiple participants reaching consensus on whether a transaction is valid. This paper introduces the novel paradigm of probabilistic blockchains, an extension of current blockchains that allows building efficient, distributed risk assessment and decision-making applications in which multiple untrusting parties collaborate but may not completely agree on the outcome. The paradigm is particularly useful for risk assessment, where a group of decision-makers needs to decide on or analyze an event based on imperfect information. The proposed approach can be used in applications such as intrusion detection, stock market prediction, insurance, and recommendation systems. The paper presents and analyzes the application of probabilistic blockchains to intrusion detection systems for computer networks. The results show the feasibility and efficiency of the proposed paradigm in making such decisions.
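The core idea of untrusting parties collaborating on a decision without fully agreeing can be sketched as probabilistic vote aggregation. The averaging and thresholding below are illustrative assumptions, not the paper's exact consensus mechanism:

```python
# Illustrative sketch: aggregating probabilistic votes from untrusting
# decision-makers into a collective assessment. Plain averaging and a fixed
# threshold are assumptions for illustration, not the paper's mechanism.

def aggregate_votes(votes):
    """votes: list of probabilities in [0, 1], one per participant,
    each expressing belief that an event (e.g., an intrusion) occurred."""
    if not votes:
        raise ValueError("no votes to aggregate")
    return sum(votes) / len(votes)

def collective_decision(votes, threshold=0.5):
    """Declare the event detected when aggregate belief exceeds threshold."""
    return aggregate_votes(votes) >= threshold

# Three detectors disagree on whether traffic is malicious:
votes = [0.9, 0.6, 0.2]
print(aggregate_votes(votes))      # 0.5666...
print(collective_decision(votes))  # True
```

Even though one participant strongly disagrees, the group still reaches a collective (probabilistic) decision rather than a binary valid/invalid verdict.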

Research paper thumbnail of A Reputation Management Framework for Knowledge-Based and Probabilistic Blockchains

2019 IEEE International Conference on Blockchain (Blockchain), 2019

Recently, leading research communities have been investigating the use of blockchains for Artificial Intelligence (AI) applications in which multiple participants, or agents, collaborate to make consensus decisions. To achieve this, the data in blockchain storage has to be transformed into blockchain knowledge. We refer to these types of blockchains as knowledge-based blockchains. Knowledge-based blockchains are potentially useful for building efficient risk assessment applications. An earlier work introduced the probabilistic blockchain, which facilitates knowledge-based blockchains. This paper proposes an extension of the probabilistic blockchain concept: the design of a reputation management framework suitable for such blockchains. The framework has been developed to suit the requirements of a wide range of applications. In particular, we apply it to detecting malicious nodes and reducing their effect on the probabilistic blockchain's consensus process. We evaluate the framework by comparing it to a baseline under several adversarial strategies, and we analyze the collaborative decisions with and without malicious node detection. Both results show sustained performance, with the proposed framework outperforming the baseline.
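One way to picture reputation-based mitigation of malicious nodes is reputation-weighted voting with a feedback update. The multiplicative reward/penalty rule below is a hypothetical stand-in for illustration only, not the framework proposed in the paper:

```python
# Hypothetical sketch of reputation-weighted consensus: each node's vote is
# weighted by its reputation, and reputations are nudged toward agreement
# with the collective outcome. The update rule is an assumption, not the
# paper's framework.

def weighted_consensus(votes, reputations):
    total = sum(reputations.values())
    return sum(votes[n] * reputations[n] for n in votes) / total

def update_reputations(votes, reputations, outcome, lr=0.1):
    """Raise/lower each node's reputation based on distance to outcome."""
    for n, v in votes.items():
        error = abs(v - outcome)  # 0 = perfect agreement with consensus
        reputations[n] = max(0.01, reputations[n] * (1 + lr * (0.5 - error)))
    return reputations

reps = {"a": 1.0, "b": 1.0, "c": 1.0}
votes = {"a": 0.9, "b": 0.85, "c": 0.1}  # "c" deviates (malicious behavior)
outcome = weighted_consensus(votes, reps)
reps = update_reputations(votes, reps, outcome)
# After the update, "c" carries less weight than "a" and "b" in future rounds.
```

Repeated rounds of this loop progressively shrink the influence of nodes that consistently vote against the collective decision.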

Research paper thumbnail of HYPER-VINES: A HYbrid Learning Fault and Performance Issues ERadicator for VIrtual NEtwork Services over Multi-cloud

ICNC, 2019

Fault and performance management systems in traditional carrier networks are based on rule-based diagnostics that correlate alarms and other markers to detect and localize faults and performance issues. As carriers move to Virtual Network Services based on Network Function Virtualization and multi-cloud, the traditional methods fail to deliver because of the impalpability of the constituent Virtual Network Functions and the increased complexity of the resulting architecture. In this paper, we propose a framework, called HYPER-VINES, that interfaces with the various management platforms involved to process markers through a system of shallow and deep machine learning models, detecting and localizing both manifested and impending fault and performance issues. Our experiments validate the functionality and feasibility of the framework in terms of accurate detection and localization of such issues and unambiguous prediction of impending ones. Simulations with real network fault datasets show the effectiveness of its architecture in large networks.

Research paper thumbnail of Estimating the number of sources in white Gaussian noise: simple eigenvalues based approaches

IET Signal Processing, 2017

Estimating the number of sources is a key task in many array signal processing applications. Conventional algorithms such as Akaike's information criterion (AIC) and minimum description length (MDL) suffer from underestimation and overestimation errors. In this study, the authors propose four algorithms to estimate the number of sources in white Gaussian noise. The proposed algorithms fall into two main categories: sample correlation matrix (CorrM) based and correlation coefficient matrix (CoefM) based, and are applied to the CorrM and CoefM eigenvalues. Two decision statistics, the moving increment and the moving standard deviation of the estimated eigenvalues, are used as metrics to estimate the number of sources. For the two CorrM-based algorithms, the decision statistics are compared to thresholds to decide on the number of sources. The authors show that the conventional process for estimating the threshold is mathematically tedious and computationally complex; alternatively, they define two threshold formulas through linear regression fitting. For the two CoefM-based algorithms, they redefine the problem as a simple maximum value search. Results show that the proposed algorithms perform on par with or better than AIC, MDL, and recently modified algorithms at medium and high signal-to-noise ratio (SNR) levels, and better at low SNR levels and low numbers of samples, while using a lower-complexity criterion function.
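The moving-increment idea can be sketched in a few lines: with K sources, the sorted eigenvalues of the sample correlation matrix show their largest drop between the K-th (signal) and (K+1)-th (noise) eigenvalue. The array geometry, source angles, and SNR below are illustrative assumptions, and the paper's thresholding and regression refinements are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 8, 500, 2  # sensors, snapshots, true number of sources

# Simulated uniform linear array: steering matrix for two source angles
# (the angles and noise level are illustrative assumptions).
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))

S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                       # sample correlation matrix
eigs = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending

increments = eigs[:-1] - eigs[1:]            # "moving increment"
k_hat = int(np.argmax(increments)) + 1       # largest drop marks the split
print(k_hat)                                 # 2
```

At high SNR the signal/noise eigenvalue gap dominates, so a plain maximum search suffices; the paper's contribution is making this reliable at low SNR and few samples.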

Research paper thumbnail of Estimating the Number of Sources: An Efficient Maximization Approach

2015 International Wireless Communications and Mobile Computing Conference (IWCMC), 2015

Estimating the number of sources received by an antenna array has been well known and investigated since the beginning of array signal processing. Accurate estimation of this parameter is critical in many applications that require prior knowledge of the number of received signals. Information theoretic approaches such as Akaike's information criterion (AIC) and minimum description length (MDL) have been used extensively even though they are complex and perform poorly in some regimes. In this paper, a new algorithm for estimating the number of sources is presented. The algorithm exploits the estimated eigenvalues of the autocorrelation coefficient matrix, rather than the conventionally used autocovariance matrix, to estimate the number of sources. We propose to use either of two simply estimated decision statistics, the moving increment and the moving standard deviation, as the metric: a simple calculation of the increment or standard deviation of the eigenvalues locates the number of sources at the position of the maximum value. Results show that our proposed algorithms perform better than the popular and more computationally expensive AIC and MDL at low SNR values and low numbers of collected samples.

Research paper thumbnail of Non-data-aided SNR Estimation for QPSK Modulation in AWGN Channel

2014 IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2014

Signal-to-noise ratio (SNR) is an important parameter required by any receiver or communication system. It can be estimated either with a data-aided approach, in which a pilot signal known to the receiver is transmitted, or without any knowledge of the transmitted signal, which is the non-data-aided (NDA) approach. In this paper, an NDA SNR estimation algorithm for QPSK signals is proposed. The proposed algorithm modifies the existing Signal-to-Variation Ratio (SVR) SNR estimation algorithm with the aim of reducing its bias and mean square error for negative SNR values and low numbers of samples. We first present the existing SVR algorithm and then show the mathematical derivation of the new NDA algorithm. In addition, we compare our algorithm to two baseline estimation methods, the M2M4 and SVR algorithms, using different test cases, including low SNR values, extremely high SNR values, and low numbers of samples. Results show that our algorithm outperforms the second- and fourth-moment estimation (M2M4) and original SVR algorithms in terms of normalized mean square error (NMSE) and bias while keeping almost the same complexity as the original algorithms.
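The M2M4 baseline mentioned above is a classical moments-based NDA estimator: for a constant-modulus constellation such as QPSK in complex AWGN, the signal power satisfies S = sqrt(2·M2² − M4), where M2 and M4 are the second and fourth moments of the received envelope. The sketch below illustrates that baseline (not the paper's modified SVR algorithm); the SNR value and sample count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_snr_db = 5.0

# Unit-power QPSK symbols plus complex AWGN at the chosen SNR.
bits = rng.integers(0, 4, n)
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
noise_var = 10 ** (-true_snr_db / 10)
x = s + np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Classical M2M4 estimator for a constant-modulus signal in complex AWGN:
m2 = np.mean(np.abs(x) ** 2)
m4 = np.mean(np.abs(x) ** 4)
s_hat = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))  # estimated signal power
n_hat = m2 - s_hat                           # estimated noise power
snr_db_hat = 10 * np.log10(s_hat / n_hat)
print(round(snr_db_hat, 1))                  # close to 5.0
```

No pilot symbols are used anywhere: the estimate comes purely from the received-sample moments, which is what makes the approach non-data-aided.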

Research paper thumbnail of Feasibility of Supervised Machine Learning for Cloud Security

International Conference on Information Science and Security (ICISS), 2016

Cloud computing is gaining significant attention; however, security is the biggest hurdle to its wide acceptance. Users of cloud services are under constant fear of data loss, security threats, and availability issues. Recently, learning-based methods for security applications have been gaining popularity in the literature with advances in machine learning techniques. However, the major challenge for these methods is obtaining real-time and unbiased datasets. Many datasets are internal and cannot be shared due to privacy issues, or may lack certain statistical characteristics. As a result, researchers prefer to generate datasets for training and testing purposes in simulated or closed experimental environments, which may lack comprehensiveness. Machine learning models trained on such a single dataset generally result in a semantic gap between results and their application. There is a dearth of research demonstrating the effectiveness of these models across multiple datasets obtained in different environments. We argue that it is necessary to test the robustness of machine learning models under the diversified operating conditions that are prevalent in cloud scenarios. In this work, we use the UNSW dataset to train supervised machine learning models and then test these models with the ISOT dataset. We present our results and argue that more research in the field of machine learning is still required for its applicability to cloud security.

Research paper thumbnail of A Survey of Protocols and Standards for Internet of Things

Advanced Computing and Communication, 2017

The rapid growth in technology and internet-connected devices has made the Internet of Things (IoT) one of the most important fields in computing. Standards, technologies, and platforms targeting the IoT ecosystem are being developed at a very fast pace. IoT enables things to communicate and coordinate decisions for many different types of applications, including healthcare, home automation, disaster recovery, and industrial automation, and it is expected to expand to even more applications in the future. This paper surveys several standards by the IEEE, IETF, and ITU that enable the technologies driving the rapid growth of IoT. These standards include communication, routing, network, and session layer protocols that are being developed to meet IoT requirements. The discussion also covers management and security protocols, as well as current challenges in IoT, giving insight into ongoing research to solve them.

Research paper thumbnail of SCADA System Testbed for Cybersecurity Research Using Machine Learning Approach

Future Internet, 2018

This paper presents the development of a Supervisory Control and Data Acquisition (SCADA) system testbed used for cybersecurity research. The testbed consists of a water storage tank's control system, which is one stage in the process of water treatment and distribution. Sophisticated cyber-attacks were conducted against the testbed. During the attacks, the network traffic was captured, and features were extracted from the traffic to build a dataset for training and testing different machine learning algorithms. Five traditional machine learning algorithms were trained to detect the attacks: Random Forest, Decision Tree, Logistic Regression, Naïve Bayes, and KNN. The trained models were then deployed in the network, where new tests were made using online network traffic. The performance obtained during training and testing was compared to the performance obtained during online deployment of these models. The results show the efficiency of the machine learning models in detecting the attacks in real time. The testbed provides a good understanding of the effects and consequences of attacks on real SCADA environments.

Research paper thumbnail of Machine Learning for Anomaly Detection and Categorization in Multi-cloud Environments

CSCloud, 2017

Cloud computing has been widely adopted by application service providers (ASPs) and enterprises to reduce both capital expenditures (CAPEX) and operational expenditures (OPEX). Applications and services previously running in private data centers are now being migrated to private or public clouds. Since most ASPs and enterprises have globally distributed user bases, their services need to be distributed across multiple clouds spread across the globe, which can achieve better performance in terms of latency, scalability, and load balancing. This shift has led the research community to study multi-cloud environments. However, the widespread acceptance of such environments has been hampered by major security concerns: firewalls and traditional rule-based security protection techniques are not sufficient to protect user data in multi-cloud scenarios. Recently, advances in machine learning techniques have attracted the attention of the research community for building intrusion detection systems (IDS) that can detect anomalies in network traffic. Most research works, however, do not differentiate among different types of attacks, which is necessary for appropriate countermeasures and defense. In this paper, we investigate both detecting and categorizing anomalies, rather than detection alone, which is the common trend in contemporary research. We use a popular publicly available dataset to build and test learning models for both detection and categorization of different attacks, employing two supervised machine learning techniques, namely linear regression (LR) and random forest (RF). We show that even if detection is perfect, categorization can be less accurate due to similarities between attacks. Our results demonstrate more than 99% detection accuracy and a categorization accuracy of 93.6%, with an inability to categorize some attacks. Further, we argue that such categorization can be applied to multi-cloud environments using the same machine learning techniques.
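The gap between detection and categorization accuracy can be illustrated with a toy confusion matrix: attack classes may be confused with each other while never being confused with benign traffic. The class names and counts below are illustrative, not the paper's results:

```python
import numpy as np

# Toy confusion matrix over {benign, dos, probe, fuzzers}: detection can be
# perfect (no attack/benign confusion) while similar attack classes are
# confused with one another. Counts are illustrative assumptions.
labels = ["benign", "dos", "probe", "fuzzers"]
conf = np.array([
    [100,   0,   0,   0],   # benign
    [  0,  90,   8,   2],   # dos
    [  0,  10,  85,   5],   # probe
    [  0,   4,   6,  90],   # fuzzers
])

cat_acc = np.trace(conf) / conf.sum()  # per-class (categorization) accuracy

# Collapse all attack classes into one for binary detection accuracy:
is_attack = np.array([0, 1, 1, 1])
binary = np.zeros((2, 2))
for i, ti in enumerate(is_attack):
    for j, pj in enumerate(is_attack):
        binary[ti, pj] += conf[i, j]
det_acc = np.trace(binary) / binary.sum()

print(det_acc, round(cat_acc, 4))  # 1.0 0.9125
```

Here detection is perfect (1.0) while categorization sits below it (0.9125), mirroring the paper's observation that inter-attack similarity, not missed detections, limits categorization.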

Research paper thumbnail of Security Services Using Blockchains: A State of the Art Survey

IEEE Communications Surveys & Tutorials, 2018

This article surveys blockchain-based approaches for several security services, including authentication, confidentiality, privacy and access control lists (ACLs), data and resource provenance, and integrity assurance. All these services are critical for current distributed applications, especially given the large amount of data being processed over networks and the use of cloud computing. Authentication ensures that users are who they claim to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy gives users the ability to control who can access their data. Provenance allows efficient tracking of data and resources, along with their ownership and utilization, over the network. Integrity helps verify that data has not been modified or altered. These services are currently managed by centralized controllers, for example a certificate authority, and are therefore prone to attacks on the centralized controller. Blockchain, on the other hand, is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights into the use of security services in current applications, to highlight the state-of-the-art techniques currently used to provide these services, to describe their challenges, and to discuss how blockchain technology can resolve them. Further, several blockchain-based approaches providing these security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area.

Research paper thumbnail of Probabilistic Blockchains: A Blockchain Paradigm for Collaborative Decision-Making

IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2018

A blockchain provides a secured paradigm to achieve consensus using a distributed and peer-to-pee... more A blockchain provides a secured paradigm to achieve consensus using a distributed and peer-to-peer network in which no trusted central party is required. As a result, it has the potential to resolve many challenges that are faced with current centralized controllers in globally distributed applications. To date, the blockchain technology has been used for recording transactions and tracking objects in which multiple participants reach a consensus on whether a transaction is valid or not. This paper introduces the novel paradigm of probabilistic blockchains, an extension of the current blockchains that allows building efficient and distributed risk assessment and decision-making applications in which multiple untrusting parties collaborate but may not completely agree on the outcome. The paradigm is particularly useful for risk assessment, where a group of decision-makers needs to decide or analyze an event based on imperfect information. The proposed approach can be used in applications like intrusion detections, stock market predictions, insurance, and recommendation systems. The paper presents and analyzes the application of probabilistic blockchains for intrusion detection systems for computer networks. The results show the feasibility and efficiency of the proposed paradigm in making such decisions.

Research paper thumbnail of A Reputation Management Framework for Knowledge-Based and Probabilistic Blockchains

2019 IEEE International Conference on Blockchain (Blockchain),, 2019

Recently, leading research communities have been investigating the use of blockchains for Artific... more Recently, leading research communities have been investigating the use of blockchains for Artificial Intelligence (AI) applications, where multiple participants, or agents, collaborate to make consensus decisions. To achieve this, the data in the blockchain storage have to be transformed into blockchain knowledge. We refer to these types of blockchains as knowledge-based blockchains. Knowledge-based blockchains are potentially useful in building efficient risk assessment applications. An earlier work introduced probabilistic blockchain which facilitates knowledge-based blockchains. This paper proposes an extension for the probabilistic blockchain concept. The design of a reputation management framework, suitable for such blockchains, is proposed. The framework has been developed to suit the requirements of a wide range of applications. In particular, we apply it to the detection of malicious nodes and reduce their effect on the probabilistic blockchains' consensus process. We evaluate the framework by comparing it to a baseline using several adversarial strategies. Further, we analyze the collaborative decisions with and without the malicious node detection. Both results show a sustainable performance, where the proposed work outperforms others and achieves excellent results.

Research paper thumbnail of HYPER-VINES: A HYbrid Learning Fault and Performance Issues ERadicator for VIrtual NEtwork Services over Multi-cloud

ICNC, 2019

Fault and performance management systems, in the traditional carrier networks, are based on rule-... more Fault and performance management systems, in the traditional carrier networks, are based on rule-based diagnostics that correlates alarms and other markers to detect and localize faults and performance issues. As carriers move to Virtual Network Services, based on Network Function Virtualization and multi-cloud, the traditional methods fail to deliver because of the impalpability of the constituent Virtual Network Functions and increased complexity of the resulting architecture. In this paper, we propose a framework, called HYPER-VINES, that interfaces with various management platforms involved, to process markers, through a system of shallow and deep machine learning models, to detect and localize manifested and impending fault and performance issues. Our experiments validate the functionality and feasibility of the framework in terms of accurate detection and localization of such issues and unambiguous prediction of impending issues. Simulations with real network fault datasets show effectiveness of its architecture in large networks.

Research paper thumbnail of Estimating the number of sources in white Gaussian noise: simple eigenvalues based approaches

IET Signal Processing, 2017

Estimating the number of sources is a key task in many array signal processing applications. Conv... more Estimating the number of sources is a key task in many array signal processing applications. Conventional algorithms such as Akaike's information criterion (AIC) and minimum description length (MDL) suffer from underestimation and overestimation errors. In this study, the authors propose four algorithms to estimate the number of sources in white Gaussian noise. The authors' proposed algorithms are categorised into two main categories; namely, sample correlation matrix (CorrM) based and correlation coefficient matrix (CoefM) based. Their proposed algorithms are applied on the CorrM and CoefM eigenvalues. They propose to use two decision statistics, which are the moving increment and the moving standard deviation of the estimated eigenvalues as metrics to estimate the number of sources. For their two CorrM based algorithms, the decision statistics are compared to thresholds to decide on the number of sources. They show that the conventional process to estimate the threshold is mathematically tedious with high computational complexity. Alternatively, they define two threshold formulas through linear regression fitting. For their two CoefM based algorithms, they redefine the problem as a simple maximum value search problem. Results show that the proposed algorithms perform on par or better than AIC and MDL as well as recently modified algorithms at medium and high signal-to-noise ratio (SNR) levels and better at low SNR levels and low number of samples, while using a lower complexity criterion function.

Research paper thumbnail of Estimating the Number of Sources: An Efficient Maximization Approach

2015 International Wireless Communications and Mobile Computing Conference (IWCMC), 2015

Estimating the number of sources received by an antenna array have been well known and investigat... more Estimating the number of sources received by an antenna array have been well known and investigated since the starting of array signal processing. Accurate estimation of such parameter is critical in many applications that involve prior knowledge of the number of received signals. Information theo-retic approaches such as Akaikes information criterion (AIC) and minimum description length (MDL) have been used extensively even though they are complex and show bad performance at some stages. In this paper, a new algorithm for estimating the number of sources is presented. This algorithm exploits the estimated eigenvalues of the auto correlation coefficient matrix rather than the auto covariance matrix, which is conventionally used, to estimate the number of sources. We propose to use either of a two simply estimated decision statistics, which are the moving increment and moving standard deviation as metric to estimate the number of sources. Then process a simple calculation of the increment or standard deviation of eigenvalues to find the number of sources at the location of the maximum value. Results showed that our proposed algorithms have a better performance in comparison to the popular and more computationally expensive AIC and MDL at low SNR values and low number of collected samples.

Research paper thumbnail of Non-data-aided SNR Estimation for QPSK Modulation in AWGN Channel

2014 IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2014

—Signal-to-noise ratio (SNR) estimation is an important parameter that is required in any receive... more —Signal-to-noise ratio (SNR) estimation is an important parameter that is required in any receiver or communication systems. It can be computed either by a pilot signal data-aided approach in which the transmitted signal would be known to the receiver, or without any knowledge of the transmitted signal, which is a non-data-aided (NDA) estimation approach. In this paper, a NDA SNR estimation algorithm for QPSK signal is proposed. The proposed algorithm modifies the existing Signal-to-Variation Ratio (SVR) SNR estimation algorithm in the aim to reduce its bias and mean square error in case of negative SNR values at low number of samples of it. We first present the existing SVR algorithm and then show the mathematical derivation of the new NDA algorithm. In addition, we compare our algorithm to two baselines estimation methods, namely the M2M4 and SVR algorithms, using different test cases. Those test cases include low SNR values, extremely high SNR values and low number of samples. Results showed that our algorithm had a better performance compared to second and fourth moment estimation (M2M4) and original SVR algorithms in terms of normalized mean square error (NMSE) and bias estimation while keeping almost the same complexity as the original algorithms.

Research paper thumbnail of Feasibility of Supervised Machine Learning for Cloud Security

International Conference on Information Science and Security (ICISS), 2016

Cloud computing is gaining significant attention, however, security is the biggest hurdle in its ... more Cloud computing is gaining significant attention, however, security is the biggest hurdle in its wide acceptance. Users of cloud services are under constant fear of data loss, security threats and availability issues. Recently, learning-based methods for security applications are gaining popularity in the literature with the advents in machine learning techniques. However, the major challenge in these methods is obtaining real-time and unbiased datasets. Many datasets are internal and cannot be shared due to privacy issues or may lack certain statistical characteristics. As a result of this, researchers prefer to generate datasets for training and testing purpose in the simulated or closed experimental environments which may lack comprehensiveness. Machine learning models trained with such a single dataset generally result in a semantic gap between results and their application. There is a dearth of research work which demonstrates the effectiveness of these models across multiple datasets obtained in different environments. We argue that it is necessary to test the robustness of the machine learning models, especially in diversified operating conditions, which are prevalent in cloud scenarios. In this work, we use the UNSW dataset to train the supervised machine learning models. We then test these models with ISOT dataset. We present our results and argue that more research in the field of machine learning is still required for its applicability to the cloud security.

Research paper thumbnail of A Survey of Protocols and Standards for Internet of Things

Advanced Computing and communication, 2017

The rapid growth in technology and internet connected devices has enabled Internet of Things (IoT... more The rapid growth in technology and internet connected devices has enabled Internet of Things (IoT) to be one of the important fields in computing. Standards, technologies and platforms targeting IoT ecosystem are being developed at a very fast pace. IoT enables things to communicate and coordinate decisions for many different types of applications including healthcare, home automation, disaster recovery, and industry automation. It is expected to expand to even more applications in the future. This paper surveys several standards by IEEE, IETF and ITU that enable technologies enabling the rapid growth of IoT. These standards include communications, routing, network and session layer protocols that are being developed to meet IoT requirements. The discussion also includes management and security protocols in addition to the current challenges in IoT which gives insights into the current research to solve such challenges.

Research paper thumbnail of SCADA System Testbed for Cybersecurity Research Using Machine Learning Approach

Future Internet , 2018

This paper presents the development of a Supervisory Control and Data Acquisition (SCADA) system ... more This paper presents the development of a Supervisory Control and Data Acquisition (SCADA) system testbed used for cybersecurity research. The testbed consists of a water storage tank's control system, which is a stage in the process of water treatment and distribution. Sophisticated cyber-attacks were conducted against the testbed. During the attacks, the network traffic was captured, and features were extracted from the traffic to build a dataset for training and testing different machine learning algorithms. Five traditional machine learning algorithms were trained to detect the attacks: Random Forest, Decision Tree, Logistic Regression, Naïve Bayes and KNN. Then, the trained machine learning models were built and deployed in the network, where new tests were made using online network traffic. The performance obtained during the training and testing of the machine learning models was compared to the performance obtained during the online deployment of these models in the network. The results show the efficiency of the machine learning models in detecting the attacks in real time. The testbed provides a good understanding of the effects and consequences of attacks on real SCADA environments.

Research paper thumbnail of Machine Learning for Anomaly Detection and Categorization in Multi-cloud Environments

CSCloud, 2017

Cloud computing has been widely adopted by application service providers (ASPs) and enterprises to reduce both capital expenditures (CAPEX) and operational expenditures (OPEX). Applications and services previously running in private data centers are now being migrated to private or public clouds. Since most ASPs and enterprises have globally distributed user bases, their services need to be distributed across multiple clouds spread across the globe, which can achieve better performance in terms of latency, scalability, and load balancing. This shift has led the research community to study multi-cloud environments. However, the widespread acceptance of such environments has been hampered by major security concerns: firewalls and traditional rule-based protection techniques are not sufficient to protect user data in multi-cloud scenarios. Recently, advances in machine learning techniques have attracted the attention of the research community for building intrusion detection systems (IDS) that can detect anomalies in network traffic. Most research works, however, do not differentiate among different types of attacks, which is in fact necessary for appropriate countermeasures and defense. In this paper, we investigate both detecting and categorizing anomalies, rather than detection alone, which is the common trend in contemporary research. We use a popular publicly available dataset to build and test learning models for both detection and categorization of different attacks. To be precise, we use two supervised machine learning techniques, namely linear regression (LR) and random forest (RF). We show that even if detection is perfect, categorization can be less accurate due to similarities between attacks. Our results demonstrate more than 99% detection accuracy and a categorization accuracy of 93.6%, with some attacks remaining hard to categorize. Further, we argue that such categorization can be applied to multi-cloud environments using the same machine learning techniques.
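The distinction the abstract draws between detection and categorization can be sketched as two tasks over the same features: a binary attack/normal problem and a multi-class attack-type problem. This is a hedged sketch, not the paper's pipeline: it uses synthetic stand-in data and a random forest for both tasks (the paper also evaluates linear regression), with class 0 playing the role of normal traffic.

```python
# Sketch: detection (binary) vs. categorization (multi-class) on the
# same feature set. Classes: 0 = normal, 1..4 = attack categories.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y_cat = make_classification(n_samples=3000, n_features=15,
                               n_informative=10, n_classes=5,
                               n_clusters_per_class=1, random_state=1)
y_bin = (y_cat > 0).astype(int)  # collapse attack categories into attack/normal

X_tr, X_te, ycat_tr, ycat_te, ybin_tr, ybin_te = train_test_split(
    X, y_cat, y_bin, test_size=0.3, random_state=1)

detector = RandomForestClassifier(random_state=1).fit(X_tr, ybin_tr)
categorizer = RandomForestClassifier(random_state=1).fit(X_tr, ycat_tr)

det_acc = accuracy_score(ybin_te, detector.predict(X_te))
cat_acc = accuracy_score(ycat_te, categorizer.predict(X_te))
print(f"detection accuracy:      {det_acc:.3f}")
print(f"categorization accuracy: {cat_acc:.3f}")
```

Confusing one attack type for another hurts the multi-class score but not the binary one, which is why categorization accuracy can lag even when detection is near perfect, as the abstract reports.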

Research paper thumbnail of Security Services Using Blockchains: A State of the Art Survey

IEEE Communications Surveys & Tutorials, 2018

This article surveys blockchain-based approaches for several security services: authentication, confidentiality, privacy and access control lists (ACLs), data and resource provenance, and integrity assurance. All of these services are critical for current distributed applications, especially given the large amount of data being processed over networks and the use of cloud computing. Authentication ensures that a user is who he or she claims to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy gives users the ability to control who can access their data. Provenance allows efficient tracking of data and resources, along with their ownership and utilization, across the network. Integrity helps verify that data has not been modified or altered. These services are currently managed by centralized controllers, for example, a certificate authority; as a result, they are prone to attacks on that central point. A blockchain, on the other hand, is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights into the use of security services in current applications, to highlight the state-of-the-art techniques currently used to provide these services, to describe their challenges, and to discuss how blockchain technology can resolve those challenges. Further, several blockchain-based approaches providing such security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area.