Hongmei He | University of Salford

Papers by Hongmei He

Research paper thumbnail of A framework for Operational Security Metrics Development for industrial control environment

Journal of Cyber Security Technology, 2018

Security metrics are crucial for providing insights when measuring security states and susceptibilities in industrial operational environments. Obtaining practical security metrics depends on effective security metrics development approaches. To be effective, a security metrics development framework should be scope-definitive, objective-oriented, reliable, simple, adaptable, and repeatable (SORSAR). A framework for Operational Security Metrics Development (OSMD) for industrial control environments is presented, which combines concepts and characteristics from existing approaches and adds the new characteristic of adaptability. The OSMD framework is broken down into three phases: target definition, objective definition, and metrics synthesis. A case study scenario is used to demonstrate how to implement and apply the proposed framework and to show its usability and workability. Expert elicitation has also been used to consolidate the validity of the proposed framework. Both validation approaches have helped to show that the proposed framework can help create an effective and efficient ICS-centric security metrics taxonomy that can be used to evaluate capabilities or vulnerabilities. This understanding can help enhance security assurance within industrial operational environments.

Research paper thumbnail of A Knowledge-Based Cognitive Architecture Supported by Machine Learning Algorithms for Interpretable Monitoring of Large-Scale Satellite Networks

Sensors

Cyber–physical systems such as satellite telecommunications networks generate vast amounts of data, and currently very crude data processing is used to extract salient information. Only a small subset of the data is used reactively by operators for troubleshooting and finding problems. Sometimes, problematic events in the network may go undetected for weeks before they are reported. This becomes even more challenging as the size of the network grows due to the continuous proliferation of Internet of Things type devices. To overcome these challenges, this research proposes a knowledge-based cognitive architecture supported by machine learning algorithms for monitoring satellite network traffic. The architecture is capable of supporting and augmenting infrastructure engineers in finding and understanding the causes of faults in the network through the fusion of the results of machine learning models and rules derived from human domain experience. The system is characterised by (1) the flexibi...
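
For illustration, the kind of ML-plus-rules fusion described above can be sketched as follows; the feature names, thresholds and rules here are hypothetical stand-ins, not the paper's knowledge base.

```python
# Hedged sketch of fusing a machine-learning anomaly score with expert rules
# for network monitoring (feature names, thresholds and rules are hypothetical
# illustrations, not the paper's knowledge base).

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal([50, 0.5], [5, 0.05], size=(500, 2))   # [throughput Mbps, packet loss %]
model = IsolationForest(random_state=0).fit(normal)

RULES = [
    ("packet loss above 2%", lambda s: s[1] > 2.0),
    ("throughput below 10 Mbps", lambda s: s[0] < 10.0),
]

def diagnose(sample):
    ml_flag = model.predict([sample])[0] == -1              # -1 means anomaly
    fired = [name for name, rule in RULES if rule(sample)]
    return {"ml_anomaly": ml_flag, "rules_fired": fired}

print(diagnose([8.0, 3.5]))   # both the model and the rules should flag this
```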

Research paper thumbnail of Multi-Capacity Combinatorial Ordering GA in Application to Cloud resources allocation and efficient virtual machines consolidation

Future Generation Computer Systems, 2017

Resource allocation is the process of mapping the available resources to competing jobs based on the individual job requirements [5]. Computing resources must be well managed to prevent overloading and waste of bandwidth, processing units, memory, etc. This waste translates directly into significant financial loss for large Cloud service providers in terms of energy and operational cost, as well as dissatisfaction of the Cloud service user [6], [7]. Resource allocation systems control how multiple VMs share the underlying Physical Machines (PMs). Fast and efficient resource allocation algorithms can help to save energy and cost while increasing customer satisfaction. Resource allocation is typically performed in two stages, as shown in Figure 1. The first stage is the assignment of jobs to Virtual Machines: applications or jobs (both terms are used synonymously in the context of this paper) are executed on VMs. Each application has its own requirements for compute power, disk space, RAM, communication bandwidth, priority, etc. (see [7]). Any VM must meet these requirements when resources are allocated.
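
As a rough illustration of this two-stage allocation (and not the paper's Multi-Capacity Combinatorial Ordering GA), the sketch below greedily first-fits jobs onto VMs using hypothetical (cpu, ram, disk) capacity vectors; the same routine could then map VMs onto PMs.

```python
# Minimal sketch of the two-stage allocation idea (not the paper's GA):
# greedy first-fit of resource vectors into capacitated bins.

def fits(req, used, cap):
    return all(u + r <= c for r, u, c in zip(req, used, cap))

def first_fit(items, bins_cap):
    """Assign each item (resource vector) to the first bin with enough room."""
    used = [[0] * len(cap) for cap in bins_cap]
    placement = []
    for req in items:
        for i, cap in enumerate(bins_cap):
            if fits(req, used[i], cap):
                used[i] = [u + r for u, r in zip(used[i], req)]
                placement.append(i)
                break
        else:
            placement.append(None)  # no bin can host this item
    return placement

# Stage 1: jobs -> VMs; Stage 2 would map VMs -> PMs the same way (toy numbers).
jobs = [(2, 4, 10), (1, 2, 5), (4, 8, 20)]   # (cpu cores, RAM GB, disk GB)
vm_caps = [(4, 8, 50), (4, 8, 50)]
print(first_fit(jobs, vm_caps))              # e.g. [0, 0, 1]
```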

Research paper thumbnail of The security challenges in the IoT enabled cyber-physical systems and opportunities for evolutionary computing & other computational intelligence

2016 IEEE Congress on Evolutionary Computation (CEC), 2016

The Internet of Things (IoT) has given rise to the fourth industrial revolution (Industrie 4.0), and it brings great benefits by connecting people, processes and data. However, cybersecurity has become a critical challenge in IoT-enabled cyber-physical systems, from the connected supply chain and the Big Data produced by huge numbers of IoT devices to industrial control systems. Evolutionary computation, combined with other computational intelligence, will play an important role in cybersecurity, for example through artificial immune mechanisms for IoT security architecture, data mining/fusion in IoT-enabled cyber-physical systems, and data-driven cybersecurity. This paper provides an overview of the security challenges in IoT-enabled cyber-physical systems and of what evolutionary computation and other computational intelligence technologies could contribute to these challenges. The overview could provide clues and guidance for research on IoT security with computational intelligence.

Research paper thumbnail of A Novel Approach for Detecting Cyberattacks in Embedded Systems Based on Anomalous Patterns of Resource Utilization-Part I

IEEE Access, 2021

This paper presents a novel security approach called Anomalous Resource Consumption Detection (ARCD), which acts as an additional layer of protection to detect cyberattacks in embedded systems (ESs). The ARCD approach is based on the differentiation between a predefined standard resource consumption pattern and an anomalous pattern of system resource utilization. The effectiveness of the proposed approach is tested rigorously by simulating four types of cyberattacks: a denial-of-service attack, a brute-force attack, a remote code execution attack, and a man-in-the-middle attack, which are executed on a Smart PiCar used as the testbed. A septenary tuple model consisting of seven parameters representing the embedded system's architecture has been created as the core of the detection mechanism. The approach's efficiency and effectiveness have been validated in terms of range and pattern by analyzing the collected data statistically in terms of mean, median, mode, standard deviation, range, minimum, and maximum values. The results demonstrated the potential for defining a standard pattern of resource utilization and performance of the embedded system, due to a significant similarity of the parameters' values in normal states. In contrast, the attacked cases showed a definite, observable, and detectable impact on resource consumption and performance of the embedded system, causing an anomalous pattern. By merging these two findings, the ARCD approach has been developed. ARCD facilitates building secure operating systems in line with the ES's capabilities. Furthermore, the ARCD approach can work alongside existing countermeasures to augment the security of the operating system layer. INDEX TERMS: Anomalous resource consumption, brute-force attack, cyberattacks, denial-of-service attack, embedded systems, password attack, remote code execution, testbed.
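
A minimal sketch of the range-and-pattern check underlying ARCD is given below; the parameters, margin and sample values are assumptions for illustration, not the paper's septenary tuple model.

```python
# Illustrative sketch of a range/pattern check on a resource-usage series:
# compare live samples against summary statistics of a baseline "normal" profile.

from statistics import mean, median, mode, pstdev

def profile(samples):
    return {
        "mean": mean(samples),
        "median": median(samples),
        "mode": mode(samples),
        "stdev": pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }

def anomalous(live, baseline, margin=0.10):
    """Flag if any live sample leaves the baseline range by more than `margin`."""
    lo = baseline["min"] * (1 - margin)
    hi = baseline["max"] * (1 + margin)
    return any(x < lo or x > hi for x in live)

normal_cpu = [12, 14, 13, 15, 14, 13, 12, 14]   # % CPU under normal load (toy)
live_cpu = [13, 15, 96, 97, 95]                 # spike, e.g. during a DoS
base = profile(normal_cpu)
print(base, anomalous(live_cpu, base))
```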

Research paper thumbnail of The Challenges and Opportunities of Artificial Intelligence for Trustworthy Robots and Autonomous Systems

2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE)

Effective Robots and Autonomous Systems (RAS) must be trustworthy. Trust is essential in designing autonomous and semi-autonomous technologies, because "no trust, no use". RAS should provide a high quality of service, with the four key properties that make them trustworthy: they must be (i) robust to any health issues, (ii) safe with respect to their surrounding environments, (iii) secure against any threats from cyberspace, and (iv) trusted for human-machine interaction. We have thoroughly analysed the challenges in implementing trustworthy RAS with respect to the four properties, and addressed the power of AI in improving the trustworthiness of RAS. While we focus on the benefits that AI brings to humans, we should also recognise the potential risks that could be caused by AI. The new concept of human-centred AI will be at the core of implementing trustworthy RAS. This review could provide a brief reference for research on AI for trustworthy RAS.

Research paper thumbnail of Share price prediction of aerospace relevant companies with recurrent neural networks based on PCA

Expert Systems with Applications, 2021

The capital market plays a vital role in marketing operations for the rapid development of the aerospace industry. However, due to the uncertainty and complexity of the stock market and many cyclical factors, the stock prices of listed aerospace companies fluctuate significantly, which makes share price prediction challenging. To improve share price prediction for the aerospace industry sector and to better understand the impact of various indicators on stock prices, we provide a hybrid prediction model combining Principal Component Analysis (PCA) and Recurrent Neural Networks (RNNs). We investigated two types of aerospace industries (manufacturer and operator). The experimental results show that PCA could improve both the accuracy and the efficiency of prediction. Various factors could influence the performance of prediction models, such as finance data, extracted features, optimisation algorithms, and the parameters of the prediction model. The selection of features may depend on the stability of historical data: technical features could be the first option when the share price is stable, whereas fundamental features could be better when the share price fluctuates strongly. The delays of the RNN also depend on the stability of historical data for different types of companies: prediction is more accurate using short-term historical data for aerospace manufacturers, and long-term historical data for aerospace operating airlines. The developed model could be an intelligent agent in an automatic stock prediction system, with which the financial industry could make prompt decisions about economic strategies and business activities in terms of the predicted future share price.
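
The pipeline described above can be sketched roughly as follows, using synthetic data and an assumed configuration (number of components, window length, network size) rather than the authors' exact setup.

```python
# Hedged sketch of a PCA + RNN prediction pipeline on synthetic indicator data
# (hypothetical configuration, not the paper's exact model).

import numpy as np
from sklearn.decomposition import PCA
import tensorflow as tf

rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 12))                 # 500 days x 12 indicators (synthetic)
price = raw @ rng.normal(size=12) + rng.normal(scale=0.1, size=500)

k, window = 4, 10                                # principal components, RNN delay (assumed)
feats = PCA(n_components=k).fit_transform(raw)

# Build (window, k) sequences that predict the next day's price.
X = np.stack([feats[i:i + window] for i in range(len(feats) - window)])
y = price[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, k)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))          # next-step estimate
```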

Research paper thumbnail of Various heuristic algorithms to minimise the two-page crossing numbers of graphs

Open Computer Science, 2015

We propose several new heuristics for the two-page book crossing problem, which are based on recent algorithms for the corresponding one-page problem. In particular, the neural network model for edge allocation is combined for the first time with various one-page algorithms. We investigate the performance of the new heuristics by testing them on various benchmark test suites. We find that the new heuristics outperform the previously known heuristics and produce good approximations of the planar crossing number for several well-known graph families. We conjecture that the optimal two-page drawing of a graph represents the planar drawing of the graph.

Research paper thumbnail of Human factor security: evaluating the cybersecurity capacity of the industrial workforce

Journal of Systems and Information Technology, 2018

Research paper thumbnail of One- and two-page crossing numbers for some types of graphs

International Journal of Computer Mathematics, 2010

The simplest graph drawing method is that of putting the vertices of a graph on a line (spine) and drawing the edges as half-circles on k half-planes (pages). Such drawings are called k-page book drawings, and the minimal number of edge crossings in such a drawing is called the k-page crossing number. In a one-page book drawing, all edges are placed on one side of the spine, and in a two-page book drawing all edges are placed either above or below the spine. The one-page and two-page crossing numbers of a graph provide upper bounds for the standard planar crossing number. In this paper, we derive the exact one-page crossing numbers for four-row meshes, present a new proof for the one-page crossing numbers of Halin graphs, and derive the exact two-page crossing numbers for circulant graphs C_n(1, n/2). We also give explicit constructions of the optimal drawings for each kind of graph.
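
For illustration, the crossing condition behind these definitions can be checked directly: two edges on the same page cross exactly when their endpoints interleave along the spine. The vertex order and page assignment below are hypothetical.

```python
# Small sketch of how crossings are counted in a two-page book drawing:
# two edges on the same page cross iff their endpoints interleave along the spine.

from itertools import combinations

def interleave(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def two_page_crossings(edges, page):
    """Count crossings given a dict page[edge] in {0, 1}."""
    return sum(
        1 for e, f in combinations(edges, 2)
        if page[e] == page[f] and interleave(e, f)
    )

edges = [(0, 2), (1, 3), (0, 3), (1, 2)]               # part of K4, spine order 0,1,2,3
page = {(0, 2): 0, (1, 3): 1, (0, 3): 0, (1, 2): 0}
print(two_page_crossings(edges, page))                 # 0: this assignment avoids crossings
```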

Research paper thumbnail of An Improved Neural Network Model for the Two-Page Crossing Number Problem

IEEE Transactions on Neural Networks, 2006

The simplest graph drawing method is that of putting the vertices of a graph on a line and drawing the edges as half-circles either above or below the line. Such drawings are called two-page book drawings. The smallest number of crossings over all two-page drawings of a graph G is called the two-page crossing number of G. Cimikowski and Shope have solved the two-page crossing number problem for an n-vertex, m-edge graph by using a Hopfield network with 2m neurons. We present here an improved Hopfield model with m neurons. The new model achieves much better solution quality and is more efficient than the model of Cimikowski and Shope for all graphs tested. The parallel time complexity of the algorithm, without considering the crossing number calculations, is O(m) for the new Hopfield model with m processors, clearly outperforming the previous algorithm.

Research paper thumbnail of A Neural Network Model to Minimize the Connected Dominating Set for Self-Configuration of Wireless Sensor Networks

IEEE Transactions on Neural Networks, 2009

A wireless ad hoc sensor network consists of a number of sensors spread across a geographical area. The performance of the network suffers as the number of nodes grows, and a large sensor network quickly becomes difficult to manage. Thus, it is essential that the network be able to self-organize. Clustering is an efficient approach to simplify the network structure and to alleviate the scalability problem. One method to create clusters is to use weakly connected dominating sets (WCDSs). Finding the minimum WCDS in an arbitrary graph is an NP-complete problem. We propose a neural network model to find the minimum WCDS in a wireless sensor network, together with a directed convergence algorithm. The new algorithm outperforms the normal convergence algorithm both in efficiency and in the quality of solutions, and the neural network is shown to be robust. We investigate the scalability of the neural network model by testing it on a range of graph sizes and transmission radii. Compared with Guha and Khuller's centralized algorithm, the proposed neural network with directed convergence achieves better results when the transmission radius is short, and equal performance when the transmission radius becomes larger. The parallel version of the neural network model takes time O(d), where d is the maximal degree of the graph corresponding to the sensor network, while the centralized algorithm takes O(n^2). We also investigate the effect of the transmission radius on the size of the WCDS. The results show that it is important to select a suitable transmission radius to keep the network stable and to extend its lifespan. The proposed model can be used on sink nodes in sensor networks, so that a sink node can inform nodes chosen to be coordinators (clusterheads) in the WCDS obtained by the algorithm. Thus, the message overhead is O(M), where M is the size of the WCDS.
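
The property being minimised above can be sketched as a simple check: S is a weakly connected dominating set if every node is in S or adjacent to S, and the subgraph weakly induced by S (S, its neighbours, and all edges touching S) is connected. The graph and candidate set below are hypothetical.

```python
# Sketch of verifying a weakly connected dominating set (WCDS) on a toy graph.

from collections import deque

def is_wcds(adj, S):
    S = set(S)
    # Domination: every node is in S or has a neighbour in S.
    if any(v not in S and not (adj[v] & S) for v in adj):
        return False
    # Weak connectivity: BFS restricted to edges with at least one endpoint in S.
    closed = S | {u for v in S for u in adj[v]}
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u in closed and u not in seen and (u in S or v in S):
                seen.add(u)
                queue.append(u)
    return seen == closed

adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}
print(is_wcds(adj, {0, 3}))   # True: {0, 3} dominates and is weakly connected
```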

Research paper thumbnail of A Comprehensive Obstacle Avoidance System of Mobile Robots Using an Adaptive Threshold Clustering and the Morphin Algorithm

Advances in Intelligent Systems and Computing

To solve the problem of obstacle avoidance for a mobile robot in an unknown environment, a comprehensive obstacle avoidance system (called the ATCM system) is developed. It integrates obstacle detection, obstacle classification, collision prediction and obstacle avoidance. In particular, an Adaptive-Threshold Clustering algorithm is developed to detect obstacles, and the Morphin algorithm is applied for path planning when the robot predicts a collision ahead. A dynamic circular window is set to continuously scan the robot's surrounding environment during the task period. The simulation results show that the obstacle avoidance system enables the robot to avoid static and dynamic obstacles effectively.
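
A rough sketch of distance-threshold clustering of range-scan points is given below; the adaptive rule (a threshold that grows with measured range) is an assumption for illustration, not the paper's exact adaptive-threshold formula.

```python
# Illustrative clustering of ordered range-scan points into obstacles:
# consecutive points are grouped when their separation is below a threshold
# that grows with the measured range (assumed rule).

import math

def cluster_scan(points, base=0.1, factor=0.05):
    """points: list of (angle_rad, range_m) ordered by angle."""
    xy = [(r * math.cos(a), r * math.sin(a)) for a, r in points]
    clusters, current = [], [0]
    for i in range(1, len(points)):
        gap = math.dist(xy[i - 1], xy[i])
        threshold = base + factor * points[i][1]   # adaptive: farther -> looser
        if gap <= threshold:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return clusters

scan = [(math.radians(d), r) for d, r in
        [(0, 1.0), (2, 1.02), (4, 1.01), (30, 2.5), (32, 2.48)]]
print(cluster_scan(scan))   # two obstacles: indices [0, 1, 2] and [3, 4]
```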

Research paper thumbnail of Network performance analysis for CBM implementation based on OSA-CBM framework

2018 IEEE Aerospace Conference

In the aircraft industry, after labour and fuel costs, maintenance costs are the third largest expense item for both regional and national carriers. Implementing CBM technologies not only reduces maintenance costs but also provides more specific scheduled maintenance and onboard diagnostics and prognostics services. The maintenance department can be notified about a fault in advance and can arrange for components while the aircraft is in mid-air. CBM technologies minimize physical diagnostics costs and provide more realistic condition based maintenance (CBM). The aim of this project is to create and analyse a network architecture for Condition Based Maintenance systems. CBM consists of subsystems, sensors, model-based reasoning systems for subsystem and system level managers, and diagnostic and prognostic software for subsystems. In CBM systems there is usually a large amount of data (collected from sensors) that needs to be delivered to the right places at the right time, so the communication paradigm is an essential design consideration which impacts many key properties such as scalability, reliability, availability, timeliness and cost of the overall system. OSA-CBM (Open System Architecture for Condition Based Maintenance) is an open standard that defines an open architecture for moving information in a condition-based maintenance system. Typically, companies developing condition-based maintenance systems must develop software and hardware components as well as a framework for integrating these components. OSA-CBM is a standard framework for implementing condition-based maintenance systems: it describes not only the six functional blocks of condition based maintenance systems but also the interfaces that establish communication among these blocks. OSA-CBM specifies the input and output between the CBM modules; in simple words, it describes the information that is moved and how to move it. OSA-CBM can be implemented using various available distributed middleware, but it is not clear which implementation is more efficient. This paper presents an approach to the design, implementation and analysis of the network architecture of CBM systems using the OSA-CBM data model.

Research paper thumbnail of The Challenges and Opportunities of Human-Centred AI for Trustworthy Robots and Autonomous Systems

IEEE Transactions on Cognitive and Developmental Systems

The trustworthiness of robots and autonomous systems (RAS) has taken a prominent position on the way towards full autonomy. This work is the first to systematically explore the key facets of human-centred AI for trustworthy RAS. We identified five key properties of trustworthy RAS: they must be (i) safe in any uncertain and dynamic environment; (ii) secure, i.e., able to protect themselves from cyber threats; (iii) healthy and fault-tolerant; (iv) trusted and easy to use, to enable effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. While applications of RAS have mainly focused on performance and productivity, not enough scientific attention has been paid to the risks posed by advanced AI in RAS. We analytically examine the challenges of implementing trustworthy RAS with respect to the five key properties and explore the role and roadmap of AI technologies in ensuring the trustworthiness of RAS in respect of safety, security, health, HMI, and ethics. A new acceptance model of RAS is provided as a framework for human-centric AI requirements and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment human capabilities and focuses on contribution to humanity.

Research paper thumbnail of Knowledge-Based Linguistic Attribute Hierarchy for Diabetes Diagnosis

2019 International Conference on Computational Science and Computational Intelligence (CSCI)

A hierarchy of Linguistic Decision Trees (LDTs), called a linguistic attribute hierarchy (LAH), can provide transparent information propagation and a hierarchical decision-making process. In this paper, we quantified the effect of various factors on the diagnosis of diabetes using the information gain of each attribute with respect to the decision variable, and developed an LAH in which the LDTs are constructed under the framework of knowledge-based label semantics, referring to the diabetes diagnosis criteria defined by the World Health Organisation. A genetic wrapper algorithm was developed to find the best LAH for improving the accuracy of diabetes diagnosis. The optimal LAH for diabetes diagnosis achieved an accuracy of up to 92% on the benchmark Pima Indian Diabetes database, which is better than the accuracies reported in the research literature.
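
The information-gain ranking mentioned above can be sketched as follows on toy data; the WHO-based discretisation thresholds used by the paper are not reproduced here.

```python
# Small sketch of computing the information gain of a discrete attribute
# with respect to the class labels (toy data).

from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Entropy reduction obtained by splitting on the attribute."""
    n = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

glucose_band = ["high", "high", "normal", "normal", "high", "normal"]
diabetic = [1, 1, 0, 0, 1, 0]
print(information_gain(glucose_band, diabetic))   # 1.0: attribute fully separates classes
```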

Research paper thumbnail of How Good a Shallow Neural Network Is for Solving Non-linear Decision Making Problems

The universal approximation theorem states that a shallow neural network (one hidden layer) can represent any non-linear function. In this paper, we examine how good a shallow neural network (SNN) is for solving non-linear decision-making problems. We propose a performance-driven incremental approach to searching for the best shallow neural network for decision making, given a data set. The experimental results on two benchmark data sets, Wisconsin Breast Cancer and SMS Spam, demonstrate the correctness of the universal approximation theorem and show that a number of hidden neurons equal to about half the number of inputs is good enough to represent the function underlying the data. It is shown that performance-driven BP learning is faster than error-driven BP learning, and that the performance of the SNN obtained by the former is not worse than that of the SNN obtained by the latter. This indicates that when learning a neural network with the BP algorithm, the performance reaches a certa...
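
The "about half the inputs" heuristic reported above can be illustrated with scikit-learn's one-hidden-layer MLP on the Wisconsin breast cancer data; the training details are assumptions for illustration, not the paper's performance-driven BP variant.

```python
# Sketch: one-hidden-layer network with hidden size ~ half the input count.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
hidden = X.shape[1] // 2                       # ~half the number of inputs

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(hidden,),
                                  max_iter=1000, random_state=0))
clf.fit(Xtr, ytr)
print(f"hidden={hidden}, test accuracy={clf.score(Xte, yte):.3f}")
```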

Research paper thumbnail of Analytical Review of Cybersecurity for Embedded Systems

IEEE Access, 2021

To identify the key factors in, and create the landscape of, cybersecurity for embedded systems (CSES), an analytical review of the existing research on CSES has been conducted. The common properties of embedded systems, such as mobility, small size, low cost, independence, and limited power consumption when compared to traditional computer systems, have caused many challenges in CSES. The conflict between cybersecurity requirements and the computing capabilities of embedded systems makes it critical to implement sophisticated security countermeasures against cyber-attacks in an embedded system with limited resources, without draining those resources. In this study, twelve factors influencing CSES have been identified: (1) the components; (2) the characteristics; (3) the implementation; (4) the technical domain; (5) the security requirements; (6) the security problems; (7) the connectivity protocols; (8) the attack surfaces; (9) the impact of the cyber-attacks; (10) the security challeng...

Research paper thumbnail of A Configurable Multi-Engine System Based on Performance Matrices for Face Recognition

Research paper thumbnail of Various Island-based Parallel Genetic Algorithms for the 2-Page Drawing Problem

Genetic algorithms have been applied successfully to the 2-page drawing problem, but they work with one global population, so the search time and space are limited. Parallelization offers an attractive prospect for improving the efficiency and solution quality of genetic algorithms. One of the most popular tools for parallel computing is the Message Passing Interface (MPI). In this paper, we present four island models of parallel genetic algorithms with MPI: island models with linear, grid, and random graph topologies, and an island model with periodical synchronisation. We compare their efficiency and the quality of their solutions for the 2-page drawing problem on a variety of graphs.
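
An island-model GA with a ring (linear) migration topology can be sketched with mpi4py as below; a toy OneMax fitness stands in for the 2-page crossing objective, and the operators are deliberately minimal rather than the paper's configuration.

```python
# Sketch of an island-model GA with ring migration using mpi4py
# (run with e.g.: mpiexec -n 4 python islands.py -- hypothetical file name).

import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
random.seed(rank)

N, POP, GENS, MIG = 40, 30, 200, 20           # genome length, population, generations, migration period
fitness = sum                                  # OneMax: number of 1-bits

def evolve(pop):
    parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]              # one-point crossover
        child[random.randrange(N)] ^= 1        # bit-flip mutation
        children.append(child)
    return children

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for gen in range(GENS):
    pop = evolve(pop)
    if gen % MIG == 0 and size > 1:            # ring migration of the best individual
        best = max(pop, key=fitness)
        incoming = comm.sendrecv(best, dest=(rank + 1) % size,
                                 source=(rank - 1) % size)
        pop[pop.index(min(pop, key=fitness))] = incoming

print(rank, fitness(max(pop, key=fitness)))
```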
