Shahenda Sarhan - Academia.edu
Papers by Shahenda Sarhan
Mathematics
The Teaching Learning-Based Algorithm (TLBA) is a powerful and effective optimization approach. TLBA mimics the teaching-learning process in a classroom: unlike standard evolutionary algorithms and swarm intelligence algorithms, its iterative computing process is separated into two phases, each of which conducts an iterative learning operation. Advanced Voltage Source Converter (VSC) technologies enable greater active and reactive power regulation in these networks. Various objectives are addressed for optimal energy management, with the goal of attaining economic and technical advantages by decreasing overall production fuel costs and transmission power losses in AC-DC transmission networks. In this paper, the TLBA is applied to various sorts of nonlinear and multimodal operation of hybrid alternating current (AC) and multi-terminal direct current (DC) power grids. The proposed TLBA is evaluated on modified IEEE 30-bus and IEEE 57-bus AC-DC networks and compared t...
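The two TLBA phases described above (a teacher phase that pulls every learner toward the best solution and away from the class mean, and a learner phase in which random pairs exchange knowledge) can be sketched as follows. This is a minimal illustration on a sphere function; the population size, bounds, and teaching-factor choice are assumptions, not the paper's settings.

```python
import random

def tlba(f, dim, bounds, pop_size=20, iters=100, seed=0):
    """Minimal TLBA sketch (illustrative, not the paper's exact variant)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    clip = lambda v: min(hi, max(lo, v))
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher, away from the class mean.
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = [clip(x[d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase: learn from a randomly chosen peer.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1 if f(pop[j]) < f(x) else -1  # toward a better peer, away from a worse one
            cand = [clip(x[d] + sign * rng.random() * (pop[j][d] - x[d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = tlba(sphere, dim=5, bounds=(-10, 10))
```

Both phases accept a candidate only if it improves the objective, so the best solution can never get worse between iterations.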
Mathematics
This paper proposes a multi-objective teaching–learning studying-based algorithm (MTLSBA) to handle different objective frameworks for solving the large-scale Combined Heat and Power Economic Environmental Dispatch (CHPEED) problem. It aims at minimizing fuel costs and emissions by managing the power-only, CHP and heat-only units. TLSBA is a modified version of TLBA that increases its global optimization performance by merging in a new studying strategy. Based on this integrated tactic, every participant randomly gathers knowledge from another participant to improve its position. The position is specified as the vector of the design variables, which are the power and heat outputs of the power-only, CHP and heat-only units. TLSBA has been upgraded to include an extra Pareto archive to capture and sustain the non-dominated responses. The objective characteristic is dynamically adapted by systematically modifying the shape of the applicable objective model. Likewise, a decision-making appro...
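The Pareto archiving step mentioned above can be sketched as a simple invariant: a candidate (cost, emission) pair enters the archive only if no archived point dominates it, and any archived points it dominates are evicted. The numeric pairs below are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at
    least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Maintain a set of mutually non-dominated points (a Pareto archive)."""
    if any(dominates(p, candidate) for p in archive):
        return archive                       # candidate is dominated: discard it
    return [p for p in archive if not dominates(candidate, p)] + [candidate]

archive = []
for point in [(10, 5), (8, 7), (9, 4), (12, 12)]:   # hypothetical (cost, emission) pairs
    archive = update_archive(archive, point)
```

Here (9, 4) evicts (10, 5), and (12, 12) is rejected outright, leaving only the non-dominated front.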
Mathematics
This article suggests a novel enhanced slime mould optimizer (ESMO) that incorporates a chaotic strategy and an elitist group for handling various mathematical optimization benchmark functions and engineering problems. In the newly suggested solver, a chaotic strategy was integrated into the movement-updating rule of the basic SMO, whereas the exploitation mechanism was enhanced by searching around an elitist group instead of relying only on the global best. To handle the mathematical optimization problems, 13 benchmark functions were utilized. To handle the engineering optimization problems, the optimal power flow (OPF) was handled first, where three studied cases were considered. The suggested scheme was scrutinized on a typical IEEE test grid, and the simulation results were compared with the results given in former publications and found to be competitive in terms of solution quality. The suggested ESMO outperformed the basic SMO in terms of the convergence rate,...
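A common choice for the chaotic strategy in such solvers is the logistic map, whose iterates can stand in for uniform random draws in a movement-update rule. The sketch below is generic: the seed 0.7 and r = 4 are illustrative parameters, not necessarily those used in the article.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map; with r = 4 the iterates behave chaotically on (0, 1)."""
    return r * x * (1.0 - x)

# A chaotic sequence like this is often substituted for uniform random numbers
# in the position-update rule of metaheuristics such as SMO.
seq, x = [], 0.7
for _ in range(5):
    x = logistic_map(x)
    seq.append(x)
```

Unlike a pseudo-random generator, the sequence is fully deterministic yet non-repeating, which is the property chaotic metaheuristics exploit for diversification.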
Mathematics
The optimal operation of modern power systems aims at meeting the increased power demand requirements regarding economic and technical aspects. Another concern is keeping emissions within the environmental limitations. In this regard, this paper aims at finding the optimal scheduling of power generation units that is able to meet the load requirements based on a multi-objective optimal power flow framework. In the proposed multi-objective framework, technical, economic, and emission objective functions are considered. The solution methodology is based on a developed turbulent flow of water-based optimizer (TFWO). Single- and multi-objective functions are employed to minimize the cost of fuel, emission level, and power losses, and to enhance the voltage deviation and voltage stability index. The proposed algorithm is tested and investigated on the IEEE 30-bus and 57-bus systems, and 17 cases are studied. Four additional case studies are applied on four large scale test sys...
International Journal of Intelligent Computing and Information Science, Mar 7, 2015
arXiv (Cornell University), Jun 1, 2017
Multimodal biometric identification has gained great attention in the security field. In the real world there exist modern system devices that are able to detect, recognize, and classify human identities with reliable and fast recognition rates. Unfortunately, most of these systems rely on a single modality, which reduces reliability compared to systems that combine two or more modalities. The variation of face images across different poses is considered one of the important challenges in face recognition systems. In this paper, we propose a multimodal biometric system that is able to handle not only single-view face images but also multi-view face images. Each subject enrolled in the system positions their face in front of three cameras, and then the features of the face images are extracted using the Speeded Up Robust Features (SURF) algorithm. We utilize a Multi-Layer Perceptron (MLP) and combined classifiers based on both Learning Vector Quantization (LVQ) and Radial Basis Function (RBF) networks for classification purposes. The proposed system has been tested using the SDUMLA-HMT and CASIA datasets. Furthermore, we collected a database of multi-view face images in which we take additive white Gaussian noise into consideration. The results indicated the reliability and robustness of the proposed system under different poses and variations, including noisy images.
IEEE Access, 2021
Recently, noninvasive Electroencephalogram (EEG) systems have been gaining much attention. Brain-Computer Interface (BCI) systems rely on EEG analysis to identify the mental state of the user, changes in cognitive state, and responses to events. Motor Execution (ME) is a very important control paradigm. This paper introduces a robust and useful User-Independent Hybrid Brain-Computer Interface (UIHBCI) model to classify signals from fourteen EEG channels used to record the reactions of the brain neurons of nine subjects. Through this study the researchers identified relevant multisensory features of multi-channel EEG that represent specific mental processes depending on two different evaluation models, (Audio/Video) and (Male/Female). A Deep Belief Network (DBN) was applied independently to the two models, where the overall achieved classification rates were better in ME classification compared to the state of the art. For evaluation, four models were tested in addition to the proposed model: Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Brain-Computer Interface Lower-Limb Motor Recovery (BCI LLMR) and Hybrid Steady-State Visual Evoked Potential Rapid Serial Visual Presentation Brain-Computer Interface (Hybrid SSVEP-RSVP BCI). Results indicated that the proposed model, LDA, SVM, BCI LLMR and Hybrid SSVEP-RSVP BCI accuracies for the (A/V) model are 94.44%, 66.67%, 61.11%, 83.33% and 89.67% respectively, while for the (M/F) model, the overall accuracies are 94.44%, 88.89%, 83.331%, 85.44% and 89.45%. Finally, the proposed model achieved superiority over the state-of-the-art algorithms in both the (A/V) and (M/F) models. Index terms: deep neural networks, independent component analysis, machine learning, motor execution.
J. Inf. Hiding Multim. Signal Process., 2017
Deep neural network (DNN) techniques are utilised extensively to handle big data problems as well as to predict missing information in retrieval systems. In this paper, we propose a multimodal biometric retrieval system based on adaptive deep learning vector quantisation (ADLVQ) that resolves big data and prediction problems. Each subject enrolled in the system is authenticated according to the required degree of security determined by the administrator. We authenticate using not only one face and fingerprint modality but also multi-sample, multi-instance faces and fingerprints. The proposed system utilises the local gradient pattern with variance (LGPV) to extract the features of the input modalities that are dynamically enrolled in the system. These enrolled features are classified using a DNN after quantisation with the K-means algorithm based on prior learning vector quantisation (LVQ) knowledge. Further, the system assesses the performance of the input features adopted ...
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. Neural networks have a remarkable ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections for new situations of interest and answer "what if" questions. In this paper we introduce a brief overview of ANNs to help researchers on their way through ANNs.
Recognizing the importance of biometrics, this research presents a fused system that relies on the multimodal biometric traits face, iris, and fingerprint, achieving higher performance than unimodal biometrics. The proposed system uses the Local Binary Pattern with Variance histogram (LBPV) for extracting the preprocessed features. Canny edge detection and the Hough Circular Transform (HCT) were used in the preprocessing stage, while the Combined Learning Vector Quantization classifier (CLVQ) was used for matching and classification. Reduced feature dimensions are obtained using LBPV histograms, which are the input patterns for the CLVQ producing the classes as its outputs. The fusion process was performed at the decision level based on a majority voting algorithm over the output classes resulting from the CLVQ classifier. The experimental results indicated that the fusion of face, iris, and fingerprint has achieved a higher genuine acceptance recognition rate (GAR) of 99.50% with mini...
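Decision-level fusion by majority voting, as described above, reduces to counting the class labels emitted by the per-modality classifiers and picking the most frequent one. The subject labels below are hypothetical, purely to show the mechanics.

```python
from collections import Counter

def majority_vote(decisions):
    """Decision-level fusion: each classifier contributes one class label;
    the most frequent label wins (ties broken by first-seen order)."""
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical decisions from the face, iris, and fingerprint classifiers for one probe.
fused = majority_vote(["subject_7", "subject_7", "subject_3"])
```

With three modalities, a single misclassifying modality is outvoted, which is the robustness argument for fusing at the decision level.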
IEEE Access, 2021
Energy consumption always represents a challenge in ad hoc networks, which has spurred researchers to benefit from bio-inspired algorithms and their fitness functions to evaluate node energy during the path discovery stage. In this paper we propose an energy-efficient routing protocol based on the well-known Ad Hoc On-Demand Multipath Distance Vector (AOMDV) routing protocol and a bio-inspired algorithm called Elephant Herding Optimization (EHO). In the proposed EHO-AOMDV, the overall consumed energy of nodes is optimized by classifying nodes into two classes, while paths are discovered from the class of the fittest nodes with sufficient energy for transmission, to reduce the probability of path failure and the increasing number of dead nodes under higher data loads. The EHO updating operator updates the classes based on a separating operator that evaluates nodes by residual energy after each transmission round. Experiments were conducted using Ns-3 with five evaluation metrics (routing overhead, packet delivery ratio, average energy consumption, end-to-end delay and number of dead nodes) and four implemented protocols: the proposed protocol, AOMDV and two bio-inspired protocols, ACO-FDRPSO and FF-AOMDV. Results indicated that the proposed EHO-AOMDV attained a higher packet delivery ratio with lower routing overhead, average energy consumption and number of dead nodes than the state of the art, while in end-to-end delay AOMDV outperformed the proposed protocol. Index terms: AOMDV, elephant herding optimization, MANET, routing protocol.
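The two-class node split described above can be sketched as a threshold on residual energy. This is an illustrative stand-in for EHO's separating operator, not the paper's exact rule; the node names, energies, and 50% cutoff are invented.

```python
def classify_nodes(residual_energy, threshold_ratio=0.5):
    """Split nodes into a 'fit' class (eligible for path discovery) and an 'unfit'
    class, using residual energy relative to the current maximum as the criterion."""
    cutoff = threshold_ratio * max(residual_energy.values())
    fit = {n for n, e in residual_energy.items() if e >= cutoff}
    unfit = set(residual_energy) - fit
    return fit, unfit

# Hypothetical residual energies (joules) after a transmission round.
energy = {"n1": 9.0, "n2": 2.0, "n3": 7.5, "n4": 4.4}
fit, unfit = classify_nodes(energy)
```

Re-running the classification after each transmission round mirrors the protocol's idea of updating the classes as nodes drain energy, so routes keep favouring the fittest nodes.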
Computers in Biology and Medicine, 2021
Electrooculography (EOG) is a method to concurrently obtain electrophysiological signals accompanying an Electroencephalography (EEG), where both methods have a common cerebral pattern and imply a similar medical significance. EOG is the most common electrophysiological signal source contaminating the EEG signal, thereby decreasing measurement accuracy and the predicted signal strength. In this study, we introduce a method to improve the correction efficiency for EOG artifacts (EOAs) on raw EEG recordings: we retrieve cerebral information from three EEG signals with high system performance and accuracy by applying feature engineering and a novel machine-learning (ML) procedure. To this end, we use two adaptive algorithms for signal decomposition to remove EOAs from multichannel EEG signals: empirical mode decomposition (EMD) and complete ensemble empirical mode decomposition (CEEMD), both using the Hilbert-Huang transform. First, the signal components are decomposed into multiple intrinsic mode functions. Next, statistical feature extraction and dimension reduction using principal component analysis are employed to select optimal feature sets for the ML procedure, which is based on classification and regression models. The proposed CEEMD algorithm enhances the accuracy compared to the EMD algorithm and considerably improves the multi-sensory classification of EEG signals. Models of three different categories are applied, and the classification is based on a K-nearest neighbor (k-NN) algorithm, a decision tree (DT) algorithm, and a support vector machine (SVM) algorithm, with accuracies of 94% for k-NN, 75% for DT, and 69% for SVM. For each classification model, a regression learner is used to assist as an evidence rule for the proposed artificial system and to influence the learning process from classification and regression models.
The regression learning algorithms applied include algorithms based on an ensemble of trees (ET), a DT, and an SVM. We find that the ET-based regression model exhibits a determination coefficient R2 = 1.00, outperforming the other two approaches with R2 = 0.80 for DT and R2 = 0.76 for SVM.
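The best-performing classification stage above is a k-NN vote over statistical feature vectors extracted from the decomposed components. A toy version of that final step (the three-number features and clean/artefact labels are invented for illustration) looks like:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote over (feature_vector, label) pairs: find the k
    training points closest to the query and return the majority label."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical (mean, variance, skew-like) statistics per signal component.
train = [((0.10, 0.20, 0.00), "clean"), ((0.20, 0.10, 0.10), "clean"),
         ((0.90, 1.50, 0.80), "artefact"), ((1.10, 1.20, 0.90), "artefact"),
         ((0.15, 0.25, 0.05), "clean")]
label = knn_predict(train, (0.12, 0.22, 0.02))
```

In the study itself the feature vectors come from PCA-reduced statistics of intrinsic mode functions; the voting mechanism is the same.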
PeerJ Computer Science, 2021
Chest X-ray (CXR) imaging is one of the most feasible diagnosis modalities for early detection of infection with the COVID-19 virus, which is classified as a pandemic according to the World Health Organization (WHO) report in December 2019. COVID-19 is a rapidly mutating virus that belongs to the coronavirus family. CXR scans are one of the vital tools for early detection of COVID-19 to further monitor and control its spread. Classification of COVID-19 aims to detect whether a subject is infected or not. In this article, a model called Chest X-Ray COVID Network (CXRVN) is proposed for analyzing and evaluating grayscale CXR images, based on three different COVID-19 X-ray datasets. The proposed CXRVN model is a lightweight architecture that depends on a single fully connected layer representing the essential features, thus reducing total memory usage and processing time versus pre-trained models and others. CXRVN adopts two optimizers: mini-batch gradient descent and Adam ...
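Of the two optimizers mentioned, Adam is the less obvious to write out; a single-parameter sketch of its bias-corrected update, with the standard default hyperparameters and a toy quadratic objective (not CXRVN's actual training loop), is:

```python
import math

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar weight: exponential moving averages of the
    gradient (m) and squared gradient (v), with bias correction for step t >= 1."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t)
```

Because the update is normalized by the running gradient magnitude, the effective step size stays near `lr` regardless of the gradient's scale, which is why Adam needs little learning-rate tuning.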
IET Intelligent Transport Systems, 2020
This study presents a secret key sharing protocol that establishes cryptographically secured communication between two entities. A new symmetric key exchange scenario for smart city applications is presented in this research. The protocol is based on specific properties of the Fuss-Catalan numbers and Lattice Path combinatorics. The proposed scenario consists of three phases: generating a Fuss-Catalan object based on the grid dimension, defining the movement in the Lattice Path grid, and defining the key equalisation rules. In the experimental part, the authors present the security analysis of the protocol as well as its test. They also examine the equivalence of the proposed scenario with Maurer's satellite scenario and suggest a new scenario that implements an information-theoretical protocol for public key distribution. Additionally, a comparison with related studies and methods is provided, as well as a comparison with the satellite scenario, which demonstrates the advantages of the authors' solution. Finally, they propose further research directions regarding key management in smart city applications.
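The Fuss-Catalan numbers underpinning the protocol have the standard closed form A_p(n) = C(pn, n) / ((p−1)n + 1), which counts the lattice paths the scheme builds on. A quick numeric check (the formula is textbook-standard; this is not the paper's implementation):

```python
from math import comb

def fuss_catalan(p, n):
    """Fuss-Catalan number A_p(n) = C(p*n, n) / ((p-1)*n + 1).
    The division is always exact, so integer division is safe here."""
    return comb(p * n, n) // ((p - 1) * n + 1)

# p = 2 recovers the ordinary Catalan numbers: 1, 1, 2, 5, 14, ...
catalans = [fuss_catalan(2, n) for n in range(5)]
```

For p = 3 the sequence starts 1, 1, 3, 12, 55, which matches the known Fuss-Catalan values for ternary trees.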
Human-centric Computing and Information Sciences, 2018
Different approaches have been used to estimate language models from a given corpus. Recently, researchers have used different neural network architectures to estimate language models from a given corpus using the capabilities of unsupervised learning neural networks. Generally, neural networks have demonstrated success compared to conventional n-gram language models. With languages that have a rich morphological system and a huge number of vocabulary words, the major trade-off with neural network language models is the size of the network. This paper presents a recurrent neural network language model based on the tokenization of words into three parts: the prefix, the stem, and the suffix. The proposed model is tested on the English AMI speech recognition dataset and outperforms the baseline n-gram model, the basic recurrent neural network language model (RNNLM) and the GPU-based recurrent neural network language model (CUED-RNNLM) in perplexity and word error rate. The automatic ...
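The prefix/stem/suffix tokenization idea can be illustrated with a toy rule-based splitter; the affix lists and length guards below are invented, and the paper's morphological handling is more involved, but the effect on vocabulary size is the same: many surface words share one stem.

```python
PREFIXES = ("un", "re", "dis")        # hypothetical affix inventories
SUFFIXES = ("ing", "ness", "ed", "s")

def tokenize(word):
    """Split a word into up to three sub-tokens (prefix, stem, suffix), so the
    language model sees a smaller sub-word vocabulary instead of full words."""
    prefix = next((p for p in PREFIXES
                   if word.startswith(p) and len(word) > len(p) + 2), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES
                   if rest.endswith(s) and len(rest) > len(s) + 2), "")
    stem = rest[: len(rest) - len(suffix)] if suffix else rest
    return [t for t in (prefix, stem, suffix) if t]

tokens = tokenize("unhelpfulness")
```

Words with no recognizable affixes pass through unchanged, so the scheme degrades gracefully to whole-word modeling.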
International Journal of Intelligent Computing and Information Sciences, 2016
Using social networks has become one of the daily activities of billions of people around the world, so great research effort has been devoted to analyzing and understanding these virtual communities. Among other things, link prediction is a paramount task in analyzing and understanding these social networks. In this paper, we investigate the link prediction problem using rough set theory to discard the irrelevant attributes that can be found in the profiles of Facebook users; the proposed approach achieves an accuracy of 97.79%.
Journal of Cloud Computing, 2017
Cloud computing is a ubiquitous network access model to a shared pool of configurable computing resources, where available resources must be checked and scheduled using an efficient task scheduler to be assigned to clients. Most of the existing task schedulers did not achieve the required standards and requirements, as some of them concentrated only on reducing waiting time or response time, or both, while neglecting starved processes entirely. In this paper, we propose a novel hybrid task scheduling algorithm named SRDQ, combining the Shortest-Job-First (SJF) and Round Robin (RR) schedulers with a dynamic variable task quantum. The proposed algorithm relies mainly on two basic keys: the first is a dynamic task quantum to balance waiting time between short and long tasks, while the second involves splitting the ready queue into two sub-queues, Q1 for short tasks and Q2 for long ones. Tasks are assigned to resources from Q1 and Q2 alternately: two tasks from Q1, then one task from Q2. For evaluation purposes, three different datasets were utilized in the algorithm simulation, conducted using the CloudSim environment toolkit 3.0.3 against three different scheduling algorithms: SJF, RR and Time Slice Priority Based RR (TSPBRR). Experimental results and tests indicated the superiority of the proposed algorithm over the state of the art in reducing waiting time, response time and, partially, the starvation of long tasks.
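The two-queue dispatch order just described can be sketched as follows: short tasks enter Q1 in shortest-job-first order, long tasks enter Q2 in arrival order, and the dispatcher alternates two picks from Q1 with one from Q2. This simplified sketch omits the dynamic quantum; the task names, bursts, and threshold are invented.

```python
from collections import deque

def srdq_order(tasks, short_threshold):
    """Dispatch order for SRDQ-style two-queue scheduling over (name, burst) pairs:
    Q1 holds short tasks (burst <= threshold) in SJF order, Q2 holds long tasks
    in arrival order; picks interleave 2-from-Q1 / 1-from-Q2."""
    q1 = deque(sorted((t for t in tasks if t[1] <= short_threshold),
                      key=lambda t: t[1]))
    q2 = deque(t for t in tasks if t[1] > short_threshold)
    order = []
    while q1 or q2:
        for _ in range(2):
            if q1:
                order.append(q1.popleft()[0])
        if q2:
            order.append(q2.popleft()[0])
    return order

order = srdq_order([("A", 2), ("B", 30), ("C", 1), ("D", 25), ("E", 3)],
                   short_threshold=10)
```

The 2:1 interleave is what keeps long tasks from starving: a Q2 task is guaranteed service after at most two short tasks.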
International Review on Computers and Software (IRECOS), 2016
Code clones represent a stumbling block in the way of having more readable, maintainable and less complicated source code, free of bugs and errors. Many studies have proposed detecting and removing the four types of code clones based on pattern matching, syntax parsing, tree parsing and refactoring, the last of which is the most commonly used technique to remove code clones from software while maintaining its original behavior. In this paper, we propose an automated refactoring technique and its corresponding algorithm to remove code clones of type 1 and type 2. The proposed technique's performance was tested and evaluated using four open source Java projects: JFreeChart, JRuby, JCommon and Apache Ant. The performance of the source code was assessed based on a number of metrics, such as lines of code, number of blank lines, method count and cyclomatic complexity, before and after applying the proposed technique. The experimental results indicated that the proposed technique showed superiority over the state of the art by removing the cloned code while maintaining the stability and behavioral correctness of the source code under consideration.
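The distinction between the two clone types targeted above can be illustrated with token streams: comparing raw tokens catches type-1 clones (identical code up to layout and comments), while additionally renaming identifiers to a placeholder approximates type-2 clones (identical structure with renamed identifiers). This toy tokenizer is not the paper's refactoring algorithm.

```python
import re

def tokens(snippet, rename=False):
    """Tokenize a code fragment; with rename=True, every non-keyword identifier
    is replaced by the placeholder 'ID' (toy keyword list, Java-ish fragments)."""
    toks = re.findall(r"\w+|[^\w\s]", snippet)
    if rename:
        keywords = {"int", "return", "for", "if"}
        toks = ["ID" if t[0].isalpha() and t not in keywords else t for t in toks]
    return toks

a = "int total = x + y;\nreturn total;"
b = "int sum=a+b;  return sum;"
type1_clone = tokens(a) == tokens(b)                              # names differ -> not type 1
type2_clone = tokens(a, rename=True) == tokens(b, rename=True)    # same structure -> type 2
```

A fragment of `a` with only whitespace changes, e.g. `"int  total=x+y;  return total;"`, tokenizes identically to `a`, which is exactly the type-1 case.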
Pattern Recognition and Image Analysis, 2016
Anti-aging and looking young, with a vigorous appearance free of facial volume depletion and deepening lines of facial expression, is a dream of every human being. Researchers in the dermal and cosmetic fields have spent many years looking for solutions to aging signs and wrinkles other than surgery. Botox is a skin rejuvenation cosmetic procedure that represents a recent key to aging appearance problems, especially given the fascinating results it has shown. Botox can simply make a person look 10 to 20 years younger, which represents an obstacle for human age estimation research. In this paper, we propose a new model called the Human Injected by Botox Age Estimation (HIBAE) model, a human age estimator based on active shape models, Speeded Up Robust Features, and a support vector machine, to accurately estimate the age of people who have received Botox injections. The proposed model was trained on a crossover of the Productive Aging Lab database and 60 images collected from the internet of people who had received Botox, and tested using a crossover of the FACES64 database and 20 images of people who had received Botox. HIBAE showed superiority over the state of the art in performance testing.
Mathematics
The Teaching Learning-Based Algorithm (TLBA) is a powerful and effective optimization approach. T... more The Teaching Learning-Based Algorithm (TLBA) is a powerful and effective optimization approach. TLBA mimics the teaching-learning process in a classroom, where TLBA’s iterative computing process is separated into two phases, unlike standard evolutionary algorithms and swarm intelligence algorithms, and each phase conducts an iterative learning operation. Advanced technologies of Voltage Source Converters (VSCs) enable greater active and reactive power regulation in these networks. Various objectives are addressed for optimal energy management, with the goal of attaining economic and technical advantages by decreasing overall production fuel costs and transmission power losses in AC-DC transmission networks. In this paper, the TLBA is applied for various sorts of nonlinear and multimodal functioning of hybrid alternating current (AC) and multi-terminal direct current (DC) power grids. The proposed TLBA is evaluated on modified IEEE 30-bus and IEEE 57-bus AC-DC networks and compared t...
Mathematics
This paper proposes a multi-objective teaching–learning studying-based algorithm (MTLSBA) to hand... more This paper proposes a multi-objective teaching–learning studying-based algorithm (MTLSBA) to handle different objective frameworks for solving the large-scale Combined Heat and Power Economic Environmental Dispatch (CHPEED) problem. It aims at minimizing the fuel costs and emissions by managing the power-only, CHP and heat-only units. TLSBA is a modified version of TLBA to increase its global optimization performance by merging a new studying strategy. Based on this integrated tactic, every participant gathers knowledge from someone else randomly to improve his position. The position is specified as the vector of the design variables, which are the power and heat outputs from the power-only, CHP and heat-only units. TLSBA has been upgraded to include an extra Pareto archiving to capture and sustain the non-dominated responses. The objective characteristic is dynamically adapted by systematically modifying the shape of the applicable objective model. Likewise, a decision-making appro...
Mathematics
This article suggests a novel enhanced slime mould optimizer (ESMO) that incorporates a chaotic s... more This article suggests a novel enhanced slime mould optimizer (ESMO) that incorporates a chaotic strategy and an elitist group for handling various mathematical optimization benchmark functions and engineering problems. In the newly suggested solver, a chaotic strategy was integrated into the movement updating rule of the basic SMO, whereas the exploitation mechanism was enhanced via searching around an elitist group instead of only the global best dependence. To handle the mathematical optimization problems, 13 benchmark functions were utilized. To handle the engineering optimization problems, the optimal power flow (OPF) was handled first, where three studied cases were considered. The suggested scheme was scrutinized on a typical IEEE test grid, and the simulation results were compared with the results given in the former publications and found to be competitive in terms of the quality of the solution. The suggested ESMO outperformed the basic SMO in terms of the convergence rate,...
Mathematics
The optimal operation of modern power systems aims at achieving the increased power demand requir... more The optimal operation of modern power systems aims at achieving the increased power demand requirements regarding economic and technical aspects. Another concern is preserving the emissions within the environmental limitations. In this regard, this paper aims at finding the optimal scheduling of power generation units that are able to meet the load requirements based on a multi-objective optimal power flow framework. In the proposed multi-objective framework, objective functions, technical economical, and emissions are considered. The solution methodology is performed based on a developed turbulent flow of a water-based optimizer (TFWO). Single and multi-objective functions are employed to minimize the cost of fuel, emission level, power losses, enhance voltage deviation, and voltage stability index. The proposed algorithm is tested and investigated on the IEEE 30-bus and 57-bus systems, and 17 cases are studied. Four additional cases studied are applied on four large scale test sys...
International Journal of Intelligent Computing and Information Science, Mar 7, 2015
arXiv (Cornell University), Jun 1, 2017
Multimodal biometric identification has been grown a great attention in the most interests in the... more Multimodal biometric identification has been grown a great attention in the most interests in the security fields. In the real world there exist modern system devices that are able to detect, recognize, and classify the human identities with reliable and fast recognition rates. Unfortunately most of these systems rely on one modality, and the reliability for two or more modalities are further decreased. The variations of face images with respect to different poses are considered as one of the important challenges in face recognition systems. In this paper, we propose a multimodal biometric system that able to detect the human face images that are not only one view face image, but also multi-view face images. Each subject entered to the system adjusted their face at front of the three cameras, and then the features of the face images are extracted based on Speeded Up Robust Features (SURF) algorithm. We utilize Multi-Layer Perceptron (MLP) and combined classifiers based on both Learning Vector Quantization (LVQ), and Radial Basis Function (RBF) for classification purposes. The proposed system has been tested using SDUMLA-HMT, and CASIA datasets. Furthermore, we collected a database of multi-view face images by which we take the additive white Gaussian noise into considerations. The results indicated the reliability, robustness of the proposed system with different poses and variations including noise images.
IEEE Access, 2021
Recently Noninvasive Electroencephalogram (EEG) systems are gaining much attention. Brain-compute... more Recently Noninvasive Electroencephalogram (EEG) systems are gaining much attention. Brain-computer Interface (BCI) systems rely on EEG analysis to identify the mental state of the user, change in cognitive state and response to the events. Motor Execution (ME) is a very important control paradigm. This paper introduces a robust and useful User-Independent Hybrid Brain-computer Interface (UIHBCI) model to classify signals from fourteen EEG channels that are used to record the reactions of the brain neurons of nine subjects. Through this study the researchers identified relevant multisensory features of multi-channel EEG that represent the specific mental processes depending on two different evaluation models (Audio/Video) and (Male/Female). The Deep Belief Network (DBN) was applied independently on the two models where, the overall achieved classification rates were better in ME classification compared to the state of art. For evaluation four models were tested in addition to the proposed model, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Brain-computer Interface Lower-Limb Motor Recovery (BCI LLMR) and Hybrid Steady-State Visual Evoked Potential Rapid Serial Visual Presentation Brain-computer Interface (Hybrid SSVEP-RSVP BCI). Results indicated the proposed model, LDA, SVM, BCI LLMR and Hybrid SSVEP-RSVP BCI accuracies for (A/V) model are 94.44%, 66.67%, 61.11%, 83.33% and 89.67% respectively, while for (M/F) model, the overall accuracies are 94.44%, 88.89%, 83.331%, 85.44% and 89.45%. Finally, the proposed model achieved superiority over the state of art algorithms in both (A/V) and (M/F) models. INDEX TERMS Deep neural networks, independent component analysis, machine learning, motor execution.
J. Inf. Hiding Multim. Signal Process., 2017
Deep neural network (DNN) techniques are utilised extensively to handle big data problems as well... more Deep neural network (DNN) techniques are utilised extensively to handle big data problems as well as predicting missing information in retrieval systems. In this paper, we propose a multimodal biometric retrieval system based on adaptive deep learning vector quantisation (ADLVQ) that resolves big data and prediction problems. Intuitively, each subject enrolled in the system is authenticated according to the required degree of security determined by the administrator. We authenticate using not only one face and fingerprint modality but also multi-sample, multi-instances face and fingerprints. The proposed system utilises local gradient pattern with variance (LGPV) to extract the features of the input modalities that are dynamically enrolled in the system. These enrolled features are classified using DNN after quantisation using the K-means algorithm based on prior learning vector quantisation (LVQ) knowledge. Further, the system assesses the performance of the input features adopted ...
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. Neural networks have a remarkable ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions. So, in this paper, we introduce a brief overview of ANNs to help researchers on their way through ANNs.
Believing in the importance of biometrics, in this research we present a fused system that depends upon the multimodal biometric traits face, iris, and fingerprint, achieving higher performance than unimodal biometrics. The proposed system used the Local Binary Pattern with Variance histogram (LBPV) for extracting the preprocessed features. Canny edge detection and the Hough Circular Transform (HCT) were used in the preprocessing stage, while the Combined Learning Vector Quantization classifier (CLVQ) was used for matching and classification. Reduced feature dimensions are obtained using LBPV histograms, which are the input patterns for CLVQ, producing the classes as its outputs. The fusion process was performed at the decision level based on a majority voting algorithm over the output classes resulting from the CLVQ classifier. The experimental results indicated that the fusion of face, iris, and fingerprint achieved a higher genuine acceptance recognition rate (GAR) of 99.50% with mini...
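The decision-level fusion step described above can be sketched as a majority vote over the per-modality class decisions. This is a minimal illustration, assuming each of the three modalities (face, iris, fingerprint) yields one predicted class label; the label names are hypothetical.

```python
from collections import Counter

def fuse_decisions(decisions):
    """Decision-level fusion by majority voting: each modality's
    classifier contributes one predicted identity, and the label
    with the most votes wins (ties broken by first occurrence)."""
    votes = Counter(decisions)
    label, _ = votes.most_common(1)[0]
    return label

# Two of three modalities agree on "user_7", so it wins the vote.
print(fuse_decisions(["user_7", "user_7", "user_3"]))
```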
IEEE Access, 2021
Energy consumption always represents a challenge in ad hoc networks, which has spurred researchers to benefit from bio-inspired algorithms and their fitness functions to evaluate node energy through the path discovery stage. In this paper we propose an energy-efficient routing protocol based on the well-known Ad Hoc On-Demand Multipath Distance Vector (AOMDV) routing protocol and a bio-inspired algorithm called Elephant Herding Optimization (EHO). In the proposed EHO-AOMDV, the overall consumed energy of nodes is optimized by classifying nodes into two classes, while paths are discovered from the class of the fittest nodes with sufficient energy for transmission, to reduce the probability of path failure and the increasing number of dead nodes under higher data loads. The EHO updating operator updates the classes based on a separating operator that evaluates nodes by residual energy after each transmission round. Experiments were conducted using Ns-3 with five evaluation metrics (routing overhead, packet delivery ratio, average energy consumption, end-to-end delay and number of dead nodes) and four implemented protocols (the proposed protocol, AOMDV, and the two bio-inspired protocols ACO-FDRPSO and FF-AOMDV). Results indicated that the proposed EHO-AOMDV attained a higher packet delivery ratio with less routing overhead, lower average energy consumption and fewer dead nodes than the state of the art, while in end-to-end delay AOMDV outperformed the proposed protocol. INDEX TERMS AOMDV, elephant herding optimization, MANET, routing protocol.
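The separating step described above, which splits nodes into a fit class (eligible for path discovery) and an unfit class by residual energy, can be sketched as follows. This is a simplified illustration, not the paper's exact operator: the function name and the `threshold_ratio` parameter are assumptions.

```python
def separate_nodes(residual_energy, threshold_ratio=0.5):
    """Split nodes into 'fit' and 'unfit' classes by residual energy.

    residual_energy: dict mapping node id -> remaining energy (J).
    Nodes at or above threshold_ratio of the maximum residual energy
    are considered fit for path discovery; the rest are excluded
    until the classes are re-evaluated after the next round.
    """
    max_energy = max(residual_energy.values())
    threshold = threshold_ratio * max_energy
    fit = {n for n, e in residual_energy.items() if e >= threshold}
    unfit = set(residual_energy) - fit
    return fit, unfit

# Node 'b' falls below half of the maximum energy, so it is excluded
# from route discovery in this round.
print(separate_nodes({"a": 10, "b": 4, "c": 9}))
```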
Computers in Biology and Medicine, 2021
Electrooculography (EOG) is a method to concurrently obtain electrophysiological signals accompanying an Electroencephalography (EEG), where both methods have a common cerebral pattern and imply a similar medical significance. EOG is the most common source of electrophysiological artifacts; it contaminates the EEG signal and thereby decreases the accuracy of measurement and the predicted signal strength. In this study, we introduce a method to improve the correction efficiency for EOG artifacts (EOAs) on raw EEG recordings: we retrieve cerebral information from three EEG signals with high system performance and accuracy by applying feature engineering and a novel machine-learning (ML) procedure. To this end, we use two adaptive algorithms for signal decomposition to remove EOAs from multichannel EEG signals: empirical mode decomposition (EMD) and complete ensemble empirical mode decomposition (CEEMD), both using the Hilbert-Huang transform. First, the signal components are decomposed into multiple intrinsic mode functions. Next, statistical feature extraction and dimension reduction using principal component analysis are employed to select optimal feature sets for the ML procedure, which is based on classification and regression models. The proposed CEEMD algorithm enhances the accuracy compared to the EMD algorithm and considerably improves the multi-sensory classification of EEG signals. Models of three different categories are applied, and the classification is based on a K-nearest neighbor (k-NN) algorithm, a decision tree (DT) algorithm, and a support vector machine (SVM) algorithm, with accuracies of 94% for k-NN, 75% for DT, and 69% for SVM. For each classification model, a regression learner is used to assist as an evidence rule for the proposed artificial system and to influence the learning process from classification and regression models.
The regression learning algorithms applied include algorithms based on an ensemble of trees (ET), a DT, and an SVM. We find that the ET-based regression model exhibits a determination coefficient R2 = 1.00, outperforming the other two approaches with R2 = 0.80 for DT and R2 = 0.76 for SVM.
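The feature-extraction-plus-classification stage described above can be illustrated with a toy sketch: statistical features are computed per EEG window and fed to a nearest-neighbor classifier. This is a minimal stand-in, not the paper's pipeline — the EMD/CEEMD decomposition and PCA steps are omitted, and the feature set (mean, standard deviation, peak-to-peak range) is an assumption.

```python
import math
from statistics import mean, stdev

def extract_features(window):
    """Toy statistical feature vector for one EEG window:
    (mean, standard deviation, peak-to-peak range)."""
    return (mean(window), stdev(window), max(window) - min(window))

def knn_predict(train, labels, query, k=1):
    """Minimal k-NN classifier over feature vectors (Euclidean distance)."""
    ranked = sorted(range(len(train)),
                    key=lambda i: math.dist(train[i], query))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)

# A clean low-variance window vs. one with a large ocular-artifact swing.
clean = extract_features([1, 2, 1, 2, 1, 2])
noisy = extract_features([0, 9, 1, 8, 0, 9])
print(knn_predict([clean, noisy], ["clean", "artifact"],
                  extract_features([1, 2, 1, 2, 2, 1])))
```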
PeerJ Computer Science, 2021
Chest X-ray (CXR) imaging is one of the most feasible diagnosis modalities for early detection of infection with the COVID-19 virus, which was classified as a pandemic according to the World Health Organization (WHO) report in December 2019. COVID-19 is a rapidly mutating natural virus that belongs to the coronavirus family. CXR scans are one of the vital tools for detecting COVID-19 early in order to further monitor and control the spread of the virus. Classification of COVID-19 aims to detect whether a subject is infected or not. In this article, a model called Chest X-Ray COVID Network (CXRVN) is proposed for analyzing and evaluating grayscale CXR images, based on three different COVID-19 X-ray datasets. The proposed CXRVN model is a lightweight architecture that depends on a single fully connected layer representing the essential features, thus reducing total memory usage and processing time versus pre-trained models and others. CXRVN adopts two optimizers: mini-batch gradient descent and Adam ...
IET Intelligent Transport Systems, 2020
This study presents a secret key sharing protocol that establishes cryptographically secured communication between two entities. A new symmetric key exchange scenario for smart city applications is presented in this research. The protocol is based on specific properties of the Fuss-Catalan numbers and Lattice Path combinatorics. The proposed scenario consists of three phases: generating a Fuss-Catalan object based on the grid dimension, defining the movement in the Lattice Path grid, and defining the key equalisation rules. In the experimental part, the authors present the security analysis of the protocol as well as its testing. They also examine the equivalence of the proposed scenario with Maurer's satellite scenario and suggest a new scenario that implements an information-theoretic protocol for public key distribution. Additionally, a comparison with related studies and methods is provided, as well as a comparison with the satellite scenario, which demonstrates the advantages of the solution presented by the authors. Finally, they propose further research directions regarding key management in smart city applications.
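For context on the combinatorial objects the protocol builds on, the Fuss-Catalan numbers have the standard closed form A_n(p, 1) = C(pn, n) / ((p-1)n + 1), which reduces to the ordinary Catalan numbers when p = 2. The sketch below only computes these numbers; it is not the key-exchange protocol itself.

```python
from math import comb

def fuss_catalan(p, n):
    """n-th Fuss-Catalan number A_n(p, 1) = C(p*n, n) / ((p-1)*n + 1).

    For p = 2 this reduces to the ordinary Catalan numbers
    1, 1, 2, 5, 14, ... which count monotone lattice paths that
    stay below the diagonal.
    """
    return comb(p * n, n) // ((p - 1) * n + 1)

# First few ordinary Catalan numbers (p = 2):
print([fuss_catalan(2, n) for n in range(5)])  # → [1, 1, 2, 5, 14]
```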
Human-centric Computing and Information Sciences, 2018
Different approaches have been used to estimate language models from a given corpus. Recently, researchers have used different neural network architectures to estimate language models from a given corpus using the unsupervised learning capabilities of neural networks. Generally, neural networks have demonstrated success compared to conventional n-gram language models. With languages that have a rich morphological system and a huge vocabulary, the major trade-off with neural network language models is the size of the network. This paper presents a recurrent neural network language model based on the tokenization of words into three parts: the prefix, the stem, and the suffix. The proposed model is tested on the English AMI speech recognition dataset and outperforms the baseline n-gram model, the basic recurrent neural network language model (RNNLM) and the GPU-based recurrent neural network language model (CUED-RNNLM) in perplexity and word error rate. The automatic ...
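The prefix/stem/suffix tokenization described above can be sketched as a simple affix-stripping split, so that the language model sees three shorter, more frequent units instead of one rare word form. This is an illustration only: the affix inventories below are hypothetical, and the paper's actual morphological segmentation is language-specific.

```python
# Hypothetical affix inventories; the real lists would be built for the
# target language's morphology.
PREFIXES = ("un", "re", "pre")
SUFFIXES = ("ing", "ness", "ed", "s")

def split_word(word, min_stem=3):
    """Split a word into (prefix, stem, suffix) sub-tokens.

    A prefix or suffix is stripped only if what remains is at least
    min_stem characters long; longer suffixes are preferred.
    """
    prefix = next((p for p in PREFIXES
                   if word.startswith(p) and len(word) - len(p) >= min_stem), "")
    rest = word[len(prefix):]
    suffix = next((s for s in sorted(SUFFIXES, key=len, reverse=True)
                   if rest.endswith(s) and len(rest) - len(s) >= min_stem), "")
    stem = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, stem, suffix

print(split_word("unhappiness"))  # → ('un', 'happi', 'ness')
```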
International Journal of Intelligent Computing and Information Sciences, 2016
Using social networks has become one of the daily activities that billions of people around the world engage in, so great research effort has been devoted to analyzing and understanding these virtual communities. Among other things, link prediction is a paramount task for analyzing and understanding these social networks. In this paper, we investigate the link prediction problem using rough set theory to discard the irrelevant attributes that can be found in the profiles of Facebook users; the proposed work achieves an accuracy of 97.79%.
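The rough-set idea of discarding irrelevant attributes can be sketched with a greedy reduct computation: an attribute is dispensable if removing it keeps the decision table consistent, i.e. objects that are indiscernible on the remaining attributes never carry different decisions. This is a simplified textbook-style sketch, not the paper's exact procedure; attribute and decision names are hypothetical.

```python
def is_consistent(rows, attrs):
    """A decision table is consistent w.r.t. a set of condition
    attributes if objects indiscernible on those attributes always
    share the same decision value."""
    groups = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        groups.setdefault(key, set()).add(r["decision"])
    return all(len(decisions) == 1 for decisions in groups.values())

def reduct(rows, attrs):
    """Greedy reduct: drop each attribute whose removal preserves
    consistency (such an attribute is dispensable)."""
    kept = list(attrs)
    for a in list(kept):
        trial = [x for x in kept if x != a]
        if trial and is_consistent(rows, trial):
            kept = trial
    return kept

# 'noise' (and then 'b') turn out to be dispensable: 'a' alone
# already determines the decision.
rows = [
    {"a": 1, "b": 0, "noise": 7, "decision": "yes"},
    {"a": 0, "b": 0, "noise": 7, "decision": "no"},
    {"a": 1, "b": 1, "noise": 2, "decision": "yes"},
]
print(reduct(rows, ["a", "b", "noise"]))  # → ['a']
```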
Journal of Cloud Computing, 2017
Cloud computing is a ubiquitous network access model to a shared pool of configurable computing resources, where available resources must be checked and scheduled using an efficient task scheduler to be assigned to clients. Most of the existing task schedulers did not achieve the required standards and requirements, as some of them concentrated only on reducing waiting time or response time, or both, while neglecting starved processes altogether. In this paper, we propose a novel hybrid task scheduling algorithm named SRDQ, combining the Shortest-Job-First (SJF) and Round Robin (RR) schedulers with a dynamic variable task quantum. The proposed algorithm mainly relies on two basic ideas: the first is a dynamic task quantum to balance waiting time between short and long tasks, while the second involves splitting the ready queue into two sub-queues, Q1 for short tasks and Q2 for long ones. Tasks are assigned to resources from Q1 and Q2 alternately: two tasks from Q1, then one task from Q2. For evaluation purposes, three different datasets were utilized during the algorithm simulation, conducted using the CloudSim environment toolkit 3.0.3 against three different scheduling algorithms: SJF, RR and Time Slice Priority Based RR (TSPBRR). Experimental results and tests indicated the superiority of the proposed algorithm over the state of the art in reducing waiting time, response time and, partially, the starvation of long tasks.
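The two-queue dispatch pattern described above (two short tasks from Q1, then one long task from Q2) can be sketched as follows. This is a simplified illustration, assuming a fixed median split of burst times; the original algorithm's dynamic quantum update is omitted.

```python
from collections import deque

def srdq_schedule(tasks, median_burst):
    """Sketch of the SRDQ dispatch loop.

    tasks: list of (name, burst_time) pairs.
    Tasks with burst time below median_burst go to Q1 (short);
    the rest go to Q2 (long). Resources are fed two tasks from Q1
    for every one task from Q2, so long tasks keep making progress
    instead of starving behind a stream of short ones.
    """
    q1 = deque(t for t in tasks if t[1] < median_burst)   # short tasks
    q2 = deque(t for t in tasks if t[1] >= median_burst)  # long tasks
    order = []
    while q1 or q2:
        for _ in range(2):            # two short tasks ...
            if q1:
                order.append(q1.popleft()[0])
        if q2:                        # ... then one long task
            order.append(q2.popleft()[0])
    return order

tasks = [("t1", 2), ("t2", 9), ("t3", 3), ("t4", 12), ("t5", 1)]
print(srdq_schedule(tasks, 5))  # → ['t1', 't3', 't2', 't5', 't4']
```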
International Review on Computers and Software (IRECOS), 2016
Code clones represent a stumbling block in the way of having more readable, maintainable and less complicated source code, free of bugs and errors. Many studies have been proposed for detecting and omitting the four types of cloned code, based on pattern matching, syntax parsing, tree parsing and refactoring, the last of which is the most commonly used technique to remove code clones from software while maintaining its original behavior. In this paper, we propose an automated refactoring technique and its corresponding algorithm to omit code clones of type 1 and type 2. The performance of the proposed technique was tested and evaluated using four open source Java projects: JFreeChart, JRuby, JCommon and Apache Ant. The performance of the source code was assessed based on a number of metrics, such as lines of code, number of blank lines, method count and cyclomatic complexity, before and after applying the proposed technique. The experimental results indicated that the proposed technique showed superiority over the state of the art by omitting the cloned code while maintaining the stability and behavioral correctness of the source code under consideration.
Pattern Recognition and Image Analysis, 2016
Anti-aging and looking young, with an appearance full of vigor and no facial volume depletion or deepening lines of facial expression, is a dream of every human being. Researchers in the dermal and cosmetic fields have spent many years looking for solutions to aging signs and wrinkles other than surgery. Botox is a skin rejuvenation cosmetic procedure that represents the recent magical key to aging appearance problems, especially given the fascinating results it has shown. Botox can simply make a person look 10 to 20 years younger, which represents an obstacle for human age estimation research. In this paper, we propose a new model called the Human Injected by Botox Age Estimation (HIBAE) model, a human age estimator based on active shape models, Speeded-Up Robust Features and support vector machines, to accurately estimate the age of people who have been exposed to Botox injections. The proposed model was trained on a crossover of the Productive Aging Lab database and 60 images collected from the internet of people who had been exposed to Botox, and tested using a crossover of the FACES64 database and 20 images of people who had been exposed to Botox. HIBAE showed superiority over the state of the art through performance testing.