Ahmed Sharaf Eldin - Profile on Academia.edu
Papers by Ahmed Sharaf Eldin
Quality must be measurable: you cannot manage something that you do not know and measure. A quality management system is seldom productive when we cannot quantify the results. We need to see how the process is started and whether it is having the intended effect. This helps us set goals for the future and guarantees that every department is working towards the same result. Quality improvement must be continuous. Total Quality Management (TQM) is not something that can be done once and then forgotten; it is not a management "phase" that ends after a problem has been fixed. TQM is a long-term investment, designed to help us find long-term success. Quality control refers to the measures that must be included during each assay run to verify that the test is working properly, while quality assurance is the overall programme that ensures that the final results reported by the laboratory are correct. The aim of quality control is simply to guarantee that the results generated by the test are correct. Quality assurance, however, is concerned with much more: that the right test is carried out on the right specimen, and that the right result and the right interpretation are delivered to the right person at the right time. Quality control denotes the directed use of testing to measure the achievement of a specified standard. It comprises the processes, procedures and authority used to accept or reject all components, drug product containers, closures, in-process materials, packaging materials, labelling and drug products, together with the authority to review production records to assure that no errors have occurred or, if they have, that they have been fully investigated. The quality and reliability of test data depend on the state and condition of the test system used in their production. "Trust is good, control is better," says an old proverb.
The quality to be achieved is not something that can be verified by simple numerical or other means; what matters is control over the intrinsic quality of a test facility and its studies. The aim of this book is to share useful and practical knowledge about quality control in several fields with readers who want to improve their knowledge.
PeerJ, Apr 7, 2021
Background: The endeavour to offer complex, specialised functions that no individual system can provide gave rise to what is known as the System of Systems (SoS). An SoS co-integrates systems while allowing more systems to be absorbed in the future. As an integrated system, an SoS simplifies operations, reduces costs, and ensures efficiency. However, conflicts may arise while co-integrating systems, undermining the main benefits of the SoS. This paper is concerned with reducing the time required to detect and resolve such conflicts. Methods: We adopted the k-means clustering technique to enhance the detection and resolution of conflicts that arise while co-integrating new systems into an existing SoS. Instead of dealing with the SoS as a single entity, we partition it into clusters. Each cluster contains nearby systems according to pre-specified criteria and can be considered a Sub-SoS (S-SoS). By doing so, conflicts that arise while co-integrating new systems can be detected and resolved in a shorter time. We propose the Smart Semantic Belief Function Clustered System of Systems (SSBFCSoS), an enhancement of the Ontology Belief Function System of Systems (OBFSoS). Results: The proposed method proved able to rapidly detect and resolve conflicts, and showed the ability to accommodate more systems as well, thereby achieving the objectives of an SoS. To test the applicability of the SSBFCSoS and compare its performance with other approaches, two datasets were employed (Glest and StarCraft: Brood War), with 15 test cases examined for each. We achieved, on average, 89% in resolving conflicts, compared to 77% for other approaches, along with an acceleration of about 16% in conflict resolution over previous approaches. Moreover, it reduced the recurrence of the same conflicts by approximately 23% relative to the other methods, not only within the same cluster but even when combining different clusters.
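The clustering step described above can be sketched with a plain k-means pass. The 2-D "system" coordinates and the deterministic initialisation below are illustrative assumptions, not the paper's actual similarity criteria:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; returns one cluster label per point."""
    # Naive deterministic initialisation (an assumption, for reproducibility).
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels

# Hypothetical "systems" placed by some pre-specified similarity criterion.
systems = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = kmeans(systems, k=2)
# Conflict checks now run inside each cluster (a Sub-SoS) rather than across the whole SoS.
clusters = {c: [i for i, l in enumerate(labels) if l == c] for c in set(labels)}
```

Partitioning first means each new system is checked for conflicts only against the members of its own cluster, which is the source of the reported speed-up.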
In aspect-oriented development, obliviousness is one of the pillars, as it helps developers implement crosscutting concerns via aspects, which increases overall software modularity. Despite its merits, obliviousness brings the problem of interference among aspects, as the pointcuts of several aspects may address the same joinpoint with the same advice. Existing approaches that deal with conflicts at the design level use graph structures, which grow as the project grows. In this work, a relational database model is used to map aspect-oriented design models, and conflicts are then extracted by an algorithm that runs over this database. This approach is simpler than the alternatives and scales to large projects, whereas graph-based approaches become complicated as the graph grows. The proposed approach can be extended to distributed team development, depending on the database engine used.
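As a minimal sketch of such a database-backed conflict check (the schema, table name, and example aspects are assumptions, not the paper's actual model), one table of pointcut matches and a single GROUP BY query suffice:

```python
import sqlite3

# Hypothetical schema: one row per (aspect, joinpoint, advice) match
# extracted from the aspect-oriented design model.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE matches (aspect TEXT, joinpoint TEXT, advice TEXT)")
con.executemany("INSERT INTO matches VALUES (?, ?, ?)", [
    ("Logging",  "Account.withdraw", "before"),
    ("Security", "Account.withdraw", "before"),  # same joinpoint + advice -> conflict
    ("Caching",  "Account.balance",  "around"),
])

# A conflict is any joinpoint/advice pair matched by more than one aspect.
conflicts = con.execute("""
    SELECT joinpoint, advice, GROUP_CONCAT(aspect)
    FROM matches
    GROUP BY joinpoint, advice
    HAVING COUNT(*) > 1
""").fetchall()
```

Because the check is a set-oriented query, its cost grows with the number of matching rows rather than with the size of an in-memory graph, which is the scalability argument the abstract makes.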
SN Applied Sciences, 2019
The rapid development of social media platforms has allowed opinion mining research to grow significantly. Opinion mining (sentiment analysis) is the process of extracting subjective information from opinions that express a single sentiment. Comparative opinion mining is a sub-field of opinion mining that deals with multi-sentiment opinions, which are expressed by comparing several entities to each other. The sentiment of a comparative relation is recognised by identifying the relation's direction and thus the preferred entity. This paper proposes an unsupervised sentiment analysis technique for Arabic comparative opinions that identifies the preferred entity. The proposed technique considers three main elements when analysing comparative opinions: the type of comparative keyword, the existence of features in the opinion, and the position of the entities relative to the comparative keyword. Five main categories are proposed for classifying comparative keywords, which facilitates the analysis of each comparative sentence. The technique limits the need for human intervention to the initial steps of preparing the lexicons and collecting and categorising comparative keywords, and it also handles opinions that contain no features at all. The results are very promising, with a total average F-measure of 96.5% for correctly identified sentiments.
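A toy version of the direction rule can be sketched as follows. The real technique works on Arabic with five keyword categories and lexicons, so the English stand-in keywords, the two categories, and the positional rule here are all assumptions for illustration:

```python
# Illustrative rule: a "superiority" keyword prefers the entity before it,
# an "inferiority" keyword prefers the entity after it.
KEYWORD_TYPE = {"better_than": "superiority", "worse_than": "inferiority"}

def preferred_entity(tokens, entities):
    """Return the preferred entity of a comparative sentence, or None."""
    for i, tok in enumerate(tokens):
        ktype = KEYWORD_TYPE.get(tok)
        if ktype is None:
            continue
        # Split the known entities by their position relative to the keyword.
        before = [e for e in entities if e in tokens[:i]]
        after = [e for e in entities if e in tokens[i + 1:]]
        if ktype == "superiority":
            return before[-1] if before else None
        return after[0] if after else None
    return None

winner = preferred_entity("phoneA better_than phoneB".split(), ["phoneA", "phoneB"])
# winner == "phoneA"
```

Categorising the keywords up front is what makes the per-sentence decision this mechanical: once the keyword type is known, the direction follows from entity position alone.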
Finite elements software and applications
The contents of this thesis are a detailed study of software for the finite element method. In the text, the finite element method is introduced from both the engineering and mathematical points of view. The computer implementation of the method is explained with samples of mainframe, mini- and micro-computer implementations. A solution is presented for the problem of limited stack size on mini- and micro-computers with stack architectures. Several finite element programs are presented: special-purpose programs to solve problems in structural analysis and groundwater flow are discussed, and an efficient, easy-to-use finite element program for general two-dimensional problems is presented. Several problems in groundwater flow are considered, including steady and unsteady flows in different types of aquifers, together with different cases of sinks and sources in the flow domain. The performance of finite element methods is studied for the chosen problems by ...
Enhancing Aggregation over Uncertain Databases
2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015
Computer and Information Science, 2015
In recent years, uncertainty management has become an important concern as the amount of uncertain data has increased rapidly. The many advanced technologies developed to record large quantities of data continuously produce data that contain errors or are only partially complete. Instead of dealing with data uncertainty by removing it, we should treat it as a source of information. To handle such data, a database management system needs special features for uncertain data. The aim of this paper is twofold: on one hand, to introduce the main concepts of uncertainty in databases, focusing on data management issues in uncertain databases such as join and query processing, database integration, indexing uncertain data, security and information leakage, and representation formalisms; on the other hand, to provide a survey of current database management systems that deal with uncertain data, presenting their features and comparing them.
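For one of the simplest query-processing cases such surveys cover, a tuple-independent table (each row exists with probability p) admits closed-form expected aggregates. The rows below are invented for illustration:

```python
# A tuple-independent uncertain relation: each row exists with probability p,
# independently of the others.
rows = [
    {"value": 10.0, "p": 0.9},
    {"value": 20.0, "p": 0.5},
    {"value": 5.0,  "p": 1.0},
]

# Expectation is linear, so E[SUM] = sum(value * p) and E[COUNT] = sum(p),
# without enumerating the 2^n possible worlds.
expected_sum = sum(r["value"] * r["p"] for r in rows)   # 9.0 + 10.0 + 5.0 = 24.0
expected_count = sum(r["p"] for r in rows)              # 2.4
```

Aggregates such as MIN/MAX or full result distributions are harder, which is exactly why the representation formalisms and query-processing techniques surveyed in the paper matter.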
Enhancing packet forwarding in Mobile Ad hoc Networks by exploiting the Information Dispersal Algorithm
2008 Mosharaka International Conference on Communications, Computers and Applications, 2008
Education and Information Technologies, 2013
The current study introduces a new pedagogical design for geo-informatics courses using an e-training support system. Laurillard's conversational approach, based on conceptual representations for both instructor and learner, was used to form the framework. Because the study is specifically interested in training as a special form of learning, we sought methods and strategies to integrate the requirements of both company and employee into the design of training programmes. A competency perspective was therefore adopted within the conversational framework, so that the learning design leads to learning activities tightly related to the needs of the company and the employee. The framework was developed with special consideration for the underpinning pedagogical principles and the needs of lifelong learning that continues after training has been completed. Implementing the framework requires a special computerised system, so an e-training support system (ETSS) was developed to realise it. ETSS is an open-source, standards-based infrastructure that enables and fosters competence development and the exchange of learning activities and learning units. Although the domain of the current study is geo-informatics, ETSS is applicable to any other domain. The framework, through its ETSS implementation, was evaluated in a typical training environment. The results indicated that training with the developed system performed best, at 91.5%, compared with 81.4% for the traditional training method.
Mining Medical Records Using Fuzzy Logic to Detect Surprising Patterns
Bio-informatics tools are widely used to obtain results that are hard to obtain by physical experiments alone. In this study, we predicted the 3D structure of all human H5N1 hemagglutinin proteins with an estimated precision of 100%, and tested for the existence of the F10 antibody epitope at their surfaces. Information gain (IG) was applied to calculate the degree of association between the mutation at each position and the absence of the F10 antibody epitope on the protein surface. We identified the amino acid positions responsible for the protein's escape from neutralization by the F10 antibody.
Robust Blind and Secure Biometric Watermarking Based on Partial Multi-Map Chaotic Encryption
2011 4th IFIP International Conference on New Technologies, Mobility and Security, 2011
... Mohamed S. El-Mahallawy, Esam A. Hagras, Ahmed Zein Eldin, Mohamed Waleed Fakhr (Arab Academy for Science, Technology and Maritime Transport, AASTMT) ... The 1L 2D DWT encrypted Lena image, based on the user secret key and the biometric key, is shown in Fig. 7. ...
Communications and Computer Security, 2013
Security is a challenge for wireless sensor networks (WSNs) because of the critical information transferred through the collection of smart sensor nodes forming the network. These nodes, however, suffer from several limitations, including available memory, computation, and communication. Clustering the nodes is one of the main ways to prolong the lifetime of the network; low-energy adaptive clustering hierarchy (LEACH), the stable election protocol (SEP), and the hybrid energy-efficient distributed clustering protocol (HEED) are among the best-known clustering algorithms. In this paper, we propose a security framework called Virtual ECC Group Key (VEGK), which merges elliptic curve cryptography (ECC) with symmetric pairwise keys alongside virtual ECC group keys. We believe that combining security with clustering will benefit energy saving in WSNs. Our scheme modifies these clustering algorithms by introducing a tree structure rooted at the base station to ensure the connectivity of the whole network. Based on the analysis of different scenarios, our proposed security framework is shown to protect the network from many attacks.
International Journal of Computer Applications, 2013
Bio-informatics tools are widely used to obtain results that are hard to obtain by physical experiments alone. In this study, we predicted the 3D structure of all human H5N1 hemagglutinin proteins with an estimated precision of 100%, and tested for the existence of the F10 antibody epitope at their surfaces. Information gain (IG) was applied to calculate the degree of association between the mutation at each position and the absence of the F10 antibody epitope on the protein surface. We identified the amino acid positions responsible for the protein's escape from neutralization by the F10 antibody.
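The information-gain step can be sketched as follows. The six-sample mutation/epitope vectors are invented toy data, not the study's actual sequences:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of discrete labels."""
    n = len(labels)
    return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                for v in set(labels))

def information_gain(feature, labels):
    """IG = H(labels) - sum_v P(feature=v) * H(labels | feature=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        idx = [i for i, f in enumerate(feature) if f == v]
        cond += len(idx) / n * entropy([labels[i] for i in idx])
    return entropy(labels) - cond

# Toy data: 1 = mutated at this position / 1 = F10 epitope absent.
mutation = [1, 1, 1, 0, 0, 0]
epitope_absent = [1, 1, 1, 0, 0, 0]
ig = information_gain(mutation, epitope_absent)  # 1.0 bit: perfect association
```

Ranking positions by this score singles out the mutations most strongly associated with loss of the epitope, which is how positions responsible for antibody escape are identified.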
The Informatics Bulletin in Computers and Information (النشرة المعلوماتیة فی الحاسبات والمعلومات)
Web 2.0 technologies have evolved rapidly in recent years, leading to a huge amount of unreliable and misleading content, owing to the openness and low publishing barriers of social media platforms. User-Generated Content (UGC) on these platforms lacks professional gatekeepers to monitor it. Consequently, many online users fall into the trap of being misled by fake information that spreads rapidly; they often rely on such information without any verification, which prevents them from making accurate decisions concerning their social lives, politics, or business. Because online users find it difficult to judge which pieces of information are credible, researchers consider assessing the UGC of social media essential to resolving the credibility problem. This paper surveys the existing literature, observing that most previous approaches have investigated information credibility on Twitter and only a limited number on Facebook, and proposes a new approach for measuring post credibility. The proposed model measures the credibility of Facebook posts through a formula that combines the page profile rank with a post-analysis score. The model was tested and achieved 87.45% accuracy.
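The abstract does not give the combination formula, so the weighted sum below, its weights, the [0, 1] normalisation, and the 0.5 decision threshold are purely assumed stand-ins for the page-rank/post-score combination:

```python
# Assumed sketch of combining a page-profile rank with a post-analysis score.
def credibility(page_rank_score, post_score, w_page=0.5, w_post=0.5):
    """Both inputs are assumed normalised to [0, 1]; returns a score in [0, 1]."""
    return w_page * page_rank_score + w_post * post_score

score = credibility(page_rank_score=0.8, post_score=0.6)  # 0.5*0.8 + 0.5*0.6 = 0.7
label = "credible" if score >= 0.5 else "not credible"
```

Separating the source-level signal (page profile) from the content-level signal (post analysis) is the design idea; the weights would in practice be tuned on labelled data.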
INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY
Drug combination is considered an effective strategy for controlling complex diseases such as cancer. Combinations of drugs can effectively decrease side effects and counter adaptive resistance, thereby increasing the likelihood of defeating complex diseases in a synergistic way. This is achieved by overcoming factors such as off-target activities, network robustness, bypass mechanisms, cross-talk across compensatory escape pathways, and the mutational heterogeneity that results in alterations within multiple molecular pathways. Most of the effective drug combinations used in the clinic were found through experience, and the molecular mechanisms underlying them are often unclear, which makes it difficult to suggest new drug combinations. Computational approaches have been proposed to reduce the search space, defining the most promising combinations and prioritising their experimental evaluation. In this paper, we review methods, techniques and hypotheses develop...
Journal of Information Security and Applications
Recent research has shown the possibility of using smartphone sensors and accessories to extract behavioural attributes such as touch dynamics, keystroke dynamics, and gait. These attributes, known as behavioural biometrics, can be used to verify or identify users implicitly and continuously on smartphones. Authentication systems built on such behavioural biometric traits are known as active, or continuous, authentication systems. This paper provides a review of active authentication systems. We present their components and operating process in general, followed by an overview of the state-of-the-art behavioural biometric traits used to develop active authentication systems and their evaluation on smartphones. We discuss the issues, strengths, and limitations associated with each behavioural biometric trait, and introduce a comparative summary of them. Finally, we present the challenges and open research problems in this field.
International Journal of Advanced Computer Science and Applications
Several papers have been published on the prediction of hepatitis C virus (HCV) polyprotein cleavage sites using symbolic and non-symbolic machine learning techniques. The published papers achieved different levels of prediction accuracy; the results depend on the technique used and on the availability of adequate and accurate HCV polyprotein sequences with known cleavage sites. We tried here to achieve more accurate predictions, and more informative knowledge about HCV protein cleavage sites, using a decision tree algorithm. Several factors can affect overall prediction accuracy; one of the most important is the availability of acceptable and accurate HCV polyprotein sequences with known cleavage sites. We collected the latest accurate datasets to build the prediction model, and a separate dataset for model testing. Motivation: Hepatitis C virus is a global health problem affecting a significant portion of the world's population. The World Health Organization estimated that in 1999 there were 170 million HCV carriers worldwide, with 3 to 4 million new cases per year. Several approaches have been taken to analyse the HCV life cycle and identify the important factors of the viral replication process. HCV polyprotein processing by the viral protease plays a vital role in virus replication, and predicting HCV protease cleavage sites can help biologists design suitable viral inhibitors. Results: The ease of using and understanding decision trees enabled us to create a simple prediction model built on the latest accurate viral datasets. The decision tree achieved acceptable prediction accuracy and generated informative knowledge about the cleavage process itself. These results can help researchers develop effective viral inhibitors.
Using a decision tree to predict HCV protein cleavage sites achieved high prediction accuracy.
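An interpretable fragment of the idea, a one-level decision tree (stump) that picks the residue-window position best separating cleaved from non-cleaved sites, can be sketched as follows. The 6-mer windows and labels are invented toy data, not the curated HCV sequences the study used:

```python
# A decision stump over residue windows around candidate cleavage sites:
# try every (position, residue) test and keep the most accurate one.
def best_stump(windows, labels):
    """Return (accuracy, position, residue) of the best single test."""
    best = None
    for pos in range(len(windows[0])):
        for residue in {w[pos] for w in windows}:
            preds = [1 if w[pos] == residue else 0 for w in windows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, pos, residue)
    return best

windows = ["EDVVCC", "TTGSVV", "LEVVTS", "AAGRKL"]  # invented 6-mers
labels  = [1, 0, 1, 0]                              # 1 = cleavage site
acc, pos, residue = best_stump(windows, labels)
# Reads as a rule: "if residue at window position `pos` is `residue`, predict cleavage".
```

A full decision tree recursively applies such splits, which is why the resulting model doubles as readable knowledge about which window positions drive cleavage.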
Feature-based sentiment analysis in online Arabic reviews
2016 11th International Conference on Computer Engineering & Systems (ICCES), 2016
Bibliometric analysis of Egyptian publications on Hepatitis C virus from PubMed using data mining of an in-house developed database (HCVDBegy)
Scientometrics, 2016
Effect of Using a Graphical Conceptual Representation Tool on Student Understanding
Quality must be measurable. You can not manage something that you do not know and measure. A qual... more Quality must be measurable. You can not manage something that you do not know and measure. A quality management system is seldom productive when we cannot quantify the results. We need to see how the process is started and if it is having the wanted effect. This will help us set our goals for the future and guarantee that every department is working towards the same result. Quality improvements must be continuous. Total Quality Management is not something that can be done once and then forgotten. It's not a management "phase" that will end after a problem has been fixed. TQM is a long-term investment, and it is designed to help us find long-term success. Quality control refers to the measures that must be included during each assay run to verify that the test is working properly. Quality Assurance is defined as the overall program that ensures that the final results reported by the laboratory are correct. The aim of quality control is simply to guarantee that the results generated by the test are correct. However, quality assurance is concerned with much more: that the right test is carried out on the right specimen, and that the right result and right interpretation is delivered to the right person at the right time. Quality control explains the directed use of testing to measure the achievement of a specified standard. Quality control is the process, procedures and authority used to accept or reject all components, drug product containers, closures, in-process materials, packaging material, labeling and drug products and the authority to review production records to assure that no errors have occurred, that they have been fully investigated. The quality and reliability of test data rely on the state and condition of the test system which is used in its production. "Trust is Good, Control is Better" says an old proverb. 
The quality which is supposed to be achieved is not a concept which can be controlled by easy, numerical or other means, but it is the control over the intrinsic quality of a test facility and its studies. The aim of this book is to share useful and practical knowledge about quality control in several fields with the people who want to improve their knowledge.
PeerJ, Apr 7, 2021
Background: The endeavouring to offer complex special functions from individual systems gave rise... more Background: The endeavouring to offer complex special functions from individual systems gave rise to what is known as the System of Systems (SoS). SoS co-integrating systems together while allowing for absorbing more systems in the future. SoS as an integrated system simplifies operations, reduces costs, and ensures efficiency. However, conflict may result while co-integrating systems, violating the main benefits of SoS. This paper is concerned with enhancing the time required to detect and solve such conflicts. Methods: We adopted the k-means clustering technique to enhance the detection and solving of conflict resulting while co-integrating new systems into an existing SoS. Instead of dealing with SoS as a single entity, we partition it into clusters. Each cluster contains nearby systems according to pre-specified criteria. We can consider each cluster a Sub SoS (S-SoS). By doing so, the conflict that may arise while cointegrating new systems can be detected and solved in a shorter time. We propose the Smart Semantic Belief Function Clustered System of Systems (SSBFCSoS), which is an enhancement of the Ontology Belief Function System of Systems (OBFSoS). Results: The proposed method proved the ability to rapidly detect and resolve conflicts. It showed the ability to accommodate more systems as well, therefore achieving the objectives of SoS. In order to test the applicability of the SSBFCSoS and compare its performance with other approaches, two datasets were employed. They are (Glest & StarCraft Brood War). With each dataset, 15 test cases were examined. We achieved, on average, 89% in solving the conflict compared to 77% for other approaches. Moreover, it showed an acceleration of up to proportionality over previous approaches for about 16% in solving conflicts as well. 
Besides, it reduced the frequency of the same conflicts by approximately 23% better than the other method, not only in the same cluster but even while combining different clusters.
In aspect oriented development, obliviousness is one of its pillars as it helps developers to imp... more In aspect oriented development, obliviousness is one of its pillars as it helps developers to implement crosscutting concerns via aspects, which increases the overall software modularity. Despite of its merits, obliviousness brings the problem of interferences among aspects as several aspects pointcuts may address the same joinpoint for the same advice. Existing approaches deals with conflicts at design level use graphs structures, which increase in size as project size increases. In this work, a relational database model is used to map aspect oriented design models and then conflicts are extracted by an algorithm runs over this database. This approach is simpler than other approaches and enables large project sizes while the other approaches get complicated due to increment in graph size. The proposed approach can be extended to the distributed team development, dependent on the database engine used.
SN Applied Sciences, 2019
The rapid development of social media platforms allowed opinion mining research to increase signi... more The rapid development of social media platforms allowed opinion mining research to increase significantly. Opinion mining/sentiment analysis is the process of extracting subjective information from opinions that imply single sentiment. Comparative opinion mining is a sub-field of opinion mining that deals with multi-sentiment opinions. Such opinions are expressed by comparing several entities to each other. The sentiment of a comparative relation is recognized by identifying the relation's direction and thus the preferred entity. This paper proposes an unsupervised sentiment analysis technique for Arabic comparative opinions to identify the preferred entity. The proposed technique considers three main elements when analyzing comparative opinions: The type of comparative keywords, the existence of features in the opinion, and the entities' position to the comparative keyword. Five main categories are proposed for classifying comparative keywords, which facilitates the analysis of each comparative sentence. The proposed technique limits the need for human interference to the initial steps of preparing the lexicons, collecting and categorizing comparative keywords. Furthermore, the proposed technique handles opinions that do not contain features at all. The results are very promising with a total average of 96.5% f-measure of correctly identified sentiment.
Finite elements software and applications
The contents of this thesis are a detailed study of the software for the finite element method. I... more The contents of this thesis are a detailed study of the software for the finite element method. In the text, the finite element method is introduced from both the engineering and mathematical points of view. The computer implementation of the method is explained with samples of mainframe, mini- and micro-computer implementations. A solution is presented for the problem of limited stack size for both mini- and micro-computers which possess stack architecture. Several finite element programs are presented. Special purpose programs to solve problems in structural analysis and groundwater flow are discussed. However, an efficient easy-to-use finite element program for general two-dimensional problems is presented. Several problems in groundwater flow are considered that include steady, unsteady flows in different types of aquifers. Different cases of sinks and sources in the flow domain are also considered. The performance of finite element methods is studied for the chosen problems by ...
Enhancing Aggregation over Uncertain Databases
2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015
Computer and Information Science, 2015
In the last years, uncertainty management became an important aspect as the presence of uncertain... more In the last years, uncertainty management became an important aspect as the presence of uncertain data increased rapidly. Due to the several advanced technologies that have been developed to record large quantity of data continuously, resulting is a data that contain errors or may be partially complete. Instead of dealing with data uncertainty by removing it, we must deal with it as a source of information. To deal with this data, database management system should have special features to handle uncertain data. The aim of this paper is twofold: on one hand, to introduce some main concepts of uncertainty in database by focusing on different data management issues in uncertain databases such as join and query processing, database integration, indexing uncertain data, security and information leakage and representation formalisms. On the other hand, to provide a survey of the current database management systems dealing with uncertain data, presenting their features and comparing them.
Enhancing packet forwarding in Mobile Ad hoc Networks by exploiting the Information Dispersal Algorithm
2008 Mosharaka International Conference on Communications, Computers and Applications, 2008
ABSTRACT
Education and Information Technologies, 2013
ABSTRACT The current study seeks to introduce a new pedagogical design for geo-informatics course... more ABSTRACT The current study seeks to introduce a new pedagogical design for geo-informatics courses using an e-training support system. Laurillard’s conversational approach based on conceptual representation for both instructor and learner was used to form the framework. As the current study specifically interested in training as a special form for learning, so, we sought methods and strategies to integrate requirements of both company and employee into the design of training programs. Therefore, a competency perspective was adopted into the conversational framework to use learning design that leads to learning activities tightly related to the needs of the company and employee. The above framework has been developed with special consideration to the underpinning pedagogical principles and the needs of lifelong learning that continues after the training has been completed. The implementation of the developed framework needs a special computerized system, so an e-training support system (ETSS) was developed to realize the framework. ETSS is an open source and standard-based infrastructure to enable and foster competence development and exchange of learning activities and learning units. Although the domain of the current study focuses on geo-informatics, ETSS is applicable to any other domain. The developed framework through its ETSS implementation were evaluated in a typical training environment. The results indicated that the best method in training was the training with the developed system with 91.5 % in comparison with the traditional training method with 81.4 %.
Mining Medical Records Using Fuzzy Logic to Detect Surprising Patterns
ABSTRACT Bio-informatics tools are widely used to obtain results that are difficult to obtain by physical experiments alone. In this study, we predicted the 3D structure of all human H5N1 hemagglutinin proteins with an estimated precision of 100%. We tested for the existence of the F10 antibody epitope at their surfaces. Information gain (IG) was applied to calculate the degree of association between each position mutation and the absence of the F10 antibody epitope on the protein surface. We identified the amino acid positions responsible for the protein's escape from neutralization by the F10 antibody.
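The position-by-position association described above can be illustrated with a small information-gain computation. This is a minimal sketch, not the study's actual pipeline; the residues, labels and function names are hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(residues, epitope_present):
    """IG between the residue observed at one alignment position
    and presence/absence of the epitope on the protein surface."""
    h_before = entropy(epitope_present)
    total = len(residues)
    h_after = 0.0
    for aa in set(residues):
        subset = [e for r, e in zip(residues, epitope_present) if r == aa]
        h_after += (len(subset) / total) * entropy(subset)
    return h_before - h_after

# Toy data: position A is perfectly associated with epitope loss,
# position B carries no information about it.
pos_a = ["K", "K", "E", "E"]
pos_b = ["K", "E", "K", "E"]
epitope = [True, True, False, False]   # F10 epitope present on surface?

print(information_gain(pos_a, epitope))  # 1.0
print(information_gain(pos_b, epitope))  # 0.0
```

Positions ranked highest by such a measure become the candidates for mutations that let the protein escape antibody neutralization.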
Robust Blind and Secure Biometric Watermarking Based on Partial Multi-Map Chaotic Encryption
2011 4th IFIP International Conference on New Technologies, Mobility and Security, 2011
... Mohamed S. El-Mahallawy1, Esam A. Hagras2, Ahmed Zein Eldin1, Mohamed Waleed Fakhr1 1,3,4Arab Academy for Science, Technology and Maritime Transport (AASTMT ... The 1L 2D DWT encrypted Lena based on the user secret key and the biometric key is shown in Fig. 7. ...
Communications and Computer Security, 2013
Security is considered a challenge for wireless sensor networks (WSNs) due to the critical information transferred through the collection of smart sensor nodes forming the network. However, these nodes suffer from several limitations, including constrained memory, computation, and communication capabilities. Clustering the nodes is one of the main solutions for prolonging the lifetime of the network; low-energy adaptive clustering hierarchy (LEACH), the stable election protocol (SEP) and the hybrid energy-efficient distributed clustering protocol (HEED) are the best-known clustering algorithms. In this paper, we propose a security framework called Virtual ECC Group Key (VEGK), merging elliptic curve cryptography (ECC) with symmetric pairwise keys along with virtual ECC group keys. We also believe that merging security with clustering benefits energy saving in WSNs. Our scheme modifies these clustering algorithms by introducing a tree structure rooted at the base station to ensure the connectivity of the whole network. Based on an analysis of different scenarios, the proposed security framework is shown to protect the network from many attacks.
International Journal of Computer Applications, 2013
Bio-informatics tools are widely used to obtain results that are difficult to obtain by physical experiments alone. In this study, we predicted the 3D structure of all human H5N1 hemagglutinin proteins with an estimated precision of 100%. We tested for the existence of the F10 antibody epitope at their surfaces. Information gain (IG) was applied to calculate the degree of association between each position mutation and the absence of the F10 antibody epitope on the protein surface. We identified the amino acid positions responsible for the protein's escape from neutralization by the F10 antibody.
النشرة المعلوماتیة فی الحاسبات والمعلومات
Web 2.0 technologies have evolved considerably in recent years, leading to a huge amount of unreliable and misleading content, owing to the openness and low publishing barrier of content generated through social media platforms. In fact, user-generated content (UGC) on social media platforms lacks professional gatekeepers to monitor it. Consequently, many online users fall into the trap of being misled by fake information that spreads rapidly; they often rely on this information without any verification, which prevents them from making accurate decisions about their social lives, politics, or business events. Because online users find it difficult to judge which pieces of information are credible, researchers consider assessing the UGC of social media essential to resolving the issue of credibility. This paper surveys the existing literature and finds that most previous approaches investigated information credibility on Twitter, with only a limited number addressing Facebook, before proposing a new approach for measuring post credibility. The proposed model measures the credibility of Facebook posts through a formula that combines the page profile rank with the post-analysis score. The model was tested and achieved 87.45% accuracy.
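The combination step can be sketched as a simple weighted formula. The weight `alpha`, the threshold and the normalization assumptions below are illustrative only; the paper's actual formula and weighting are not reproduced here:

```python
def post_credibility(page_rank_score, post_analysis_score, alpha=0.5):
    """Combine a page-profile rank and a post-analysis score into a
    single credibility value in [0, 1]. Both inputs are assumed to be
    pre-normalized to [0, 1]; alpha weights the page component."""
    return alpha * page_rank_score + (1 - alpha) * post_analysis_score

def credibility_label(score, threshold=0.5):
    """Map a combined score to a coarse credibility label."""
    return "credible" if score >= threshold else "not credible"

score = post_credibility(page_rank_score=0.8, post_analysis_score=0.6)
print(round(score, 3), credibility_label(score))  # 0.7 credible
```

Splitting the score into a page component and a post component lets the model penalize a well-written post from a low-reputation page, and vice versa.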
INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY
Drug combination is considered an effective strategy for controlling complex diseases like cancer. Combinations of drugs can effectively decrease side effects and counter adaptive resistance, thereby increasing the likelihood of defeating complex diseases in a synergistic way. This is achieved by overcoming factors such as off-target activities, network robustness, bypass mechanisms, cross-talk across compensatory escape pathways and the mutational heterogeneity that results in alterations within multiple molecular pathways. Most of the effective drug combinations used in the clinic were found through experience. The molecular mechanisms underlying these combinations are often unclear, which makes it difficult to suggest new drug combinations. Computational approaches have been proposed to reduce the search space by defining the most promising combinations and prioritizing their experimental evaluation. In this paper, we review the methods, techniques and hypotheses develop...
Journal of Information Security and Applications
Recent research has shown the possibility of using smartphones' sensors and accessories to extract behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and can be used to verify or identify users implicitly and continuously on smartphones. Authentication systems built on these behavioral biometric traits are known as active or continuous authentication systems. This paper provides a review of active authentication systems. We present the components and the operating process of active authentication systems in general, followed by an overview of the state-of-the-art behavioral biometric traits used to develop active authentication systems and their evaluation on smartphones. We discuss the issues, strengths and limitations associated with each behavioral biometric trait, and introduce a comparative summary between them. Finally, challenges and open research problems in this field are presented.
International Journal of Advanced Computer Science and Applications
Several papers have been published on the prediction of hepatitis C virus (HCV) polyprotein cleavage sites using symbolic and non-symbolic machine learning techniques. These papers achieved different levels of prediction accuracy, depending on the technique used and on the availability of adequate and accurate HCV polyprotein sequences with known cleavage sites. Here we try to achieve more accurate prediction results, and more informative knowledge about HCV protein cleavage sites, using a decision tree algorithm. Several factors can affect the overall prediction accuracy; one of the most important is the availability of acceptable and accurate HCV polyprotein sequences with known cleavage sites. We collected the latest accurate data sets to build the prediction model, and a further dataset for model testing. Motivation: Hepatitis C virus is a global health problem affecting a significant portion of the world's population. The World Health Organization estimated that in 1999, 170 million hepatitis C virus (HCV) carriers were present worldwide, with 3 to 4 million new cases per year. Several approaches have been taken to analyze the HCV life cycle and find the important factors of the viral replication process. HCV polyprotein processing by the viral protease has a vital role in virus replication, and the prediction of HCV protease cleavage sites can help biologists design suitable viral inhibitors. Results: The ease of use and interpretability of the decision tree enabled us to create a simple prediction model. We used the latest accurate viral datasets. The decision tree achieved acceptable prediction accuracy and generated informative knowledge about the cleavage process itself. These results can help researchers in the development of effective viral inhibitors.
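A typical encoding for this kind of cleavage-site prediction is to slide a fixed window over the polyprotein and label each candidate bond by whether it is a known cleavage site; the labelled windows then feed a decision-tree learner. The sequence, cleavage position and flank width below are hypothetical, a sketch of the data preparation rather than the paper's actual datasets:

```python
def windows(polyprotein, cleavage_positions, flank=4):
    """Extract a fixed-length residue window around every candidate
    scissile bond (flank residues on each side, P4..P4' style) and
    label it 1 if the bond is a known cleavage site, else 0."""
    samples = []
    for i in range(flank, len(polyprotein) - flank):
        window = polyprotein[i - flank:i + flank]
        label = 1 if i in cleavage_positions else 0
        samples.append((window, label))
    return samples

seq = "APITAYAQQTRGLLGCIITSLTGRD"   # toy fragment, not real HCV data
sites = {10}                         # hypothetical cleavage position
data = windows(seq, sites)
print(data[6])  # ('AQQTRGLL', 1) -- the window centred on position 10
```

Each window can then be one-hot encoded per residue so that a decision tree can split on individual positions, which is what makes the learned rules easy to read back as knowledge about the cleavage motif.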
Using a decision tree to predict HCV protein cleavage sites achieved high prediction accuracy.
Feature-based sentiment analysis in online Arabic reviews
2016 11th International Conference on Computer Engineering & Systems (ICCES), 2016
Bibliometric analysis of Egyptian publications on Hepatitis C virus from PubMed using data mining of an in-house developed database (HCVDBegy)
Scientometrics, 2016
Effect of Using a Graphical Conceptual Representation Tool on Student Understanding