Khaled Shaalan - Profile on Academia.edu
Papers by Khaled Shaalan
Proceedings of the 11th International Conference on Advanced Intelligent Systems and Informatics, 2025
Transformer-based pre-trained language models are advanced machine learning models that understand and produce human language. These models are built mainly on the "Transformer" architecture and have undergone substantial pre-training on large volumes of text data to learn language patterns. Notable examples include BERT, GPT, and RoBERTa. These models have transformed NLP tasks by demonstrating exceptional performance and adaptability, facilitating knowledge transfer to specialized tasks, and addressing the issues associated with training a model from scratch. This systematic review examines transformer-based pre-trained language models, covering their architecture, pre-training techniques, adaptation approaches, and fine-tuning methodologies. It discusses the core concepts, training methods, and applications of these models to answer significant research questions, sheds light on the current state of transformer-based language models, and outlines potential future advances in this dynamic field.
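For concreteness, a minimal sketch of the adapt-a-pre-trained-model workflow the abstract describes. The checkpoint, dataset, and hyperparameters below are illustrative assumptions, not choices from the review.

```python
# Minimal fine-tuning sketch for a pre-trained transformer classifier.
# Assumes the Hugging Face `transformers` and `datasets` libraries;
# the checkpoint and dataset are illustrative choices only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

checkpoint = "bert-base-uncased"          # any BERT/RoBERTa-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")            # stand-in downstream task

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
```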
ACM Transactions on Asian and Low-Resource Language Information Processing, 2024
The surge in advancements in large language models (LLMs) has expedited the generation of synthetic text imitating human writing styles. This, however, raises concerns about the potential misuse of synthetic textual data, which could compromise trust in online content. Against this backdrop, the present research addresses the key challenges of detecting LLM-generated texts. In this study, we used ChatGPT (v3.5) because of its widespread adoption and its capability to comprehend and retain conversational context, allowing it to produce meaningful and contextually suitable responses. The problem revolves around discerning between authentic and artificially generated textual content. To tackle it, we first created a dataset containing both real and DeepFake text. Subsequently, we employed transfer learning (TL) and conducted DeepFake detection utilizing state-of-the-art (SOTA) large pre-trained LLMs. Furthermore, we validated the models on benchmark datasets comprising unseen data samples to ensure that the reported performance reflects the ability to generalize to new data. Finally, we discussed this study's theoretical contributions, practical implications, limitations, and potential avenues for future research, aiming to formulate strategies for identifying and detecting texts produced by large generative models. The results were promising, with accuracy ranging from 94% to 99%. The comparison between automatic detection and the human ability to detect DeepFake text revealed a significant gap in the human capacity for its identification, emphasizing an increasing need for sophisticated automated detectors. The investigation into AI-generated content detection holds central importance in the age of LLMs and technology convergence. This study is both timely and adds value to the ongoing discussion of "DeepFake text detection", with a special focus on examining the boundaries of human detection. CCS Concepts: • Computing methodologies → Natural language processing;
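As a hedged illustration of the transfer-learning setup described above (not the authors' exact pipeline), one can freeze a pre-trained encoder as a feature extractor and train a lightweight real-vs-synthetic classifier on top; the checkpoint and toy samples are assumptions.

```python
# Hedged sketch of transfer-learning-based DeepFake-text detection:
# freeze a pre-trained encoder and fit a light classifier on its
# embeddings. Checkpoint and toy data are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

checkpoint = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encoder = AutoModel.from_pretrained(checkpoint).eval()

def embed(texts):
    """Mean-pooled encoder embeddings, no gradient tracking."""
    with torch.no_grad():
        batch = tokenizer(texts, padding=True, truncation=True,
                          return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state      # (batch, seq, dim)
        return hidden.mean(dim=1).numpy()

texts  = ["a human-written paragraph ...", "a ChatGPT-generated paragraph ..."]
labels = [0, 1]                                          # 0 = real, 1 = DeepFake

clf = LogisticRegression().fit(embed(texts), labels)
print(clf.predict(embed(["an unseen sample to classify"])))
```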
Lecture notes in civil engineering, 2024
Persona identification helps AI-based communication systems provide personalized and situationally informed interactions. This paper introduces pre-training of CNN, BERT, and GPT models to improve persona detection on the PMPC and ROCStories datasets. The PMPC dataset contains dialogues between two speakers with different personalities, and the challenge is to match each speaker to their persona; the ROCStories dataset contains fictional character traits and activities. Our study uses a transformer-based design to improve persona detection using external context from the ROCStories dataset, and we compare our method to leading models in the field. We found that pre-training and fine-tuning on several datasets improves model performance, and that external context from story collections may strengthen persona detection algorithms and help in understanding human personality and behavior. Our study found that pre-training CNN, BERT, and GPT models improves persona detection, which in turn improves user experiences and communication. The method could be used in chatbots, personalized recommendation systems, and customer support, and it can help create AI-driven communication systems with tailored, context-aware, and human-like interactions.
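The matching step at the heart of persona detection can be sketched as embedding similarity between a speaker's utterances and each candidate persona. This is not the paper's CNN/BERT/GPT pre-training pipeline, only a minimal stand-in; the sentence-transformers encoder and the examples are assumptions.

```python
# Hedged sketch of persona matching as embedding similarity: score each
# candidate persona against a speaker's utterances, pick the best match.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative encoder

utterances = ["I spent the weekend fixing my vintage motorbike."]
personas = ["I love restoring old vehicles.", "I am a professional chef."]

u_vec = encoder.encode(utterances, convert_to_tensor=True).mean(dim=0)
p_vecs = encoder.encode(personas, convert_to_tensor=True)

scores = util.cos_sim(u_vec, p_vecs)[0]             # one score per persona
print(personas[int(scores.argmax())])               # best-matching persona
```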
Expert systems: Useful tools for enhancing agricultural research and production
The Future of E-Commerce Systems: 2030 and Beyond
Studies in systems, decision and control, 2021
The sophistication and efficiency of commerce systems are undeniably advancing. As businesses evolve, the question is no longer how these systems performed in the past but what functions and relevance they will have in the future. With commerce software at the core of every business today, an overview of how omnichannel transactions operate is essential in light of the 4th industrial revolution and its societal impact. This chapter explains how commerce has evolved with the advent of technology, elaborating on the current state and challenges of e-commerce systems, their architecture, and innovations in cyber-physical systems for electronic commerce. It further expounds on the application of omnichannel systems in communication through fifth-generation networks, in transactions through blockchain, and in composition through the Social Internet of Things. We believe that this study will benefit all stakeholders in commerce, from governments to supply chain organizations and consumers, in understanding the forthcoming drivers of omnichannel systems in the 4th industrial revolution, their prospects, and their anticipated challenges.
Novel Federated Decision Making for Distribution of Anti-SARS-CoV-2 Monoclonal Antibody to Eligible High-Risk Patients
International Journal of Information Technology and Decision Making, Oct 10, 2022
Context: When the epidemic first broke out, no specific treatment was available for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The urgent need to end this unusual situation resulted in many attempts to deal with SARS-CoV-2. In addition to the several types of vaccines that were created, anti-SARS-CoV-2 monoclonal antibodies (mAbs) added a new dimension to preventative and treatment efforts. This therapy also helps prevent severe symptoms for those at high risk, making it one of the most promising treatments for mild to moderate SARS-CoV-2 cases. However, the availability of anti-SARS-CoV-2 mAb therapy is limited, which leads to two main challenges. The first is the privacy challenge of selecting eligible patients from the distribution hospital network, which requires data sharing; the second is the prioritization of all eligible patients amongst the distribution hospitals according to dose availability. To our knowledge, no research has combined the federated approach with multicriteria decision-making methods for the treatment of SARS-CoV-2, indicating a research gap. Objective: This paper presents a unique sequence processing methodology that distributes anti-SARS-CoV-2 mAbs to eligible high-risk patients with SARS-CoV-2 based on medical requirements by using a novel federated decision-making distributor. Method: This paper proposes a novel federated decision-making distributor (FDMD) of anti-SARS-CoV-2 mAbs for eligible high-risk patients. FDMD is implemented on augmented data of 49,152 cases of patients with SARS-CoV-2 with mild and moderate symptoms. For proof of concept, three hospitals with 16 patients each are enrolled. The proposed FDMD is constructed from the two sides of claim sequencing: a central federated server (CFS) and local machines (LMs). The CFS includes five sequential phases synchronised with the LMs: the preliminary criteria setting phase, which determines the high-risk criteria, calculates their weights using the newly formulated interval-valued spherical fuzzy and hesitant 2-tuple fuzzy-weighted zero-inconsistency method (IVSH2-FWZIC), and allocates their values; followed by federation, dose availability confirmation, global prioritization of eligible patients, and alerting the hospitals with the patients most eligible for receiving the anti-SARS-CoV-2 mAbs according to dose availability. Each LM independently performs all local prioritization processes, without sharing patients' data, using the criteria settings and federated parameters provided by the CFS via the proposed Federated TOPSIS (F-TOPSIS). The sequential processing steps are coherently performed on both sides. Results and Discussion: (1) The proposed FDMD efficiently and independently identifies the high-risk patients most eligible for receiving anti-SARS-CoV-2 mAbs at each local distribution hospital; the final decision at the CFS relies on the indexed patients' scores and dose availability without sharing the patients' data. (2) The IVSH2-FWZIC effectively weighs the high-risk criteria of patients with SARS-CoV-2. (3) The local and global prioritization ranks produced by F-TOPSIS for eligible patients are subjected to a systematic ranking, validated by high correlation results across nine scenarios in which the criteria weights are altered. (4) A comparative analysis of the experimental results with a prior study confirms the effectiveness of the proposed FDMD. Conclusion: The proposed FDMD has the benefits of centrally distributing anti-SARS-CoV-2 mAbs to high-risk patients prioritized based on their eligibility and dose availability, while simultaneously protecting their privacy and offering an effective treatment to prevent progression to severe SARS-CoV-2 hospitalization or death.
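Since F-TOPSIS builds on the classical TOPSIS ranking, a minimal sketch of that underlying step may help: each hospital could rank its own patients locally and share only scores, never patient data. The decision matrix, weights, and criteria below are illustrative, not study values.

```python
# Hedged sketch of the TOPSIS ranking step that F-TOPSIS builds on.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: patients x criteria; benefit[j] True if higher is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)       # vector normalization
    v = norm * weights                                   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)                  # closeness in [0, 1]

# Toy data: three patients scored on three high-risk criteria.
scores = topsis(np.array([[67., 2, 1], [45., 0, 3], [72., 1, 2]]),
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, True]))
print(np.argsort(-scores))   # patient indices, most eligible first
```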
Predicting the Road Accidents Severity Using Artificial Neural Network
Springer eBooks, 2022
Utilizing Machine Learning to Develop Cloud-Based Apprenticeship Programs Aligned with Labor Market Demands
2023 IEEE 10th International Conference on Cyber Security and Cloud Computing (CSCloud)/2023 IEEE 9th International Conference on Edge Computing and Scalable Cloud (EdgeCom)
There is a potential disparity between academic pursuits and labor market requirements. We propose an approach exploring the feasibility of leveraging machine learning to tailor apprenticeship programs and enhance learning outcomes. The proposed approach involves the collaboration of four key stakeholders, namely the employer, the trainer, university management, and the apprentice, each with a unique role in the apprenticeship program. A machine learning algorithm is employed to customize Occupational Learning Outcomes (OLO) for each job, with blockchain technology used to facilitate the student credit system. The entire system is hosted on a cloud-based centralized database to enable dynamic and sustainable program modification. The paper concludes by highlighting the potential of digital technologies to transform apprenticeships and to create new opportunities and challenges.
Automatic Generation of Ancient Poetry Based on Generative Adversarial Network
Business Intelligence and Information Technology
Towards Creating Public Key Authentication for IoT Blockchain
2019 Sixth HCT Information Technology Trends (ITT), 2019
Besides confidentiality and privacy, trust is an important factor for any IoT system. When a sensor sends data signed with its private key, the receiving nodes verify it using the sensor's public key; hence, authenticating the public keys in the system is part of creating trust within it. Traditionally, trust is maintained using a Public Key Infrastructure (PKI), where a centralized Certificate Authority (CA) authenticates the public keys. However, a centralized system can result in a single point of failure, where the CA can be compromised or act maliciously. Decentralizing this system using blockchain, and automating the process of certificate authentication without the need for a central third party, can overcome these limitations. We identify the challenges in creating such a system and propose a generic framework for PKI in IoT infrastructure using blockchain that can provide the functions of a CA.
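A minimal sketch of the sign-and-verify flow described above, using the Python `cryptography` library with Ed25519 keys; how the public key itself is authenticated (the PKI/blockchain contribution of the paper) is deliberately out of scope here.

```python
# Sensor signs a reading with its private key; receivers verify it with
# the sensor's public key. Uses the `cryptography` library.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sensor_key = Ed25519PrivateKey.generate()        # stays on the sensor
public_key = sensor_key.public_key()             # published / authenticated via PKI

reading = b'{"temp": 21.4, "sensor_id": "s-17"}'
signature = sensor_key.sign(reading)

try:
    public_key.verify(signature, reading)        # raises if tampered with
    print("reading accepted")
except InvalidSignature:
    print("reading rejected")
```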
An Optimal Consensus Node Selection Process for IoT Blockchain
2019 Sixth HCT Information Technology Trends (ITT), 2019
Blockchain and the Internet of Things (IoT) are two trending technologies that, when combined, can strengthen the security of various applications. However, the security of a blockchain depends on its consensus mechanism. The consensus mechanisms used by cryptocurrencies require heavy computation and hence cannot be applied to IoT. Applying a lightweight consensus such as PBFT (Practical Byzantine Fault Tolerance) requires an authority or a protocol to select the leader node and the nodes to be involved in consensus. However, the current node selection process in many blockchain applications involves a central authority or is based on traditional round-robin techniques. Hence, we propose a simple and efficient node selection mechanism that can perform consensus without wasting energy. Our approach uses PBFT, where the nodes participating in the consensus are not predetermined by a central authority; instead, nodes are selected based on their performance in the blockchain.
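A hedged sketch of what performance-based selection could look like: rank nodes by a running score and take the 3f + 1 best for a PBFT round tolerating f faulty nodes. The scoring fields are assumptions, not the paper's exact metric.

```python
# Performance-based consensus-node selection sketch for PBFT.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    blocks_validated: int     # successful consensus participation
    faults: int               # missed or invalid votes

    def score(self) -> float:
        return self.blocks_validated / (1 + self.faults)

def select_consensus_nodes(nodes: list[Node], f: int) -> list[Node]:
    """Top 3f + 1 nodes by score; index 0 acts as leader."""
    ranked = sorted(nodes, key=Node.score, reverse=True)
    return ranked[: 3 * f + 1]

nodes = [Node("a", 120, 1), Node("b", 90, 0), Node("c", 40, 5),
         Node("d", 80, 2), Node("e", 150, 3)]
chosen = select_consensus_nodes(nodes, f=1)      # 4 nodes for f = 1
print([n.node_id for n in chosen])
```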
FCSR - Fuzzy Continuous Speech Recognition Approach for Identifying Laryngeal Pathologies Using New Weighted Spectrum Features
Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2017, 2017
Speech processing technologies have provided distinct contributions to identifying laryngeal pathology, in which samples of normal and pathologic voice are evaluated. In this paper, a novel Fuzzy Continuous Speech Recognition approach, termed FCSR, is proposed for laryngeal pathology identification. First, new speech weighted spectrum features based on Jacobi–Fourier Moments (JFMs) are presented for the characterization of larynx pathologies. This is primarily motivated by the observation that the energy distribution of the spectrogram changes markedly with certain larynx pathologies, such as physiological and neuromuscular pathologies, relative to normal speech. This phenomenon extensively influences the allocation of local spectrogram energy along both the time and frequency axes. Consequently, JFMs computed from local spectrogram regions are utilized to characterize the distribution of local spectrogram energy. In addition, a multi-class fuzzy support vector machine (FSVM) model is constructed to classify larynx pathologies, where partition index maximization (PIM) clustering and particle swarm optimization (PSO) are employed for calculating the fuzzy memberships and optimizing the arguments of the FSVM kernel function, respectively. Finally, the experiments validate the proposed approach with respect to the accuracy of laryngeal pathology recognition.
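The feature-extraction idea (summarizing energy over local spectrogram regions) can be sketched as follows; plain energy sums stand in for the Jacobi–Fourier Moments the paper actually computes, and the signal is a placeholder.

```python
# Hedged sketch: split a spectrogram into local time-frequency regions
# and summarize each region's energy (a stand-in for per-region JFMs).
import numpy as np
from scipy.signal import spectrogram

fs = 16000
signal = np.random.randn(fs)                     # placeholder 1 s of "speech"
freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=256)

def local_energies(sxx, grid=(4, 4)):
    """Sum spectrogram energy over a grid of local regions."""
    f_chunks = np.array_split(sxx, grid[0], axis=0)
    rows = [np.array_split(chunk, grid[1], axis=1) for chunk in f_chunks]
    return np.array([[region.sum() for region in row] for row in rows]).ravel()

print(local_energies(sxx).shape)                 # (16,) feature vector
```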
Using Arabic Social Media Feeds for Incident and Emergency Management in Smart Cities
2018 3rd International Conference on Smart and Sustainable Technologies (SpliTech), 2018
Research on Smart Cities tackles the challenges related to rapid urban population growth combined with resource scarcity. A key function of any Smart City initiative is to continuously monitor and track a city's environment and resources so as to convert the data into intelligence for streamlining the city's operations. Social media has become one of the most popular means for users to communicate and share information, opinions, and sentiments about events and incidents occurring in a city. With the rapid growth and proliferation of social media platforms, there is a vast amount of user-generated content that can be used as a source of information about cities. In this work, we propose the use of text mining and classification techniques to extract the intelligence needed from Arabic social media feeds for effective incident and emergency management in smart cities. In our system, the information collected from social media feeds is processed to gen...
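A hedged sketch of the kind of classification pipeline such a system might use: character n-gram TF-IDF features, which avoid committing to a specific Arabic tokenizer, feeding a linear classifier that routes posts to incident categories. The examples and labels are invented.

```python
# Toy Arabic incident classifier: char n-gram TF-IDF + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

posts = ["حادث سير على شارع الشيخ زايد",      # traffic accident
         "حريق في مبنى سكني بمنطقة ديرة",      # building fire
         "ازدحام شديد قرب المول"]              # heavy congestion
labels = ["accident", "fire", "congestion"]

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["اندلاع حريق بالقرب من المدرسة"]))   # expect "fire"
```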
Lean Transformation in Information Technology: The Case of IT Services in Financial Firms
In today’s competitive market, organizations have realized that high quality contributes to long-term success. The Lean transformation framework can be described as a set of tools and principles that positively impact quality by focusing on both eliminating waste and adding value in an organization's processes. This study examines the success of implementing Lean principles in IT. Since no such research has been conducted in the Arab world on implementing Lean in IT, this paper is considered pioneering research in the field of IT quality management. The objective of this paper is to use a case-based approach to demonstrate how Lean principles and tools can help IT to enhance IT service quality, reduce cost, and improve productivity. A case study at a leading financial institution in the Arab Gulf region is examined. The study focuses on analyzing the Lean transformation stages, including current state assessment, target state design, and implementation; as w...
A Hybrid Framework for Applying Semantic Integration Technologies to Improve Data Quality
This study aims to develop a new hybrid framework of semantic integration for enterprise information systems in order to improve data quality and resolve the problems arising from scattered data sources and the rapid expansion of data. The proposed framework is based on a solid background inspired by previous studies. Significant and seminal research articles are reviewed based on selection criteria, and a critical review is conducted to determine a set of qualified semantic technologies that can be used to construct a hybrid semantic integration framework. The proposed framework consists of six layers and one component: a source layer, translation layer, XML layer, RDF layer, inference layer, application layer, and an ontology component. The framework faced two challenges and one conflict, which were resolved while composing it. The proposed framework was examined for its ability to improve data quality across four data quality dimensions.
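The middle layers of the framework (translation into RDF and uniform querying) can be sketched with rdflib; the library choice, namespace, and record fields are assumptions rather than the paper's implementation.

```python
# Hedged sketch: lift a record from a scattered source into RDF triples
# (translation layer) and query the integrated graph (application layer).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/enterprise/")
g = Graph()

record = {"id": "cust-42", "name": "Acme Ltd", "country": "EG"}
subject = EX[record["id"]]
g.add((subject, EX.name, Literal(record["name"])))
g.add((subject, EX.country, Literal(record["country"])))

results = g.query("""
    SELECT ?name WHERE {
        ?c <http://example.org/enterprise/country> "EG" ;
           <http://example.org/enterprise/name> ?name .
    }""")
print([str(row.name) for row in results])
```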
Advances in Intelligent Systems and Computing, 2016
We present an application of sentiment analysis using the Natural Language Toolkit (NLTK) for measuring customer service representative (CSR) productivity in real estate call centers. The study describes in detail the decisions made, step by step, in building an Arabic system for evaluation and measurement. The system includes the transcription method, feature extraction, the training process, and analysis. The results are analyzed subjectively based on the original test set. The corpus consists of 7 hours of real estate calls collected from three different call centers located in Egypt. We draw the baseline of productivity measurement in the real estate sector.
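A minimal sketch of an NLTK classifier of the kind the study builds; the toy training pairs are invented, and English is used purely for readability, whereas the actual system works on Arabic transcripts.

```python
# Tiny NLTK Naive Bayes sentiment classifier over bag-of-words features.
from nltk.classify import NaiveBayesClassifier

def features(text):
    """Bag-of-words presence features."""
    return {word: True for word in text.lower().split()}

train = [(features("the agent was very helpful"), "pos"),
         (features("great service and quick answers"), "pos"),
         (features("long wait and rude response"), "neg"),
         (features("nobody solved my problem"), "neg")]

clf = NaiveBayesClassifier.train(train)
print(clf.classify(features("helpful and quick agent")))   # expect "pos"
```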
Speculative Work in Neural Network Forecasting: An Application to Egyptian Cotton Production
Lecture notes in civil engineering, 2024
Purpose - This paper aims to develop a novel chatbot to improve student services in high school by transferring students' enquiries to a particular agent based on the enquiry type. Accordingly, a comparison between a machine learning model and a neural network is conducted in order to identify the most accurate model for classifying students' requests. Methodology - In this study we selected data from high school students, since high school is one of the most essential stages in students' lives: at this stage, students have the option to select the academic streams and advanced courses that can shape their careers according to their passions and interests. A new corpus is created with 1,004 enquiries, annotated manually based on the type of request. The label high-school-courses is assigned to requests related to elective courses and standardized tests during high school, while the label majors & universities is assigned to questions related to applying to universities and selecting majors. Two novel classifier chatbots are developed and evaluated: the first uses a Naive Bayes machine learning algorithm, while the other uses a Recurrent Neural Network (RNN)-LSTM. Findings - Several features and techniques are used in both models to improve performance, and both models achieve a high accuracy score exceeding 91%. The models were validated in a pilot test with high school students as well as experts in education, with six questions and enquiries presented to the chatbots for evaluation. Implications and future work - This study can help researchers and developers integrate such classifiers into different applications, improving user services, in particular those implemented in educational institutions. In the future, intent recognition will be extended with a voice recognition feature that can be successfully integrated into smartphones.
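A hedged sketch of the Naive Bayes enquiry router described above: classify an enquiry into one of the two labels and hand it to the matching agent. The training examples are invented, and the actual system's features differ.

```python
# Toy Naive Bayes router for the two enquiry labels in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

enquiries = ["Which elective courses count toward the science stream?",
             "When is the SAT offered this semester?",
             "How do I apply to engineering programs abroad?",
             "Which major fits a student who loves biology?"]
labels = ["high-school-courses", "high-school-courses",
          "majors & universities", "majors & universities"]

router = make_pipeline(CountVectorizer(), MultinomialNB())
router.fit(enquiries, labels)

question = "What are the requirements to apply to medical school?"
print(router.predict([question])[0])    # routes to the majors agent
```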
Machine Translation, Dec 1, 2010
Arabic is rich in morphology and syntax. It is normally written with optional diacritics and without the notion of capitalization. These characteristics make dealing with Arabic a challenge for both learners and researchers. My experience in the Arabic natural language processing (ANLP) research area allows me to say that the key to fostering research or developing an application in the ANLP field lies in getting insights into the standard layer-based structure of linguistic phenomena (phonology, morphology, syntax, and semantics) as well as in recognizing the interaction between them. Until now, there was no available introductory resource that could fulfill these requirements. For example, a simple Google search of the term "Arabic Natural Language Processing" returns research groups, research papers, tutorials and presentations, companies, and scholars; consequently, tangible effort had to be spent by any beginner in the ANLP field, whether a scientist, linguist, developer, or student, in going through material that might be either irrelevant or too advanced. Therefore, the purpose and the significance of this book are clear from where it stands. It is also adequately classified by the publisher as belonging to the "human language technologies" series. This book gives a sufficiently solid introductory background on ANLP, which makes it the first of its kind. The author has a broad background in computational linguistics in general and ANLP in particular; he also has a marvelous research track record and has been very active in serving the research communities. The book is clear about its intended audience: it is most suitable for anyone who would like to get a fundamental background in ANLP for research, study, or development purposes. It is very well written. This book amazingly takes you gradually from the ground
A Systematic Review of Knowledge Management Integration in Higher Educational Institution with an Emphasis on a Blended Learning Environment
Lecture notes in networks and systems, Oct 23, 2022