Omar Elgendy - Academia.edu
Papers by Omar Elgendy
Neural Computing and Applications
Social media is becoming a source of news for many people due to its ease and freedom of use. As a result, fake news has been spreading quickly and easily, regardless of its credibility, especially in the last decade. Fake news publishers take advantage of critical situations such as the COVID-19 pandemic and the American presidential elections to affect societies negatively. Fake news can seriously impact society in many fields, including politics, finance, and sports. Many studies have been conducted to help detect fake news in English, but research on fake news detection in the Arabic language is scarce. Our contribution is twofold: first, we have constructed a large and diverse Arabic fake news dataset. Second, we have developed and evaluated transformer-based classifiers to identify fake news while utilizing eight state-of-the-art Arabic contextualized embedding models, the majority of which had not previously been used for Arabic fake news detection. We conduct a thorough analysis of these state-of-the-art Arabic contextualized embedding models as well as a comparison with similar fake news detection systems. Experimental results confirm that these state-of-the-art models are robust, with accuracy exceeding 98%.
Telecommunication Systems
Casa Editrice La Tribuna, 2010
Figure 1: Photon level requirement vs. detection performance. Figure 1 shows how many photons per pixel are needed to achieve a target detection performance. The x-axis represents the target detection accuracy, and the y-axis is the minimum number of photons per pixel needed in the images. We compare four settings by switching the inputs from synthetic CIS to QIS images and changing the baseline method to our method. When the target mAP is 50%, QIS data needs only half the photons of CIS data to reach the same accuracy using Faster R-CNN alone. Introducing our method further decreases the required photon level by half on average.
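The photon-budget comparison described in the Figure 1 caption can be sketched as simple arithmetic. The abstract states only the ratios (QIS input halves the required photons relative to CIS, and the proposed method halves the requirement again on average); the absolute baseline value below is hypothetical, chosen purely for illustration.

```python
# Hypothetical photons/pixel for CIS + Faster R-CNN at a target mAP of 50%.
# The source gives only the ratios between settings, not this absolute value.
BASELINE_CIS_PHOTONS = 8.0

qis_faster_rcnn = BASELINE_CIS_PHOTONS / 2  # switching the input from CIS to QIS
qis_proposed = qis_faster_rcnn / 2          # adding the proposed method on top

print(f"CIS + Faster R-CNN : {BASELINE_CIS_PHOTONS:.1f} photons/pixel")
print(f"QIS + Faster R-CNN : {qis_faster_rcnn:.1f} photons/pixel")
print(f"QIS + proposed     : {qis_proposed:.1f} photons/pixel")
```

Whatever the baseline, the two halvings compound to a roughly 4x reduction in the required photon level.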
Artificial Intelligence in Medicine, 2022
Cancer is one of the most dangerous diseases to humans, and yet no permanent cure has been developed for it. Breast cancer is one of the most common cancer types. According to the National Breast Cancer Foundation, in 2020 alone, more than 276,000 new cases of invasive breast cancer and more than 48,000 non-invasive cases were diagnosed in the US. To put these figures in perspective, 64% of these cases are diagnosed early in the disease's cycle, giving patients a 99% chance of survival. Artificial intelligence and machine learning have been used effectively in the detection and treatment of several dangerous diseases, helping in early diagnosis and treatment and thus increasing the patient's chance of survival. Deep learning has been designed to analyze the most important features affecting the detection and treatment of serious diseases. For example, breast cancer can be detected using genes or histopathological imaging. Analysis at the genetic level is very expensive, so histopathological imaging is the most common approach used to detect breast cancer. In this research work, we systematically review previous work on the detection and treatment of breast cancer using genetic sequencing or histopathological imaging with the help of deep learning and machine learning. We also provide recommendations to researchers who will work in this field.
2021 Symposium on VLSI Circuits, 2021
This paper reports a 4-Mpixel, 3D-stacked, backside-illuminated Quanta Image Sensor (QIS) with 2.2 µm pixels that can operate simultaneously in photon-counting mode with deep sub-electron read noise (0.3 e- rms) and in linear integration mode with large full-well capacity (30k e-). A single-exposure dynamic range of 100 dB is realized with this dual-mode readout at room temperature. The QIS device uses a cluster-parallel readout architecture to achieve a frame rate of up to 120 fps at 550 mW power consumption.
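The reported 100 dB single-exposure dynamic range follows directly from the two mode-specific figures in the abstract: a 30k e- full-well capacity (linear mode) and 0.3 e- rms read noise (photon-counting mode), using the standard definition DR = 20·log10(FWC / read noise). A quick check:

```python
import math

# Figures taken from the abstract above.
full_well_e = 30_000.0   # e-, linear integration mode
read_noise_e = 0.3       # e- rms, photon-counting mode

# Standard sensor dynamic-range definition in decibels.
dr_db = 20.0 * math.log10(full_well_e / read_noise_e)
print(f"Dynamic range: {dr_db:.0f} dB")  # matches the reported 100 dB
```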
ArXiv, 2017
We study an image denoising problem: given a set of image denoisers, each having a different denoising capability, can we design a framework that allows us to integrate the individual denoisers to produce an overall better result? If we can do so, then potentially we can integrate multiple weak denoisers to denoise complex scenes. The goal of this paper is to present a meta-procedure called the Consensus Neural Network (ConsensusNet). Given a set of initial denoisers, ConsensusNet takes the initial estimates and generates a linear combination of the results. The combined estimate is then fed to a booster neural network to reduce the amount of method noise. ConsensusNet is a modular framework that allows any image denoiser to be used in the initial stage. Experimental results show that ConsensusNet can consistently improve denoising performance for both deterministic denoisers and neural network denoisers.
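The combination stage described above can be sketched in a few lines: given several initial denoised estimates of the same signal, fit linear combination weights by least squares against a clean reference. This is an illustrative toy (1-D synthetic signal, weights fitted on the reference itself, booster network omitted), not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=256)          # ground-truth "image" (flattened)
noisy = clean + rng.normal(0.0, 0.2, size=256)   # observed noisy input

# Three weak "denoisers": identity, heavy smoothing, light smoothing.
def smooth(x, k):
    return np.convolve(x, np.ones(k) / k, mode="same")

estimates = np.stack([noisy, smooth(noisy, 9), smooth(noisy, 3)], axis=1)  # (N, 3)

# Least-squares weights for the linear combination of the initial estimates.
weights, *_ = np.linalg.lstsq(estimates, clean, rcond=None)
combined = estimates @ weights

mse = lambda x: float(np.mean((x - clean) ** 2))
print("per-denoiser MSE:", [round(mse(estimates[:, i]), 4) for i in range(3)])
print("combined MSE    :", round(mse(combined), 4))
```

Because each single estimate lies in the span of the combination, the fitted combination can never do worse (in MSE on the fitting data) than the best individual denoiser; ConsensusNet's booster network then targets the remaining method noise.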
Since the birth of charge-coupled devices (CCD) and complementary metal-oxide-semiconductor (CMOS) active pixel sensors, the pixel pitch of digital image sensors has been continuously shrinking to meet the resolution and size requirements of cameras. However, shrinking pixels reduces the maximum number of photons a sensor can hold, a phenomenon broadly known as the full-well capacity limit. The drop in full-well capacity causes a drop in signal-to-noise ratio and dynamic range. The Quanta Image Sensor (QIS) is a class of solid-state image sensors proposed by Eric Fossum in 2005 as a potential solution to the limited full-well capacity problem. QIS is envisioned to be the next-generation image sensor after CCD and CMOS, since it enables sub-diffraction-limit pixels without the inherent problems of pixel shrinking. Equipped with a massive number of detectors that have single-photon sensitivity, the sensor counts the incoming photons and triggers a binary response "1" if the photon c...
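The binary readout described above can be sketched as follows: each single-photon detector reports "1" if it receives at least one photon during the exposure and "0" otherwise, and the underlying mean photon rate can be recovered from the fraction of ones via the Poisson relation P(count ≥ 1) = 1 − exp(−λ), i.e. λ̂ = −ln(1 − p₁). The threshold of one photon and all parameters here are illustrative assumptions, not figures from the source.

```python
import math
import random

random.seed(42)
true_rate = 0.8      # assumed mean photons per detector per exposure
num_jots = 200_000   # assumed number of binary detectors

def poisson_sample(lam: float) -> int:
    # Knuth's method for sampling a Poisson variate (fine for small lambda).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Binary response: "1" if at least one photon arrived, else "0".
bits = [1 if poisson_sample(true_rate) >= 1 else 0 for _ in range(num_jots)]
p1 = sum(bits) / num_jots
rate_hat = -math.log(1.0 - p1)
print(f"fraction of ones: {p1:.4f}, estimated photon rate: {rate_hat:.3f}")
```

With enough detectors, the estimate converges to the true rate, which is the basic principle that lets a QIS recover intensity from single-bit measurements.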
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
Robust object detection under photon-limited conditions is crucial for applications such as night vision, surveillance, and microscopy, where the number of photons per pixel is low due to a dark environment and/or a short integration time. While the mainstream "low-light" image enhancement methods have produced promising results that improve the image contrast between the foreground and background through advanced coloring techniques, the more challenging problem of mitigating the photon shot noise inherited from the random Poisson process remains open. In this paper, we present a photon-limited object detection framework by adding two ideas to state-of-the-art object detectors: 1) a space-time non-local module that leverages the spatial-temporal information across an image sequence in the feature space, and 2) knowledge distillation in the form of student-teacher learning to improve the robustness of the detector's feature extractor against noise. Experiments are conducted to demonstrate the improved performance of the proposed method in comparison with state-of-the-art baselines. When integrated with the latest photon counting devices, the algorithm achieves more than 50% mean average precision at a photon level of 1 photon per pixel.
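A short calculation shows why the 1 photon-per-pixel regime quoted above is so hard: for Poisson photon arrivals the variance equals the mean, so the per-pixel signal-to-noise ratio is mean/sqrt(mean) = sqrt(mean), which is about 1 (0 dB) at one photon per pixel. The photon levels below are illustrative.

```python
import math

# SNR of a Poisson-distributed photon count: variance = mean,
# so SNR = mean / sqrt(mean) = sqrt(mean).
for ppp in (1, 2, 5, 10):
    snr = math.sqrt(ppp)
    print(f"{ppp:>2} photons/pixel -> SNR = {snr:.2f} ({20 * math.log10(snr):.1f} dB)")
```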