Mohamed Ghoneim | Umm Al-Qura University, Makkah, Saudi Arabia
Papers by Mohamed Ghoneim
Electronics
Breast cancer (BC) is a type of tumor that develops in the breast cells and is one of the most common cancers in women. It is also the second most life-threatening disease for women after lung cancer, so the early diagnosis and classification of BC are very important. Furthermore, manual detection is time-consuming, laborious work that carries the possibility of pathologist error and incorrect classification. To address these issues, this paper presents a hybrid deep learning (CNN-GRU) model for the automatic detection of BC-IDC (+,−) using whole-slide images (WSIs) from the well-known PCam Kaggle dataset. In this research, the proposed model uses different layers of CNN and GRU architectures to detect breast IDC (+,−) cancer. Validation tests for quantitative results were carried out using several performance measures: accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), AUC, and F1-score. The proposed model shows the best performance measures (...
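The recurrent half of such a CNN-GRU hybrid can be sketched as a single GRU cell update in pure Python. This is a minimal sketch only: the paper's layer sizes, framework, and weights are not given, so the scalar weights and the toy input sequence below are illustrative assumptions, and the update follows the standard Cho et al. (2014) gate equations rather than the paper's exact architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One GRU step for scalar input x and scalar hidden state h.

    w is a dict of scalar weights (real models use weight matrices):
        z = sigma(Wz*x + Uz*h)            # update gate
        r = sigma(Wr*x + Ur*h)            # reset gate
        h~ = tanh(Wh*x + Uh*(r*h))        # candidate state
        h' = (1 - z)*h + z*h~
    """
    z = sigmoid(w["Wz"] * x + w["Uz"] * h)
    r = sigmoid(w["Wr"] * x + w["Ur"] * h)
    h_tilde = math.tanh(w["Wh"] * x + w["Uh"] * (r * h))
    return (1.0 - z) * h + z * h_tilde

# Illustrative weights; a trained model would learn these.
weights = {"Wz": 0.5, "Uz": 0.1, "Wr": 0.4, "Ur": 0.2, "Wh": 0.9, "Uh": 0.3}

h = 0.0
for x in [1.0, 0.5, -0.25]:  # stand-in for a sequence of CNN feature values
    h = gru_cell(x, h, weights)
```

In the full model, the CNN layers would produce a feature sequence from each WSI patch and the GRU would aggregate it before a final classification layer.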
Electronics
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY
Frontiers in Energy Research
A Casson fluid is the most suitable rheological model for blood and other non-Newtonian fluids. Casson fluids possess a yield stress and are of great significance in biomechanics and the polymer industry. In this analysis, a numerical simulation of the non-coaxial rotation of a Casson fluid over a circular disc was carried out. The influence of thermal radiation, second-order chemical reactions, buoyancy, and a heat source on a Casson fluid above a rotating frame was studied. The time evolution of the secondary and primary velocities, solute particles, and energy contours was also examined. A magnetic flux of varying intensity was applied to the fluid flow. A nonlinear system of partial differential equations was used to describe the phenomenon. The modeled equations were reduced to a non-dimensional set of ordinary differential equations (ODEs) using a similarity substitution. The resulting sets of ODEs were then simulated using the parametric continuation method (PCM). The impact of physical constra...
International Journal of Modern Physics B
Quaternion differential equations (QDEs) are a new kind of differential equation that differs from ordinary differential equations. Our aim is to obtain the exponential matrices for a QDE, which are useful for finding the solutions of quaternion-valued differential equations. Although linear algebra is very useful for calculating the exponential of a matrix, the solution set of a QDE is not a linear space: due to the noncommutativity of the quaternions, it is a right free module. For this we need some basic concepts for quaternions, such as eigenvalues, eigenvectors, the Wronskian, and the difference between quaternion and complex eigenvalues and eigenvectors. Using the right-eigenvalue method for quaternions, we develop a fundamental matrix that is used to construct the exponential matrices, which play a central role in solving QDEs.
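The noncommutativity that forces QDE solution sets to be right free modules rather than linear spaces can be seen directly from the Hamilton product. A minimal sketch (representing a quaternion as a `(w, x, y, z)` tuple; the function name is ours, not from the paper):

```python
def qmul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# Basis quaternions i and j do not commute: i*j = k but j*i = -k.
i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  -> k
print(qmul(j, i))  # (0, 0, 0, -1) -> -k
```

Because scalars cannot be moved freely past solutions, linear combinations of QDE solutions must take their coefficients on the right, which is exactly why the right-eigenvalue method is needed.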
Microfluidics and Nanofluidics
International Journal of Modern Physics B
In this study, we look at the solutions of nonlinear partial differential equations and ordinary differential equations. Scientists and engineers have long struggled to find ways to solve nonlinear differential equations, yet almost all of nature's puzzles lead to equations that are not linear. There are no universally applicable methods for nonlinear equations; researchers instead refine methods for particular classes of problems, which does not mean that all nonlinear equations can be solved. With this in mind, we examine how well the variational approach works for solving nonlinear DEs. Different problems are best solved by different methods, and we accept that a nonlinear problem may have more than one solution. Factorization, homotopy analysis, homotopy perturbation, the tangent hyperbolic function method, and trial functions are all examples of such approaches; on the other hand, none of these strategies covers all nonlinear problem-solving methods. In this pape...
Alexandria Engineering Journal
Symmetry
The energy and mass transfer through a Newtonian hybrid nanofluid flow comprising copper (Cu) and aluminum oxide (Al2O3) nanoparticles (nps) over an extended surface is reported. Thermal and velocity slip conditions are also considered. Such physical problems mostly occur in symmetrical phenomena and are applicable in physics, engineering, applied mathematics, and computer science. The fluid flow is studied under the consequences of the Darcy effect, thermophoresis diffusion, Brownian motion, heat absorption, viscous dissipation, and thermal radiation. An inclined magnetic field is applied to the fluid flow to regulate the flow stream. The hybrid nanofluid is created by the dispersion of Cu and Al2O3 nps in the base fluid (water). For this purpose, the flow dynamics are modeled as a system of nonlinear PDEs, which are simplified to a system of dimensionless ODEs through a similarity substitution. The parametric continuation method i...
Optik, 2019
This paper employs the extended trial function scheme to derive soliton solutions in birefringent fibers with quadratic-cubic nonlinearity. The mathematical algorithm reveals bright and singular optical soliton solutions, which are listed with their respective existence criteria.
Journal of Advanced Computational Intelligence and Intelligent Informatics, 2007
The access control and scalable encryption scheme we propose for JPEG 2000 encoded images encrypts JPEG 2000 codestreams using the SNOW 2 progressive encryption algorithm, encrypting resolutions, quality layers, or packets independently to provide resolution, quality, or fine-grain scalability. Access to different image resolutions or quality levels is granted to different users who receive the same encrypted JPEG 2000 codestream but hold different decryption keys. Keys used for successive resolutions or quality layers are mutually dependent via the SHA-256 one-way hashing function. Encrypted JPEG 2000 codestreams can be transcoded by an intermediate untrusted network transcoder, without decryption and without access to the decryption keys. Our encryption scheme preserves most of the inherent flexibility of JPEG 2000 encoded images and is carefully designed to produce encrypted codestreams that remain backward-compatible with JPEG 2000 compliant decoders.
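The mutual key dependence described above can be sketched as a one-way hash chain: the key for each level is derived by hashing the key one level above, so a user holding a higher-resolution key can derive all lower-level keys but not the reverse. This is a minimal sketch under our own assumptions; the function name, chain direction, and key material are illustrative, and the paper's exact key schedule may differ.

```python
import hashlib

def derive_key_chain(top_key: bytes, levels: int) -> list:
    """Derive one key per resolution (or quality) level from the top key.

    keys[-1] is the highest-level key; each lower key is the SHA-256
    digest of the key one level above, so derivation only runs downward.
    """
    keys = [top_key]
    for _ in range(levels - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    keys.reverse()  # keys[0] = lowest level
    return keys

keys = derive_key_chain(b"top-level-key-material", levels=4)

# A user given keys[2] can recompute keys[1] and keys[0]...
assert hashlib.sha256(keys[2]).digest() == keys[1]
# ...but cannot feasibly invert SHA-256 to recover keys[3].
```

Giving a user the key for level r therefore grants access to levels 0..r of the same encrypted codestream with no extra key distribution.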
信号処理, 2007
The motion estimation and compensation technique is widely used in video coding applications, and efficient motion estimation plays a key role in achieving a high compression ratio. Block matching is the most popular motion estimation algorithm and has been adopted by various video coding standards such as MPEG-1/2/4 and ITU-T H.261/262/263. The full search algorithm is the most straightforward and optimal block-matching algorithm, but it is very computationally intensive. Previously proposed fast algorithms reduce the number of computations by limiting the number of search locations. In this paper, we present a very fast and efficient search algorithm for block-based motion estimation, based on the idea of one-at-a-time optimization. It is shown that the proposed algorithm produces better quality and requires less computational time compared with popular motion estimation algorithms.
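The full-search baseline that the proposed one-at-a-time search improves on can be sketched in a few lines: for each block of the current frame, every displacement inside a search window of the reference frame is scored with the sum of absolute differences (SAD). The function names, block size, and toy frames below are illustrative, not the paper's.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, bsize, radius):
    """Exhaustively search a (2*radius+1)^2 window in `ref` for the
    block of `cur` at (bx, by); return the best motion vector (dx, dy)."""
    target = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best_cost, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue  # candidate block falls outside the frame
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(target, cand)
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv

# Toy 6x6 frames: a bright 2x2 patch moves one pixel right and down.
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
ref[1][1] = ref[1][2] = ref[2][1] = ref[2][2] = 255
cur[2][2] = cur[2][3] = cur[3][2] = cur[3][3] = 255

print(full_search(ref, cur, 2, 2, 2, 2))  # (-1, -1): best match lies up-left in ref
```

Fast algorithms such as one-at-a-time optimization visit only a subset of these candidate displacements, trading a small quality loss for a large reduction in SAD evaluations.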
Motion estimation is a key issue in the field of moving-image analysis. In the framework of video coding, it is combined with motion compensation in order to exploit the spatio-temporal correlation of image sequences along the motion trajectory, and it accounts for one of the most important compression factors of a video coder. By dividing each frame into rectangular blocks, motion vectors are obtained via block matching algorithms (BMAs). The full search algorithm (FS) is a brute-force BMA: it searches all possible locations inside the search window in the reference frame to provide an optimal solution, but its high computational complexity often makes it unsuitable for real-time implementation. Many fast but sub-optimal algorithms have been introduced to improve the performance of video coders. The present book analyses three prospects for improving the quality of existing video coding schemes, namely one-at-a-time optimization, an adaptive search strategy, and feature-domain based cr...
Neural Computing and Applications, 2014
Accumulating data are easy to store, but the ability to understand and use them does not keep pace with their growth, so research has focused on the nature of knowledge processing in the mind. This paper proposes a semantic model (CKRMCC) based on cognitive aspects that enables a cognitive computer to process knowledge as the human mind does and to find a suitable representation of that knowledge. In a cognitive computer, knowledge processing passes through three major stages: knowledge acquisition and encoding, knowledge representation, and knowledge inference and validation. The core of CKRMCC is knowledge representation, which in turn proceeds through four phases: a prototype formation phase, a discrimination phase, a generalization phase, and an algorithm development phase. Each of these phases is mathematically formulated using the notions of real-time process algebra. The performance efficiency of CKRMCC is evaluated using datasets from the well-known UCI machine learning repository. The acquired datasets are divided into training and testing data that are encoded using a concept matrix. In the knowledge representation stage, a set of symbolic rules is then derived to establish a suitable representation of the training datasets; this representation remains available in a usable form whenever it is needed in the future. The inference stage uses the rule set to obtain the classes of the encoded testing datasets. Finally, the knowledge validation phase validates and verifies the results of applying the rule set to the testing datasets. The performance is compared with classification and regression trees and support vector machines, and the results show that CKRMCC represents knowledge efficiently using symbolic rules.
Keywords: Cognitive computers · Knowledge processing · Knowledge representation · Denotational mathematics · Cognitive computing · Real-time process algebra
1 Introduction. For centuries, scientists have made great strides in our understanding of the human mind. Yet much effort is still needed to explore how the mind can be observed, measured, and simulated. The computational, or information-processing, view aims to understand the mind in terms of processes that operate on representations. Wilhelm Wundt and his students initiated laboratory methods for studying mental operations more systematically. George Miller summarized
2006 International Symposium on Intelligent Signal Processing and Communications, 2006
Motion segmentation is important in many computer vision applications; it aims to detect motion regions such as moving vehicles and people in natural scenes. Detecting moving blobs provides a focus of attention for later processes such as tracking and activity analysis. However, changes in weather, illumination, and shadow, as well as repetitive motion from clutter, make motion segmentation difficult to perform quickly and reliably. In this paper we propose a method for moving object segmentation using minimum graph cuts based on a hybrid algorithm approach. Experiments are carried out to examine the efficiency of the proposed approach.
2013 8th International Conference on Computer Engineering & Systems (ICCES), 2013
The field of artificial intelligence embraces two approaches to artificial learning. The first is motivated by the study of mental processes and states that artificial learning is the study of mechanisms embodied in the human mind; it aims to understand how these mechanisms can be translated into computer programs. The second approach originated from a practical computing standpoint and has less grandiose aims: it involves developing programs that learn from past data and may be considered a branch of data processing. In this paper, we are concerned with the first approach. Artificial learning is interested in classification learning, i.e., learning algorithms for categorizing unseen examples into predefined classes based on a set of training examples. We formulate a computational model for the binary classification process using formal concept analysis. The classification rules are derived and applied successfully to different study cases.
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2006
In this paper, we first briefly discuss the newly emerging Secured JPEG (JPSEC) standard for security services for JPEG 2000 compressed images. We then propose a novel approach for applying authentication to JPEG 2000 images in a scalable manner. Our authentication technique can be used for source authentication, non-repudiation, and integrity verification of received, possibly transcoded, JPEG 2000 images, in such a way that it is possible to authenticate different resolutions or different qualities extracted or received from a JPEG 2000 encoded image. Three different implementations of our authentication technique are presented. Packet-Based Authentication uses the MD5 hashing algorithm to calculate a hash value for each individual packet in the JPEG 2000 codestream; the hash values are truncated to a specified length to reduce the storage overhead, concatenated into a single string, and then signed using the RSA algorithm and the author's private key to prevent repudiation. Resolution-Based Authentication and Quality-Based Authentication generate a single hash value from all contiguous packets of each entire resolution or each entire quality layer, respectively. Our algorithms maintain most of the inherent flexibility and scalability of JPEG 2000 compressed images: the resulting secured codestream is still JPEG 2000 compliant and compatible with JPEG 2000 compliant decoders. Our algorithms are also compatible with the Public Key Infrastructure (PKI) for preventing signing repudiation by the sender, and are implemented using the new JPSEC standard for security signaling.
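The Packet-Based Authentication flow described above (hash each packet, truncate, concatenate, sign) can be sketched as follows. This is a minimal sketch: the truncation length is an illustrative assumption, the packet contents are toy data, and the RSA signing step is stood in for by a placeholder hash, since real signing and the JPSEC signaling are beyond a short example.

```python
import hashlib

TRUNC_LEN = 8  # bytes kept per packet hash; illustrative, not the paper's value

def packet_auth_string(packets):
    """MD5-hash each codestream packet, truncate each digest to TRUNC_LEN
    bytes, and concatenate the results into the single string to be signed."""
    return b"".join(hashlib.md5(p).digest()[:TRUNC_LEN] for p in packets)

def sign(data):
    # Placeholder standing in for RSA signing with the author's private key.
    return hashlib.sha256(b"stand-in-private-key" + data).hexdigest()

packets = [b"packet-0", b"packet-1", b"packet-2"]
auth = packet_auth_string(packets)
signature = sign(auth)

# A verifier recomputes the same truncated-hash string from the packets it
# actually received (possibly a transcoded subset) and checks the signature.
assert len(auth) == TRUNC_LEN * len(packets)
```

Truncating each digest keeps the per-packet overhead small, at the cost of a shorter effective hash per packet; the single signature over the concatenation amortizes the expensive RSA operation across all packets.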
IEEE Transactions on Signal Processing, 2009
Recently, Aissa-El-Bey et al. proposed two subspace-based methods for underdetermined blind source separation (UBSS) in the time-frequency (TF) domain. These methods allow multiple active sources at TF points, so long as the number of active sources at any TF point is strictly less than the number of sensors and the column vectors of the mixing matrix are pairwise linearly independent. In this correspondence, we first show that the subspace-based methods must also satisfy the condition that any submatrix of the mixing matrix is of full rank. We then present a new UBSS approach that only requires that the number of active sources at any TF point does not exceed the number of sensors, and we propose an algorithm to perform the UBSS.
Video surveillance is used to monitor many security-sensitive areas such as banks, department stores, highways, crowded public places, and borders. Detecting moving regions such as vehicles and people is the first basic step of almost every vision system, because it provides a focus of attention and simplifies the processing in subsequent analysis steps. It is also one of the most difficult tasks: motion segmentation accuracy determines the eventual success or failure of computerized analysis procedures, so considerable care should be taken to improve the probability of robust segmentation. Many approaches exist for detecting moving objects in a sequence of images. Commonly used techniques for motion detection are background subtraction, temporal differencing, and optical flow. Background subtraction [1,2,3,4,5,6] attempts to detect moving regions by subtracting the current image from a reference background image in a pixel-by-pixel fashion. Temporal differencing [7,8] makes use of the pixel intensity difference between two or three consecutive frames in an image sequence to extract moving regions. Motion segmentation based on optical flow [9,10] uses the characteristics of the flow vectors of moving objects over time to detect change regions in an image sequence. Among these methods, background subtraction provides the most complete feature data, but it is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Temporal differencing is very adaptive to dynamic environments, but it generally performs poorly at extracting all relevant feature pixels. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.
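The two simplest detectors described above operate per pixel and differ only in their reference image: a fixed background for background subtraction, the previous frame for temporal differencing. A minimal sketch on toy grayscale frames (the threshold value and names are illustrative assumptions):

```python
THRESHOLD = 30  # intensity difference counted as motion; illustrative value

def background_subtraction(frame, background, thresh=THRESHOLD):
    """Mark a pixel as moving (1) where it differs from the reference
    background by more than `thresh`, else keep it as static (0)."""
    return [[int(abs(p - b) > thresh) for p, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def temporal_differencing(frame, prev_frame, thresh=THRESHOLD):
    """Same per-pixel rule, but the reference is the previous frame."""
    return background_subtraction(frame, prev_frame, thresh)

background = [[10, 10, 10],
              [10, 10, 10]]
frame = [[10, 200, 10],       # a bright object in the middle column
         [10, 200, 10]]

mask = background_subtraction(frame, background)
print(mask)  # [[0, 1, 0], [0, 1, 0]]
```

The sketch makes the trade-off above concrete: the mask is only as good as the reference, so a stale background (lighting change) pollutes background subtraction, while a slow object that overlaps its own previous position leaves holes in the temporal-differencing mask.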
Recently, genetic-algorithm-based motion segmentation has been proposed [11]: a video sequence segmentation method based on the genetic algorithm (GA) that can improve computational efficiency. The computation is distributed into chromosomes that evolve using distributed genetic algorithms (DGAs). However, this method needs to perform the segmentation in the spatial and temporal fields separately and then combine the results, which is time-consuming. Inspired by [11], we introduce a new motion segmentation method that combines spatial and temporal segmentation. First, we construct an image model based on Markov Random Fields (MRF) for each frame; the segmentation is then represented as the minimization of a posterior energy function, and we use the genetic algorithm (GA) to find the solutions. Background differencing and the evolution probability are combined to find the unstable individuals. The advantage of this method is that it decreases the number of evolving individuals and reduces the computation time.
Diagnostics
Human skin diseases have become increasingly prevalent in recent decades, with millions of individuals in developed countries experiencing monkeypox. Such conditions often carry less obvious but no less devastating risks, including increased vulnerability to monkeypox, cancer, and low self-esteem. Due to the low visual resolution of monkeypox disease images, medical specialists with high-level tools are typically required for a proper diagnosis. The manual diagnosis of monkeypox disease is subjective, time-consuming, and labor-intensive; it is therefore necessary to create a computer-aided approach for its automated diagnosis. Most research articles on monkeypox disease have relied on convolutional neural networks (CNNs) with classical loss functions, allowing them to pick up discriminative elements in monkeypox images. To enhance this, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine...
Electronics
Breast cancer (BC) is a type of tumor that develops in the breast cells and is one of the most co... more Breast cancer (BC) is a type of tumor that develops in the breast cells and is one of the most common cancers in women. Women are also at risk from BC, the second most life-threatening disease after lung cancer. The early diagnosis and classification of BC are very important. Furthermore, manual detection is time-consuming, laborious work, and, possibility of pathologist errors, and incorrect classification. To address the above highlighted issues, this paper presents a hybrid deep learning (CNN-GRU) model for the automatic detection of BC-IDC (+,−) using whole slide images (WSIs) of the well-known PCam Kaggle dataset. In this research, the proposed model used different layers of architectures of CNNs and GRU to detect breast IDC (+,−) cancer. The validation tests for quantitative results were carried out using each performance measure (accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), AUC and F1-Score. The proposed model shows the best performance measures (...
Electronics
This article is an open access article distributed under the terms and conditions of the Creative... more This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY
Frontiers in Energy Research
A Casson fluid is the most suitable rheological model for blood and other non-Newtonian fluids. C... more A Casson fluid is the most suitable rheological model for blood and other non-Newtonian fluids. Casson fluids hold yield-stress and have great significance in biomechanics and polymer industries. In this analysis, a numerical simulation of non-coaxial rotation of a Casson fluid over a circular disc was estimated. The influence of thermal radiation, second-order chemical reactions, buoyancy, and heat source on a Casson fluid above a rotating frame was studied. The time evolution of secondary and primary velocities, solute particles, and energy contours were also examined. A magnetic flux of varying intensity was applied to the fluid flow. A nonlinear sequence of partial differential equations was used to describe the phenomenon. The modeled equations were reduced to a non-dimensional set of ordinary differential equations (ODEs) using similarity replacement. The obtained sets of ODEs were further simulated using the parametric continuation method (PCM). The impact of physical constra...
International Journal of Modern Physics B
Quaternion differential equations (QDEs) are a new kind of differential equations which differ fr... more Quaternion differential equations (QDEs) are a new kind of differential equations which differ from ordinary differential equations. Our aim is to get the exponential matrices for the QDE which is useful for finding the solution of quaternion-valued differential equations, also, we know that linear algebra is very useful to calculate the exponential for a matrix but the solution of QDE is not a linear space. Due to the noncommutativity of the quaternion, the solution set of QDE is a right free module. For this, we must read some basic concepts on Quaternions such as eigenvalues, eigenvectors, Wronskian and the difference between quaternion and complex eigenvalues and eigenvectors; by using the right eigenvalue method for quaternions we developed a fundamental matrix which is useful to construct the exponential matrices which perform a great role in solving the QDEs.
Microfluidics and Nanofluidics
International Journal of Modern Physics B
In this study, we look at the solutions of nonlinear partial differential equations and ordinary ... more In this study, we look at the solutions of nonlinear partial differential equations and ordinary differential equations. Scientists and engineers have had a hard time coming up with a way to solve nonlinear differential equations. Almost all of the nature’s puzzles have equations that aren’t linear. There aren’t any well-known ways to solve nonlinear equations, and people have tried to improve methods for a certain type of problems. This doesn’t mean, however, that all nonlinear equations can be solved. With this in mind, we’ll look at how well the variation approach works for solving nonlinear DEs. Different problems can be solved well by using different methods. We agree that a nonlinear problem might have more than one answer. Factorization, homotropy analysis, homotropy perturbation, tangent hyperbolic function and trial function are all examples of ways to do this. On the other hand, some of these strategies don’t cover all of the nonlinear problem-solving methods. In this pape...
Alexandria Engineering Journal
Symmetry
The energy and mass transition through Newtonian hybrid nanofluid flow comprised of copper Cu and... more The energy and mass transition through Newtonian hybrid nanofluid flow comprised of copper Cu and aluminum oxide (Al2O3) nanoparticles (nps) over an extended surface has been reported. The thermal and velocity slip conditions are also considered. Such a type of physical problems mostly occurs in symmetrical phenomena and are applicable in physics, engineering, applied mathematics, and computer science. For desired outputs, the fluid flow has been studied under the consequences of the Darcy effect, thermophoresis diffusion and Brownian motion, heat absorption, viscous dissipation, and thermal radiation. An inclined magnetic field is applied to fluid flow to regulate the flow stream. Hybrid nanofluid is created by the dispensation of Cu and Al2O3 nps in the base fluid (water). For this purpose, the flow dynamics have been designed as a system of nonlinear PDEs, which are simplified to a system of dimensionless ODEs through resemblance substitution. The parametric continuation method i...
Optik, 2019
This paper employs extended trial function scheme to derive soliton solutions in birefringent fib... more This paper employs extended trial function scheme to derive soliton solutions in birefringent fibers with quadratic-cubic nonlinearity. The mathematical algorithm reveals bright and singular optical soliton solutions that are listed with their respective existence criteria.
Journal of Advanced Computational Intelligence and Intelligent Informatics, 2007
The access control and scalable encryption scheme we propose for JPEG 2000 encoded images encrypt... more The access control and scalable encryption scheme we propose for JPEG 2000 encoded images encrypts JEPG 2000 codestreams using the SNOW 2 progressive encryption algorithm to encrypt resolutions, quality layers, or packets independently to provide resolution, quality or fine-grain scalability. Access is controlled to different image resolutions or quality levels granted to different users receiving the same encrypted JPEG 2000 codestream but having different decryption keys. Keys used with successive resolutions or quality layers are mutually dependent based on the SHA-256 one-way hashing function. Encrypted JPEG 2000 codestreams are transcoded by an intermediate untrusted network transcoder, without decryption and without access to decryption keys. Our encryption scheme preserves most of the inherent flexibility of JPEG 2000 encoded images and is carefully designed to produce encrypted codestreams backward-compatible with JPEG 2000 compliant decoders.
信号処理, 2007
The motion estimation and compensation technique is widely used for video coding applications. An... more The motion estimation and compensation technique is widely used for video coding applications. An efficient motion estimation plays a key role in achieving a high compression ratio. Block matching is the most popular motion estimation algorithm, which has been adopted by various video coding standards such as MPEG1/2/4 and ITU-T H.261/262/263. The full search algorithm is the most straightforward and optimal block-matching algorithm, but it is very computationally intensive. Previously proposed fast algorithms reduce the number of computations by limiting the number of search locations. In this paper, we present a very fast and efficient search algorithm that can be used in block-based motion estimation. The proposed algorithm is based on the idea of one-ata-time optimization. It is shown that the proposed algorithm produces better quality performance and requires less computational time compared against popular motion estimation algorithms.
Motion estimation is a key issue in the field of moving images analysis. In the framework of vide... more Motion estimation is a key issue in the field of moving images analysis. In the framework of video coding, it is combined with motion compensation in order to exploit the spatio temporal correlation of image sequences along the motion trajectory. It then achieves one of the most important compression factors of a video coder. By dividing each frame into rectangular blocks, motion vectors are obtained via the block matching algorithms (BMA). The full search algorithm (FS) is a brute force BMA. It searches all possible locations inside the search window in the reference frame to provide an optimal solution. However, its high computational complexity makes it often not suitable for real-time implementation. Many fast but sub-optimal algorithms are introduced to improve the performance of video coders. The present book analyses three prospects of improving the quality of existing video coding schemes. Namely, one at a time optimization, adaptive search stagey and feature domain based cr...
Neural Computing and Applications, 2014
The accumulating data are easy to store but the ability of understanding and using it does not ke... more The accumulating data are easy to store but the ability of understanding and using it does not keep track with its growth. So researches focus on the nature of knowledge processing in the mind. This paper proposes a semantic model (CKRMCC) based on cognitive aspects that enables cognitive computer to process the knowledge as the human mind and find a suitable representation of that knowledge. In cognitive computer, knowledge processing passes through three major stages: knowledge acquisition and encoding, knowledge representation, and knowledge inference and validation. The core of CKRMCC is knowledge representation, which in turn proceeds through four phases: prototype formation phase, discrimination phase, generalization phase, and algorithm development phase. Each of those phases is mathematically formulated using the notions of real-time process algebra. The performance efficiency of CKRMCC is evaluated using some datasets from the well-known UCI repository of machine learning datasets. The acquired datasets are divided into training and testing data that are encoded using concept matrix. Consequently, in the knowledge representation stage, a set of symbolic rule is derived to establish a suitable representation for the training datasets. This representation will be available in a usable form when it is needed in the future. The inference stage uses the rule set to obtain the classes of the encoded testing datasets. Finally, knowledge validation phase is validating and verifying the results of applying the rule set on testing datasets. The performances are compared with classification and regression tree and support vector machine and prove that CKRMCC has an efficient performance in representing the knowledge using symbolic rules. 
Keywords: Cognitive computers · Knowledge processing · Knowledge representation · Denotational mathematics · Cognitive computing · Real-time process algebra. 1 Introduction. For centuries, scientists have made great strides in our understanding of the human mind. Yet much effort is still needed to explore how the mind can be observed, measured, and simulated. The computational or information-processing view aims to understand the mind in terms of processes that operate on representations. Wilhelm Wundt and his students initiated laboratory methods for studying mental operations more systematically. George Miller summarized
2006 International Symposium on Intelligent Signal Processing and Communications, 2006
ABSTRACT Motion segmentation is important in many computer vision applications; it aims to detect motion regions such as moving vehicles and people in natural scenes. Detecting moving blobs provides a focus of attention for later processes such as tracking and activity analysis. However, changes in weather, illumination, and shadow, together with repetitive motion from clutter, make motion segmentation difficult to perform quickly and reliably. In this paper we propose a method for moving object segmentation using minimum graph cuts based on a hybrid algorithm approach. Experiments are carried out to examine the efficiency of the proposed approach.
2013 8th International Conference on Computer Engineering & Systems (ICCES), 2013
ABSTRACT The field of artificial intelligence embraces two approaches to artificial learning. The first is motivated by the study of mental processes and states that artificial learning is the study of mechanisms embodied in the human mind. It aims to understand how these mechanisms can be translated into computer programs. The second approach originated from a practical computing standpoint and has less grandiose aims. It involves developing programs that learn from past data and may be considered a branch of data processing. In this paper, we are concerned with the first approach. Artificial learning is concerned with classification learning, i.e., learning algorithms for categorizing unseen examples into predefined classes based on a set of training examples. We formulate a computational model of the binary classification process using formal concept analysis. The classification rules are derived and applied successfully to different case studies.
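Formal concept analysis rests on two derivation operators over a binary object–attribute context; the sketch below shows them under an assumed numpy encoding of the context. The paper's actual rule-derivation procedure is not reproduced here, and the toy context in the usage note is hypothetical.

```python
import numpy as np

def derive_attrs(context, objects):
    """Attributes shared by every object in `objects` (FCA ' operator)."""
    if not objects:
        return set(range(context.shape[1]))
    return set(np.where(context[sorted(objects)].all(axis=0))[0])

def derive_objs(context, attrs):
    """Objects possessing every attribute in `attrs` (FCA ' operator)."""
    if not attrs:
        return set(range(context.shape[0]))
    return set(np.where(context[:, sorted(attrs)].all(axis=1))[0])
```

A pair (A, B) with `derive_attrs(K, A) == B` and `derive_objs(K, B) == A` is a formal concept of the context K; classification rules of the kind the paper derives can be read off the attribute sets of such concepts.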
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2006
ABSTRACT In this paper, we first briefly discuss the newly emerging Secured JPEG (JPSEC) standard for security services for JPEG 2000 compressed images. We then propose our novel approach for applying authentication to JPEG 2000 images in a scalable manner. Our authentication technique can be used for source authentication, non-repudiation, and integrity verification for received, possibly transcoded, JPEG 2000 images, in such a way that it is possible to authenticate different resolutions or different qualities extracted or received from a JPEG 2000 encoded image. Three different implementation methods for our authentication technique are presented. Packet-Based Authentication involves using the MD5 hashing algorithm to calculate the hash value for each individual packet in the JPEG 2000 codestream. Hash values are truncated to a specified length to reduce the overhead in storage space, concatenated into a single string, and then signed using the RSA algorithm and the author's private key for repudiation prevention. Resolution-Based Authentication and Quality-Based Authentication involve generating a single hash value from all contiguous packets of each entire resolution or each entire quality layer, respectively. Our algorithms maintain most of the inherent flexibility and scalability of JPEG 2000 compressed images. The resultant secured codestream is still JPEG 2000 compliant and compatible with JPEG 2000 compliant decoders. Also, our algorithms are compatible with the Public Key Infrastructure (PKI) for preventing signing repudiation by the sender and are implemented using the new JPSEC standard for security signaling.
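The hash-truncate-concatenate step of Packet-Based Authentication can be sketched as below. This is a simplified illustration, not the paper's implementation: the packet contents and truncation length are hypothetical, and the final RSA signing of the concatenated string is omitted since it requires a cryptographic library.

```python
import hashlib

def packet_digest_string(packets, trunc_bytes=8):
    """Per-packet MD5 digests, truncated and concatenated.

    Hash each JPEG 2000 packet with MD5, truncate each 16-byte digest
    to `trunc_bytes` to reduce storage overhead, and concatenate into
    one byte string. In the scheme above, this string would then be
    signed with RSA using the author's private key.
    """
    parts = [hashlib.md5(p).digest()[:trunc_bytes] for p in packets]
    return b"".join(parts)
```

Because each packet contributes an independent truncated digest, a verifier holding the signed digest string can still check any subset of contiguous packets, which is what preserves the scalability of the codestream.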
IEEE Transactions on Signal Processing, 2009
Recently, Aissa-El-Bey et al. have proposed two subspace-based methods for underdetermined blind source separation (UBSS) in the time-frequency (TF) domain. These methods allow multiple active sources at TF points so long as the number of active sources at any TF point is strictly less than the number of sensors, and the column vectors of the mixing matrix are pairwise linearly independent. In this correspondence, we first show that the subspace-based methods must also satisfy the condition that any submatrix of the mixing matrix is of full rank. We then present a new UBSS approach which only requires that the number of active sources at any TF point does not exceed the number of sensors. An algorithm is proposed to perform the UBSS.
Video surveillance is used to monitor many security-sensitive areas such as banks, department stores, highways, crowded public places, and borders. Detecting moving regions such as vehicles and people is the first basic step of almost every vision system, because it provides a focus of attention and simplifies the processing in subsequent analysis steps. It is also one of the most difficult tasks: motion segmentation accuracy determines the eventual success or failure of computerized analysis procedures. For this reason, considerable care should be taken to improve the probability of robust segmentation. Many approaches exist for detecting moving objects in a sequence of images. Commonly used techniques for motion detection are background subtraction, temporal differencing, and optical flow. Background subtraction [1,2,3,4,5,6] attempts to detect moving regions in an image by subtracting the current image from a reference background image in a pixel-by-pixel fashion. Temporal differencing [7,8] makes use of the pixel intensity difference between two or three consecutive frames in an image sequence to extract moving regions. Motion segmentation based on optical flow [9,10] uses characteristics of the flow vectors of moving objects over time to detect change regions in an image sequence. Among these methods, background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Temporal differencing is very adaptive to dynamic environments, but generally performs poorly at extracting all relevant feature pixels. Optical flow can be used to detect independently moving objects in the presence of camera motion. However, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.
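Two of the three motion detection techniques compared above fit in a few lines each; the sketch below shows pixel-wise background subtraction and two-frame temporal differencing. The threshold value and frame shapes are illustrative assumptions, and optical flow is omitted since it needs a full estimation routine.

```python
import numpy as np

def background_subtraction(frame, background, thresh=25):
    """Mark pixels whose absolute difference from the reference
    background image exceeds `thresh` (pixel-by-pixel subtraction)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def temporal_difference(prev, cur, thresh=25):
    """Mark pixels that changed between two consecutive frames."""
    diff = np.abs(cur.astype(int) - prev.astype(int))
    return diff > thresh
```

The contrast described above is visible directly in the code: background subtraction recovers the whole moving region as long as the reference background stays valid, while temporal differencing only flags pixels that actually changed between the two frames, so the interior of a slowly moving uniform object is missed.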
Recently, a genetic-algorithm-based motion segmentation method has been proposed [11]. It is a video sequence segmentation method based on the genetic algorithm (GA) that improves computational efficiency: the computation is distributed into chromosomes that evolve using distributed genetic algorithms (DGAs). However, this method performs the spatial and temporal segmentation separately and then combines them, which is time consuming. Inspired by [11], we introduce a new motion segmentation method which combines spatial and temporal segmentation. First, we construct an image model based on Markov Random Fields (MRF) for each frame. The segmentation is then represented as the minimization of a posterior energy function, and we use the genetic algorithm (GA) to find the solutions. Background differencing and evolution probability are combined to find the unstable individuals. The advantage of this method is that it decreases the number of evolving individuals and reduces computation time.
Diagnostics
Human skin diseases have become increasingly prevalent in recent decades, with millions of individuals in developed countries experiencing monkeypox. Such conditions often carry less obvious but no less devastating risks, including increased vulnerability to monkeypox, cancer, and low self-esteem. Due to the low visual resolution of monkeypox disease images, medical specialists with high-level tools are typically required for a proper diagnosis. The manual diagnosis of monkeypox disease is subjective, time-consuming, and labor-intensive. Therefore, it is necessary to create a computer-aided approach for the automated diagnosis of monkeypox disease. Most research articles on monkeypox disease have relied on convolutional neural networks (CNNs) with classical loss functions, allowing them to pick up discriminative features in monkeypox images. To enhance this, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine...