Asst. Prof. Dr. Mohammed S. H. Al-Tamimi | University of Baghdad
Papers by Asst. Prof. Dr. Mohammed S. H. Al-Tamimi
The Electrocardiogram (ECG) records the heart's electrical signals. It is a common, painless diagnostic procedure used to rapidly diagnose and monitor heart problems, and an easy, noninvasive method for diagnosing various common heart conditions. Because each person's ECG has unique characteristics that other people do not share, and because the heart's electrical activity can be easily detected from the body's surface, the ECG is also of interest for security. On this basis, essential pre-processing steps are needed to handle this electrical signal data and prepare it for use in biometric systems. Since the ECG depends on the structure and function of the heart, it can be utilized as a biometric attribute for identification and recognition. This paper presents a new data formatting technique for biometric systems based on the two major dataset kinds, PTB and PTB-XL.
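A minimal sketch of the kind of ECG pre-processing such a pipeline needs before the signal can feed a biometric system. The sampling rate, filter band, and segmentation window here are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess_ecg(signal, fs=500):
    """Band-pass filter an ECG trace and cut it into per-beat windows."""
    # 0.5-40 Hz band-pass removes baseline wander and high-frequency noise
    b, a = butter(3, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Normalize to zero mean, unit variance so records are comparable
    filtered = (filtered - filtered.mean()) / (filtered.std() + 1e-8)
    # Rough R-peak detection; a real system would use a tuned detector
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs), prominence=1.0)
    half = int(0.3 * fs)  # 300 ms on each side of the R peak
    beats = [filtered[p - half:p + half] for p in peaks
             if p - half >= 0 and p + half <= len(filtered)]
    return np.array(beats)  # one fixed-length template per heartbeat

# Synthetic signal standing in for a PTB/PTB-XL record
fs = 500
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21 + 0.05 * np.random.randn(len(t))
print(preprocess_ecg(ecg, fs).shape)
```

Each fixed-length beat window could then serve as one input sample for the downstream biometric model.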
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline focuses on deep learning: some solutions create and train deep neural networks for the sole purpose of resolving this issue, while others use pre-trained deep neural networks for facial recognition and fine-tune them for accurate outcomes. The purpose of this study was to offer a method for estimating human age from the frontal view of the face as accurately as possible, taking into account the majority of the challenges faced by existing age estimation methods. It makes use of the IMDB-WIKI dataset, which serves as the foundation for face-based age estimation systems in this area. By performing preparatory processing to set up and train the data, and by using a CNN deep learning method, we reached our objective with an accuracy of 0.960.
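A hedged sketch of a CNN age estimator of the kind trained on IMDB-WIKI; the input size, depth, and optimizer settings are illustrative assumptions rather than the paper's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_age_cnn(input_shape=(128, 128, 3)):
    """Small convolutional network that regresses age from a face crop."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1),  # single output: the predicted age
    ])
    model.compile(optimizer="adam", loss="mae", metrics=["mae"])
    return model

model = build_age_cnn()
model.summary()
# model.fit(train_faces, train_ages, validation_split=0.1, epochs=20)
```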
Distinguishing faces wearing a mask from faces not wearing a mask in visual data such as video and still images has become a fascinating research topic in recent years due to the spread of the Corona pandemic, which changed the features of the entire world and forced people to wear masks as a way to contain it. Intelligent development based on artificial intelligence and computers plays a very important role in pandemic safety, and face recognition together with identifying whether people wear masks has been the most prominent application of deep learning in this area. Using deep learning techniques and the YOLO ("You Only Look Once") neural network, an efficient real-time object detection algorithm, an intelligent system was developed in this thesis to distinguish which faces are wearing a mask, which are not, and which are wearing a mask incorrectly. The proposed system was developed through data preparation, preprocessing, and adding a multi-layer neural network, followed by applying the detection algorithm to improve the accuracy of the system. Two global datasets were used to train and test the proposed system across three models: the first uses the AIZOO dataset, the second the MoLa RGB CovSurv dataset, and the third a combination of the two, in order to provide cases that are difficult to identify. The accuracy results obtained from the merged datasets showed face mask detection accuracy of 0.953 and face recognition accuracy of 0.916.
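A sketch of the YOLO-style inference flow only. The thesis trained its own models on AIZOO and MoLa RGB CovSurv; here a stock YOLOv5 checkpoint from torch.hub merely illustrates detection and thresholding, and the image path is a hypothetical placeholder:

```python
import torch

# Load a generic pretrained YOLOv5 model (not the thesis's mask model)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4  # confidence threshold for reported detections

results = model("people.jpg")           # hypothetical input image
detections = results.pandas().xyxy[0]   # one row per detected box
for _, det in detections.iterrows():
    print(f"{det['name']}: conf={det['confidence']:.2f}, "
          f"box=({det['xmin']:.0f},{det['ymin']:.0f},"
          f"{det['xmax']:.0f},{det['ymax']:.0f})")
```

A mask-detection model would follow the same flow, with classes such as "mask", "no mask", and "incorrect mask" replacing the stock labels.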
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because M.R.I. data is so diverse. An M.R.I. data sequence comprises numerous images, and the attribute area we are searching for may differ from image to image in the series; feature extraction therefore gets more complicated, and traditional image processing becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (D.L.) algorithm extracts the features of what is already there automatically. Surface changes become valuable when diagnosing a tissue sample: small, unnoticeable changes in pixel density may indicate the beginning of cancer or torn tissue in the early stages, details that even expert pathologists might miss. Artificial intelligence (A.I.) and D.L. have revolutionized radiology by enhancing the efficiency and accuracy of both interpretive and non-interpretive tasks. The Convolutional Neural Network (C.N.N.), a part of D.L., can be used to diagnose knee problems. Existing algorithms can detect and categorize cartilage lesions and meniscus tears on M.R.I., offer an automated quantitative evaluation of healing, and forecast who is most likely to have recurring meniscus tears based on radiographs.
In recent years, images have been used widely by online social network providers and numerous organizations such as governments, police departments, colleges, universities, and private companies, and they are held in vast databases. Efficient storage of such images is therefore advantageous, and their compression is an appealing application. Image compression represents the significant image information compactly in fewer bytes while insignificant image information (redundancy) is removed; for this reason, image compression plays an important role in data transfer and storage, especially given the significantly increasing data explosion. It is a challenging task, since there are highly complex unknown correlations between pixels: it is hard to find and recover a well-compressed representation for images, and it is also hard to design and test networks that can recover images successfully in a lossless or lossy way. Several neural network and deep learning methods have been used to compress images. This article surveys the most common techniques and methods of image compression, focusing on deep learning auto-encoders.
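A minimal convolutional auto-encoder sketch of the kind the survey covers: the encoder squeezes the image into a small latent tensor (the "compressed" representation) and the decoder reconstructs it. Layer sizes are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(64, 64, 3))
# Encoder: three stride-2 convolutions shrink 64x64x3 down to an 8x8x8 code
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(x)
latent = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
# Decoder: transposed convolutions reconstruct the image from the code
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(latent)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
# autoencoder.fit(images, images, epochs=30)  # trained to reproduce its input
```

Storing only the latent tensor instead of the raw pixels is what yields the compression; reconstruction quality is then measured with metrics such as PSNR.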
Finger vein recognition for user identification is a relatively recent biometric recognition technology with a broad variety of applications, and biometric authentication is extensively employed in the information age. As one of the most essential authentication technologies available today, finger vein recognition captures our attention owing to its high level of security, dependability, and track record of performance. Embedded convolutional neural networks are based on early or intermediate fusion of input; in early fusion, pictures are categorized according to their location in the input space. In this study, we employ a highly optimized network and late fusion rather than early fusion to create a fusion convolutional neural network that couples two convolutional neural networks (CNNs). The technique is based on using two similar CNNs with varying input picture quality, integrating their outputs in a single layer, and employing an optimized CNN design on the Universiti Sains Malaysia (FV-USM) finger vein dataset of 5,904 images. The final fused CNN takes the original picture, an image improved using the contrast-limited adaptive histogram equalization (CLAHE) approach, and a median-filtered image; using Principal Component Analysis (PCA), we retrieved the features and obtained an acceptable performance on the FV-USM database, with a recognition rate of 98.53 percent. Our proposed strategy outperformed other strategies described in the literature.
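A sketch of the late-fusion idea described above: two identical CNN branches see different renditions of the same finger-vein image (e.g. original vs. CLAHE-enhanced) and their outputs are merged in a single layer. Branch depth, input size, and the class count are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def branch(inp):
    """One CNN branch; both branches share this architecture."""
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    return layers.Flatten()(x)

inp_a = layers.Input(shape=(64, 128, 1))  # original image
inp_b = layers.Input(shape=(64, 128, 1))  # CLAHE / median-filtered version
merged = layers.concatenate([branch(inp_a), branch(inp_b)])  # late fusion
x = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(123, activation="softmax")(x)  # assumed: one class per subject

fusion_cnn = models.Model([inp_a, inp_b], out)
fusion_cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
fusion_cnn.summary()
```

The key design choice is that fusion happens after each branch has extracted its own features, rather than stacking the image variants at the input (early fusion).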
It is crucial to remember that the brain is the part of the body responsible for a wide range of sophisticated activities. Brain imaging can be used to diagnose a wide range of brain problems, including brain tumours, strokes, paralysis, and other neurological conditions. Magnetic Resonance Imaging (MRI) is a relatively new imaging technique that can classify and categorize brain and non-brain tissues through high-resolution imaging. For automated brain image segmentation and analysis, the existence of these non-brain tissues is seen as a critical roadblock to success, and for quantitative morphometric examinations of MR brain images, skull-stripping is often required. Skull-stripping procedures are described in this work, along with a summary of the most recent research on skull-stripping.
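A minimal classical skull-stripping sketch (intensity thresholding plus morphology), standing in as one simple representative of the method families such surveys cover; real pipelines (BET, watershed-based, learning-based) are far more robust:

```python
import numpy as np
from skimage import filters, morphology
from scipy import ndimage

def strip_skull(slice_2d):
    """Return a rough brain mask for one axial MR slice (2-D array)."""
    thresh = filters.threshold_otsu(slice_2d)      # separate tissue from background
    mask = slice_2d > thresh
    mask = morphology.binary_erosion(mask, morphology.disk(3))   # detach skull
    labels, n = ndimage.label(mask)                # keep largest connected blob
    if n > 0:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    mask = morphology.binary_dilation(mask, morphology.disk(3))  # restore margin
    return ndimage.binary_fill_holes(mask)

demo = np.random.rand(128, 128)  # stands in for a real MR slice
print(strip_skull(demo).sum(), "pixels kept")
```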
Coronavirus disease became a major public health issue in 2019. Because of its contact-transmission characteristics, it spreads rapidly. The use of a face mask is among the most efficient methods for preventing transmission of the Covid-19 virus: wearing a face mask alone can cut the chance of catching the virus by over 70%. Consequently, the World Health Organization (WHO) advised wearing masks in crowded places as a precautionary measure. Because of the incorrect use of facial masks, illness has spread rapidly in some locations; to solve this challenge, a reliable mask monitoring system is needed. Numerous government entities are attempting to make wearing a face mask mandatory, and this process can be facilitated by face mask detection software based on AI and image processing techniques. For face detection, helmet detection, and mask detection, the approaches mentioned in this article utilize machine learning, deep learning, and many other approaches, which make it simple to distinguish between persons wearing masks and those who are not. The effectiveness of mask detectors must be improved immediately. In this article, we explain the techniques for face mask detection with a literature review and the drawbacks of each technique.
Plagiarism is becoming more of a problem in academics. It is made worse by the ease with which a wide range of resources can be found on the internet, and by the ease with which they can be copied and pasted. It is academic theft, since the perpetrator has "taken" and presented the work of others as his or her own. Manual detection of plagiarism by a human being is difficult, imprecise, and time-consuming, because it is hard for anyone to compare their work against all existing data. Plagiarism is a big problem in higher education, and it can happen on any topic. Plagiarism detection has been studied in many scientific articles, and methods for recognition have been created utilizing the Plagiarism analysis, Authorship identification, and Near-duplicate detection (PAN) datasets 2009-2011. According to the researchers, verbatim plagiarism is simply copying and pasting; they then moved on to smart plagiarism, which is more challenging to spot since it might include text alteration, taking ideas from other academics, and translation into another language, which is more difficult to manage. Other studies have found that plagiarism can obscure the scientific content of publications by swapping words, removing or adding material, or reordering or changing the original articles. This article presents a comparative study of plagiarism detection techniques.
Advances in Science and Technology Research Journal, 2022
The Finger-vein recognition (FVR) method has received increasing attention in recent years. It is a new method of personal identification and biometric technology that identifies individuals using unique finger-vein patterns, the first reliable and suitable area to be recognized; it was discovered for the first time with a home imaging system, and it is characterized by high accuracy and high processing speed. The presence of vein patterns inside one's body also makes them almost impossible to replicate and difficult to steal. With the increased focus on protecting privacy, vein biometrics likewise provide a safer alternative, free of forgery, damage, or alteration over time. Finger-vein recognition is beneficial because it uses low-cost, small devices that are difficult to counterfeit. This paper discusses preceding finger-vein recognition systems and the methodologies taken from other researchers' work on image acquisition, pre-treatment, vein extraction, and matching. It reviews the latest algorithms, critically examines the strengths and weaknesses of these methods, and states the modern results following a key comparative analysis of methods.
Plagiarism Detection Systems play an important role in revealing instances of plagiarism, especially in the educational sector with scientific documents and papers. Plagiarism occurs when any content is copied without permission or citation from the author. To detect such activities, it is necessary to have extensive information about plagiarism forms and classes, and thanks to developed tools and methods it is possible to reveal many types of plagiarism. The development of Information and Communication Technologies (ICT) and the availability of online scientific documents have made access to these documents easy; with the availability of many software text editors, plagiarism detection becomes a critical issue. A large number of scientific papers have already investigated plagiarism detection, and common plagiarism detection datasets are used for recognition systems; WordNet and the PAN datasets have been used since 2009. Researchers have defined verbatim plagiarism detection as a simple type of copy and paste. They have then shed light on intelligent plagiarism, where detection becomes more difficult because it may include manipulation of the original text, adoption of other researchers' ideas, and translation to other languages, which is more challenging to handle. Other researchers have noted that plagiarism may overshadow the scientific text by replacing, removing, or inserting words, along with shuffling or modifying the original papers. This paper gives an overall definition of plagiarism and works through different papers covering the best-known plagiarism detection methods and tools.
Design Engineering, 2021
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission through the Internet. It has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression is gradually increasing; deep neural networks have achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of the human eye's observation of the different colors and features of images. We propose a multi-layer hybrid deep learning system using the unsupervised CAE architecture and the color clustering of the K-means algorithm to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that our proposed method is superior to traditional autoencoder compression methods, with better performance in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The results achieved high efficiency with high compression bit rates and a low Mean Squared Error (MSE): they recorded the highest compression ratios, ranging between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset, and the system achieved high accuracy and quality with respect to the error coefficient, which ranged from 0.0126 down to 0.0003.
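A sketch of the K-means color-clustering half of the hybrid system: quantizing an image's colors so every pixel can be stored as a small palette index. The cluster count is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(image, n_colors=16):
    """Map every pixel to its nearest of n_colors cluster centers."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    # Each pixel is now representable by a small cluster index plus a
    # shared palette -- the source of the compression gain.
    palette = km.cluster_centers_
    return palette[km.labels_].reshape(h, w, c).astype(image.dtype)

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
print(quantize_colors(img).shape)
```

In the paper's hybrid design this clustering works alongside the CAE, which handles the learned encoding and reconstruction; the sketch above shows only the clustering step.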
Plagiarism Detection Systems are critical in identifying instances of plagiarism, particularly in the educational sector when it comes to scientific publications and papers. Plagiarism occurs when any material is copied without the author's consent or attribution. To identify such acts, thorough knowledge of plagiarism types and classes is required, and it is feasible to detect several sorts of plagiarism using current tools and methodologies. With the advancement of information and communication technologies (ICT) and the availability of online scientific publications, access to these publications has grown more convenient; additionally, with the availability of several software text editors, plagiarism detection has become a crucial concern. Numerous scholarly articles have previously examined plagiarism detection and the two most often used datasets for plagiarism detection, WordNet and the PAN dataset. Researchers have described verbatim plagiarism detection as a straightforward case of copying and pasting, and then shed light on clever plagiarism, which is more difficult to detect since it may involve alteration of the original text, borrowing ideas from other studies, and translation into other languages; other scholars have said that plagiarism can obscure the scientific content by substituting terms, deleting or introducing material, or rearranging or changing the original publications. The suggested system incorporates natural language processing (NLP) and machine learning (ML) techniques, along with an external plagiarism detection strategy based on text mining and similarity analysis, employing a mix of Jaccard and cosine similarity. It was examined using the PAN-PC-11 corpus, and the findings demonstrate that it outperforms previous systems on PAN-PC-11, obtaining an accuracy of 0.96, a recall of 0.86, an F-measure of 0.86, and a PlagDet score of 0.865. The proposed technique is supported by an application designed to detect plagiarism in scientific publications in Portable Document Format (PDF) and generate notifications.
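A minimal sketch of the Jaccard/cosine mix the paper describes; the tokenization and the blending weight are illustrative assumptions, not the paper's exact configuration:

```python
import math
from collections import Counter

def jaccard(a_tokens, b_tokens):
    """Set overlap: shared tokens over all distinct tokens."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a_tokens, b_tokens):
    """Cosine of the angle between term-frequency vectors."""
    va, vb = Counter(a_tokens), Counter(b_tokens)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity(a_text, b_text, w=0.5):
    """Blend the two measures; w is an assumed weighting."""
    a, b = a_text.lower().split(), b_text.lower().split()
    return w * jaccard(a, b) + (1 - w) * cosine(a, b)

src = "plagiarism occurs when material is copied without attribution"
sus = "plagiarism happens when material is copied without any attribution"
print(f"combined similarity: {similarity(src, sus):.3f}")
```

Sentence pairs scoring above a chosen threshold would be flagged as suspected plagiarism for further review.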
The image fusion process is characterized as collecting all of the important information from multiple images and including it in fewer images, usually one. This paper addresses problems with various image types, such as multi-focus images and medical images, through a simulation using brain magnetic resonance (MR) images, building the fusion work on previously used fusion techniques such as convolutional neural networks (CNN). In the experimentation, the CNN algorithm is developed with the introduction of the Euclidean distance algorithm to make implementation faster and more efficient than the standard CNN. Objective fusion measures widely used in multimodal medical image fusion, including peak signal-to-noise ratio (PSNR), are applied to perform quantitative and qualitative assessments. The image fusion system (IFS) was tested on a standard dataset of brain MR images with manual FLAIR segmentation masks for anomalies, taken from The Cancer Imaging Archive (TCIA): a collection of 3,740 brain image samples corresponding to 110 patients, each with 20-70 images, from the low-grade glioma collection that has at least a fluid-attenuated inversion recovery (FLAIR) sequence and the genomic cluster data for brain tumors in magnetic resonance imaging included in The Cancer Genome Atlas (TCGA). The results of the experiments show that using convolutional neural networks (CNN) with the Euclidean distance algorithm in training and testing as a classifier of the medical images provides approximately 98.18% accuracy. Compared with the findings of other published works, these rates are considered high.
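PSNR, the fusion-quality measure cited above, in a few lines: it compares a fused image against a reference, and higher is better. A minimal sketch:

```python
import numpy as np

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = np.mean((reference.astype(np.float64) -
                   fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```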
Survey Based Study: Classification of Patients with Alzheimer’s Disease, 2020
Neuroimaging is a description, whether in two dimensions (2D) or three dimensions (3D), of the structure and functions of the brain. It provides a valuable diagnostic tool, in which a limited approach is used by medical professionals to create images of the central nervous system. For the clinical diagnosis of patients with Alzheimer's Disease (AD) or Mild Cognitive Impairment (MCI), accurately distinguishing patients from normal control persons (NCs) is critical. Recently, numerous studies have been undertaken on the identification of AD based on neuroimaging data, including radiographic images and machine learning algorithms. In the previous decade, these techniques were also gradually applied to differentiate AD and MCI symptoms using structural classification methods. This review focuses on neuroimaging studies conducted to detect and classify AD, through a survey based on Google Scholar content. We explore the challenges of this field and evaluate the performance of these studies along with their negative aspects.
Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients versus Normal Control (NC) patients using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation is carried out on 346 MR images from the Alzheimer's Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) classifier is used for classification. The network is trained using a sample training set, and the weights produced are then used to check the system's recognition capability. Findings: This paper presents a novel automated classification system for AD determination. The experiments carried out show that the suggested method performs well: using Gray Level Co-occurrence Matrix (GLCM) features with the DBN classifier provides 98.26% accuracy when the two specific classes were tested. Improvements/Applications: AD is a neurological condition affecting the brain and causing dementia that may affect the mind and memory. The disease indirectly impacts more than 15 million relatives, companions, and guardians. The results of the present research are expected to help specialists in the decision-making process.
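A sketch of the GLCM texture features that feed the DBN classifier, computed here with scikit-image's graycomatrix/graycoprops (names in scikit-image 0.19+); the offsets and chosen properties are illustrative assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """Return a small texture-feature vector for one grayscale MR slice."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

slice_img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
features = glcm_features(slice_img)   # vectors like this would train the DBN
print(features.shape)  # 4 properties x 4 angles = 16 values
```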
Image fusion is used to gather important data from an array of input images and place it in a single output picture, making it much more meaningful and usable than any of the input images alone. Image fusion boosts the quality and applicability of data, and the accuracy of the fused image depends on the application. It is widely used in smart robotics, audio-camera fusion, photonics, system control and output, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart line-assembly robots. This paper provides a literature review of different image fusion techniques in the spatial domain and frequency domain, such as averaging, min-max, block substitution, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA), pyramid-based techniques, and transform-based techniques. Different quality metrics for quantitative analysis of these approaches are also discussed.
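Two of the spatial-domain fusion rules named above, in a few lines of NumPy: simple averaging and max (min-max style) selection. Inputs are assumed to be registered grayscale images of equal size:

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Pixel-wise mean of the two sources."""
    return ((img_a.astype(np.float64) + img_b.astype(np.float64)) / 2).astype(np.uint8)

def fuse_max(img_a, img_b):
    """Keep the brighter pixel from either source at each location."""
    return np.maximum(img_a, img_b)

a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(fuse_average(a, b).shape, fuse_max(a, b).shape)
```

Frequency-domain methods (pyramids, wavelets) apply rules like these to transform coefficients rather than raw pixels, then invert the transform.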
Improved Merging Multi Convolutional Neural Networks Framework of Image Indexing and Retrieval, 2020
Background/Objectives: The current research aims at a modified image representation framework for Content-Based Image Retrieval (CBIR) using the grayscale input image, Zernike Moments (ZMs) properties, Local Binary Pattern (LBP), Y Color Space, Slantlet Transform (SLT), and Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101. Caltech 101 features images of objects belonging to 101 classes, with approximately 40-800 images per category. The suggested infrastructure presents a description and operationalization of the CBIR system through an automated attribute extraction system premised on a CNN infrastructure. Findings: The results acquired through the investigated CBIR system, alongside the benchmarked results, clearly indicate that the suggested technique had the best performance, with an overall accuracy of 88.29%, as opposed to the other sets of data adopted in the experiments; the outstanding results indicate clearly that the suggested method was effective for all the sets of data. Improvements/Applications: As a result of this study, it was revealed that multiple image representation was redundant for extraction accuracy, and the findings indicated that automatically retrieved features are capable and reliable in generating accurate outcomes.
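A sketch of one descriptor from the framework above: a Local Binary Pattern (LBP) histogram over a grayscale image, of the kind combined with Zernike moments and wavelet features for retrieval. The radius and point count are illustrative assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Normalized histogram of uniform LBP codes for one grayscale image."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # "uniform" LBP yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # compared across images with e.g. a chi-square distance

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(lbp_histogram(img))
```

In a CBIR pipeline, histograms like this one are concatenated with the other descriptors into a single feature vector per image, and retrieval ranks the database by distance to the query's vector.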