Hazem El-bakry | Mansoura University

Papers by Hazem El-bakry

Research paper thumbnail of Arabic Handwritten Characters Recognition Using Convolutional Neural Network

Handwritten Arabic character recognition systems face several challenges, including the unlimited variation in human handwriting and the large public databases. In this work, we model a deep learning architecture that can be effectively applied to recognizing Arabic handwritten characters. A Convolutional Neural Network (CNN) is a special type of feed-forward multilayer network trained in supervised mode. The CNN was trained and tested on our database, which contains 16,800 handwritten Arabic characters. In this paper, optimization methods are implemented to increase the performance of the CNN. Common machine learning methods usually apply a combination of a feature extractor and a trainable classifier. The use of a CNN leads to significant improvements over other machine-learning classification algorithms. Our proposed CNN gives an average misclassification error of 5.1% on the testing data.
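The core building block of the CNN described above is the 2D convolution that slides learned filters over the character image. As a minimal illustrative sketch (not the paper's implementation, and with a made-up image and kernel), a "valid" 2D convolution can be written in pure Python:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and sum elementwise products (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# 3x3 plus-shaped filter over a 4x4 toy image -> 2x2 feature map
img = [[1, 0, 0, 1],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [1, 0, 0, 1]]
k = [[0, 1, 0],
     [1, 1, 1],
     [0, 1, 0]]
print(conv2d_valid(img, k))  # -> [[3.0, 3.0], [3.0, 3.0]]
```

A real CNN stacks many such filters with nonlinearities and pooling, and learns the kernel values by backpropagation.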

Research paper thumbnail of Image Stitching System Based on ORB Feature- Based Technique and Compensation Blending

The construction of a high-resolution panoramic image from a sequence of overlapping input images of the same scene is called image stitching/mosaicing. It is considered an important, challenging topic in computer vision, multimedia, and computer graphics. The quality of the mosaic image and the time cost are the two primary parameters for measuring stitching performance. Therefore, the main objective of this paper is to introduce a high-quality image stitching system with the least computation time. First, we compare many different feature detectors. We test the Harris corner detector, SIFT, SURF, FAST, GoodFeaturesToTrack, MSER, and ORB techniques to measure the detection rate of correctly detected keypoints and the processing time. Second, we examine implementations of the common categories of image blending methods to increase the quality of the stitching process. From the experimental results, we conclude that the ORB algorithm is the fastest and most accurate, with the highest performance. In addition, Exposure Compensation is the blending method with the highest stitching quality. Finally, we have built an image stitching system based on ORB using the Exposure Compensation blending method. Keywords—Image stitching; Image mosaicking; Feature-based approaches; Scale Invariant Feature Transform (SIFT); Speeded-Up Robust Features (SURF); Oriented FAST and Rotated BRIEF (ORB); Exposure Compensation blending
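One reason ORB is so fast is that it produces binary descriptors matched by Hamming distance. A hypothetical pure-Python sketch of brute-force Hamming matching (a real system would use OpenCV's ORB detector and a Hamming-norm matcher; the 8-bit "descriptors" here are toy stand-ins):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(query_desc, train_desc):
    """Brute-force match: for each query descriptor, return the index of
    the nearest training descriptor and the distance to it."""
    matches = []
    for qi, q in enumerate(query_desc):
        best_ti = min(range(len(train_desc)),
                      key=lambda ti: hamming(q, train_desc[ti]))
        matches.append((qi, best_ti, hamming(q, train_desc[best_ti])))
    return matches

# Two tiny 8-bit "descriptors" per image (toy data)
left = [0b10110100, 0b01001011]
right = [0b10110110, 0b01001010, 0b11111111]
print(match(left, right))  # -> [(0, 0, 1), (1, 1, 1)]
```

Each left keypoint pairs with its nearest right keypoint; the matched pairs then feed homography estimation and blending.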

Research paper thumbnail of Real Time Image Mosaicing System Based on Feature Extraction Techniques

Image mosaicing/stitching is considered an active research area in computer vision and computer graphics. Image mosaicing is concerned with combining two or more images of the same scene into one high-resolution panoramic image. There are two main types of techniques used for image stitching: direct methods and feature-based methods. The greatest advantages of feature-based methods over the other methods are their speed, their robustness, and their ability to create a panoramic image of a non-planar scene with unrestricted camera motion. In this paper, we propose a real-time image stitching system based on the ORB feature-based technique. We compared the performance of our proposed system with the SIFT and SURF feature-based techniques. The experimental results show that the ORB algorithm is the fastest and has the highest performance, and it needs very little memory. In addition, we make a comparison between different feature-based detectors. The experimental results show that SIFT is a robust algorithm, but it takes more time for computation. The MSER and FAST techniques have better performance with respect to speed and accuracy.

Research paper thumbnail of A new Technique for License Plate Detection Using Mathematical Morphology and Support Vector Machine

License plate localization is considered the cornerstone of any license plate recognition system, because it is the first step and the other steps rely on the result of the localization. In this paper, a new technique for plate detection is presented. The first step is detecting vertical edges using the Sobel mask; then horizontal projection is applied to filter the edge-image regions. Morphological operations and connected-component labelling are used to obtain the candidate regions. Finally, a support vector machine is used to examine the candidate regions and determine the license plate. A dataset downloaded from the internet is used to train the SVM and to test the proposed technique. This dataset contains 514 images of cars and vans. The images were captured in various illumination conditions and on rainy days, and were taken from different angles; furthermore, many of them have very complex backgrounds and shadows, and many regions are similar to the plate region. Simulation results for the proposed technique show an accuracy of 92.2%.
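The first two stages of the pipeline, emphasizing vertical edges with a Sobel mask and then scoring rows by horizontal projection, can be sketched on a toy grayscale grid (illustrative only; the paper's thresholds and morphology are omitted):

```python
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]  # horizontal-gradient mask: responds to vertical edges

def sobel_vertical(img):
    """Absolute response of the Sobel mask over the 'valid' area."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - 2):
        row = []
        for c in range(w - 2):
            g = sum(img[r + i][c + j] * SOBEL_X[i][j]
                    for i in range(3) for j in range(3))
            row.append(abs(g))
        out.append(row)
    return out

def horizontal_projection(edge_img):
    """Sum of edge magnitudes per row; rows with high sums are
    candidate license-plate bands."""
    return [sum(row) for row in edge_img]

# A dark image with one bright vertical stripe (columns 2-3)
img = [[0, 0, 9, 9, 0, 0]] * 5
print(horizontal_projection(sobel_vertical(img)))  # -> [144, 144, 144]
```

Rows whose projection exceeds a threshold would then be passed to morphology, connected-component labelling, and finally the SVM.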

Research paper thumbnail of Spatial Query Performance For GIS cloud

Geographic Information Systems (GIS) are very important in our lives, and spatial data is required in several fields. Cloud computing is one of the most widely used technologies in modern data interchange. Spatial data query response time over the cloud depends on the cloud data resource. This paper presents a query response time measurement for cloud GIS queries. Spatial Query Performance (SQP) is a piece of software designed and implemented in the Java programming language for measuring query response time. SQP's main functionality is to compare the response times of two spatial data resource servers by issuing one query to both servers at the same time and calculating the response time of each server. The Google and Bing map servers are used as the spatial data resources for measuring the query response time of each server. SQP determines that Google is faster than Bing across different test times. 1. Introduction Recently, Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and computation time from large datacenters. Leasing of computation time is accomplished by allowing users to deploy virtual machines (VMs) on the datacenter's resources. Since the user has complete control over the configuration of the VMs using on-demand deployments, IaaS leasing is equivalent to purchasing dedicated hardware, but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or contract their resources according to their computational needs, by using external resources to supplement their local resource base (B. Claudel et al., 2009).
This emerging model leads to new challenges relating to the design and development of IaaS systems. One of the commonly occurring patterns in the operation of IaaS is the need to deploy a large number of VMs on many nodes of a datacenter at the same time, starting from a set of VM images previously stored in a persistent fashion. For example, this pattern occurs when a user wants to deploy a virtual cluster that executes a distributed application, or a set of environments to support a workflow. We refer to this pattern as multi-deployment. Such a large deployment of many VMs at once can take a long time. This problem is particularly acute for the VM images used in scientific computing, where image sizes are large (from a few gigabytes up to more than 10 GB). A typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take from several minutes to hours, not including the time to boot the operating system itself. This can make the provisioning time of the IaaS installation far longer than acceptable and erase the on-demand benefits of cloud computing. Once the VM instances are running, a similar challenge applies to snapshotting the deployment: many VM images that were modified locally need to be transferred concurrently to stable storage in order to capture the VM state for later use (e.g., for checkpointing or for off-line migration to another cluster or cloud). We refer to this pattern as multi-snapshotting. Conventional snapshotting approaches rely on custom VM image file formats that store only the incremental differences in a new file which depends on the original VM image as its backing file (Figure 1).
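SQP's core idea from the abstract above, issuing the same query to two spatial data servers simultaneously and timing each response, can be sketched with threads. The `fast_server`/`slow_server` callables here are hypothetical stand-ins for the real Google and Bing map-server queries:

```python
import threading
import time

def time_query(fetch, results, key):
    """Run one query and record its wall-clock response time."""
    start = time.perf_counter()
    fetch()
    results[key] = time.perf_counter() - start

def compare(fetch_a, fetch_b):
    """Issue both queries at the same time and return their response times."""
    results = {}
    threads = [threading.Thread(target=time_query, args=(fetch_a, results, "a")),
               threading.Thread(target=time_query, args=(fetch_b, results, "b"))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Hypothetical stand-ins that simulate network latency
fast_server = lambda: time.sleep(0.01)
slow_server = lambda: time.sleep(0.05)
times = compare(fast_server, slow_server)
print(times["a"] < times["b"])  # the faster server responds first
```

Launching both requests before timing either avoids biasing the comparison toward whichever server is queried first.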

Research paper thumbnail of Detection of Caries in Panoramic Dental X-ray Images using Back-Propagation Neural Network

Recently, artificial neural networks (ANNs) have been adopted widely for solving many complex problems in different fields, due to their high performance and their ability to generalize. One of these fields is medical image processing for diagnostic purposes. In this paper, a tooth caries detection strategy based on a back-propagation (BP) neural network is introduced for analyzing dental X-ray images. The neural network uses inter-pixel autocorrelation as its input features. The classification accuracy is satisfactory: tooth caries detection is clearly improved compared to the diagnoses produced by a rule-based computer-assisted program and by a group of doctors.
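The BP network above is fed inter-pixel autocorrelation features. As an illustrative sketch (the paper's exact feature definition may differ), the normalized autocorrelation of a row of pixel intensities at a given lag can be computed as:

```python
def autocorrelation(pixels, lag):
    """Normalized inter-pixel autocorrelation at the given lag:
    covariance of the signal with a shifted copy of itself,
    divided by the variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels)
    if var == 0:
        return 1.0  # constant region correlates perfectly with itself
    cov = sum((pixels[i] - mean) * (pixels[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# An alternating intensity pattern: anti-correlated at lag 1,
# correlated again at lag 2
row = [10, 12, 10, 12, 10, 12, 10, 12]
features = [round(autocorrelation(row, k), 3) for k in (1, 2)]
print(features)  # -> [-0.875, 0.75]
```

A vector of such values at several lags summarizes local texture compactly, which is why it makes a small, fixed-size input for the network.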

Research paper thumbnail of Arab Kids Tutor (AKT) System For Handwriting Stroke Errors Detection

This paper presents the architecture, components, and evaluation of the Arab Kids Tutor (AKT), an intelligent tutor system for learning to handwrite the Arabic alphabet. Today, many children suffer from handwriting difficulties, so tutors hope to get rid of the negative impact of the traditional learning system. Our system provides immediate feedback with error detection that can check multiple kinds of handwriting errors and give intelligent feedback to children. Moreover, AKT uses Freeman chain code and mathematical algorithms to detect stroke order and direction errors. Throughout the work, we assess a child's level of understanding of handwriting a character using fuzzy sets. Experimental results indicate that AKT successfully detects handwriting stroke errors with automatic feedback.
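Freeman chain code, which AKT uses to check stroke order and direction, encodes each step between consecutive pen points as one of eight directions. A minimal sketch under the usual 8-direction convention (the comparison rule is illustrative, not AKT's exact algorithm):

```python
# Standard 8-direction Freeman codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
# (y grows upward here; screen coordinates would flip the vertical codes)
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a stroke (list of 8-connected grid points) as Freeman codes."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def direction_errors(expected, drawn):
    """Positions where the child's stroke direction deviates from the model."""
    return [i for i, (e, d) in enumerate(zip(expected, drawn)) if e != d]

# An L-shaped model stroke: down, down, then right
model = chain_code([(0, 2), (0, 1), (0, 0), (1, 0)])
child = chain_code([(0, 2), (0, 1), (1, 1), (1, 0)])  # turned right too early
print(model, child, direction_errors(model, child))  # -> [6, 6, 0] [6, 0, 6] [1, 2]
```

Comparing the two code sequences localizes exactly where the child's stroke departed from the model, which is what drives the immediate feedback.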

Research paper thumbnail of Enhancing Hybrid Asymmetric-Multicast Hash-Routing for Information Centric Networks

Information-Centric Networking (ICN) architectures offer fundamental resilience to large numbers of users retrieving data. One of the most significant shared features of ICN designs is ubiquitous in-network caching, which is widely expected to enhance performance. However, mobility has not been fully accounted for when planning an effective caching scheme in ICN networks. This paper discusses how to enhance the Hybrid Asymmetric-Multicast Hash-Routing strategy. This is done by finding the best storage location among the nodes that participate directly in the search for content, thus achieving a higher cache hit ratio.
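Hash-routing strategies like the one enhanced here deterministically map each content name to a responsible cache node, so every router agrees on where an item is stored without any coordination. A minimal sketch of the placement rule (illustrative; not the paper's exact strategy):

```python
import hashlib

def responsible_cache(content_name: str, cache_nodes: list) -> str:
    """Map a content name to one cache node via a stable hash, so all
    routers compute the same placement independently."""
    digest = hashlib.sha256(content_name.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(cache_nodes)
    return cache_nodes[index]

nodes = ["cacheA", "cacheB", "cacheC"]
for name in ["/video/clip1", "/video/clip2", "/news/today"]:
    print(name, "->", responsible_cache(name, nodes))
```

Because the mapping is a pure function of the name, a request from any ingress router is forwarded to the same cache, which is what makes the cache hit ratio sensitive to where that responsible node sits relative to the request path.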

Research paper thumbnail of A Proposed Model for Human Securing using GPS

This paper presents a system architecture for human security monitoring, which can be used in personal locators for children, elderly people, or those suffering from Alzheimer's disease or memory loss, and for monitoring movement for law enforcement. The architecture consists of a GPS part for collecting information about a movable object (MO), a spatial database part in which a listener server stores this information, and finally a Geographic Information System (GIS). Using GIS helps in displaying and analyzing spatial information on a digital map. The methods used for spatial data collection and management are described in detail in this work. The spatial database stores information about the location (latitude, longitude, date, time, etc.) at the time of observation, along with some additional desired attributes. The GIS reports whether the MO is within the permitted area or outside it. In the latter case, the system sends SMS messages containing the spatial data about the MO to the stakeholders (police, parents, helpers, etc.) so that assistance can be given as quickly as possible.
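The geofencing decision, whether the MO's reported position lies inside the permitted area, can be sketched with a standard ray-casting point-in-polygon test (illustrative; a real GIS would use spatial-database operators, and the rectangular area below is hypothetical):

```python
def inside_polygon(lat, lon, polygon):
    """Ray-casting test: count how many polygon edges a ray from the
    point crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        if (lo1 > lon) != (lo2 > lon):  # edge spans the point's longitude
            cross_lat = la1 + (lon - lo1) * (la2 - la1) / (lo2 - lo1)
            if lat < cross_lat:
                inside = not inside
    return inside

# Hypothetical rectangular permitted area, as (lat, lon) corners
area = [(30.0, 31.0), (30.0, 31.5), (30.5, 31.5), (30.5, 31.0)]
print(inside_polygon(30.2, 31.2, area))  # -> True: MO is inside the area
print(inside_polygon(29.9, 31.2, area))  # -> False: an SMS alert would be sent
```

In the full system this check runs on each new GPS fix stored by the listener server, and a False result triggers the SMS notification path.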

Research paper thumbnail of A New Fast 3D Reconstruction Approach using Multiple View Images

Keypoint extraction and image matching are the most important factors in 3D reconstruction; together they account for almost two-thirds of the reconstruction time. This paper presents a method to extract the most important keypoints by using the GrabCut algorithm to eliminate considerable parts of the images that are not prominent in the reconstruction. Moreover, the proposed algorithm uses the SiftGPU algorithm, which runs in parallel and can process more than one image at a time, to extract keypoints and carry out the matching process. The experiments show that the proposed system increases the speed of reconstruction with thoroughly good results. Keywords – 3D Reconstruction, Structure from Motion (SfM), Mesh Reconstruction, Multi-View Stereo (MVS). I. INTRODUCTION 3D reconstruction is one of the classical and difficult problems in computer vision, and it finds applications in a variety of different fields. In recent years, large-scale 3D reconstruction from community photo collections has become an emerging research topic, attracting more and more researchers from academia and industry. However, 3D reconstruction is extremely computationally expensive; for example, it may take more than a day on a single machine to reconstruct an object from only one thousand pictures. In the Structure from Motion (SfM) model [1, 2], the 3D reconstruction pipeline can be divided into several steps: feature extraction, image matching, track generation, geometric estimation, etc. Among them, image matching accounts for the dominant computational cost, even more than half of the total in some cases. Moreover, inexact matching results can lead to failure of the reconstruction. Therefore, fast and accurate image matching is critical for 3D reconstruction.
There are various ways to build a reconstruction. For example, manual reconstruction is the most established method for reconstructing a 3D model of a real-world object, but it is a ponderous and very labor-intensive method, although a high level of realism can be achieved [3]. Other approaches try to reduce the burden on the user. A 3D scanner is a well-established alternative that lets the computer take over some of the work: the scanner is a device that captures detailed information about shape and appearance [4]. Modern developments in scanning and laser techniques can capture point clouds of real-world scenes, automatically detect scene planes, and create 3D models without the help of the user; dense point clouds can also be generated from collections of images by photogrammetry tools [5]. Point clouds created this way typically share the same problems of noisy and missing data, which makes it very hard to apply direct surface reconstruction methods [6, 7]; point clouds do not contain well-defined edges and borders. The last method, the one offered by this paper, is photogrammetric reconstruction, which recovers 3D information from one or more images. It mainly focuses on reconstruction from multi-view photos, called stereo vision. Epipolar geometry describes the features of, and the relationships between, the 3D scene geometry and its projections onto two or more 2D images. Figure 1 shows the idealized workflow for photogrammetric reconstruction. The first step of photogrammetric reconstruction includes the registration of all input images. This procedure is called structure-from-motion and includes the computation of intrinsic and extrinsic camera parameters. For registered images, it is possible to compute 3D positions from two or more corresponding image points. Multi-view stereo algorithms use these conditions and compute dense point clouds or triangulated meshes from the input scene.
Fig. 1. Photogrammetric reconstruction records multiple images (A); the structure is created by structure from motion, and the 3D geometry by dense multi-view stereo (B) [8]. The term Multi-View Stereo (MVS) refers to simulating the human sense of depth and 3D shape. It uses two or more images from various points of view to recover the 3D structure of the scene and distance information. Many multi-view stereo algorithms [9, 10] use all the images at the same time to rebuild the 3D model, which incurs a high cost and also lacks scalability. Furukawa [10] proposed PMVS (patch-based multi-view stereo), which takes multiple pictures of the object from various views to extract feature points and then expands them outward to find more matching points. Furukawa also proposed CMVS (clustering views for multi-view stereo) [12] in 2010, which clusters the numerous input images to improve multi-view stereo matching and feeds the clusters to PMVS. RGB-D systems have also been developed due to the advent of RGB-D sensors, such as the Microsoft Kinect.

Research paper thumbnail of An Intelligent Agent Tutor System for Detecting Arabic Children Handwriting Difficulty Based on Immediate Feedback

In this paper, an intelligent tutor application called the Arab Handwritten Children Educator (AHCE) is built for Arabic preschool children. AHCE allows Arab children to practice at any time and anywhere. As an intelligent tutor, the AHCE can automatically check handwriting errors, such as stroke sequence errors, stroke direction errors, stroke position errors, and extra-stroke errors. The AHCE provides useful feedback to help Arab children correct their mistakes. In this paper, attributed mathematics and agents are used to locate the handwriting errors. The system applies a fuzzy approach to evaluate Arabic children's handwriting. Experimental results indicate that the proposed intelligent system successfully detects handwriting stroke errors with immediate feedback.

Research paper thumbnail of CNN for Handwritten Arabic Digits Recognition Based on LeNet-5

In recent years, handwritten digit recognition has been an important area due to its applications in several fields. This work focuses on the recognition part of handwritten Arabic digit recognition, which faces several challenges, including the unlimited variation in human handwriting and the large public databases. The paper presents a deep learning technique that can be effectively applied to recognizing Arabic handwritten digits. LeNet-5, a Convolutional Neural Network (CNN), was trained and tested on the MADBase database (Arabic handwritten digit images), which contains 60,000 training and 10,000 testing images. A comparison is held among the results, and it is shown in the end that the use of the CNN led to significant improvements over different machine-learning classification algorithms.

Research paper thumbnail of Fast pattern detection using neural networks and cross correlation in the frequency domain

Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, 2005

ABSTRACTRecently, fast neural networks for object/face detection were presented in . The speed u... more ABSTRACTRecently, fast neural networks for object/face detection were presented in . The speed up factor of these networks based on cross correlation in the frequency domain between the input image and the weights of the hidden layer. But, these equations given in [1-3] for conventional and fast neural networks are not valid for many reasons presented here. In this paper, correct equations for cross correlation in the spatial and frequency domains are presented. Furthermore, correct formulas for the number of computation steps required by conventional and fast neural networks given in [1-3] are introduced. A new formula for the speed up ratio is established. Also, corrections for the equations of fast multi scale object/face detection are given. Moreover, commutative cross correlation is achieved. Simulation results show that sub-image detection based on cross correlation in the frequency domain is faster than classical neural networks.

Research paper thumbnail of Interactive Visualization of Retrieved Information

Interactive visualization of retrieved information has become important for many applications. An information retrieval system returns many results, some more relevant than others and some not relevant at all. While the use of search engines to retrieve information has grown very substantially, problems remain with information retrieval systems: their interfaces do not help users perceive the precision of the results. It is therefore not surprising that graphical visualizations have been employed in search engines to assist users. The main objective of Internet users is to find the required information with high efficiency and effectiveness. In this paper, we briefly present the role of information visualization in enhancing web information retrieval systems, covering techniques such as tree view, title view, map view, bubble view, and cloud view, and tools such as highlighting and colored query results.

Research paper thumbnail of A Comparison between Two Diphone-Based Concatenative Text-to-Speech Systems for Arabic

Research paper thumbnail of Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The algorithm uses new high-speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross-correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Research paper thumbnail of Fast Packet Detection by using High Speed Time Delay Neural Networks

Fast packet detection is very important for overcoming intrusion attacks. In this paper, a new approach for fast packet detection in a serial data sequence is presented. The algorithm uses fast time delay neural networks (FTDNNs). The operation of these networks relies on performing cross-correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented FTDNNs is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
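Detecting a known pattern inside a serial data sequence, the task the FTDNNs accelerate, amounts to sliding a template over the stream and scoring every alignment; cross-correlation gives exactly that score. A toy sketch of the direct (spatial-domain) form, with a made-up binary "signature":

```python
def sliding_scores(stream, template):
    """Score each alignment of the template against the stream
    (un-normalized cross-correlation); the true match peaks."""
    n, m = len(stream), len(template)
    return [sum(stream[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def detect(stream, template, threshold):
    """Positions whose correlation score reaches the threshold."""
    return [i for i, s in enumerate(sliding_scores(stream, template))
            if s >= threshold]

stream = [0, 0, 1, 1, 0, 1, 1, 0, 0]
template = [1, 1, 0, 1, 1]  # hypothetical attack signature to find
print(detect(stream, template, threshold=4))  # -> [2]: signature starts at index 2
```

The paper's contribution is computing these sliding scores in the frequency domain, where the cost per input window drops from O(N·M) to roughly O(N log N).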

Research paper thumbnail of Comparative Study among Data Reduction Techniques over Classification Accuracy

International Journal of Computer Applications

Nowadays, healthcare is one of the most critical domains in need of efficient and effective analysis. Data mining provides many techniques and tools that help in obtaining a good analysis of healthcare data. Data classification is a form of data analysis for deducing models. Mining a reduced version of the data, with a lower number of attributes, increases the efficiency of the system while providing almost the same results. In this paper, a comparative study of different data reduction techniques is introduced. The comparison is evaluated against the accuracy of classification algorithms. The results show that fuzzy-rough feature selection outperforms rough set attribute selection, gain ratio, correlation-based feature selection, and principal component analysis.
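One of the compared reduction techniques, gain ratio, scores an attribute by its information gain normalized by the entropy of the split itself (as in C4.5). A small sketch on a made-up categorical dataset:

```python
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute_values, labels):
    """Information gain of splitting on the attribute, divided by the
    entropy of the split itself (C4.5's gain ratio)."""
    n = len(labels)
    base = entropy(labels)
    remainder, split_info = 0.0, 0.0
    for value, count in Counter(attribute_values).items():
        subset = [lab for a, lab in zip(attribute_values, labels) if a == value]
        remainder += (count / n) * entropy(subset)
        split_info -= (count / n) * log2(count / n)
    return (base - remainder) / split_info if split_info else 0.0

labels = ["sick", "sick", "well", "well"]
perfect = ["a", "a", "b", "b"]   # separates the classes perfectly
useless = ["a", "b", "a", "b"]   # tells us nothing about the class
print(gain_ratio(perfect, labels), gain_ratio(useless, labels))  # -> 1.0 0.0
```

Ranking attributes by this score and keeping only the top few is one way to obtain the "reduced version of data" the abstract refers to.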

Research paper thumbnail of Data Mining Techniques for Medical Applications: A Survey

Data mining has been used to uncover hidden patterns and relations and to summarize data in ways that are useful and understandable in all types of businesses, in order to make predictions about the future. Medical data is considered the most prominent application area for mining, so in this paper we introduce a survey of how medical data problems, such as dealing with noisy, incomplete, heterogeneous, and intensive data, have been faced, along with the advantages and disadvantages of each approach; finally, we suggest a framework for overcoming these problems. The theory of fuzzy sets has been recognized as a suitable tool for modeling several kinds of patterns that can hold in data. In this paper, we are concerned with the development of a general model to discover association rules among items in a (crisp) set of fuzzy transactions. This general model can be particularized in several ways, with each particular instance corresponding to a certain kind of pattern and/or repository of data. We describe some applications of this scheme, paying special attention to the discovery of fuzzy association rules. To extract association rules from quantitative data, the dataset at hand must normally be partitioned into intervals and then converted into Boolean form; fuzzy association rules instead handle quantitative data directly by using fuzzy sets. Along with the proposed system, we use neural network approaches for clustering, classification, statistical analysis, and data modeling.
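The fuzzy association rules discussed above replace crisp interval membership with graded membership, so an itemset's support becomes an average of membership degrees rather than a count. A minimal sketch with hypothetical triangular memberships for "high blood pressure" and "older age" items (the records and membership shapes are made up for illustration):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_support(records, memberships):
    """Support of a fuzzy itemset = mean over records of the min (t-norm)
    of the item membership degrees."""
    total = 0.0
    for rec in records:
        total += min(mu(rec[attr]) for attr, mu in memberships.items())
    return total / len(records)

# Hypothetical patient records and fuzzy items
records = [{"bp": 150, "age": 60}, {"bp": 120, "age": 30}]
items = {"bp": lambda v: triangular(v, 120, 160, 200),
         "age": lambda v: triangular(v, 40, 70, 100)}
print(fuzzy_support(records, items))  # about 0.333: only patient 1 partly satisfies both
```

Rules are then mined exactly as in the crisp case, but with this graded support and the corresponding fuzzy confidence, so no hard interval boundaries are imposed on the quantitative data.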

Research paper thumbnail of Adaptive E-Learning System Based On Learning Interactivity

This paper considers the affordances of social networking theories and tools for building new and effective e-learning practices. We argue that "connectivism" (social networking applied to learning and knowledge contexts) can lead to a reconceptualization of learning in which formal, non-formal, and informal learning are integrated so as to build potentially lifelong learning activities to be experienced in a "personal learning environment". In order to provide a guide for the design, development, and improvement both of personal learning environments and of the related learning activities, we provide a knowledge-flow model called the Open Social Learning Network (OSLN). The OSLN, a hybrid of the LMS and the personal learning environment (PLE), is proposed as an alternative learning technology environment with the potential to leverage the affordances of the Web to improve learning dramatically, highlighting the stages of learning and the related enabling conditions. The derived model is applied to a possible scenario of formal learning in order to show how the learning process can be designed according to the presented theory.

Research paper thumbnail of Arabic Handwritten Characters Recognition Using Convolutional Neural Network

Handwritten Arabic character recognition systems face several challenges, including the unlimited... more Handwritten Arabic character recognition systems face several challenges, including the unlimited variation in human handwriting and large public databases. In this work, we model a deep learning architecture that can be effectively apply to recognizing Arabic handwritten characters. A Convolutional Neural Network (CNN) is a special type of feed-forward multilayer trained in supervised mode. The CNN trained and tested our database that contain 16800 of handwritten Arabic characters. In this paper, the optimization methods implemented to increase the performance of CNN. Common machine learning methods usually apply a combination of feature extractor and trainable classifier. The use of CNN leads to significant improvements across different machine-learning classification algorithms. Our proposed CNN is giving an average 5.1% misclassification error on testing data.

Research paper thumbnail of Image Stitching System Based on ORB Feature- Based Technique and Compensation Blending

—The construction of a high-resolution panoramic image from a sequence of input overlapping image... more —The construction of a high-resolution panoramic image from a sequence of input overlapping images of the same scene is called image stitching/mosaicing. It is considered as an important, challenging topic in computer vision, multimedia, and computer graphics. The quality of the mosaic image and the time cost are the two primary parameters for measuring the stitching performance. Therefore, the main objective of this paper is to introduce a high-quality image stitching system with least computation time. First, we compare many different features detectors. We test Harris corner detector, SIFT, SURF, FAST, GoodFeaturesToTrack, MSER, and ORB techniques to measure the detection rate of the corrected keypoints and processing time. Second, we manipulate the implementation of different common categories of image blending methods to increase the quality of the stitching process. From experimental results, we conclude that ORB algorithm is the fastest, more accurate, and with higher performance. In addition, Exposure Compensation is the highest stitching quality blending method. Finally, we have generated an image stitching system based on ORB using Exposure Compensation blending method. Keywords—Image stitching; Image mosaicking; Feature-based approaches; Scale Invariant Feature Transform (SIFT); Speed-up Robust Feature detector (SURF); Oriented FAST and Rotated BRIEF (ORB); Exposure Compensation blending

Research paper thumbnail of Real Time Image Mosaicing System Based on Feature Extraction Techniques

AbstractíImage mosaicing/stitching is considered as an active research area in computer vision an... more AbstractíImage mosaicing/stitching is considered as an active research area in computer vision and computer graphics. Image mosaicing is concerned with combining two or more images of the same scene into one panoramic image with high resolution. There are two main types of techniques used for creating image stitching: direct methods and feature-based methods. The greatest advantages of feature-based methods over the other methods are their speed, robustness, and the availability of creating panoramic image of a non-planar scene with unrestricted camera motion. In this paper, we propose a real time image stitching system based on ORB feature-based technique. We compared the performance of our proposed system with SIFT and SURF feature-based techniques. The experiment results show that the ORB algorithm is the fastest, the highest performance, and it needs very low memory requirements. In addition, we make a comparison between different feature-based detectors. The experimental result shows that SIFT is a robust algorithm but it takes more time for computations. MSER and FAST techniques have better performance with respect to speed and accuracy.

Research paper thumbnail of A new Technique for License Plate Detection Using Mathematical Morphology and Support Vector Machine

License plate localization is considered the cornerstone of any license plate recognition system because it is the first step and the other steps rely on the result of localization. In this paper, a new technique for plate detection is presented. The first step detects vertical edges using a Sobel mask, and then a horizontal projection is applied to filter the edge image regions. Morphological operations and connected components labelling are used to obtain the candidate regions. Finally, a support vector machine is used to examine the candidate regions and determine the license plate. A dataset downloaded from the internet is used to train the SVM and to test the proposed technique. This dataset contains 514 images of cars and vans. The images were captured under various illumination conditions, including rainy days, and taken from different angles; furthermore, many of them have very complex backgrounds and shadows, and many regions are similar to the plate region. Simulation results of the proposed technique show an accuracy of 92.2%.
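The first two localization steps (vertical edge detection with a Sobel mask, then a horizontal projection) can be sketched directly: plates are rich in vertical strokes, so their rows score high in the projection. A toy-scale sketch; the tiny test image and function names are illustrative:

```python
SOBEL_V = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # responds to vertical edges

def vertical_edges(img):
    """Absolute Sobel-x response over the valid region of a grayscale
    image given as a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            s = sum(SOBEL_V[j][i] * img[y + j][x + i]
                    for j in range(3) for i in range(3))
            out[y][x] = abs(s)
    return out

def horizontal_projection(edges):
    # Sum of edge magnitudes per row; peaks mark candidate plate bands.
    return [sum(row) for row in edges]
```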

Research paper thumbnail of Spatial Query Performance For GIS cloud

Geographic Information Systems (GIS) are very important in our lives, and spatial data is required in several fields. Cloud computing is one of the key technologies used in modern data interchange. The response time of a spatial data query over the cloud depends on the cloud data resource. This paper presents a query response time measurement for cloud GIS queries. Spatial Query Performance (SQP) is a software tool, implemented in the Java programming language, for measuring query response time. SQP's main function is to compare the response times of two spatial data resource servers by issuing the same query to both servers at the same time and calculating the response time of each server. Google and Bing map servers are used as the spatial data resources for measuring the query response time of each server. SQP determines that Google is faster than Bing over different test times. 1. Introduction Recently, Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and processing time from large datacenters. Leasing of computation time is accomplished by allowing users to deploy virtual machines (VMs) on the datacenter's resources. Since the user has complete control over the configuration of the VMs using on-demand deployments, IaaS leasing is equivalent to purchasing dedicated hardware but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or shrink their resources according to their computational needs, by using external resources to supplement their local resource base. B. Claudel et al. (2009).
This emerging model leads to new challenges in the design and development of IaaS systems. One commonly occurring pattern in the operation of IaaS is the need to deploy a large number of VMs on many nodes of a datacenter at the same time, starting from a set of VM images previously stored in a persistent fashion. For example, this pattern occurs when the user wants to deploy a virtual cluster that executes a distributed application, or a set of environments to support a workflow. We refer to this pattern as multi-deployment. Such a large deployment of many VMs at once can take a long time. This problem is particularly acute for VM images used in scientific computing, where image sizes are large (from a few gigabytes up to more than 10 GB). A typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take from several minutes to hours, not including the time to boot the operating system itself. This can make the instantiation time of the IaaS setup far longer than acceptable and erase the on-demand benefits of cloud computing. Once the VM instances are running, a similar challenge applies to snapshotting the deployment: many VM images that were locally modified need to be transferred concurrently to stable storage in order to capture the VM state for later use (e.g., for checkpointing or off-line migration to another cluster or cloud). We refer to this pattern as multi-snapshotting. Conventional snapshotting approaches rely on custom VM image file formats that store only incremental differences in a new file, which depends on the original VM image as the backing file, Figure 1.
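SQP's core measurement, issuing one query to both servers at the same time and timing each response, can be sketched with two threads and a monotonic clock. A minimal sketch assuming each server is a callable; the names are illustrative, not SQP's actual Java API:

```python
import threading
import time

def timed_query(server, query, results, key):
    # Time a single query with a monotonic clock (immune to wall-clock jumps).
    t0 = time.perf_counter()
    server(query)
    results[key] = time.perf_counter() - t0

def compare_servers(server_a, server_b, query):
    """Issue the same query to both servers simultaneously (one thread
    each, as SQP does) and return each server's response time."""
    results = {}
    ta = threading.Thread(target=timed_query, args=(server_a, query, results, "a"))
    tb = threading.Thread(target=timed_query, args=(server_b, query, results, "b"))
    ta.start(); tb.start()
    ta.join(); tb.join()
    return results["a"], results["b"]
```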

Research paper thumbnail of Detection of Caries in Panoramic Dental X-ray Images using Back-Propagation Neural Network

Recently, artificial neural networks (ANNs) have been adopted widely for solving many complex problems in different fields due to their high performance and ability to generalize. One of these fields is medical image processing for diagnostic purposes. In this paper, a tooth caries detection strategy is introduced based on a back-propagation (BP) neural network for analyzing dental X-ray images. The neural network uses the inter-pixel autocorrelation as input features. The classification accuracy is satisfactory: tooth caries detection is clearly improved when compared to the diagnosing process performed by a rule-based computer-assisted program and a group of doctors.
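Inter-pixel autocorrelation features of the kind fed to the BP network can be sketched as the normalized autocorrelation of the flattened pixel sequence at a few lags. The lag set and normalization below are assumptions for illustration, not the paper's exact feature definition:

```python
def autocorrelation_features(pixels, lags):
    """Normalized inter-pixel autocorrelation at the given lags, usable
    as an input feature vector for a BP network."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) or 1.0  # guard flat images
    feats = []
    for k in lags:
        num = sum((pixels[i] - mean) * (pixels[i + k] - mean)
                  for i in range(n - k))
        feats.append(num / var)
    return feats
```

A strictly alternating pixel sequence illustrates the behaviour: it anti-correlates at lag 1 and correlates at lag 2.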

Research paper thumbnail of Arab Kids Tutor (AKT) System For Handwriting Stroke Errors Detection

This paper presents the architecture, components, and evaluation of Arab Kids Tutor (AKT), an intelligent tutor system for learning to handwrite Arabic alphabets. Today, children suffer from handwriting difficulties, so tutors hope to get rid of the negative impact of the traditional learning system. Our system provides immediate feedback with error detection that can check multiple kinds of handwriting errors and give intelligent feedback to our children. Moreover, AKT uses the Freeman chain code and mathematical algorithms to detect order and direction errors. Throughout the work, we indicate the children's level of understanding of handwriting a character using fuzzy sets. Experimental results indicate that AKT successfully detects handwriting stroke errors with automatic feedback.
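The Freeman chain code used for order/direction checking encodes each stroke as a sequence of 8-connected direction codes; a stroke drawn backwards shows up as the reference code reversed with every direction rotated by 4 (the opposite heading). A minimal sketch under that assumption; the function names and error labels are illustrative:

```python
# 8-connected Freeman directions (y grows upward): 0=E, 1=NE, 2=N, ... 7=SE
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman chain code of a stroke sampled on the unit grid."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def direction_error(stroke, reference):
    """Classify a child's stroke against the reference stroke."""
    code = chain_code(stroke)
    ref = chain_code(reference)
    reversed_ref = [(d + 4) % 8 for d in reversed(ref)]
    if code == ref:
        return "ok"
    if code == reversed_ref:
        return "wrong direction"
    return "wrong shape"
```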

Research paper thumbnail of Enhancing Hybrid Asymmetric-Multicast Hash-Routing for Information Centric Networks

Information-Centric Networking (ICN) methods offer fundamental resilience to the many users collecting data. One of the substantial shared features of ICN designs is ubiquitous caching, and it is widely accepted that in-network caching enhances performance. However, mobility has not been fully addressed in the design of an effective caching scheme for ICN networks. This paper discusses how to enhance the Hybrid Asymmetric-Multicast Hash-Routing strategy. This is done by finding the best storage location among the nodes that participate directly in the search for content, thus achieving a higher cache hit ratio.
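The idea of hash-routing restricted to the nodes that participate directly in content retrieval can be sketched as hashing the content name onto the delivery path, so every router independently agrees on the caching node without any coordination. This is an illustrative sketch of the general mechanism, not the exact strategy proposed in the paper:

```python
import hashlib

def cache_node(content_name, path_nodes):
    """Deterministically map a content name onto one of the nodes on
    its delivery path. Because every router computes the same hash,
    all of them agree on where the item is cached."""
    h = int(hashlib.sha256(content_name.encode()).hexdigest(), 16)
    return path_nodes[h % len(path_nodes)]
```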

Research paper thumbnail of A Proposed Model for Human Securing using GPS

This paper presents a system architecture for human security monitoring, which can be used in personal locators for children, elderly people, or those suffering from Alzheimer's or memory loss, and for monitoring movement for law enforcement. This architecture consists of a GPS part for collecting information about a movable object (MO), a spatial database part for storing this information via a listener server, and finally a Geographic Information System (GIS). Using GIS helps in the display and analysis of spatial information on a digital map. The methods used for spatial data collection and management are described in detail in this work. The spatial database stores information about the location (latitude, longitude, date, time, etc.) at the time of observation, plus some additional desired attributes. GIS provides information on whether the MO is within the permitted area or outside it. In the latter case, the system sends SMS messages containing the spatial data about the MO to the stakeholders (police, parents, helpers, etc.) so they can give assistance as soon as possible.
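The geofence check at the heart of the system (is the MO inside the permitted area, and if not, build the SMS payload) can be sketched with a haversine distance against a circular zone. The circular-zone model, radius, and message format are assumptions for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (mean Earth radius 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_position(mo, fence_center, fence_radius_m):
    """Return an SMS payload when the movable object leaves the permitted
    circular area, else None. `mo` is a (lat, lon) fix from the GPS part."""
    d = haversine_m(*mo, *fence_center)
    if d > fence_radius_m:
        return "ALERT: MO at %.5f,%.5f is %.0f m outside zone" % (
            mo[0], mo[1], d - fence_radius_m)
    return None
```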

Research paper thumbnail of A New Fast 3D Reconstruction Approach using Multiple View Images

The extraction of key points and the matching of pictures are the most important factors in 3D reconstruction; together they take almost two-thirds of the reconstruction time. This paper presents a method to extract the most important key points through the use of the GrabCut algorithm, which eliminates considerable parts of the images that have no prominence in the reconstruction. Moreover, the proposed algorithm uses the SiftGPU algorithm, which runs in parallel to process more than one image at a time, to extract key points and carry out the matching process. The experiments show that the proposed system increases the speed of reconstruction while remaining thoroughly good. Keywords – 3D Reconstruction, Structure from Motion (SFM), Mesh Reconstruction and Multi-View Stereo (MVS). I. INTRODUCTION 3D reconstruction is one of the classical and difficult problems in computer vision, and finds its applications in a variety of different fields. In recent years, large-scale 3D reconstruction from community photo collections has become an emerging research topic, which is attracting more and more researchers from academia and industry. However, 3D reconstruction is extremely computationally expensive. For example, it may take more than a day on a single machine to reconstruct an object with only one thousand pictures. In the Structure from Motion (SfM) model [1, 2], the 3D reconstruction pipeline can be divided into various steps: feature extraction, image matching, track generation, geometric estimation, etc. Among them, image matching accounts for the fundamental computational cost, even more than half of it in some cases. Moreover, inexact matching results might lead to failure of the reconstruction. Therefore, fast and accurate image matching is critical for 3D reconstruction.
There are various ways to build a reconstruction. For example, manual reconstruction is the most established method to reconstruct a 3D model of a real-world object, but it is ponderous and very labor-intensive, although a high level of realism can be achieved [3]. Other approaches try to reduce the burden on the user by letting computers take over some of the work, and 3D scanning is a well-established method for this. A 3D scanner is a device that captures detailed information about shape and appearance [4]. Modern developments in scanner and laser techniques make it possible to capture point clouds of real-world scenes, to automatically detect scene planes, and to create 3D models without the help of the user; dense point clouds can also be generated from overlapping images by photogrammetry tools [5]. Point clouds created this way typically share the problems of noisy and missing data, which makes it very hard to apply direct surface reconstruction methods [6, 7]; a point cloud does not contain explicit edges and borders. The last method, offered by this paper, is photogrammetric reconstruction, which recovers 3D information from one or more images, mainly focusing on reconstruction from multi-view photos, called stereo vision. Epipolar geometry describes the features of, and relationships between, the 3D scene geometry and its projections on two or more 2D images. Figure 1 shows the idealized workflow for photogrammetric reconstruction. The first step of photogrammetric reconstruction is the registration of all input images. This procedure is called structure-from-motion and includes the computation of intrinsic and extrinsic camera parameters. For registered images, it is possible to compute 3D positions from two or more corresponding image points. Multi-view stereo algorithms use these conditions and compute dense point clouds or triangulated meshes from the input scene. A B Fig. 1.
Photogrammetric reconstruction records multiple images (A); the structure is created by structure from motion (B), and 3D geometry by dense multi-view stereo [8]. The term Multi-view Stereo (MVS) refers to simulating the human sense of sight, distance, and 3D objects. It uses two or more images from various points of view to obtain the 3D structure of the scene and distance information. Many multi-view stereo algorithms [9, 10] use all the images at the same time to rebuild the 3D model, which is highly expensive and also lacks scalability. Furukawa [10] proposed PMVS (patch-based multi-view stereo), which takes multiple pictures of various views of the object to extract feature points, then expands them outward to find more matching points. Furukawa also proposed CMVS (clustering views for multi-view stereo) [12] in 2010, which clusters the numerous input images to improve the scalability of multi-view stereo, using PMVS as the underlying reconstruction engine. RGB-D systems have also been developed due to the advent of RGB-D sensors, such as the Microsoft Kinect.

Research paper thumbnail of An Intelligent Agent Tutor System for Detecting Arabic Children Handwriting Difficulty Based on Immediate Feedback

In this paper, an intelligent tutor application is built for Arabic preschool children, called the Arab Handwritten Children Educator (AHCE). AHCE allows Arab children to practice at any time and anywhere. As an intelligent tutor, the AHCE can automatically check handwriting errors, such as stroke sequence errors, stroke direction errors, stroke position errors, and extra stroke errors. The AHCE provides useful feedback to Arab children to correct their mistakes. In this paper, attributed mathematics and agents are used to locate the handwritten errors. The system applies a fuzzy approach to evaluate Arabic children's handwriting. Experimental results indicate that the proposed intelligent system successfully detects handwriting stroke errors with immediate feedback.
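The fuzzy evaluation step can be sketched with triangular membership functions that map a numeric handwriting score into overlapping quality levels. The cut-points and level names below are illustrative assumptions, not the paper's actual fuzzy sets:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade_handwriting(score):
    """Fuzzy grading of a 0-100 handwriting score into overlapping levels."""
    return {
        "weak": tri(score, -1, 0, 50),
        "average": tri(score, 30, 55, 80),
        "good": tri(score, 60, 100, 101),
    }

def best_level(score):
    # Defuzzify by picking the level with the highest membership.
    grades = grade_handwriting(score)
    return max(grades, key=grades.get)
```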

Research paper thumbnail of CNN for Handwritten Arabic Digits Recognition Based on LeNet-5

In recent years, handwritten digit recognition has been an important area due to its applications in several fields. This work focuses on the recognition part of handwritten Arabic digit recognition, which faces several challenges, including the unlimited variation in human handwriting and the large public databases. The paper provides a deep learning technique that can be effectively applied to recognizing Arabic handwritten digits. LeNet-5, a Convolutional Neural Network (CNN), was trained and tested on the MADBase database (Arabic handwritten digit images), which contains 60000 training and 10000 testing images. A comparison is held among the results, and it is shown that the use of CNN led to significant improvements across different machine-learning classification algorithms.
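The core operation of every LeNet-5 feature map is a small "valid" 2-D convolution (implemented as cross-correlation in CNN frameworks) followed by a nonlinearity. A toy-scale sketch of just that operation; ReLU is used here for simplicity, whereas the original LeNet-5 used squashing activations:

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation, the core of a CNN feature map:
    slide the kernel over the image and take the elementwise dot product."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[j][i] * img[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(ow)] for y in range(oh)]

def relu(fmap):
    # Elementwise nonlinearity applied to the feature map.
    return [[max(0.0, v) for v in row] for row in fmap]
```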

Research paper thumbnail of Fast pattern detection using neural networks and cross correlation in the frequency domain

Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.

ABSTRACTRecently, fast neural networks for object/face detection were presented in . The speed u... more ABSTRACTRecently, fast neural networks for object/face detection were presented in . The speed up factor of these networks based on cross correlation in the frequency domain between the input image and the weights of the hidden layer. But, these equations given in [1-3] for conventional and fast neural networks are not valid for many reasons presented here. In this paper, correct equations for cross correlation in the spatial and frequency domains are presented. Furthermore, correct formulas for the number of computation steps required by conventional and fast neural networks given in [1-3] are introduced. A new formula for the speed up ratio is established. Also, corrections for the equations of fast multi scale object/face detection are given. Moreover, commutative cross correlation is achieved. Simulation results show that sub-image detection based on cross correlation in the frequency domain is faster than classical neural networks.

Research paper thumbnail of Interactive Visualization of Retrieved Information

Interactive visualization of retrieved information has become important for many applications. An information retrieval system returns many results, some more relevant than others, and some not relevant at all. While the use of search engines to retrieve information has grown very substantially, problems remain with information retrieval systems: their interfaces do not help users perceive the precision of the results. It is therefore not surprising that graphical visualizations have been employed in search engines to assist users. The main objective of Internet users is to find the required information with high efficiency and effectiveness. In this paper, we briefly present the role of information visualization in enhancing web information retrieval systems, covering techniques such as tree view, title view, map view, bubble view, and cloud view, and tools such as highlighting and colored query results.

Research paper thumbnail of A COMPARISON BETWEEN TWO DIPHONE-BASED CONCATENATIVE TEXT-TO-SPEECH SYSTEMS FOR ARABIC

Research paper thumbnail of Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The algorithm uses new high-speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Research paper thumbnail of Fast Packet Detection by using High Speed Time Delay Neural Networks

Fast packet detection is very important for overcoming intrusion attacks. In this paper, a new approach for fast packet detection in serial data sequences is presented. The algorithm uses fast time delay neural networks (FTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented FTDNNs is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Research paper thumbnail of Comparative Study among Data Reduction Techniques over Classification Accuracy

International Journal of Computer Applications

Nowadays, healthcare is one of the most critical fields that needs efficient and effective analysis. Data mining provides many techniques and tools that help in obtaining a good analysis of healthcare data. Data classification is a form of data analysis for deducing models. Mining a reduced version of the data, or a lower number of attributes, increases the efficiency of the system while providing almost the same results. In this paper, a comparative study between different data reduction techniques is introduced. The comparison is tested against the accuracy of classification algorithms. The results showed that fuzzy-rough feature selection outperforms rough set attribute selection, gain ratio, correlation-based feature selection, and principal components analysis.
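One of the simpler reduction baselines compared here, correlation-based attribute selection, can be sketched as ranking attributes by absolute Pearson correlation with the class label and keeping the top k. This is an illustrative single-attribute scoring variant, not the exact CFS subset-merit formulation:

```python
def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a constant sequence."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5 or 1.0)

def select_features(rows, labels, k):
    """Keep the indices of the k attributes most correlated (in absolute
    value) with the class label."""
    n_feats = len(rows[0])
    scores = [abs(pearson([r[j] for r in rows], labels)) for j in range(n_feats)]
    ranked = sorted(range(n_feats), key=lambda j: -scores[j])
    return ranked[:k]
```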

Research paper thumbnail of Data Mining Techniques for Medical Applications: A Survey

Data mining has been used to uncover hidden patterns and relations and to summarize data in ways that are useful and understandable in all types of businesses, in order to make predictions about the future. Medical data is among the best-known applications for mining, so in this paper we introduce a survey on how medical data problems, such as dealing with noisy, incomplete, heterogeneous, and intensive data, have been faced, along with the advantages and disadvantages of each approach; finally, we suggest a framework for addressing and overcoming these problems. The theory of fuzzy sets has been recognized as a suitable tool to model several kinds of patterns that can hold in data. In this paper, we are concerned with the development of a general model to discover association rules among items in a (crisp) set of fuzzy transactions. This general model can be particularized in several ways; each particular instance corresponds to a certain kind of pattern and/or repository of data. We describe some applications of this scheme, paying special attention to the discovery of fuzzy association rules. To extract association rules from quantitative data, the dataset at hand must be partitioned into intervals and then converted into Boolean type; fuzzy association rules refine this by handling quantitative data using fuzzy sets. Along with the proposed system, we will use neural network approaches for clustering, classification, statistical analysis, and data modeling.
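The fuzzy-support computation underlying fuzzy association rules can be sketched directly: each quantitative value is fuzzified by a membership function, memberships within a transaction are combined with the min t-norm, and support is the mean over all transactions. The membership shapes and attribute names below are illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_support(transactions, items):
    """Fuzzy support of an itemset: per transaction, take the minimum
    membership over the fuzzy items (t-norm = min), then average over
    all transactions. `items` maps attribute name -> membership function."""
    total = 0.0
    for t in transactions:
        total += min(mf(t[attr]) for attr, mf in items.items())
    return total / len(transactions)
```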

Research paper thumbnail of Adaptive E-Learning System Based On Learning Interactivity

This paper considers the affordances of social networking theories and tools for building new and effective e-learning practices. We argue that "connectivism" (social networking applied to learning and knowledge contexts) can lead to a reconceptualization of learning in which formal, non-formal, and informal learning can be integrated so as to build potentially lifelong learning activities to be experienced in a "personal learning environment". In order to provide a guide for the design, development, and improvement of both personal learning environments and the related learning activities, we provide a knowledge flow model called the Open Social Learning Network (OSLN), a hybrid of the LMS and the personal learning environment (PLE), proposed as an alternative learning technology environment with the potential to leverage the affordances of the Web to improve learning dramatically, highlighting the stages of learning and the related enabling conditions. The derived model is applied in a possible scenario of formal learning in order to show how the learning process can be designed according to the presented theory.