Olarik Surinta - Academia.edu

Papers by Olarik Surinta

Handwritten Character Recognition Using k-Nearest Neighbors Classifier – A Survey

11th International Conference on Computer Vision Theory and Applications

Optimal weighted parameters of ensemble convolutional neural networks based on a differential evolution algorithm for enhancing pornographic image classification

Feature Extraction Efficient for Face Verification Based on Residual Network Architecture

Lecture Notes in Computer Science

DeblurGAN-CNN: Effective Image Denoising and Recognition for Noisy Handwritten Characters

IEEE Access

Many problems can reduce handwritten character recognition performance, such as image degradation, light conditions, low-resolution images, and even the quality of the capture devices. In this research, we focus on noise in the character images that can decrease the accuracy of handwritten character recognition. Many types of noise influence recognition performance, for example, low resolution, Gaussian noise, low contrast, and blur. First, this research proposes a method that learns from noisy handwritten character images and synthesizes clean character images using the robust deblur generative adversarial network (DeblurGAN). Second, we combine the DeblurGAN architecture with a convolutional neural network (CNN), called DeblurGAN-CNN. Subsequently, two state-of-the-art CNN architectures are combined with DeblurGAN, namely DeblurGAN-DenseNet121 and DeblurGAN-MobileNetV2, to address many noise problems and enhance the recognition performance on handwritten character images. Finally, DeblurGAN-CNN can transform noisy characters into clean characters and recognize the clean characters simultaneously. We have evaluated and compared the experimental results of the proposed DeblurGAN-CNN architectures with existing methods on four handwritten character datasets: n-THI-C68, n-MNIST, THI-C68, and THCC-67. For the n-THI-C68 dataset, DeblurGAN-CNN achieved an accuracy above 98% and outperformed the other existing methods. For n-MNIST, the proposed DeblurGAN-CNN achieved an accuracy of 97.59% when AWGN+Contrast noise was applied to the handwritten digits. We also evaluated DeblurGAN-CNN on the THCC-67 dataset; the proposed DeblurGAN-CNN achieved an accuracy of 80.68%, approximately 10% higher than the existing method. Index terms: handwritten character recognition, image denoising, generative adversarial network, DeblurGAN, convolutional neural network.
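
The two-stage design described above (a deblurring generator feeding a CNN classifier) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the tiny generator and classifier, the 64x64 grayscale input, and the 68-class output (borrowed from the THI-C68 naming) are all assumptions.

```python
import torch
import torch.nn as nn

class DeblurGenerator(nn.Module):
    """Toy stand-in for the DeblurGAN generator: maps a noisy image to a cleaned image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class CharClassifier(nn.Module):
    """Small CNN classifier applied to the cleaned image."""
    def __init__(self, num_classes=68):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input
    def forward(self, x):
        h = self.features(x)
        return self.head(h.flatten(1))

class DeblurGANCNN(nn.Module):
    """Chain the two stages: denoise first, then classify the cleaned image."""
    def __init__(self, num_classes=68):
        super().__init__()
        self.generator = DeblurGenerator()
        self.classifier = CharClassifier(num_classes)
    def forward(self, noisy):
        cleaned = self.generator(noisy)
        return cleaned, self.classifier(cleaned)

# Usage: a dummy batch of 64x64 grayscale noisy character images.
noisy = torch.randn(8, 1, 64, 64)
cleaned, logits = DeblurGANCNN()(noisy)
print(cleaned.shape, logits.shape)  # (8, 1, 64, 64), (8, 68)
```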

Vehicle Type Image Dataset (Version 2): VTID2

After creating VTID, the researchers decided to extend the collection process and create a larger dataset, adding further diversity in order to avoid overfitting. The new dataset, called "Vehicle Type Image Dataset 2 (VTID2)", consists of 4,356 image samples separated into five vehicle type classes: 1,230 sedans, 1,240 pick-ups, 680 SUVs, 606 hatchbacks, and 600 other vehicle images.

Ensemble Methods with Deep Convolutional Neural Networks for Plant Leaf Recognition

Recognition of plant leaves and diseases from images is a challenging task in computer vision and machine learning, because various problems directly affect system performance, such as leaf structure, intra-class differences, inter-class shape similarity, image perspective, and even recording time. In this paper, we propose an ensemble convolutional neural network (CNN) method to tackle these issues and improve plant leaf recognition performance. We trained five CNN models: MobileNetV1, MobileNetV2, NASNetMobile, DenseNet121, and Xception, to discover the best CNN-based model. Ensemble methods, namely unweighted average, weighted average, and unweighted majority vote, were then applied to the CNN output probabilities of each model. We evaluated these ensemble CNN methods on a mulberry leaf dataset and two leaf disease datasets: tomato and corn leaf disease. As a result, the individual CNN models show that MobileNetV2 ...
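
The three fusion rules named above (unweighted average, weighted average, and unweighted majority vote over per-model softmax outputs) have a direct NumPy formulation. The sketch below is illustrative only; the random probabilities, example weights, and array shapes are placeholders, not values from the paper.

```python
import numpy as np

# probs: (n_models, n_samples, n_classes) softmax outputs from the individual CNNs.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(5, 4))   # 5 models, 4 samples, 10 classes (dummy data)

def unweighted_average(probs):
    """Average the probabilities over models, then take the argmax per sample."""
    return probs.mean(axis=0).argmax(axis=1)

def weighted_average(probs, weights):
    """Weighted sum of model probabilities; weights are normalised to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1).argmax(axis=1)

def majority_vote(probs):
    """Each model votes with its argmax class; the most frequent class wins."""
    votes = probs.argmax(axis=2)
    n_classes = probs.shape[2]
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)

print(unweighted_average(probs))
print(weighted_average(probs, [0.3, 0.25, 0.2, 0.15, 0.1]))  # example weights, not tuned values
print(majority_vote(probs))
```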

Enhancement of Plant Leaf Disease Classification Based on Snapshot Ensemble Convolutional Neural Network

Plant diseases are one of the most serious issues that can decrease the value and volume of plant goods. It is time-consuming for farmers to discover and identify a disease by observing the leaves of plants, even with specialist scientists and laboratory processes. This study proposes a deep learning approach to address the real-world problems contained in the PlantDoc dataset. The deep learning method aims to classify plant leaf disease images from the PlantDoc dataset. First, four state-of-the-art convolutional neural networks (CNNs), VGG16, MobileNetV2, InceptionResNetV2, and DenseNet201, were proposed to enhance plant leaf disease classification performance. As a result, among the baseline CNN models, DenseNet201 showed the best performance with an accuracy of 67.18%, while the second-best CNN model was InceptionResNetV2 with an accuracy of 61.75%. In addition, data augmentation techniques (rotation, zoom, brightness, cutout, and mixup) were combined in the ...
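
Of the augmentation techniques listed (rotation, zoom, brightness, cutout, mixup), mixup has a particularly simple form: each image and its one-hot label are blended with a randomly chosen partner using a Beta-distributed weight. A minimal NumPy sketch follows; the batch size, image size, and class count are placeholders rather than the PlantDoc configuration.

```python
import numpy as np

def mixup_batch(images, labels, num_classes, alpha=0.2, rng=None):
    """Mixup: blend each image and its one-hot label with a shuffled partner."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                  # mixing weight drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(images))
    one_hot = np.eye(num_classes)[labels]
    mixed_x = lam * images + (1.0 - lam) * images[perm]
    mixed_y = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return mixed_x, mixed_y

# Dummy batch of 64x64 RGB leaf images with a placeholder class count.
x = np.random.rand(16, 64, 64, 3).astype(np.float32)
y = np.random.randint(0, 10, size=16)
mx, my = mixup_batch(x, y, num_classes=10)
print(mx.shape, my.shape)   # (16, 64, 64, 3) (16, 10)
```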

AIWR Dataset

According to the standard of land use codes in the fundamental geographic data set (FGDS) of Thailand [5], land use classification requires analysis and transformation of satellite image data together with field survey data. In this article, the researchers studied only land use for water bodies. The water bodies in this research can be divided into two levels: natural bodies of water (W1) and artificial bodies of water (W2). The aerial image data used in this research had a scale of 1:50 meters, and every aerial image had 650x650 pixels. These images included water bodies of types W1 and W2, as shown in Figure 3a. Ground truth for all aerial images was prepared before being analyzed and interpreted by remote sensing experts, which assured that the water body groupings were correct. An example of ground truth checked by the experts is shown. The ground truth was used for training the deep learning algorithm and also for further evaluation. The aerial images used in this experiment consist of wat...

Multi-Script Handwritten Character Recognition using Feature Descriptors and Machine Learning

Contents: 3.5 Data Collection and Preparation; 3.6 Experimental Results; 3.6.1 Experiments with the kNN Algorithm; 3.6.2 Experiments with the SVM Algorithm; 3.7 Conclusions; 4 Local Gradient Feature Descriptors; 4.1 Local Gradient Feature Descriptors; 4.1.1 Histograms of Oriented Gradients (HOG); 4.1.2 Scale Invariant Feature Transform Descriptor (siftD); 4.2 Handwritten Character Datasets; 4.2.1 Thai Handwritten Dataset; 4.2.2 Bangla Handwritten Dataset; 4.2.3 Latin Handwritten Dataset; 4.3 Experimental Results; 4.3.1 Experiments with the HOG Descriptor; 4.3.2 Experiments with the SIFT Keypoint Descriptor; 4.3.3 Comparison of HOG and siftD to Pixel Intensities; 4.4 Discussion; 5 Local Feature Descriptors and Bags of Visual Words; 5.1 Feature Extraction Methods; 5.1.1 Principal Component Analysis (PCA); 5.1.2 Discrete Cosine Transform (DCT); 5.

The Monk Line Segmentation (MLS) Dataset

Overview: The MLS dataset available from this page consists of 31 handwritten page scans. The dataset contains medieval, historical, and contemporary manuscripts, and its purpose is testing line-segmentation algorithms. The collection contains a wide variety of common problems in handwriting recognition: lines with overlapping ascenders/descenders, slightly rotated scans, and curved baselines. Download: The MLS dataset was collected from the Monk system as of Friday, May 17, 14:15:04 CEST 2013. It was collected by Lambert Schomaker in May 2013 at the Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen. The tar.gz file contains the image dataset for historical manuscripts; for more details, please refer to the README file inside the tar.gz file. The dataset may be downloaded for research use only. © 2013 Copyright. @INPROCEEDINGS{Surinta:2014:ICFHR, author = {O. Surinta and M. Holtkamp and M. F. Karaaba and JP. van...

Fast and Accurate Deep Learning Architecture on Vehicle Type Recognition

Current Applied Science and Technology, 2021

A significant problem in vehicle type recognition arises when people need to search for vehicle data in a video surveillance system but the license plate does not appear in the image. This paper proposes to solve this problem with a deep learning technique called the Convolutional Neural Network (CNN), one of the latest advanced machine learning techniques. In the experiments, the researchers collected two Vehicle Type Image Datasets (VTID I & II), which contained 1,310 and 4,356 images, respectively. The first experiment was performed with 5 CNN architectures (MobileNets, VGG16, VGG19, Inception V3, and Inception V4), and the second experiment with another 5 CNNs (MobileNetV2, ResNet50, Inception ResNet V2, Darknet-19, and Darknet-53), including several data augmentation methods. The results showed that MobileNets, when combined with the brightness augmentation method, significantly outperformed the other CNN architectures, producing the highest accuracy of 95.46%. It was also the fastest model compared to the other CNN networks.
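
As a rough sketch of the kind of pipeline the abstract describes (a MobileNet backbone trained with brightness augmentation for five vehicle classes), the Keras code below builds such a setup. The image size, brightness range, training schedule, and the `vtid2/` directory layout are assumptions, not the paper's actual configuration.

```python
import tensorflow as tf

NUM_CLASSES = 5   # sedan, pick-up, SUV, hatchback, other (per the VTID2 description)

# Brightness augmentation during loading; the range is an assumed value.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    brightness_range=[0.6, 1.4],
    validation_split=0.2,
)

# MobileNet backbone with a small classification head on top.
backbone = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3)
)
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: "vtid2/" with one sub-folder per vehicle class.
train = datagen.flow_from_directory(
    "vtid2/", target_size=(224, 224), batch_size=32, subset="training"
)
model.fit(train, epochs=10)
```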

Develop the Framework Conception for Hybrid Indoor Navigation for Monitoring inside Building using Quadcopter

2019 14th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP), 2019

Building security is crucial, but guards and CCTV may be inadequate for monitoring all areas. A quadcopter (drone) with manual and autonomous control was used in a trial mission in this project. Generally, all drones can stream live video and take photos; they can also be adapted to assist better decision-making in emergencies that occur inside a building. In this paper, we show how to improve a quadcopter's ability to fly indoors, detect obstacles, and react appropriately. The paper presents a new conceptual framework of a hybrid indoor navigation ontology that analyzes a regular indoor route, including detection and avoidance of obstacles for the auto-pilot. An experiment with the system demonstrates improvements in building surveillance and in maintaining real-time situational awareness. The immediate objective is to show that the drone can serve as a reliable tool in security operations in a building environment.

Handwritten Character Classification using the Hotspot Feature Extraction Technique

Feature extraction techniques can be important in character recognition because they can enhance recognition efficacy in comparison to featureless or pixel-based approaches. This study investigates a novel feature extraction technique called the hotspot technique in order to use it for representing handwritten characters and digits. In the hotspot technique, the distance values between the closest black pixels and the hotspots in each direction are used as the representation of a character. The hotspot technique is applied to three datasets: Thai handwritten characters (65 classes), Bangla numerals (10 classes), and MNIST (10 classes). The hotspot technique has two parameters: the number of hotspots and the number of chain code directions. The datasets are then classified by the k-Nearest Neighbors algorithm using the Euclidean distance as the distance function between data points. In this study, the classification rates obtained f...
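
A hedged sketch of the hotspot idea as described above: place hotspots on a regular grid and, for each chain-code direction, record the distance from the hotspot to the nearest black pixel. The 3x3 grid, eight directions, and the distance cap below are illustrative choices, not the parameter values tuned in the study.

```python
import numpy as np

# Chain-code directions (8-connected): E, NE, N, NW, W, SW, S, SE.
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def hotspot_features(binary_img, grid=(3, 3), max_dist=None):
    """For each hotspot on a regular grid, measure the distance to the nearest
    black (foreground, value 1) pixel along each chain-code direction."""
    h, w = binary_img.shape
    max_dist = max_dist or max(h, w)
    rows = np.linspace(0, h - 1, grid[0] + 2, dtype=int)[1:-1]   # interior grid rows
    cols = np.linspace(0, w - 1, grid[1] + 2, dtype=int)[1:-1]   # interior grid columns
    feats = []
    for r in rows:
        for c in cols:
            for dr, dc in DIRECTIONS:
                d, rr, cc = 0, r, c
                while 0 <= rr < h and 0 <= cc < w and d < max_dist:
                    if binary_img[rr, cc] == 1:
                        break
                    rr, cc, d = rr + dr, cc + dc, d + 1
                feats.append(d)
    return np.array(feats, dtype=float)

# Toy 28x28 "character": a vertical stroke through the middle column.
img = np.zeros((28, 28), dtype=int)
img[4:24, 14] = 1
print(hotspot_features(img).shape)   # 3*3 hotspots * 8 directions = (72,)
```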

Evaluating automatically parallelized versions of the support vector machine

Concurrency and Computation: Practice and Experience, 2014

A Comparison of Feature and Pixel-Based Methods for Recognizing Handwritten Bangla Digits

2013 12th International Conference on Document Analysis and Recognition, 2013

We propose a novel handwritten character recognition method for isolated handwritten Bangla digits. A feature is introduced for such patterns, the contour angular technique. It is compared to other methods, such as the hotspot feature, the gray-level normalized character image, and a basic low-resolution pixel-based method. One of the goals of this study is to explore the performance differences between dedicated feature methods and pixel-based methods. The four methods are compared with support vector machine (SVM) classifiers on a collection of handwritten Bangla digit images. The results show that the fast contour angular technique outperforms the other techniques when not very many training examples are used. The fast contour angular technique captures aspects of the curvature of the handwritten image and results in much faster character classification than the gray pixel-based method. Still, this feature obtains similar recognition performance compared to the gray pixel-based method when a large training set is used. To investigate further whether the different feature methods represent complementary aspects of shape, the effect of majority voting is explored. The results indicate that the majority voting method achieves the best recognition performance on this dataset.

A Path Planning for Line Segmentation of Handwritten Documents

2014 14th International Conference on Frontiers in Handwriting Recognition, 2014

This paper describes the use of a novel A* path-planning algorithm for performing line segmentation of handwritten documents. The novelty of the proposed approach lies in the use of a smart combination of simple soft cost functions that allows an artificial agent to compute paths separating the upper and lower text fields. The use of soft cost functions enables the agent to compute near-optimal separating paths even if the upper and lower text parts overlap in particular places. We have performed experiments on the Saint Gall and Monk line segmentation (MLS) datasets. The experimental results show that our proposed method performs very well on the Saint Gall dataset and also demonstrate that our algorithm is able to cope well with the much more complicated MLS dataset.
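
A small Python sketch of the soft-cost idea: an A* search over the page grid looks for a left-to-right separating path, where crossing ink and drifting away from the starting row are penalised softly rather than forbidden. The specific cost terms and penalty weights here are illustrative assumptions, not the paper's actual cost functions.

```python
import heapq
import numpy as np

def astar_line_path(binary_img, start_row, ink_penalty=50.0, deviation_penalty=0.5):
    """Find a left-to-right separating path with A*; the cost softly penalises
    crossing ink pixels and drifting away from the starting row."""
    h, w = binary_img.shape
    start, goal_col = (start_row, 0), w - 1
    frontier = [(0.0, start)]
    g = {start: 0.0}
    came_from = {}
    moves = [(0, 1), (-1, 1), (1, 1), (-1, 0), (1, 0)]   # rightwards and vertical moves
    while frontier:
        _, (r, c) = heapq.heappop(frontier)
        if c == goal_col:
            path = [(r, c)]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            step = 1.0
            step += ink_penalty * binary_img[nr, nc]          # soft cost: crossing ink is allowed but expensive
            step += deviation_penalty * abs(nr - start_row)   # soft cost: stay close to the starting row
            new_g = g[(r, c)] + step
            if new_g < g.get((nr, nc), np.inf):
                g[(nr, nc)] = new_g
                came_from[(nr, nc)] = (r, c)
                heapq.heappush(frontier, (new_g + (goal_col - nc), (nr, nc)))  # heuristic: columns remaining
    return None

# Toy page: two horizontal text bands with a gap around row 10.
page = np.zeros((20, 40), dtype=int)
page[4:8, :] = 1
page[13:17, :] = 1
print(astar_line_path(page, start_row=10)[:5])
```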

Comparison of Image Analysis for Thai Handwritten Character Recognition

IFIP International Federation for Information Processing

This paper proposes methods for Thai handwritten character recognition: the Robust C-Prototype and the Back-Propagation Neural Network. The objective of the experiments is the recognition of Thai handwritten characters. Both methods achieve an accuracy of more than 85%.

Optimization of line segmentation techniques for Thai handwritten documents

2009 Eighth International Symposium on Natural Language Processing, 2009

The purpose of this research is to study the optimization of line segmentation techniques for Thai handwritten documents. The research considered only single-column Thai documents. The author proposed two new techniques: comparing Thai characters, and sorting and distinguishing. These two techniques were used together with established techniques based on the projection profile (including the horizontal projection profile and stripes).
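
As a reference point for the projection-profile family of techniques mentioned above, the following NumPy sketch segments lines by finding runs of ink-free rows in the horizontal projection profile. It is a generic illustration, not the author's stripe-based variant.

```python
import numpy as np

def segment_lines(binary_img, min_gap=2):
    """Split a page into text-line bands using the horizontal projection profile:
    a run of at least `min_gap` ink-free rows separates two lines."""
    profile = binary_img.sum(axis=1)          # ink pixels per row
    lines, start, blank_run = [], None, 0
    for r, ink in enumerate(profile):
        if ink > 0:
            if start is None:
                start = r
            blank_run = 0
        elif start is not None:
            blank_run += 1
            if blank_run >= min_gap:
                lines.append((start, r - blank_run + 1))
                start, blank_run = None, 0
    if start is not None:                     # page ends inside a text band
        lines.append((start, len(profile)))
    return lines

# Toy page with two text bands.
page = np.zeros((30, 50), dtype=int)
page[3:9, 5:45] = 1
page[15:22, 5:45] = 1
print(segment_lines(page))   # [(3, 9), (15, 22)]
```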

Image Segmentation of Historical Handwriting from Palm Leaf Manuscripts

IFIP – The International Federation for Information Processing

Palm leaf manuscripts were one of the earliest forms of written media and were used in Southeast Asia to store early written knowledge about subjects such as medicine, Buddhist doctrine, and astrology. Historical handwritten palm leaf manuscripts are therefore important for people who want to learn about historical documents, because much knowledge can be gained from them. This paper presents image segmentation of historical handwriting from palm leaf manuscripts. The process is composed of three steps: 1) background elimination to separate text from background using Otsu's algorithm, 2) line segmentation, and 3) character segmentation using image histograms. The end result is the individual character images. The results of this research may be applied to optical character recognition (OCR) in the future.
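
The background elimination step names Otsu's algorithm, which picks the grayscale threshold that maximises the between-class variance of the histogram. Below is a small, self-contained NumPy sketch on toy data; it illustrates the standard method rather than the paper's exact preprocessing.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximising between-class variance
    of the grayscale histogram (background vs. handwriting)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t

# Toy grayscale palm-leaf crop: bright background (~200) with darker ink strokes (~60).
img = np.full((40, 80), 200, dtype=np.uint8)
img[10:30, 20:25] = 60
t = otsu_threshold(img)
text_mask = img < t            # pixels darker than the threshold are treated as handwriting
print(t, text_mask.sum())
```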
