Ayşegül Uçar - Academia.edu

Papers by Ayşegül Uçar

Development of a deep wavelet pyramid scene parsing semantic segmentation network for scene perception in indoor environments

Journal of Ambient Intelligence and Humanized Computing

New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration

Cluster Computing, 2021

As the environments in which humans live are complex and uncontrolled, object manipulation with humanoid robots is regarded as one of the most challenging tasks. Learning from Demonstration (LfD) is one of the most popular approaches to teaching manipulation skills in the artificial intelligence and robotics community. This paper introduces a deep learning based teleoperation system for humanoid robots that imitates the human operator's object manipulation behavior. One of the fundamental problems in LfD is to approximate, with high accuracy, the robot trajectories obtained from human demonstrations. The work introduces novel models based on Convolutional Neural Networks (CNNs), hybrid CNN-Long Short-Term Memory (CNN-LSTM) models, and their scaled variants for object manipulation with humanoid robots using LfD. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. The data are first collected by teleoperating a real Robotis-Op3 humanoid robot, and the models are trained. Trajectory estimation is then carried out autonomously on the humanoid robot by the trained CNN and CNN-LSTM models. All trajectories relating to the joint positions are finally generated from the model outputs. The six models are compared to each other and to the real trajectories in terms of training and validation loss, parameter number, and training and testing time. Extensive experimental results show that the proposed CNN models learn the joint positions well, and that the hybrid CNN-LSTM models in the proposed teleoperation system in particular exhibit more accurate and stable results.
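As a rough illustration of the hybrid architecture described in the abstract (not the paper's actual network; every layer size, window length, and weight here is a hypothetical stand-in), the sketch below shows in plain NumPy how a 1-D convolution can extract local features from a window of past joint angles and how an LSTM cell can consume those features in sequence to regress the next shoulder-roll position:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """'Valid' 1-D convolution: extracts local features from a joint-angle window."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def lstm_step(x_t, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations packed as [input, forget, cell, output]."""
    z = W @ x_t + U @ h + b
    n = len(h)
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    g = np.tanh(z[2 * n:3 * n])           # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes: a window of 16 past shoulder-roll angles, kernel width 3, hidden size 8.
window = rng.standard_normal(16)
features = conv1d(window, rng.standard_normal(3))     # shape (14,)

H = 8
W = rng.standard_normal((4 * H, 1))
U = rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for f_t in features:                                  # LSTM consumes CNN features in order
    h, c = lstm_step(np.array([f_t]), h, c, W, U, b)

w_out = rng.standard_normal(H)
next_angle = w_out @ h                                # regression head: next joint position
print(features.shape, h.shape, float(next_angle))
```

In a trained system, `W`, `U`, `b`, the kernel, and `w_out` would be fitted to the teleoperation data rather than drawn at random; the sketch only shows how the CNN stage and the LSTM stage compose.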

A Review of Deep Reinforcement Learning Algorithms and Comparative Results on Inverted Pendulum System

Learning and Analytics in Intelligent Systems, 2020

The inverted pendulum, one of the classical control problems, is important for many areas ranging from autonomous vehicles to robotics. This chapter presents the use of deep reinforcement learning algorithms to control the cart-pole balancing problem. The first part of the chapter reviews the theory of deep reinforcement learning methods such as Deep Q Networks (DQN), DQN with Prioritized Experience Replay (DQN+PER), Double DQN (DDQN), Double Dueling Deep-Q Network (D3QN), REINFORCE, Asynchronous Advantage Actor-Critic (A3C), and Synchronous Advantage Actor-Critic (A2C). Then, the cart-pole balancing problem in the OpenAI Gym environment is considered to implement the deep reinforcement learning methods. Finally, the performance of all methods is compared on the cart-pole balancing problem. The results are presented in tables and figures.
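The two ingredients that distinguish DQN from plain Q-learning, experience replay and a frozen target network, can be sketched without an environment. The following toy (linear "networks" over a 4-dimensional cart-pole-like state with 2 actions, random transitions; all sizes hypothetical and unrelated to the chapter's experiments) shows the replay buffer, the Bellman target computed from the target network, and the periodic hard update:

```python
import random
from collections import deque
import numpy as np

rng = np.random.default_rng(1)
random.seed(1)

GAMMA = 0.99
buffer = deque(maxlen=10000)        # experience replay: breaks correlation between samples

# Toy linear Q-functions: Q(s, a) = (W @ s)[a], 4-dim state, 2 actions.
W_online = rng.standard_normal((2, 4)) * 0.1
W_target = W_online.copy()          # target network: frozen copy, synced periodically

def q_values(W, s):
    return W @ s

# Fill the buffer with random (s, a, r, s', done) transitions.
for _ in range(100):
    s, s2 = rng.standard_normal(4), rng.standard_normal(4)
    buffer.append((s, int(rng.integers(2)), 1.0, s2, False))

batch = random.sample(buffer, 32)   # uniform minibatch from replay memory
lr = 0.01
for s, a, r, s2, done in batch:
    # DQN target: bootstrap from the *target* network for stability.
    y = r if done else r + GAMMA * np.max(q_values(W_target, s2))
    td_error = y - q_values(W_online, s)[a]
    W_online[a] += lr * td_error * s  # SGD step on the squared TD error
W_target = W_online.copy()           # periodic hard update of the target network
print(W_online.shape)
```

DQN+PER would replace the uniform `random.sample` with sampling proportional to `|td_error|`, and DDQN would select the argmax action with `W_online` while evaluating it with `W_target`.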

Learning to Move an Object by the Humanoid Robots by Using Deep Reinforcement Learning

Intelligent Environments 2021, 2021

This paper proposes an algorithm by which humanoid robots learn to move a desired object. In this algorithm, a semantic segmentation algorithm and Deep Reinforcement Learning (DRL) algorithms are combined. The semantic segmentation algorithm is used to detect and recognize the object to be moved. DRL algorithms are used in the walking and grasping steps. A Deep Q Network (DQN) is used to walk towards the target object by means of previously defined actions at the gait manager and the different head positions of the robot. A Deep Deterministic Policy Gradient (DDPG) network is used for grasping by means of continuous actions. Previously defined commands are finally issued for the robot to stand up, turn to the left, and move forward together with the object. In the experimental setup, the Robotis-Op3 humanoid robot is used. The obtained results show that the proposed algorithm works successfully.

New convolutional neural network models for efficient object recognition with humanoid robots

Journal of Information and Telecommunication, 2021

Humanoid robots are expected to manipulate objects they have not previously seen in real-life environments. Hence, it is important that robots have object recognition capability. However, real-time object recognition across different locations and object positions is still a challenging problem. The current paper presents four novel small-structure models based on Convolutional Neural Networks (CNNs) for object recognition with humanoid robots. In the proposed models, a few combinations of convolutions are used to recognize the class labels. The models are first tested on the MNIST and CIFAR-10 benchmark datasets, and their performance is shown by comparison to the best state-of-the-art models. The models are then applied on the Robotis-Op3 humanoid robot to recognize objects of different shapes. The results are compared to those of models such as VGG-16 and Residual Network-20 (ResNet-20) in terms of training and validation accuracy and loss, parameter number, and training time. The experimental results show that the proposed models achieve highly accurate recognition with fewer parameters and shorter training time than complex models. Consequently, the proposed models can be considered promising, powerful models for object recognition with humanoid robots.
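The parameter-number comparison at the heart of the abstract is easy to make concrete. The sketch below (layer sizes are hypothetical, chosen only for illustration, and the "VGG-style pair" is a single block pair, not the full VGG-16) counts the weights and biases of a small convolutional stack against one VGG-style convolution pair:

```python
def conv_params(c_in, c_out, k):
    """Weights + biases of one k x k convolution layer: c_out * (c_in * k^2 + 1)."""
    return c_out * (c_in * k * k + 1)

def dense_params(n_in, n_out):
    """Weights + biases of one fully connected layer."""
    return n_out * (n_in + 1)

# Hypothetical small model: three 3x3 conv layers, then a 10-way classifier head
# applied after global pooling to 64 features.
small = (conv_params(3, 16, 3) + conv_params(16, 32, 3)
         + conv_params(32, 64, 3) + dense_params(64, 10))

# One VGG-style 3x3 block pair (3 -> 64 -> 64 channels) for contrast.
vgg_pair = conv_params(3, 64, 3) + conv_params(64, 64, 3)

print(small, vgg_pair)   # the single VGG-style pair already exceeds the whole small model
```

Even this one block pair (38,720 parameters) outweighs the entire small stack (24,234 parameters), which is the kind of gap that translates into the shorter training times reported in the paper.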

Gesture imitation and recognition using Kinect sensor and extreme learning machines

Measurement, 2016

This study presents a framework that recognizes and imitates human upper-body motions in real time. The framework consists of two parts. In the first part, a transformation algorithm is applied to 3D human motion data captured by a Kinect, and the data are then converted into the robot's joint angles by the algorithm. The human upper-body motions are successfully imitated by the NAO humanoid robot in real time. In the second part, a human action recognition algorithm is implemented for upper-body gestures. A human action dataset is also created for the upper-body movements: each action is performed 10 times by twenty-four users, and the collected joint angles are divided into six action classes. Extreme Learning Machines (ELMs) are used to classify the human actions, with Feed-Forward Neural Networks (FNNs) and K-Nearest Neighbor (K-NN) classifiers used for comparison. According to the comparative results, ELMs produce good human action recognition performance.
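An ELM classifier is simple enough to sketch in a few lines: input weights are drawn at random and never trained, and the output weights are solved in closed form by a least-squares fit (pseudoinverse). The data below are random stand-ins for joint-angle features, not the paper's dataset, and the hidden size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """ELM training: random input weights, output weights solved in closed form."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # random nonlinear feature map
    beta = np.linalg.pinv(H) @ Y      # least-squares output weights (no iteration)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for joint-angle features of six gesture classes.
X = rng.standard_normal((240, 12))
labels = rng.integers(0, 6, 240)
Y = np.eye(6)[labels]                 # one-hot targets
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == labels).mean()
print(acc)
```

The absence of iterative training on the hidden layer is what gives ELMs the fast training times that make them attractive for real-time gesture recognition.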

A Brief Survey and an Application of Semantic Image Segmentation for Autonomous Driving

Handbook of Deep Learning Applications, 2019

Deep learning is a fast-growing machine learning approach for perceiving and understanding large amounts of data. In this paper, general information is given about the deep learning approach, which has attracted much attention in the field of machine learning in recent years, and an application of semantic image segmentation is carried out to help the autonomous driving of autonomous vehicles. The application is implemented with Fully Convolutional Network (FCN) architectures obtained by modifying deep-learning-based Convolutional Neural Network (CNN) architectures. The experimental studies employ four different FCN architectures: FCN-AlexNet, FCN-8s, FCN-16s, and FCN-32s. The FCNs are first trained separately, and the validation accuracies of the trained network models on the dataset are compared. In addition, image segmentation inferences are visualized to assess how precisely the FCN architectures can segment objects.

Support Vector Machines as Zero Order and First Order Adaptive Fuzzy Inference Systems and Their Applications on System Identification

A New Formulation for Classification by Ellipsoids

Lecture Notes in Computer Science, 2006

We propose a new formulation for optimal separation problems. This robust formulation is based on finding the minimum volume ellipsoid covering the points belonging to the class. The idea is to separate by ellipsoids in the input space without mapping data to a high ...
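The minimum-volume enclosing ellipsoid that the formulation is built on can be computed by Khachiyan's classical algorithm. The sketch below is a generic implementation of that algorithm, not the paper's own formulation, and the four test points are an arbitrary symmetric example:

```python
import numpy as np

def mvee(P, tol=1e-4):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid
    (x - c)^T A (x - c) <= 1 of the point set P (rows are points)."""
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])   # lift points to homogeneous coordinates
    u = np.full(n, 1.0 / n)            # weights over points, updated multiplicatively
    err = tol + 1
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        # support values M_i = q_i^T X^{-1} q_i for every lifted point
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)
        step = (M[j] - d - 1) / ((d + 1) * (M[j] - 1))
        new_u = (1 - step) * u
        new_u[j] += step               # shift weight toward the worst-covered point
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                        # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c

pts = np.array([[0., 0.], [2., 0.], [1., 1.], [1., -1.]])
A, c = mvee(pts)
# Every point should satisfy the ellipsoid inequality (up to tolerance).
inside = [(p - c) @ A @ (p - c) <= 1 + 1e-2 for p in pts]
print(c, all(inside))
```

For these four symmetric points the optimal ellipsoid is the unit circle centered at (1, 0); a classifier then labels a query point by whether it falls inside the class ellipsoid.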

Optimal Design of a Radial Basis Function Network for Extracting Fuzzy Rules, and the Method's Application to Motor Fault Detection and Diagnosis …

emo.org.tr

... Ayşegül UÇAR, Yakup DEMİR, Fırat University, Department of Electrical and Electronics Engineering, ELAZIĞ, agulucar@firat.edu.tr, ydemir@firat.edu.tr ... In a fuzzy inference system, if f(x,y) is a zero-order polynomial, the result is a zero-order Sugeno fuzzy inference system; if it is a first-order polynomial, a first-order ...

A. Uçar and E. Yavşan, Behavior learning of a memristor-based chaotic circuit by extreme learning machines, Turkish Journal of Electrical Engineering & Computer Sciences, pp. 1-37, doi: 10.3906/elk-1304-248

As the behavior of a chaotic Chua's circuit is non-stationary and inherently noisy, it is regarded as one of the most challenging applications. In predicting the behavior of a chaotic Chua's circuit, one of the fundamental problems is to model the circuit with high accuracy. This paper presents a novel method based on multiple Extreme Learning Machine (ELM) models to learn the chaotic behavior of the four-element canonical Chua's circuit, containing a memristor instead of a nonlinear resistor, using only the state variables as input. In the proposed method, four ELM models are used to estimate the state variables of the circuit. The ELMs are first trained using noise-corrupted data obtained from MATLAB models of the memristor and Chua's circuit. Multi-step-ahead prediction is then carried out by the trained ELMs in autonomous mode. All attractors of the circuit are finally reconstructed from the outputs of the models. The results of four E...
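The "autonomous mode" mentioned above means the trained models run in a free loop: each predicted state is fed back as the next input, with no measured data after the initial condition. The sketch below shows that feedback loop only; for the predictor it substitutes a known stable linear map rather than a trained ELM or the actual Chua dynamics, so the closed-loop behavior stays easy to verify:

```python
import numpy as np

# A trained one-step predictor would stand here; for the sketch we use a known
# stable linear map so the closed loop is well-behaved (eigenvalues 0.9 +/- 0.1i).
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
model = lambda x: A_true @ x           # stand-in for a trained ELM

def free_run(model, x0, n_steps):
    """Autonomous multi-step-ahead prediction: feed each output back as the next input."""
    traj = [x0]
    for _ in range(n_steps):
        traj.append(model(traj[-1]))
    return np.array(traj)

traj = free_run(model, np.array([1.0, 0.0]), 50)
print(traj.shape, float(np.linalg.norm(traj[-1])))
```

In the chaotic setting this loop is far less forgiving: small one-step errors compound exponentially, which is why reconstructing whole attractors in autonomous mode is a strong test of model accuracy.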

Object Detection on FPGAs and GPUs by Using Accelerated Deep Learning

2019 International Artificial Intelligence and Data Processing Symposium (IDAP)

Object detection and recognition is one of the main tasks in many areas such as autonomous unmanned ground vehicles, robotics, and medical image processing. Recently, deep learning has been used by many researchers in these areas when the amount of data is large. In particular, Convolutional Neural Networks (CNNs), one of the most up-to-date deep learning structures, have achieved great success in this field. Real-time work with CNNs is typically carried out on Graphics Processing Units (GPUs). Although GPUs provide high stability, they require high power, incur high energy consumption, and impose a large computational load. To overcome this problem, Field Programmable Gate Arrays (FPGAs) have started to be used. In this article, object detection and recognition were performed using the ZYNQ XC7Z020 development board, which includes both an ARM processor and an FPGA. Real-time object recognition was carried out with the Movidius USB-GPU externally plugged into the FPGA. The results are given with figures.

Development of Deep Learning Algorithm for Humanoid Robots to Walk to the Target Using Semantic Segmentation and Deep Q Network

2020 Innovations in Intelligent Systems and Applications Conference (ASYU)

In this article, a new algorithm incorporating a deep semantic segmentation algorithm and a deep reinforcement learning algorithm is proposed for obstacle avoidance. The work consists of two parts. The first part performs semantic segmentation using mini-Unet, by which the objects are detected and recognized. In the second part, a Deep Q Network (DQN) is used so that humanoid robots learn to walk to the target. The obtained results confirmed the performance of the proposed algorithm.

Semantic Image Segmentation for Autonomous Driving Using Fully Convolutional Networks

2019 International Artificial Intelligence and Data Processing Symposium (IDAP)

In this paper, an application of semantic image segmentation is implemented to support the autonomous driving of autonomous vehicles using deep learning based methods. The application is performed with Fully Convolutional Network (FCN) architectures obtained by modifying Convolutional Neural Network (CNN) architectures. SYNTHIA-San Francisco (SF) is used as the dataset in the experimental studies. The experimental studies are conducted using the FCN architectures FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s. Considering these architectures and this dataset, the study is carried out for the first time in the literature. The validation accuracies of the network models are compared on the dataset. In addition, segmentation inferences are visualized so that the segmentation precision of the FCN architectures can be observed. The experimental results show that FCNs are suitable for segmentation applications that can assist the autonomous driving of autonomous vehicles, and it is hoped that these results will contribute to the literature and to researchers working on autonomous driving.

Semantic Segmentation for Object Detection and Grasping with Humanoid Robots

2020 Innovations in Intelligent Systems and Applications Conference (ASYU), 2020

Humanoid robots are expected to support humans in environments such as houses, hospitals, and hotels. For this aim, humanoid robots should detect and recognize objects. In this article, semantic segmentation algorithms are proposed for this aim, using different networks from the literature. The simulation results are compared in terms of accuracy, segmentation performance coefficients, and parameter number. The obtained results show that semantic segmentation performs well at this task for humanoid robots.

Implementation of Object Detection and Recognition Algorithms on a Robotic Arm Platform Using Raspberry Pi

2018 International Conference on Artificial Intelligence and Data Processing (IDAP), 2018

In this paper, object detection and recognition algorithms are implemented for a robotic arm platform. With these algorithms, the objects to be grasped by the gripper of the robotic arm are recognized and located. In the established experimental setup, an OWI-535 robotic arm with 4 DOF and a gripper, similar to the robotic arms used in industry, is preferred. Local feature-based algorithms such as SIFT, SURF, FAST, and ORB are used on the images captured via the camera to detect and recognize the target object to be grasped by the gripper. These algorithms are implemented in object recognition and localization software written in the C++ programming language using the OpenCV library, and the software runs on the Raspberry Pi embedded Linux platform. In the experimental studies, the performance of the features extracted with the SIFT, SURF, FAST, and ORB algorithms is compared. This study, the first implemented with the OWI-535 robotic arm, shows that local feature-based algorithms are suitable for educational and industrial applications.

Recognition of Real-World Texture Images Under Challenging Conditions With Deep Learning

Journal of Intelligent Systems with Applications, 2018

Images obtained from real-world environments usually have various distortions in image quality. For example, when an object in motion is filmed, or when an environment is filmed on the move, motion-tracking effects occur in the image. Increasing the recognition performance of expert systems that perform image recognition on data obtained under such conditions is an important research area. In this study, we propose a Convolutional Neural Network (CNN) based Deep System Model (CNN-DSM) for accurate classification of images under challenging conditions. In the proposed model, a new layer is designed in addition to the classical CNN layers; this layer works as an enhancement layer. For the performance evaluations, various real-world surface images were selected from the CUReT database. Finally, the results are presented and discussed.

Deep Convolutional Neural Networks for facial expression recognition

2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2017

Facial expression recognition is a very active research topic due to its potential applications in many fields such as human-robot interaction, human-machine interfaces, driving safety, and healthcare. Despite significant improvements, facial expression recognition is still a challenging problem that awaits more and more accurate algorithms. This article presents a new model capable of recognizing facial expressions using a deep Convolutional Neural Network (CNN). The CNN model is generated using Caffe in the DIGITS environment. Moreover, it is trained and tested on the NVIDIA Tegra TX1 embedded development platform, which includes a Graphics Processing Unit (GPU) with 250 CUDA cores and a quad-core ARM Cortex-A57 processor. The proposed model is applied to the facial expression problem on two publicly available expression databases, the JAFFE database and the Cohn-Kanade database.

End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot

2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2020

Humanoid robots are deployed in environments ranging from houses and hotels to healthcare and industry in order to help people. Robots can easily be programmed by users to perform predefined tasks such as walking, grasping, standing up, and hand shaking. Nowadays, however, robots are expected to learn by themselves from experience obtained by watching the environment and the people in it. In this study, the aim is for the Robotis-Op3 humanoid robot to grasp objects by learning from vision-based demonstrations, and a new algorithm is proposed for this purpose. First, the robot is manipulated by user commands and raw images from the camera of the Robotis-Op3 are collected. Second, a semantic segmentation algorithm is applied to detect and recognize the objects. A new model using Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) is then proposed to learn the user demonstrations. The results were compared in terms of training time, performance, and model complexity. Simulation results showed that the new models produced high performance for object manipulation.

Fast Object Recognition for Humanoid Robots by Using Deep Learning Models with Small Structure

2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2020

Nowadays, humanoid robots are expected to help people in healthcare, houses and hotels, industry, the military, and other security environments by performing specific tasks, or to replace people in dangerous scenarios. For this purpose, humanoid robots should be able to recognize objects and then perform the desired tasks. In this study, the aim is for the Robotis-Op3 humanoid robot to recognize differently shaped objects with deep learning methods. First of all, new Convolutional Neural Network (CNN) models with small structure were proposed. Then, popular deep neural network models such as VGG16 and Residual Network (ResNet), which are good at object recognition, were used for comparison in recognizing the objects. The results were compared in terms of training time, performance, and model complexity. Simulation results show that the new models with small layer structure produced higher performance than the complex models.

Research paper thumbnail of Development of a deep wavelet pyramid scene parsing semantic segmentation network for scene perception in indoor environments

Journal of Ambient Intelligence and Humanized Computing

Research paper thumbnail of New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration

Cluster Computing, 2021

As the environments that human live are complex and uncontrolled, the object manipulation with hu... more As the environments that human live are complex and uncontrolled, the object manipulation with humanoid robots is regarded as one of the most challenging tasks. Learning a manipulation skill from human Demonstration (LfD) is one of the popular methods in the artificial intelligence and robotics community. This paper introduces a deep learning based teleoperation system for humanoid robots that imitate the human operator's object manipulation behavior. One of the fundamental problems in LfD is to approximate the robot trajectories obtained by means of human demonstrations with high accuracy. The work introduces novel models based on Convolutional Neural Networks (CNNs), CNNs-Long Short-Term Memory (LSTM) models combining the CNN LSTM models, and their scaled variants for object manipulation with humanoid robots by using LfD. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. The data are first collected in terms of teleoperation of a real Robotis-Op3 humanoid robot and the models are trained. The trajectory estimation is then carried out by the trained CNNs and CNN-LSTM models on the humanoid robot in an autonomous way. All trajectories relating the joint positions are finally generated by the model outputs. The results relating to the six models are compared to each other and the real ones in terms of the training and validation loss, the parameter number, and the training and testing time. Extensive experimental results show that the proposed CNN models are well learned the joint positions and especially the hybrid CNN-LSTM models in the proposed teleoperation system exhibit a more accuracy and stable results.

Research paper thumbnail of A Review of Deep Reinforcement Learning Algorithms and Comparative Results on Inverted Pendulum System

Learning and Analytics in Intelligent Systems, 2020

The control of inverted pendulum problem that is one of the classical control problems is importa... more The control of inverted pendulum problem that is one of the classical control problems is important for many areas from autonomous vehicles to robotic. This chapter presents the usage of the deep reinforcement learning algorithms to control the cart-pole balancing problem. The first part of the chapter reviews the theories of deep reinforcement learning methods such as Deep Q Networks (DQN), DQN with Prioritized Experience Replay (DQN+PER), Double DQN (DDQN), Double Dueling Deep-Q Network (D3QN), Reinforce, Asynchronous Advanced Actor Critic Asynchronous (A3C) and Synchronous Advantage Actor-Critic (A2C). Then, the cart-pole balancing problem in OpenAI Gym environment is considered to implement the deep reinforcement learning methods. Finally, the performance of all methods are comparatively given on the cart-pole balancing problem. The results are presented by tables and figures.

Research paper thumbnail of Learning to Move an Object by the Humanoid Robots by Using Deep Reinforcement Learning

Intelligent Environments 2021, 2021

This paper proposes an algorithm for learning to move the desired object by humanoid robots. In t... more This paper proposes an algorithm for learning to move the desired object by humanoid robots. In this algorithm, the semantic segmentation algorithm and Deep Reinforcement Learning (DRL) algorithms are combined. The semantic segmentation algorithm is used to detect and recognize the object be moved. DRL algorithms are used at the walking and grasping steps. Deep Q Network (DQN) is used to walk towards the target object by means of the previously defined actions at the gate manager and the different head positions of the robot. Deep Deterministic Policy Gradient (DDPG) network is used for grasping by means of the continuous actions. The previously defined commands are finally assigned for the robot to stand up, turn left side and move forward together with the object. In the experimental setup, the Robotis-Op3 humanoid robot is used. The obtained results show that the proposed algorithm has successfully worked.

Research paper thumbnail of New convolutional neural network models for efficient object recognition with humanoid robots

Journal of Information and Telecommunication, 2021

Humanoid robots are expected to manipulate the objects they have not previously seen in real-life... more Humanoid robots are expected to manipulate the objects they have not previously seen in real-life environments. Hence, it is important that the robots have the object recognition capability. However, object recognition is still a challenging problem at different locations and different object positions in real time. The current paper presents four novel models with small structure, based on Convolutional Neural Networks (CNNs) for object recognition with humanoid robots. In the proposed models, a few combinations of convolutions are used to recognize the class labels. The MNIST and CIFAR-10 benchmark datasets are first tested on our models. The performance of the proposed models is shown by comparisons to that of the best state-of-the-art models. The models are then applied on the Robotis-Op3 humanoid robot to recognize the objects of different shapes. The results of the models are compared to those of the models, such as VGG-16 and Residual Network-20 (ResNet-20), in terms of training and validation accuracy and loss, parameter number and training time. The experimental results show that the proposed model exhibits high accurate recognition by the lower parameter number and smaller training time than complex models. Consequently, the proposed models can be considered promising powerful models for object recognition with humanoid robots.

Research paper thumbnail of Gesture imitation and recognition using Kinect sensor and extreme learning machines

Measurement, 2016

This study presents a framework that recognizes and imitates human upper-body motions in real tim... more This study presents a framework that recognizes and imitates human upper-body motions in real time. The framework consists of two parts. In the first part, a transformation algorithm is applied to 3D human motion data captured by a Kinect. The data are then converted into the robot's joint angles by the algorithm. The human upper-body motions are successfully imitated by the NAO humanoid robot in real time. In the second part, the human action recognition algorithm is implemented for upper-body gestures. A human action dataset is also created for the upper-body movements. Each action is performed 10 times by twenty-four users. The collected joint angles are divided into six action classes. Extreme Learning Machines (ELMs) are used to classify the human actions. Additionally, the Feed-Forward Neural Networks (FNNs) and K-Nearest Neighbor (K-NN) classifiers are used for comparison. According to the comparative results, ELMs produce a good human action recognition performance.

Research paper thumbnail of A Brief Survey and an Application of Semantic Image Segmentation for Autonomous Driving

Handbook of Deep Learning Applications, 2019

Deep learning is a fast-growing machine learning approach to perceive and understand large amount... more Deep learning is a fast-growing machine learning approach to perceive and understand large amounts of data. In this paper, general information about the deep learning approach which is attracted much attention in the field of machine learning is given in recent years and an application about semantic image segmentation is carried out in order to help autonomous driving of autonomous vehicles. This application is implemented with Fully Convolutional Network (FCN) architectures obtained by modifying the Convolutional Neural Network (CNN) architectures based on deep learning. Experimental studies for the application are utilized 4 different FCN architectures named FCN-AlexNet, FCN-8s, FCN-16s and FCN-32s. For the experimental studies, FCNs are first trained separately and validation accuracies of these trained network models on the used dataset is compared. In addition, image segmentation inferences are visualized to take account of how precisely FCN architectures can segment objects.

Research paper thumbnail of Support Vector Machines as Zero Order and First Order Adaptive Fuzzy Inference Systems and Their Applications on System Identification

Research paper thumbnail of A New Formulation for Classification by Ellipsoids

Lecture Notes in Computer Science, 2006

We propose a new formulation for the optimal separation problems. This robust formulation is base... more We propose a new formulation for the optimal separation problems. This robust formulation is based on finding the minimum volume ellipsoid covering the points belong to the class. Idea is to separate by ellipsoids in the input space without mapping data to a high ...

Research paper thumbnail of BULANIK KURALLARI ÇIKARMAK İÇİN RADYAL TABANLI FONKSİYONLAR AĞININ OPTİMAL DİZAYNI ve METODUN MOTOR HATA BULMA ve TANISI İÇİN …

emo.org.tr

... Ayşegül UÇAR Yakup DEMİR Fırat Üniversitesi Elektrik-Elektronik Mühendisliği, ELAZIĞ agulucar... more ... Ayşegül UÇAR Yakup DEMİR Fırat Üniversitesi Elektrik-Elektronik Mühendisliği, ELAZIĞ agulucar@firat.edu.tr ydemir@firat.edu.tr ... BÇS'de; eğer f(x,y) sıfırıncı dereceden bir polinom ise sıfırıncı dereceden Sugeno BÇS'ye, birinci dereceden bir polinom ise birinci dereceden ...

Research paper thumbnail of A. Uçar and E. Yavşan, Behavior learning of memristor-based chaotic circuit by extreme learning machines, Turkish Journal of Electrical Engineering & Computer Sciences, pp. 1-37, doi: 10.3906/elk-1304-248

As the behavior of a chaotic Chua's circuit is non-stationary and inherently noisy, it is regarded as one of the most challenging applications. In predicting the behavior of a chaotic Chua's circuit, one of the fundamental problems is to model the circuit with high accuracy. This paper presents a novel method based on multiple Extreme Learning Machine (ELM) models to learn the chaotic behavior of the four-element canonical Chua's circuit containing a memristor instead of a nonlinear resistor, using only the state variables as the input. In the proposed method, four ELM models are used to estimate the state variables of the circuit. The ELMs are first trained on noise-corrupted data obtained from MATLAB models of the memristor and Chua's circuit. Multi-step-ahead prediction is then carried out by the trained ELMs in the autonomous mode. All attractors of the circuit are finally reconstructed from the outputs of the models. The results of four E...

Research paper thumbnail of Object Detection on FPGAs and GPUs by Using Accelerated Deep Learning

2019 International Artificial Intelligence and Data Processing Symposium (IDAP)

Object detection and recognition is one of the main tasks in many areas such as autonomous unmanned ground vehicles, robotics, and medical image processing. Recently, deep learning has been used by many researchers in these areas when the amount of data is large. In particular, Convolutional Neural Networks (CNNs), one of the most up-to-date structures of deep learning, have achieved great success in this field. Real-time work with CNNs is usually carried out on Graphics Processing Units (GPUs). Although GPUs provide high performance, they bring high power and energy consumption and a large computational load. To overcome this problem, Field Programmable Gate Arrays (FPGAs) have started to be used. In this article, object detection and recognition procedures were performed using the ZYNQ XC7Z020 development board, which includes both an ARM processor and an FPGA. Real-time object recognition was performed with the Movidius USB accelerator externally plugged into the FPGA board. The results are given with figures.

Research paper thumbnail of Development of Deep Learning Algorithm for Humanoid Robots to Walk to the Target Using Semantic Segmentation and Deep Q Network

2020 Innovations in Intelligent Systems and Applications Conference (ASYU)

In this article, a new algorithm combining a deep semantic segmentation algorithm and a deep reinforcement learning algorithm is proposed for obstacle avoidance. This work consists of two parts. The first part involves semantic segmentation using a mini-UNet, with which the objects are detected and recognized. In the second part, a Deep Q Network (DQN) is used for humanoid robots to learn to walk to the target. The obtained results confirm the performance of the proposed algorithm.
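A DQN approximates the action-value function with a neural network, but the Bellman update it learns is the same one used in tabular Q-learning. As a hedged simplification of the walk-to-target task (not the paper's setup; the 1-D corridor, rewards, and hyperparameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tabular stand-in for a DQN: a 1-D corridor where the robot walks to a target.
n_states, n_actions = 6, 2          # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                # episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else -0.01   # reward only at the target
        # Bellman update; a DQN minimizes the same temporal-difference error.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = [int(Q[s].argmax()) for s in range(n_states - 1)]
print(greedy)  # learned greedy policy: always step toward the target
```

In the full DQN, the table `Q` is replaced by a network fed with the segmented camera images, and tricks such as experience replay and a target network stabilize the same update.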

Research paper thumbnail of Semantic Image Segmentation for Autonomous Driving Using Fully Convolutional Networks

2019 International Artificial Intelligence and Data Processing Symposium (IDAP)

In this paper, an application of semantic image segmentation is implemented in order to support the autonomous driving of autonomous vehicles using deep learning based methods. The application is performed with Fully Convolutional Network (FCN) architectures obtained by modifying Convolutional Neural Network (CNN) architectures. SYNTHIA-San Francisco (SF) is used as the dataset in the experimental studies. The experimental studies are conducted using the FCN architectures named FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s. Considering these architectures and this dataset, this study is carried out for the first time in the literature. The validation results of the network models used for the experimental studies are compared on the dataset. In addition, segmentation inferences are visualized, and thus the segmentation precision of the FCN architectures is observed. The experimental results show that FCNs are suitable for segmentation applications that can assist the autonomous driving of autonomous vehicles. It is thought that the experimental results can contribute to the literature and to researchers working on autonomous driving.

Research paper thumbnail of Semantic Segmentation for Object Detection and Grasping with Humanoid Robots

2020 Innovations in Intelligent Systems and Applications Conference (ASYU), 2020

Humanoid robots are expected to support humans in environments such as houses, hospitals, and hotels. For this aim, the humanoid robots should detect and recognize objects. In this article, semantic segmentation algorithms are proposed to carry out this task. Different networks from the literature are used. The simulation results are compared in terms of accuracy, segmentation performance coefficients, and parameter number. The obtained results show that semantic segmentation is well suited to this task for humanoid robots.
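The "segmentation performance coefficients" referred to above are typically region-overlap measures such as intersection-over-union (Jaccard) and the Dice coefficient. A hedged sketch of both for binary masks (the masks below are toy values, not the paper's data):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union (Jaccard index) for binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice(pred, target):
    """Dice coefficient: 2*|A & B| / (|A| + |B|)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True   # 16-pixel ground truth
pr = np.zeros((8, 8), bool); pr[3:7, 3:7] = True   # prediction shifted by 1
print(iou(pr, gt), dice(pr, gt))  # -> 9/23 and 18/32
```

For multi-class segmentation these are computed per class and averaged (mean IoU), which is the figure usually reported alongside pixel accuracy.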

Research paper thumbnail of Implementation of Object Detection and Recognition Algorithms on a Robotic Arm Platform Using Raspberry Pi

2018 International Conference on Artificial Intelligence and Data Processing (IDAP), 2018

In this paper, it is aimed to implement object detection and recognition algorithms for a robotic arm platform. With these algorithms, the objects to be grasped by the gripper of the robotic arm are recognized and located. In the established experimental setup, an OWI-535 robotic arm with 4 DOF and a gripper, which is similar to the robotic arms used in industry, is preferred. Local feature-based algorithms such as SIFT, SURF, FAST, and ORB are used on the images captured via the camera to detect and recognize the target object to be grasped by the gripper of the robotic arm. These algorithms are implemented in the object recognition and localization software, which is written in the C++ programming language using the OpenCV library, and the software runs on the Raspberry Pi embedded Linux platform. In the experimental studies, the performance of the features extracted with the SIFT, SURF, FAST, and ORB algorithms is compared. This study, the first implemented with the OWI-535 robotic arm, shows that local feature-based algorithms are suitable for educational and industrial applications.
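Recognition with local features of this kind comes down to matching descriptors between a model image and the camera frame. The paper uses OpenCV's implementations; as a hedged, self-contained sketch of the underlying idea, binary descriptors such as ORB's are matched by brute-force nearest neighbor under Hamming distance (the descriptors below are random toy values, not real ORB output):

```python
import numpy as np

rng = np.random.default_rng(2)

def hamming(a, b):
    """Hamming distance between two packed binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def brute_force_match(desc1, desc2):
    """For each descriptor in desc1, index of the closest descriptor in desc2."""
    return [min(range(len(desc2)), key=lambda j: hamming(d, desc2[j]))
            for d in desc1]

# Toy 32-byte descriptors (the length ORB uses); values are illustrative.
model = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
noise = (rng.random(model.shape) < 0.02).astype(np.uint8)  # flip the low bit
scene = np.bitwise_xor(model, noise)                       # of ~2% of bytes
print(brute_force_match(model, scene))  # -> [0, 1, 2, 3, 4]
```

OpenCV's `BFMatcher` with `NORM_HAMMING` performs this same search; SIFT and SURF descriptors are floating-point, so they are matched under the L2 norm instead, usually with Lowe's ratio test to reject ambiguous matches.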

Research paper thumbnail of Recognition of Real-World Texture Images Under Challenging Conditions With Deep Learning

Journal of Intelligent Systems with Applications, 2018

Images obtained from real-world environments usually have various distortions in image quality. For example, when an object in motion is filmed, or when an environment is filmed on the move, motion-tracking effects occur in the image. Increasing the recognition performance of expert systems that perform image recognition on data obtained under such conditions is an important research area. In this study, we propose a Convolutional Neural Network (CNN) based Deep System Model (CNN-DSM) for accurate classification of images under challenging conditions. In the proposed model, a new layer is designed in addition to the classical CNN layers. This layer works as an enhancement layer. For the performance evaluations, various real-world surface images were selected from the CUReT database. Finally, the results are presented and discussed.

Research paper thumbnail of Deep Convolutional Neural Networks for facial expression recognition

2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2017

Facial expression recognition is a very active research topic due to its potential applications in many fields such as human-robot interaction, human-machine interfaces, driving safety, and health care. Despite significant improvements, facial expression recognition is still a challenging problem that awaits more and more accurate algorithms. This article presents a new model capable of recognizing facial expressions by using a deep Convolutional Neural Network (CNN). The CNN model is generated by using Caffe in the DIGITS environment. Moreover, it is trained and tested on the NVIDIA Tegra TX1 embedded development platform, which includes a 256-core CUDA Graphics Processing Unit (GPU) and a quad-core ARM Cortex-A57 processor. The proposed model is applied to the facial expression problem on two publicly available expression databases, the JAFFE database and the Cohn-Kanade database.

Research paper thumbnail of End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot

2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2020

Humanoid robots are deployed in environments ranging from houses and hotels to healthcare and industry to help people. Robots can easily be programmed by users for predefined tasks such as walking, grasping, standing up, and shaking hands. Nowadays, however, robots are expected to learn by themselves from experience gained by watching the environment and the people in it. In this study, the aim is for the Robotis-Op3 humanoid robot to grasp objects by learning from vision-based demonstrations. A new algorithm is proposed for this purpose. Firstly, the robot is manipulated through user commands, and the raw images from the camera of the Robotis-Op3 are collected. Secondly, a semantic segmentation algorithm is applied to detect and recognize the objects. A new model using Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) is then proposed to learn the user demonstrations. The results were compared in terms of training time, performance, and model complexity. Simulation results showed that the new models produced a high performance for object manipulation.
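Before a CNN-LSTM can learn demonstrations like these, the recorded joint trajectories must be cut into supervised pairs: a short window of past positions as input and the next position as target. A hedged sketch of that preprocessing step (the trajectory values and window length are hypothetical, not the paper's data):

```python
import numpy as np

def make_windows(trajectory, window=5):
    """Split a joint-position trajectory into (input window, next value)
    pairs -- the usual supervision for sequence models such as CNN-LSTMs."""
    trajectory = np.asarray(trajectory)
    X = np.stack([trajectory[i:i + window]
                  for i in range(len(trajectory) - window)])
    y = trajectory[window:]
    return X, y

# Hypothetical shoulder-roll positions recorded during teleoperation (radians).
traj = np.linspace(0.0, 1.2, 25)
X, y = make_windows(traj, window=5)
print(X.shape, y.shape)  # -> (20, 5) (20,)
```

At inference time the trained model is run autoregressively: each predicted joint position is appended to the window to produce the next one, which is how the trajectory is generated autonomously on the robot.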

Research paper thumbnail of Fast Object Recognition for Humanoid Robots by Using Deep Learning Models with Small Structure

2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), 2020

Nowadays, humanoid robots are expected to help people in healthcare, houses and hotels, industry, the military, and other security environments by performing specific tasks, or to replace people in dangerous scenarios. For this purpose, the humanoid robots should be able to recognize objects and then carry out the desired tasks. In this study, the aim is for the Robotis-Op3 humanoid robot to recognize differently shaped objects with deep learning methods. First of all, new Convolutional Neural Network (CNN) models with small structures were proposed. Then, popular deep neural network models that are good at object recognition, such as VGG16 and the Residual Network (ResNet), were used for comparison in recognizing the objects. The results were compared in terms of training time, performance, and model complexity. Simulation results show that the new models with small layer structures produced higher performance than the complex models.