Erol Sahin - Academia.edu

Papers by Erol Sahin

Research paper thumbnail of Learning Context on a Humanoid Robot using Incremental Latent Dirichlet Allocation

IEEE Transactions on Cognitive and Developmental Systems, Mar 1, 2016

In this article, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using a Markov Random Field, inspired by the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), a widely used method in computational linguistics for modeling topics in texts. We extend the standard LDA method to make it incremental so that (i) it does not re-learn everything from scratch given new interactions (i.e., it is online) and (ii) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online and robust: it is adaptive and online since it can learn and discover a new context from new interactions, and it is robust since it is not affected by irrelevant stimuli and can discover contexts after only a few interactions. Moreover, we show how the context learned in such a model can be used for two important tasks: object recognition and planning.
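
This incremental, topic-style treatment of context can be illustrated with an online LDA update in which concepts observed during an interaction episode play the role of words and contexts play the role of topics. The sketch below uses gensim's online LDA as a stand-in; the concept vocabulary, episodes, and topic count are illustrative assumptions, not the authors' implementation (which additionally discovers new contexts on the fly).

```python
# Minimal sketch: contexts as latent "topics" over concepts perceived in
# interaction episodes, updated online as new episodes arrive.
# Assumes gensim is installed; the concept names and episodes are made up.
from gensim import corpora
from gensim.models import LdaModel

# Each "document" is the bag of concepts active during one interaction episode.
episodes = [
    ["cup", "graspable", "table", "pour"],      # e.g. a kitchen-like context
    ["ball", "rollable", "floor", "push"],      # e.g. a play-like context
    ["cup", "table", "graspable", "stack"],
]
dictionary = corpora.Dictionary(episodes)
corpus = [dictionary.doc2bow(ep) for ep in episodes]

# Initial model with a small number of latent contexts (topics).
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)

# Online update: new episodes refine the model without retraining from scratch.
new_episodes = [["ball", "floor", "push", "rollable"]]
lda.update([dictionary.doc2bow(ep) for ep in new_episodes])

# Infer the context mixture of a fresh episode.
print(lda.get_document_topics(dictionary.doc2bow(["cup", "graspable", "pour"])))
```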

Research paper thumbnail of Fraud detection from paper texture using Siamese networks

Signal, Image and Video Processing, Apr 11, 2023

Research paper thumbnail of Self-organized flocking with a mobile robot swarm

Adaptive Agents and Multi-Agents Systems, May 12, 2008

This paper studies self-organized flocking in a swarm of mobile robots. We present Kobot, a mobile robot platform developed specifically for swarm robotic studies, briefly describing its sensing and communication abilities. In particular, we describe a scalable method that allows the robots to sense the orientations of their neighbors using a digital compass and wireless communication. We then propose a behavior for a swarm of robots that creates self-organized flocking through heading alignment and proximal control. The flocking behavior is observed to operate in three phases: alignment, advance, and avoidance. We evaluate four variants of this behavior by setting its parameters to extreme values and analyze the performance of flocking using a number of metrics, such as order and entropy. Our results show that the flocking behavior obtained under appropriate parameter values is quite robust and generates successful self-organized flocking in constrained environments.
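
The order metric mentioned above is commonly computed as the norm of the swarm's mean heading vector. The sketch below shows that metric alongside a generic heading update that blends alignment with a proximal (avoidance) term; the weights and the repulsion vector are assumptions for illustration, not the Kobot controller.

```python
# Sketch of the order metric and a generic alignment + proximal-control
# heading update. Weights and the repulsion model are illustrative assumptions.
import numpy as np

def order(headings):
    """Order in [0, 1]: 1 means all robots share the same heading."""
    vecs = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.linalg.norm(vecs.mean(axis=0))

def heading_update(neighbor_headings, repulsion_vec, w_align=1.0, w_prox=1.0):
    """Blend alignment with neighbors and a proximal (avoidance) vector."""
    align = np.array([np.cos(neighbor_headings).mean(),
                      np.sin(neighbor_headings).mean()])
    desired = w_align * align + w_prox * np.asarray(repulsion_vec, dtype=float)
    return np.arctan2(desired[1], desired[0])

headings = np.random.uniform(-np.pi, np.pi, size=10)
print("initial order:", order(headings))
print("new heading:", heading_update(headings, [0.1, -0.2]))
```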

Research paper thumbnail of Designing Social Cues for Collaborative Robots

In this paper, we investigate how collaborative robots, or cobots, typically composed of a robotic arm and a gripper carrying out manipulation tasks alongside human coworkers, can be enhanced with HRI capabilities by applying ideas and principles from character animation. To this end, we modified the appearance and behaviors of a cobot, with minimal impact on its functionality and performance, and studied the extent to which these modifications improved its communication with, and perception by, human collaborators. Specifically, we aimed to improve the Appeal of the robot by manipulating its physical appearance, posture, and gaze, creating an animal-like character with a head-on-neck morphology; to utilize Arcs by generating smooth trajectories for the robot arm; and to increase the lifelikeness of the robot through Secondary Action by adding breathing motions to the robot. In two user studies, we investigated the effects of these cues on collaborator perceptions of the robot. Findings from our first study showed breathing to have a positive effect on most measures of robot perception and revealed nuanced interactions among the other factors. Data from our second study showed that, using gaze cues alone, a robot arm can improve metrics such as likeability and perceived sociability.

Research paper thumbnail of Mechatronic Design for Multi Robots-Insect Swarms Interactions

Research paper thumbnail of Proceedings of the Tenth International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems

Research paper thumbnail of Swarm Intelligence: 7th International Conference, ANTS 2010. Proceedings

Book: Swarm Intelligence: 7th International Conference, ANTS 2010, Brussels, Belgium, September 8-10, 2010, Proceedings (paperback) (series: DORIGO Marco, BIRATTARI Mauro, DI CARO Gianni A., DOURSAT René, ENGELBRECHT Andries, …).

Research paper thumbnail of Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Preface

International Conference on Swarm Intelligence, Nov 3, 2010

Research paper thumbnail of Proceedings of the 2004 international conference on Swarm Robotics

Research paper thumbnail of AssembleRL: Learning to Assemble Furniture from Their Point Clouds

arXiv (Cornell University), Sep 15, 2022

The rise of simulation environments has enabled learning-based approaches for assembly planning, which is otherwise a labor-intensive and daunting task. Assembling furniture is especially interesting since furniture is intricate and poses challenges for learning-based approaches. Surprisingly, humans can solve furniture assembly mostly given a 2D snapshot of the assembled product. Although recent years have witnessed promising learning-based approaches for furniture assembly, they assume the availability of correct connection labels for each assembly step, which are expensive to obtain in practice. In this paper, we relax this assumption and aim to solve furniture assembly with as little human expertise and supervision as possible. Specifically, we assume the availability of the point cloud of the assembled product and, by comparing the point cloud of the current assembly with that of the target product, obtain a novel reward signal based on two measures: incorrectness and incompleteness. We show that this reward signal can train a deep network to successfully assemble different types of furniture.
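
One plausible reading of the incorrectness/incompleteness measures is nearest-neighbor coverage between the two point clouds: current points far from the target count as incorrect, and target points not yet covered count as incomplete. The sketch below encodes that reading with assumed thresholds and weights; it is not the paper's exact formulation.

```python
# Sketch: a coverage-style reward comparing the current assembly's point cloud
# to the target product's point cloud. Thresholds and weights are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def assembly_reward(current_pts, target_pts, tol=0.01, w_incorrect=1.0, w_incomplete=1.0):
    """current_pts, target_pts: (N, 3) arrays of 3D points."""
    target_tree = cKDTree(target_pts)
    current_tree = cKDTree(current_pts)

    # Incorrectness: fraction of current points far from any target point.
    d_cur, _ = target_tree.query(current_pts)
    incorrectness = np.mean(d_cur > tol)

    # Incompleteness: fraction of target points not yet covered by the assembly.
    d_tgt, _ = current_tree.query(target_pts)
    incompleteness = np.mean(d_tgt > tol)

    return -(w_incorrect * incorrectness + w_incomplete * incompleteness)

# Example: a perfect assembly yields reward 0, anything else is negative.
pts = np.random.rand(500, 3)
print(assembly_reward(pts, pts))  # -> -0.0
```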

Research paper thumbnail of Texture Analysis by Deep Twin Networks for Paper Fraud Detection

2022 30th Signal Processing and Communications Applications Conference (SIU)

Research paper thumbnail of Ants 2002 - Ant Algorithms

Research paper thumbnail of A Minimally Invasive Approach Towards “Ecosystem Hacking” With Honeybees

Frontiers in Robotics and AI

Honey bees live in colonies of thousands of individuals that not only need to collaborate with each other but also to interact intensively with their ecosystem. A small group of robots operating in a honey bee colony and interacting with the queen bee, a central colony element, has the potential to change the collective behavior of the entire colony and thus also improve its interaction with the surrounding ecosystem. Such a system can be used to study and understand many elements of bee behavior within hives that have not been adequately researched. We discuss here the applicability of this technology for ecosystem protection: a novel paradigm of a minimally invasive form of conservation through “Ecosystem Hacking”. We discuss the necessary requirements for such technology and show experimental data on the dynamics of the natural queen’s court, initial designs of biomimetic robotic surrogates of court bees, and a multi-agent model of the queen bee court system. Our model is intend...

Research paper thumbnail of Traversability on a simple humanoid: What did I just trip over?

The notion of affordance has attracted the attention of roboticists in recent years. Previously, we used this concept to learn and perceive the traversability of a mobile robot platform. In this paper, we show how a simple humanoid robot equipped with a time-of-flight ultrasonic sensor can learn the traversability affordance. In addition, we demonstrate that it can infer how sensory data history affects this affordance by merging previously sensed data with the current data via a sliding data window concatenating ...
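
The sliding-window idea of folding sensor history into the current percept can be sketched as a simple concatenation of recent readings; the window length and reading dimensionality below are assumed for illustration.

```python
# Sketch: concatenating a short history of range readings into one feature
# vector for an affordance classifier. Window length is an assumed parameter.
from collections import deque
import numpy as np

class SlidingWindowFeatures:
    def __init__(self, window_len=3, reading_dim=8):
        self.window = deque([np.zeros(reading_dim)] * window_len, maxlen=window_len)

    def push(self, reading):
        """Add the latest sensor reading and return the concatenated history."""
        self.window.append(np.asarray(reading, dtype=float))
        return np.concatenate(list(self.window))

feats = SlidingWindowFeatures(window_len=3, reading_dim=4)
print(feats.push([0.9, 0.8, 0.7, 0.6]).shape)  # (12,) -> fed to a classifier
```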

Research paper thumbnail of Curiosity-driven learning of traversability affordance on a mobile robot

The concept of affordances, as proposed by J.J. Gibson, refers to the relationship between the organism and its environment and has become popular in autonomous robot control. The learning of affordances in autonomous robots, however, typically requires a large set of training data obtained from the interactions of the robot with its environment. Therefore, the learning process is not only time-consuming and costly, but also risky, since some of the interactions may inflict damage on the robot. In this paper, we study the learning of the traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimal degradation of the learning process. Specifically, we propose a two-step learning process which consists of bootstrapping and curiosity-driven learning phases. In the bootstrapping phase, a small set of initial interaction data is used to find the relevant perceptual features for the affordance, and a Support Vector Machine (SVM) classifier is trained. In the curiosity-driven learning phase, a curiosity band around the decision hyperplane of the SVM is used to decide whether a given interaction opportunity is worth exploring or not. Specifically, if the output of the SVM for a given percept lies within the curiosity band, indicating that the classifier is not so certain about the hypothesized effect of the interaction, the robot goes ahead with the interaction, and skips it otherwise. Our studies within a physics-based robot simulator show that the robot can achieve better learning with the proposed curiosity-driven learning method for a fixed number of interactions. The results also show that, for optimum performance, there exists a minimum number of initial interactions to be used for bootstrapping. Finally, the classifier trained with the proposed learning method was also successfully tested on the real robot.
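
The curiosity band can be sketched with a standard SVM: interactions whose percepts fall close to the decision boundary (small absolute margin) are deemed worth exploring. The band width, features, and data below are illustrative assumptions, not the paper's setup.

```python
# Sketch: a curiosity band around an SVM decision boundary deciding whether an
# interaction is worth executing. Band width and data are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_boot = rng.normal(size=(40, 5))                        # bootstrap percepts
y_boot = (X_boot[:, 0] + X_boot[:, 1] > 0).astype(int)   # traversable or not

svm = SVC(kernel="rbf").fit(X_boot, y_boot)

def worth_exploring(percept, band=0.5):
    """Explore only when the classifier is uncertain about the outcome."""
    margin = svm.decision_function(percept.reshape(1, -1))[0]
    return abs(margin) < band

candidate = rng.normal(size=5)
if worth_exploring(candidate):
    print("uncertain percept: interact and add the outcome to the training data")
else:
    print("confident prediction: skip this interaction")
```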

Research paper thumbnail of Evolving aggregation behaviors for swarm robotic systems: a systematic case study

When one attempts to use artificial evolution to develop behaviors for a swarm robotic system, one is faced with decisions regarding the parameters of the evolution. In this paper, aggregation behavior is chosen as a case study, in which the performance and scalability of aggregation behaviors of perceptron controllers evolved for a simulated swarm robotic system are systematically studied under different parameter settings. Four experiments are conducted varying some of the parameters, and rules of thumb are derived that can guide the use of evolutionary methods to generate other swarm robotic behaviors.
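
A generic evolutionary loop over perceptron controller weights, of the kind whose parameters (population size, mutation rate, number of generations) such studies vary, might look like the sketch below; the fitness function is a placeholder, since the real objective would come from scoring aggregation in a swarm simulation.

```python
# Sketch: evolving perceptron controller weights with a simple truncation-selection
# loop. Population size, mutation rate, and the fitness placeholder are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_WEIGHTS, POP, GENS, MUT_STD = 16, 20, 50, 0.1

def fitness(weights):
    """Placeholder: a real study would run a swarm simulation with a perceptron
    controller using these weights and score the resulting aggregation."""
    return -np.sum(weights ** 2)

population = rng.normal(size=(POP, N_WEIGHTS))
for gen in range(GENS):
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-POP // 2:]]        # keep best half
    children = parents + rng.normal(scale=MUT_STD, size=parents.shape)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(w) for w in population])]
print("best fitness:", fitness(best))
```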

Research paper thumbnail of The learning of adjectives and nouns from affordance and appearance features

Adaptive Behavior, Aug 22, 2013

We study how a robot can link concepts represented by adjectives and nouns in language with its own sensorimotor interactions. Specifically, an iCub humanoid robot interacts with a group of objects using a repertoire of manipulation behaviors. The objects are labeled using a set of adjectives and nouns. The effects induced on the objects are labeled as affordances, and classifiers are learned to predict the affordances from the appearance of an object. We evaluated three different models for learning adjectives and nouns using features obtained from the appearance and affordances of an object, through cross-validated training as well as through testing on novel objects. The results indicate that shape-related adjectives are best learned using features related to affordances, whereas nouns are best learned using appearance features. Analysis of feature relevancy shows that affordance features are more relevant for adjectives, and appearance features for nouns. We show that adjective predictions can be used to solve the odd-one-out task on a number of examples. Finally, we link our results with studies from psychology, neuroscience and linguistics that point to the differences between the development and representation of adjectives and nouns in humans.
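
The model comparison described above boils down to training label classifiers on different feature sets and cross-validating them. The sketch below shows that pattern with synthetic feature matrices standing in for the appearance and affordance features; it is not the paper's actual data or models.

```python
# Sketch: cross-validated comparison of appearance vs. affordance features for
# predicting word labels. Feature matrices here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_objects = 120
appearance = rng.normal(size=(n_objects, 40))   # e.g. shape/size descriptors
affordance = rng.normal(size=(n_objects, 10))   # e.g. predicted effect labels
adjective = rng.integers(0, 2, size=n_objects)  # e.g. "round" vs. "edgy"

for name, feats in [("appearance", appearance), ("affordance", affordance)]:
    acc = cross_val_score(SVC(kernel="rbf"), feats, adjective, cv=5).mean()
    print(f"{name} features -> mean CV accuracy {acc:.2f}")
```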

Research paper thumbnail of Special issue on swarm robotics

Swarm Intelligence, Aug 23, 2008

Swarm robotics is a new approach to the coordination of multi-robot systems. In contrast with traditional multi-robot systems, which use centralised or hierarchical control and communication systems in order to coordinate robots' behaviours, swarm robotics adopts a de- ...

Research paper thumbnail of Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles

Sensors, Sep 18, 2015

Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense-and-avoid purposes on mUAVs or on other aerial vehicles; and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of oriented gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing the earlier, simpler stages are processed at the subsequent, more complex stages. We also integrate into our system a distance estimation method that utilizes geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that were collected in a systematic way, as well as on videos with motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032 × 778 resolution) and 150 ms outdoors (1280 × 720 resolution) per frame, with a detection F-score of 0.96. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our timing analysis shows that the cascaded classifiers using HOG train and run faster than the other algorithms.
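
The detection-then-regression pipeline can be sketched with OpenCV's cascade detector followed by a support vector regressor that maps a geometric cue (bounding-box height) to distance; the cascade file name and calibration pairs below are hypothetical.

```python
# Sketch: cascaded detection followed by SVR-based distance estimation from a
# geometric cue. The cascade file and calibration pairs are hypothetical.
import cv2
import numpy as np
from sklearn.svm import SVR

# A cascade trained offline on mUAV images (hypothetical file name).
detector = cv2.CascadeClassifier("muav_lbp_cascade.xml")

# Calibration: bounding-box heights (px) observed at known distances (m).
heights = np.array([[120.0], [80.0], [60.0], [40.0], [30.0]])
distances = np.array([2.0, 3.0, 4.0, 6.0, 8.0])
ranger = SVR(kernel="rbf", C=10.0).fit(heights, distances)

def detect_and_range(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    results = []
    for (x, y, w, h) in boxes:
        dist = ranger.predict(np.array([[float(h)]]))[0]
        results.append(((x, y, w, h), dist))
    return results  # list of (bounding box, estimated distance in meters)
```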

Research paper thumbnail of Mind the Gap! Predictive Flocking of Aerial Robot Swarm in Cluttered Environments
