David Droeschel | Bonn Universität

Papers by David Droeschel

Autonomous Navigation for Micro Aerial Vehicles in Complex GNSS-denied Environments

Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance in industrial plants. Key prerequisites for the fully autonomous operation of micro aerial vehicles in restricted environments are 3D mapping, real-time pose tracking, obstacle detection, and planning of collision-free trajectories. In this article, we propose a complete navigation system with a multimodal sensor setup for omnidirectional environment perception. Measurements of a 3D laser scanner are aggregated in egocentric local multiresolution grid maps. Local maps are registered and merged into allocentric maps in which the MAV localizes. For autonomous navigation, we generate trajectories in a multi-layered approach: from mission planning through global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach in a GNSS-denied indoor environment where multiple collision hazards require reliable omnidirectional perception and quick navigation reactions.
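
The egocentric local multiresolution grid maps mentioned above can be pictured as nested grids centered on the vehicle, with cell size growing with distance. The following Python sketch illustrates this structure; the level count, cell sizes, and all names are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of an egocentric local multiresolution grid: nested
    # square levels centered on the MAV, each level doubling both cell size and
    # extent, so resolution is high nearby and coarse far away.
    import math

    class MultiResGrid:
        def __init__(self, levels=4, base_cell=0.25, base_half_extent=4.0):
            # level 0: 0.25 m cells covering +/-4 m; level i doubles both
            self.levels = [
                {"cell": base_cell * 2 ** i,
                 "half": base_half_extent * 2 ** i,
                 "cells": {}}  # (ix, iy, iz) -> hit count
                for i in range(levels)
            ]

        def insert(self, p):
            """Insert an egocentric laser endpoint p = (x, y, z) in meters."""
            for lv in self.levels:
                if max(abs(c) for c in p) < lv["half"]:
                    idx = tuple(int(math.floor(c / lv["cell"])) for c in p)
                    lv["cells"][idx] = lv["cells"].get(idx, 0) + 1
                    return  # stored in the finest level containing p
            # beyond the coarsest level: outside the local map, discarded

As the MAV moves, the structure is kept centered on the vehicle, so the high-resolution region always covers the space where precise obstacle information matters most.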

Continuous Mapping and Localization for Autonomous Navigation in Rough Terrain using a 3D Laser Scanner

For autonomous navigation in difficult terrain, such as degraded environments in disaster response scenarios, robots are required to create a map of an unknown environment and to localize within this map. In this paper, we describe our approach to simultaneous localization and mapping that is based on the measurements of a 3D laser-range finder. We aggregate laser-range measurements by registering sparse 3D scans with a local multiresolution surfel map that has high resolution in the vicinity of the robot and coarser resolutions with increasing distance, which corresponds well to the measurement density and accuracy of our sensor. By modeling measurements by surface elements, our approach allows for efficient and accurate registration and enables online mapping and localization. The incrementally built local dense 3D maps of nearby key poses are registered against each other. Graph optimization yields a globally consistent dense 3D map of the environment. Continuous registration of local maps with the global map allows for tracking the 6D robot pose in real time. We assess the drivability of the terrain by analyzing height differences in an allocentric height map and plan cost-optimal paths. The system has been successfully demonstrated during the DARPA Robotics Challenge and the DLR SpaceBot Camp. In experiments, we evaluate the accuracy and efficiency of our approach.
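
The surfel (surface element) representation named in this abstract summarizes all points falling into a voxel by their sample mean and covariance. A minimal sketch of such an accumulator, with hypothetical names, could look as follows.

    # Hypothetical surfel accumulator: a voxel keeps running sums so that the
    # mean and covariance of its points are available in constant time.
    import numpy as np

    class Surfel:
        def __init__(self):
            self.n = 0
            self.sum = np.zeros(3)       # running sum of points
            self.sq = np.zeros((3, 3))   # running sum of outer products

        def add(self, p):
            p = np.asarray(p, dtype=float)
            self.n += 1
            self.sum += p
            self.sq += np.outer(p, p)

        def mean(self):
            return self.sum / self.n

        def cov(self):
            m = self.mean()
            return self.sq / self.n - np.outer(m, m)

Registration can then match surfels between the current scan and the map and minimize a distance between the matched Gaussian summaries, which is far cheaper than matching raw points.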

Supervised Autonomy for Exploration and Mobile Manipulation in Rough Terrain

Planetary exploration scenarios illustrate the need for robots that are capable of operating in unknown environments without direct human interaction. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for our mobile manipulation robot Momaro. The robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. We assess drivability and plan navigation for the omnidirectional drive. Using its four legs, Momaro adapts to the slope of the terrain. It perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, and on-the-fly reconfiguration, we developed suitable operator interfaces. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015.
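
A minimal sketch of one common way to assess drivability from a height map, in the spirit of the height-difference analysis used here and in the mapping paper above; the step threshold, window size, and function name are assumptions.

    # Hypothetical drivability check: a cell is traversable if height
    # differences to all neighbors within a window stay below a step the
    # robot can negotiate.
    import numpy as np

    def drivable(height_map, max_step=0.15, radius=1):
        """height_map: 2D array of terrain heights (m). Returns a bool mask."""
        ok = np.ones(height_map.shape, dtype=bool)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue
                # np.roll wraps at the borders; a real implementation would pad
                shifted = np.roll(np.roll(height_map, dy, axis=0), dx, axis=1)
                ok &= np.abs(height_map - shifted) <= max_step
        return ok

The resulting traversability mask, or a cost derived from it, can then feed a grid-based planner such as A* to obtain cost-optimal paths.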

NimbRo@Home: Winning Team of the RoboCup@Home Competition 2012

In this paper, we describe details of our winning team NimbRo@Home at the RoboCup@Home competition 2012. This year, we improved the gripper design of our robots and further advanced mobile manipulation capabilities such as object perception and manipulation planning. For human-robot interaction, we propose to complement face-to-face communication between user and robot with a remote user interface for handheld PCs. We report on the use of our approaches and the performance of our robots at RoboCup 2012.

Autonomous MAV Navigation in Complex GNSS-denied 3D Environments

Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous exploration, examination, and surveillance of otherwise inaccessible areas, e.g., for search and rescue missions in indoor disaster sites. Key prerequisites for the fully autonomous operation of micro aerial vehicles in restricted environments are 3D mapping, real-time pose tracking, obstacle detection, and planning of collision-free trajectories. In this work, we propose a complete navigation system with a multimodal sensor setup for omnidirectional environment perception. Measurements of a 3D laser scanner are aggregated in egocentric local multiresolution grid maps. Local maps are registered and merged into allocentric maps in which the MAV localizes. For autonomous navigation, we generate trajectories in a multi-layered approach: from mission planning through global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach in a GNSS-denied indoor environment where multiple collision hazards require reliable omnidirectional perception and quick navigation reactions.

Obstacle Detection and Navigation Planning for Autonomous Micro Aerial Vehicles

Obstacle detection and real-time planning of collision-free trajectories are key for the fully autonomous operation of micro aerial vehicles in restricted environments. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception consisting of a 3D laser scanner, two stereo camera pairs, and ultrasonic distance sensors. Detected obstacles are aggregated in egocentric local multiresolution grid maps. We generate trajectories in a multi-layered approach: from mission planning to global and local trajectory planning, to reactive obstacle avoidance. We evaluate our approach in simulation and with the real autonomous micro aerial vehicle.

Fusing Time-of-Flight Cameras and Inertial Measurement Units for Ego-Motion Estimation

... Results are benchmarked against reference poses from an accurate laser range finder-based localization. ...

Fuzija TOF kamera i inercijalnih mjernih jedinica za procjenu vlastitog gibanja (Fusion of ToF Cameras and Inertial Measurement Units for Ego-Motion Estimation)

Towards Multimodal Omnidirectional Obstacle Detection for Autonomous Unmanned Aerial Vehicles

ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013

Limiting factors for increasing the autonomy and complexity of truly autonomous systems (without external sensing and control) are onboard sensing and onboard processing power. In this paper, we propose a hardware setup and processing pipeline that allows a fully autonomous UAV to perceive obstacles in (almost) all directions in its surroundings. Different sensor modalities are applied in order to take into account the different characteristics of obstacles that can commonly be found in typical UAV applications. We provide a complete overview of the implemented system and present experimental results as a proof of concept.

Towards joint attention for a domestic service robot - Person awareness and gesture recognition using time-of-flight cameras

Proceedings - IEEE International Conference on Robotics and Automation, 2011

Joint attention between a human user and a robot is essential for effective human-robot interaction. In this work, we propose an approach to person awareness and to the perception of showing and pointing gestures for a domestic service robot. In contrast to previous work, we do not require the person to be at a predefined position, but instead actively approach and orient towards the communication partner. For perceiving showing and pointing gestures and for estimating the pointing direction, a Time-of-Flight camera is used. Estimated pointing directions and shown objects are matched to objects in the robot's environment.

Learning to interpret pointing gestures with a time-of-flight camera

HRI 2011 - Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction, 2011

Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, the perception of human behavior using robot sensors is more difficult. In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, the line between a person's eyes and hand is frequently assumed to be the pointing direction. However, since people tend to keep the line of sight free while they are pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or elbow and hand may yield better interpretations of the pointing direction. In order to achieve a better estimate, we extract a set of body features from depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression. We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model achieves far better accuracy than simple criteria like the head-hand, shoulder-hand, or elbow-hand line.

(Figure 1: Application scenario from the ICRA 2010 Mobile Manipulation Challenge. The robot approaches the user, who selects a drink by pointing at it.)
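
The abstract's core recipe, mapping body features extracted from ToF images to a pointing direction with Gaussian Process Regression, can be sketched with scikit-learn as below. The feature layout, kernel choice, and placeholder data are assumptions, not the paper's exact setup.

    # Hypothetical GPR model for pointing directions: features (e.g., 3D head,
    # shoulder, elbow, and hand positions) regress to direction angles.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    X_train = np.random.rand(50, 12)   # placeholder body features per sample
    Y_train = np.random.rand(50, 2)    # placeholder target angles (yaw, pitch)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_train, Y_train)

    x_new = np.random.rand(1, 12)      # features from a new ToF frame
    direction, std = gpr.predict(x_new, return_std=True)  # estimate + uncertainty

One appeal of GPR here is the predictive standard deviation, which gives a natural confidence measure for whether to trust an interpreted gesture.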

Obstacle detection and navigation planning for autonomous micro aerial vehicles

2014 International Conference on Unmanned Aircraft Systems (ICUAS), 2014

Obstacle detection and real-time planning of collision-free trajectories are key for the fully autonomous operation of micro aerial vehicles in restricted environments.

Local multi-resolution surfel grids for MAV motion estimation and 3D mapping

For autonomous navigation in restricted environments, micro aerial vehicles (MAVs) need to create 3D maps of their surroundings and must track their motion within these maps. In this paper, we propose an approach to simultaneous localization and mapping that is based on the measurements of a lightweight 3D laser-range finder. We aggregate laser-range measurements by registering sparse 3D scans with a local multiresolution surfel map that has high resolution in the vicinity of the MAV and coarser resolutions with increasing distance, which corresponds well to the measurement density and accuracy of our sensor. Modeling measurement distributions within voxels by surface elements allows for efficient and accurate registration of 3D scans with the local map. The incrementally built local dense 3D maps of nearby key poses are registered globally by graph optimization. This yields a globally consistent dense 3D map of the environment. Continuous registration of local maps with the global map allows for tracking the 6D MAV pose in real time. In experiments, we demonstrate the accuracy and efficiency of our approach.

Multi-frequency phase unwrapping for time-of-flight cameras

IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings, 2010

Time-of-Flight (ToF) cameras gain depth information by emitting amplitude-modulated near-infrared light and measuring the phase shift between the emitted and the reflected signal. The phase shift is proportional to the object's distance modulo the wavelength of the modulation frequency. This results in a distance ambiguity. Distances larger than the wavelength are wrapped into the sensor's non-ambiguity range and cause spurious distance measurements. We apply phase unwrapping to reconstruct these wrapped measurements. Our approach is based on a probabilistic graphical model. We use loopy belief propagation to detect and infer the position of wrapped measurements. Besides depth discontinuities, our method utilizes multiple modulation frequencies to identify wrapped measurements. In experiments, we show that wrapped measurements are identified and corrected, even in situations where the scene shows steep slopes in the depth measurements.
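
To make the wrapping relation concrete: for modulation frequency f and speed of light c, the measured phase determines the distance only up to an unknown number of wraps n, which the graphical model infers. The equations below are a sketch; the 30 MHz example frequency is illustrative, not taken from the paper.

    % the factor 4\pi (rather than 2\pi) accounts for the light's round trip
    d = \frac{c\,\varphi}{4\pi f} + n \cdot \frac{c}{2f},
        \qquad n \in \{0, 1, 2, \dots\},
        \qquad d_{\mathrm{max}} = \frac{c}{2f}
    % example: f = 30\,\mathrm{MHz} \Rightarrow
    %   d_{\mathrm{max}} = \frac{3 \cdot 10^8\,\mathrm{m/s}}{2 \cdot 30 \cdot 10^6\,\mathrm{Hz}} = 5\,\mathrm{m}

A second modulation frequency f' must yield a consistent d for the same surface, which restricts the feasible wrap counts (n, n') and is what makes multi-frequency unwrapping possible.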

Omnidirectional Perception for Lightweight MAVs using a Continuously Rotating 3D Laser (Omnidirektionale Wahrnehmung für leichte MAVs mittels eines kontinuierlich rotierenden 3D-Laserscanners)

Photogrammetrie - Fernerkundung - Geoinformation, 2014

Micro aerial vehicles (MAVs) are restricted in their size and weight, making the design of sensory systems for these vehicles challenging. We designed a small and lightweight continuously rotating 3D laser scanner, allowing for environment perception in a range of 30 m in almost all directions. This sensor is well suited for applications such as 3D obstacle detection, 6D motion estimation, localisation, and mapping. Reliably perceiving obstacles in the surroundings of the MAV is a prerequisite for fully autonomous flight in complex environments. Due to varying shape and reflectance properties of objects, not all obstacles are perceived in every 3D laser scan (one half rotation of the scanner). Especially farther away from the MAV, multiple scans may be necessary in order to adequately detect an obstacle. In order to increase the probability of detecting obstacles, we aggregate acquired scans over short periods of time in an egocentric grid-based map. We register acquired scans against this local map to estimate the motion of our MAV and to consistently update the map. In experiments, we show that our approaches to pose estimation and laser scan matching allow for reliable aggregation of 3D scans over short periods of time, sufficiently accurate to improve the detection probability without causing inaccuracies in the estimation of the positions of detected obstacles. Furthermore, we assess the probability of detecting different types of obstacles at varying distances from the MAV.
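
As a back-of-the-envelope illustration of why aggregating scans raises the detection probability (assuming, for simplicity, independent detections across scans and a made-up per-scan probability):

    # With per-scan detection probability p, n aggregated scans miss an
    # obstacle only if every single scan misses it: P(detect) = 1 - (1 - p)^n.
    p = 0.4                              # assumed per-scan probability
    for n in (1, 2, 4, 8):
        print(n, 1 - (1 - p) ** n)       # 0.40, 0.64, ~0.87, ~0.98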

Increasing Flexibility of Mobile Manipulation and Intuitive Human-Robot Interaction in RoboCup@Home

Lecture Notes in Computer Science, 2014

In this paper, we describe the system and approaches of our team NimbRo@Home that won the RoboCup@Home competition 2013. We designed a multi-purpose gripper for grasping typical household objects in pick-and-place tasks and also for using tools. The tools are complementarily equipped with special handles that establish form closure with the gripper, which resists wrenches in any direction. We demonstrate tool use for opening a bottle and grasping sausages with a pair of tongs in a barbecue scenario. We also devised efficient deformable registration methods for the transfer of manipulation skills between objects of the same kind but with differing shape. Finally, we enhance human-robot interaction with a remote user interface for handheld PCs that enables a user to control the capabilities of the robot. These capabilities have been demonstrated in the open challenges of the competition. We also explain our approaches to the predefined tests of the competition, and report on the performance of our robots at RoboCup 2013.

Robust 3D-mapping with time-of-flight cameras

2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009

Time-of-Flight cameras constitute a smart and fast technology for 3D perception but lack measurement precision and robustness. The authors present a comprehensive approach for 3D environment mapping based on this technology. Imprecision in the depth measurements is properly handled by calibration and the application of several filters. Robust registration is performed by a novel extension to the Iterative Closest Point algorithm. Remaining registration errors are reduced by global relaxation after loop closure and by surface smoothing. A laboratory ground-truth evaluation is provided, as well as 3D mapping experiments in a larger indoor environment.
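
For context on the registration step, here is a minimal sketch of one iteration of the classical point-to-point ICP algorithm that the paper extends (the novel extension itself is not reproduced; the brute-force matching is for clarity only).

    # One classical ICP iteration: nearest-neighbor matching followed by the
    # closed-form SVD-based (Kabsch) rigid alignment.
    import numpy as np

    def icp_step(src, dst):
        """src, dst: (N, 3) and (M, 3) point arrays. Returns aligned src, R, t."""
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]          # closest dst point per src point
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        return src @ R.T + t, R, t

Iterating this step until the alignment error stops improving gives the standard algorithm, which the paper's novel extension builds on for robust registration.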

Using time-of-flight cameras with active gaze control for 3D collision avoidance

Proceedings - IEEE International Conference on Robotics and Automation, 2010

We propose a 3D obstacle avoidance method for mobile robots. Besides the robot's 2D laser range finder, a Time-of-Flight camera is used to perceive obstacles that are not in the scan plane of the laser range finder. Existing approaches that employ Time-of-Flight cameras suffer from the limited field of view of the sensor. To overcome this issue, we mount the camera on the head of our anthropomorphic robot Dynamaid. This allows the gaze direction to be changed through the robot's pan-tilt neck and its torso yaw joint.
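
A minimal sketch of the geometry behind such gaze control: given an obstacle position relative to the camera's pan-tilt joint, the pan and tilt angles that center it in the limited field of view follow from basic trigonometry. The frame convention here is an assumption, not Dynamaid's actual kinematics.

    # Hypothetical gaze-angle computation: x forward, y left, z up (meters),
    # expressed relative to the pan-tilt unit.
    import math

    def gaze_angles(x, y, z):
        pan = math.atan2(y, x)                  # rotate left/right toward target
        tilt = math.atan2(z, math.hypot(x, y))  # rotate up/down toward target
        return pan, tilt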
