Gregory Dudek - Academia.edu
Papers by Gregory Dudek
In this paper we present an efficient method for visual mapping of open-water environments using exploration and reward identification followed by selective visual coverage. In particular, we consider the problem of visually mapping a shallow-water coral reef to provide an environmental assay. Our approach has two stages based on two classes of sensors: bathymetric mapping and visual mapping. We use a robotic boat to collect bathymetric data using a sonar sensor in the first stage and video data using a visual sensor in the second stage. Since underwater environments have varying visibility, we use the sonar map to select regions of potential value, and efficiently construct the bathymetric map from sparse data using a Gaussian Process model. In the second stage, we collect visual data only where there is good potential pay-off, and we use a reward-driven finite-horizon model akin to a Markov Decision Process to extract the maximum amount of valuable data in the least amount of time. We show that a very small number of sonar readings suffice on a typical fringing reef. We validate and demonstrate our surveying technique using a real robot in the presence of real-world conditions such as wind and current. We also show that our proposed approach is suitable for visual surveying by presenting a visual collage of the reef.
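The Gaussian Process step above can be sketched in miniature. The following is an illustrative 1D GP regression (RBF kernel, zero mean) that fills in a dense depth profile from sparse sonar soundings; the transect values, length scale, and noise level are invented for illustration, not taken from the paper:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel: nearby soundings are strongly correlated
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, xq, noise=1e-6, length=1.0):
    """Posterior mean of a zero-mean GP at query points xq."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return [sum(rbf(x, xs[i], length) * alpha[i] for i in range(n)) for x in xq]

# Sparse depth soundings along a transect (position m, depth m) -- illustrative values
xs = [0.0, 2.0, 4.0, 6.0, 8.0]
ys = [1.2, 1.5, 3.0, 2.2, 1.8]
dense = gp_predict(xs, ys, [x * 0.5 for x in range(17)])
```

With near-zero noise the posterior mean interpolates the soundings exactly, while positions between soundings get a smooth estimate; this is the sense in which "a very small number of sonar readings suffice."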
This paper addresses the problem of searching for multiple non-adversarial targets using a mobile searcher in an obstacle-free environment. In practice, we are particularly interested in marine applications where the targets drift on the ocean surface. These targets can be surface sensors used for marine environmental monitoring, drifting debris, or lost divers in open water. Searching for a floating target requires prior knowledge about the search region and an estimate of the target's motion. This task becomes challenging when searching for multiple targets, where persistent searching for one of the targets can result in the loss of other targets. Hence, the searcher needs to trade off between guaranteed and fast searches. We propose three classes of search strategies for addressing the multi-target search problem: data-independent, probabilistic, and hybrid search. The data-independent search strategy follows a pre-defined search pattern and schedule. The probabilistic search strategy is guided by the estimated probability distribution of the search target. The hybrid strategy combines data-independent search patterns with a probabilistic search schedule. We evaluate these search strategies in simulation and compare their performance characteristics in the context of searching for multiple drifting targets using an Autonomous Surface Vehicle (ASV).
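A probability-guided search like the one described can be sketched with a greedy rule over a grid belief. This is our own simplification, not the paper's algorithm: the searcher always moves to the adjacent cell holding the most remaining probability mass, zeroing out cells it has checked:

```python
def neighbours(cell, rows, cols):
    # 4-connected grid neighbourhood
    r, c = cell
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield nr, nc

def probabilistic_search(prob, start, steps):
    """Greedy search: visit the adjacent cell with highest remaining target probability."""
    rows, cols = len(prob), len(prob[0])
    prob = [row[:] for row in prob]      # work on a copy of the belief
    path = [start]
    prob[start[0]][start[1]] = 0.0       # searched cells hold no more probability
    cur = start
    for _ in range(steps):
        cur = max(neighbours(cur, rows, cols), key=lambda c: prob[c[0]][c[1]])
        path.append(cur)
        prob[cur[0]][cur[1]] = 0.0
    return path
```

A data-independent strategy would replace the `max` choice with a fixed pattern (e.g. a lawnmower sweep); the hybrid strategy in the abstract would keep a fixed pattern but schedule it using the belief.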
In this paper, we examine multi-target search, where one or more targets must be found by a moving robot. Given the target's initial probability distribution or the expected search region, we present an analysis of three search strategies: Global maxima search, Local maxima search, and Spiral search. We aim at minimizing the mean time to find the target and maximizing the total probability of finding it. This leads to two types of illustrative performance metrics: minimum-time capture and guaranteed capture. We validate the search strategies with respect to these two performance metrics. In addition, we study the effect of different target distributions on the performance of the search strategies. We also consider the practical realization of the proposed algorithms for multi-target search. The search strategies are analytically evaluated, through simulations and illustrative deployments, in open water with an Autonomous Surface Vehicle (ASV) and drifting sensor targets.
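A spiral search pattern of the kind named above can be generated as a square spiral of waypoints expanding outward from the target's last known position. The step size and waypoint count here are arbitrary illustration values:

```python
def square_spiral(cx, cy, step, n_points):
    """Outward square spiral of waypoints starting at (cx, cy).

    Legs alternate direction (right, up, left, down) and grow by one
    step every two legs, sweeping an ever-larger square region.
    """
    pts = [(cx, cy)]
    x, y = cx, cy
    leg, d = 1, 0
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    while len(pts) < n_points:
        for _ in range(2):          # two legs per growth increment
            dx, dy = dirs[d % 4]
            x += dx * leg * step
            y += dy * leg * step
            pts.append((x, y))
            d += 1
        leg += 1
    return pts[:n_points]
```

Such a pattern gives guaranteed capture of a stationary target inside the swept square; for drifting targets the spiral would need to expand faster than the target's worst-case drift, which is where the trade-off against minimum-time capture appears.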
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
We address the problem of learning vision-based, collision-avoiding, and target-selecting controllers in 3D, specifically in underwater environments densely populated with coral reefs. Using a highly maneuverable, dynamic, six-legged (or flippered) vehicle to swim underwater, we exploit real-time visual feedback to make close-range navigation decisions that would be hard to achieve with other sensors. Our approach uses computer vision as the sole mechanism for both collision avoidance and visual target selection. In particular, we seek to swim close to the reef to make observations while avoiding both collisions and barren, coral-deprived regions. To carry out path selection while avoiding collisions, we use monocular image data processed in real time. The proposed system uses a convolutional neural network that takes an image from a forward-facing camera as input and predicts unscaled and relative path changes. The network is trained to encode our desired obstacle-avoidance and reef-exploration objectives via supervised learning from human-labeled data. The predictions from the network are transformed into absolute path changes via a combination of a temporally-smoothed proportional controller for heading targets and a low-level motor controller. This system enables safe and autonomous coral reef navigation in underwater environments. We validate our approach using an untethered and fully autonomous robot swimming through a coral reef in the open ocean. Our robot successfully traverses 1000 m of the ocean floor collision-free while collecting close-up footage of coral reefs.
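The temporally-smoothed proportional controller mentioned above can be sketched as a P term on the network's relative heading output, passed through an exponential smoother so that noisy per-frame predictions do not jerk the vehicle. The gain `kp` and smoothing factor `alpha` are invented values, not the paper's:

```python
def make_heading_controller(kp=0.8, alpha=0.3):
    """P controller on relative heading with exponential temporal smoothing.

    kp    : proportional gain mapping heading error to a yaw command
    alpha : smoothing factor in (0, 1]; smaller = smoother, laggier output
    """
    state = {"cmd": 0.0}

    def step(relative_heading):
        raw = kp * relative_heading                      # proportional term
        state["cmd"] = alpha * raw + (1 - alpha) * state["cmd"]  # smooth over time
        return state["cmd"]

    return step
```

Each camera frame, the CNN's predicted relative path change would be fed to `step`, and the smoothed command forwarded to the low-level motor controller.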
2020 IEEE International Conference on Robotics and Automation (ICRA), 2020
We present a method for learning to drive on smooth terrain while simultaneously avoiding collisions in challenging off-road and unstructured outdoor environments using only visual inputs. Our approach applies a hybrid model-based and model-free reinforcement learning method that is entirely self-supervised in labeling terrain roughness and collisions using on-board sensors. Notably, we provide both first-person and overhead aerial image inputs to our model. We find that the fusion of these complementary inputs improves planning foresight and makes the model robust to visual obstructions. Our results show the ability to generalize to environments with plentiful vegetation, various types of rock, and sandy trails. During evaluation, our policy attained 90% smooth terrain traversal and reduced the proportion of rough terrain driven over by 6.1 times compared to a model using only first-person imagery. Video and project details can be found at www.cim.mcgill.ca/mrl/offroad driving/.
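One plausible form of the self-supervised roughness labeling described above is to threshold the variance of vertical acceleration over a short IMU window: bumpy ground shakes the vehicle, smooth ground does not. The variance criterion and threshold here are our assumptions, not the paper's actual labeling rule:

```python
def label_rough(accel_window, threshold=0.5):
    """Self-supervised terrain label from an IMU vertical-acceleration window.

    Returns True (rough) when the sample variance exceeds the threshold,
    so driving data labels itself without any human annotation.
    """
    m = sum(accel_window) / len(accel_window)
    var = sum((a - m) ** 2 for a in accel_window) / len(accel_window)
    return var > threshold
```

Labels produced this way, paired with the first-person and aerial images captured at the same moment, would give the training signal for the terrain-preference part of the policy.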
Springer Proceedings in Advanced Robotics, 2017
This paper presents experimental insights from the deployment of an ensemble of heterogeneous autonomous sensor systems over a shallow coral reef. The visual, inertial, GPS, and ultrasonic data collected are compared and correlated to produce a comprehensive view of the health of the coral reef. Coverage strategies are discussed, with a focus on the use of informed decisions to maximize the information collected during a fixed period of time.
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
In this paper, we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV, and then the 6D pose in camera coordinates is determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation error and orientation error over the state-of-the-art methods. The code is publicly available.
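The geometry that RANSAC-based PnP inverts is the forward projection of the 8 model corners into the image. A minimal sketch of that forward model follows, restricted to a yaw-plus-translation pose for brevity; the AUV dimensions and camera intrinsics (`f`, `px`, `py`) are invented illustration values:

```python
import itertools
import math

def box_corners(length, width, height):
    """8 corners of the AUV model's 3D bounding box, centred at the origin."""
    return [(sx * length / 2, sy * width / 2, sz * height / 2)
            for sx, sy, sz in itertools.product((-1, 1), repeat=3)]

def project(points3d, yaw, t, f=800.0, px=320.0, py=240.0):
    """Pinhole projection of model points under a yaw + translation pose.

    Each 3D corner is rotated about the vertical axis, translated into the
    camera frame, then mapped to pixel coordinates. PnP solves the inverse:
    given these 2D keypoints, recover (yaw, t) -- in full 6D in the paper.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    uv = []
    for X, Y, Z in points3d:
        Xc = c * X + s * Z + t[0]
        Yc = Y + t[1]
        Zc = -s * X + c * Z + t[2]
        uv.append((f * Xc / Zc + px, f * Yc / Zc + py))
    return uv
```

In practice a full rotation matrix and a robust solver (e.g. OpenCV's `solvePnPRansac`) replace this toy model; the sketch only shows why 8 corner keypoints are enough to pin down the pose.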
2017 14th Conference on Computer and Robot Vision (CRV), 2017
This paper addresses distributed data sampling in marine environments using robotic devices. We present a method to strategically sample locally observable features using two classes of sensor platforms. Our system consists of a sophisticated autonomous surface vehicle (ASV) which strategically samples based on information provided by a team of inexpensive sensor nodes. The sensor nodes effectively extend the observational capabilities of the vehicle by capturing georeferenced samples from disparate and moving points across the region. The ASV uses this information, along with its own observations, to plan a path so as to sample points which it expects to be particularly informative. We compare our approach to a traditional exhaustive survey approach and show that we are able to effectively represent a region with less energy expenditure. We validate our approach through simulations and test the system on real robots in the field.
Autonomous Robots, 2021
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box b defining an object, a descriptor q of that object produced by a Convolutional Neural Network, and a set of classical point features within b. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state of the art. They allow localization under scale-change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
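The three-part structure of an Object Landmark (bounding box b, CNN descriptor q, point features inside b) maps naturally onto a small record type. This is a structural sketch only; field layouts and the Euclidean matching cue are our assumptions, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectLandmark:
    box: Tuple[float, float, float, float]    # bounding box b: (x, y, w, h)
    descriptor: List[float]                   # CNN object descriptor q
    keypoints: List[Tuple[float, float]]      # classical point features inside b

def descriptor_distance(a: ObjectLandmark, b: ObjectLandmark) -> float:
    """Euclidean distance between descriptors, as a coarse matching cue."""
    return sum((x - y) ** 2 for x, y in zip(a.descriptor, b.descriptor)) ** 0.5
```

The intuition behind the scale robustness is visible in the structure itself: the whole-object descriptor q can match across large scale changes where individual point features fail, and the point features refine the match once the object correspondence is established.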
Robotics and Autonomous Systems, 2017
Due to recent technological progress, Human-Robot Interaction (HRI) has become a major field of research in both engineering and artistic realms, particularly so in the last decade. The mainstream interests are, however, extremely diverse: challenges are continuously shifting, and the evolution of robot skills, as well as advances in methods for understanding their environment, radically impact the design and implementation of research prototypes. When directly deployed in public installations or artistic performances, robots help foster the next level of understanding in HRI. To this effect, this paper presents a successful interdisciplinary art-science-technology project, the Aerostabiles, leading to a new way of conducting HRI research. The project consists of developing a mechatronic, intelligent platform embodied in multiple geometric blimps (cubes) that hover and move in the air. The artistic context of this project required a number of advances in engineering on the aspects of localization and control systems, flight dynamics, and interaction strategies, and their evolution through periods of collective activities called "research-creation residencies". These events involve artists, engineers, and performers working in close collaboration, sometimes over several weeks at a time. They generate fruitful exchanges between all researchers, but most of all, they present a unique and creative way to direct and focus the robotics development. This paper presents an overview of the technical contributions from a range of expertise through the artistic drive of the Aerostabiles project.
Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No.98CH36190)
This paper presents an approach to vision-based mobile robot localization. In an attempt to capitalize on the benefits of both image and landmark-based methods, we describe a method that combines their strengths. Images are encoded as a set of visual features called landmarks. Potential landmarks are detected using an attention mechanism implemented as a measure of uniqueness. They are then selected and represented by an appearance-based encoding. Localization is performed using a landmark tracking and interpolation method which obtains an estimate accurate to a fraction of the environment sampling density. Experimental results are shown to confirm the feasibility and accuracy of the method.
Autonomous Robots, 2015
This paper presents a novel approach to modeling curiosity in a mobile robot, which is useful for monitoring and adaptive data collection tasks, especially in the context of long-term autonomous missions where preprogrammed missions are likely to have limited utility. We use a real-time topic modeling technique to build a semantic perception model of the environment, which we use to plan a path through the locations in the world with high semantic information content. The lifelong learning behavior of the proposed perception model makes it suitable for long-term exploration missions. We validate the approach using simulated exploration experiments with aerial and underwater data, and demonstrate an implementation on the Aqua underwater robot in a variety of scenarios. We find that the proposed exploration paths, biased towards locations with high topic perplexity, produce better terrain models with high discriminative power. Moreover, we show that the proposed algorithm implemented on the Aqua robot is able to perform tasks such as coral reef inspection, diver following, and sea floor exploration, without any prior training or preparation.
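The perplexity bias described above can be sketched directly: perplexity is the exponential of the Shannon entropy of a location's topic distribution, and the "curious" robot prefers locations whose topic mix it cannot yet summarize with few topics. The candidate format below is an illustration of the idea, not the paper's planner:

```python
import math

def perplexity(topic_dist):
    """exp(Shannon entropy) of a topic distribution.

    A location dominated by one known topic has perplexity near 1;
    an even mix over k topics has perplexity k (maximal novelty).
    """
    h = -sum(p * math.log(p) for p in topic_dist if p > 0)
    return math.exp(h)

def most_curious(candidates):
    """Pick the (name, topic_dist) candidate with highest topic perplexity."""
    return max(candidates, key=lambda c: perplexity(c[1]))
```

A path planner biased this way repeatedly steers toward whichever reachable location currently scores highest, which is what produces the discriminative terrain models reported in the abstract.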
Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180)
We examine the problem of learning a visual map of the environment while maintaining an accurate pose estimate. Our approach is based on using two robots in a simple collaborative scheme; in practice, one of these robots can be much less capable than the other. In many mapping contexts, a robot moves about collecting data (images, in particular) which are later used to assemble a map; we can think of map construction as a training process. Without outside information, as a robot collects training images, its position estimate accumulates errors, thus corrupting its knowledge of the positions from which observations are taken. We address this problem by deploying a second robot to observe the first one as it explores, thereby establishing a virtual tether, and enabling an accurate estimate of the robot's position while it constructs the map. We refer to this process as cooperative localization. The images collected during this process are assembled into a representation that allows vision-based position estimation from a single image at a later date. In addition to developing a formalism and concept, we validate our results experimentally and present quantitative results demonstrating the performance of the method in over 90 trials.
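The "virtual tether" idea reduces, at its core, to the observer fixing the explorer's position from its own well-known pose plus a relative measurement. A minimal sketch of that observation model, assuming a 2D range/bearing measurement (sensor details and noise handling are omitted and not from the paper):

```python
import math

def observed_pose(observer_xy, observer_heading, rng, bearing):
    """Explorer position implied by the observer's range/bearing measurement.

    Because the observer holds a trusted pose, this estimate does not
    inherit the explorer's accumulated odometry drift.
    """
    a = observer_heading + bearing          # measurement direction in world frame
    return (observer_xy[0] + rng * math.cos(a),
            observer_xy[1] + rng * math.sin(a))
```

Each training image the explorer captures can then be tagged with this externally-observed position rather than its drifting dead-reckoned one, which is exactly what keeps the assembled visual map consistent.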
2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422)
We discuss a software environment for multi-robot, multi-platform mobile robot control and simulation. Like others, we have observed that mobile robotics research is greatly facilitated by the availability of a suitable simulator for both vehicle kinematics and sensing, and have created an environment that permits this while allowing a large measure of device independence. By using a multiprocessor internet-based architecture, our platform permits multiple users to use a variety of programming interfaces (visual, script-based, or various application programming interfaces (APIs)) to rapidly prototype methods to control multiple heterogeneous robots both in simulation and in real-world settings. We present an overview of our architecture and discuss its future directions.
2014 IEEE International Conference on Robotics and Automation (ICRA), 2014
In this paper we present an algorithm for finding a distance-optimal rendezvous location with respect to both the initial and target locations of the mobile agents. These agents can be humans or robots who need to meet and split while performing a collaborative task. Our aim is to embed the meeting process within a background activity such that the agents travel through the rendezvous location while taking the shortest paths to their respective target locations. We analyze this problem in a street network scenario with two agents who are given their individual scheduled routes to complete with an underlying common goal. The agents are allowed to select any combination of the waypoints along their routes as long as they travel the shortest path and pass through the same potential rendezvous location. The total number of path combinations that the agents need to evaluate for the shortest path increases rapidly with the number of waypoints along their routes. We address this computational cost by proposing a combination of Euclidean and street network distances for a trade-off between the number of queries and a distance-optimal solution.
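The core evaluation can be sketched with Euclidean distance standing in as the cheap lower bound on street-network distance: for each candidate rendezvous point, sum both agents' detours through it and keep the minimum. The candidate points and endpoints below are invented; the real algorithm would re-check promising candidates with (more expensive) street-network distance queries:

```python
import math

def dist(a, b):
    # Euclidean distance: a cheap lower bound on street-network distance
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_rendezvous(candidates, start_a, goal_a, start_b, goal_b):
    """Candidate meeting point minimizing the agents' combined travel."""
    def total(w):
        return (dist(start_a, w) + dist(w, goal_a) +
                dist(start_b, w) + dist(w, goal_b))
    return min(candidates, key=total)
```

Since Euclidean distance never exceeds the true street distance, a candidate whose Euclidean total already exceeds the best street-network total found so far can be pruned without issuing any street-distance query, which is the source of the query/optimality trade-off.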
Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)
This paper presents a new sensing modality for multirobot exploration. The approach is based on using a pair of robots that observe each other, and act in concert to reduce odometry errors. We assume the robots can both directly sense nearby obstacles and see each other. The proposed approach improves the quality of the map by reducing the inaccuracies that occur over time from dead reckoning errors. Furthermore, by exploiting the ability of the robots to see each other, we can detect opaque obstacles in the environment independently of their surface reflectance properties. Two different algorithms, based on the size of the environment, are introduced, with a complexity analysis, and experimental results in simulation and with real robots.
2011 Canadian Conference on Computer and Robot Vision, 2011
In this paper we present the computer vision component of a 6DOF pose estimation algorithm to be used by an underwater robot. Our goal is to evaluate which feature trackers enable us to accurately estimate the 3D positions of features, as quickly as possible. To this end, we perform an evaluation of available detectors, descriptors, and matching schemes over different underwater datasets. We are interested in identifying combinations in this search space that are suitable for use in structure-from-motion algorithms, and more generally, vision-aided localization algorithms that use a monocular camera. Our evaluation includes frame-by-frame statistics of desired attributes, as well as measures of robustness expressed as the length of tracked features. We compare the fit of each combination based on the following attributes: the number of extracted keypoints per frame, the length of feature tracks, the average tracking time per frame, and the number of false-positive matches between frames. Several datasets were used, collected in different underwater locations and under different lighting and visibility conditions.
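The track-length robustness measure described above amounts to simple summary statistics over feature tracks, where each track is the list of per-frame observations of one feature. A minimal sketch (the track representation is our assumption):

```python
def track_stats(tracks):
    """Summary statistics for a set of feature tracks.

    Longer mean track length indicates a detector/descriptor/matcher
    combination that follows the same feature robustly across frames.
    """
    lengths = [len(t) for t in tracks]
    return {
        "num_tracks": len(tracks),
        "mean_length": sum(lengths) / len(lengths),
        "max_length": max(lengths),
    }
```

Comparing these statistics across detector/descriptor/matcher combinations, together with per-frame keypoint counts and timing, gives the kind of evaluation table the paper reports.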
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011
This paper presents an adaptation of a vision and inertial-based state estimation algorithm for use in an underwater robot. The proposed approach combines information from an Inertial Measurement Unit (IMU) in the form of linear accelerations and angular velocities, depth data from a pressure sensor, and feature tracking from a monocular downward-facing camera to estimate the 6DOF pose of the vehicle. To validate the approach, we present extensive experimental results from field trials conducted in underwater environments with varying lighting and visibility conditions, and we demonstrate successful application of the technique underwater.
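The flavor of combining an integrated inertial prediction with an absolute pressure-depth measurement can be illustrated with a one-axis complementary filter. This is a deliberately simplified stand-in; the paper's estimator fuses all 6DOF states, and the gain here is an arbitrary illustration value:

```python
def complementary_depth(depth_meas, vz, dt, prev_depth, gain=0.98):
    """Blend IMU-integrated vertical motion with absolute pressure depth.

    The prediction (prev_depth + vz*dt) is smooth but drifts;
    the pressure measurement is drift-free but noisy. The gain
    weights the prediction, letting the measurement slowly correct drift.
    """
    predicted = prev_depth + vz * dt
    return gain * predicted + (1.0 - gain) * depth_meas
```

Running this each IMU tick keeps the depth estimate responsive between pressure readings while anchoring it to the absolute sensor over time; the same predict-then-correct pattern, generalized, underlies the full visual-inertial state estimator.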
2014 Canadian Conference on Computer and Robot Vision, 2014
This paper presents a novel approach to modeling curiosity in a mobile robot, which is useful for monitoring and adaptive data collection tasks. We use ROST, a real-time topic modeling framework, to build a semantic perception model of the environment, which we use to plan a path through the locations in the world with high semantic information content. We demonstrate the approach using the Aqua robot in a variety of different scenarios, and find that the robot is able to perform tasks such as coral reef inspection, diver following, and sea floor exploration, without any prior training or preparation.
In this paper we present an efficient method for visual mapping of open water environments using ... more In this paper we present an efficient method for visual mapping of open water environments using exploration and reward identification followed by selective visual coverage. In particular, we consider the problem of visual mapping a shallow water coral reef to provide an environmental assay. Our approach has two stages based on two classes of sensors: bathymetric mapping and visual mapping. We use a robotic boat to collect bathymetric data using a sonar sensor for the first stage and video data using a visual sensor for the second stage. Since underwater environments have varying visibility, we use the sonar map to select regions of potential value, and efficiently construct the bathymetric map from sparse data using a Gaussian Process model. In the second stage, we collect visual data only where there is good potential pay-off, and we use a reward-driven finite-horizon model akin to a Markov Decision Process to extract the maximum amount of valuable data in the least amount of time. We show that a very small number of sonar readings suffice on a typical fringing reef. We validate and demonstrate our surveying technique using real robot in the presence of real world conditions such as wind and current. We also show that our proposed approach is suitable for visual surveying by presenting a visual collage of the reef.
This paper addresses the problem of searching multiple non-adversarial targets using a mobile sea... more This paper addresses the problem of searching multiple non-adversarial targets using a mobile searcher in an obstacle-free environment. In practice, we are particularly interested in marine applications where the targets drift on the ocean surface. These targets can be surface sensors used for marine environmental monitoring, drifting debris, or lost divers in open water. Searching for a floating target requires prior knowledge about the search region and an estimate of the target's motion. This task becomes challenging when searching for multiple targets where persistent searching for one of the targets can result in the loss of other targets. Hence, the searcher needs to trade-off between guaranteed and fast searches. We propose three classes of search strategies for addressing the multi-target search problem. These include, data-independent, probabilistic and hybrid search. The dataindependent search strategy follow a pre-defined search pattern and schedule. The probabilistic search strategy is guided by the estimated probability distribution of the search target. The hybrid strategy combines data-independent search patterns with a probabilistic search schedule. We evaluate these search strategies in simulation and compare their performance characteristics in the context of searching multiple drifting targets using an Autonomous Surface Vehicle (ASV).
In this paper, we examine multi-target search, where one or more targets must be found by a movin... more In this paper, we examine multi-target search, where one or more targets must be found by a moving robot. Given the target's initial probability distribution or the expected search region, we present an analysis of three search strategies-Global maxima search, Local maxima search, and Spiral search. We aim at minimizing the meantime to find and maximizing the total probability of finding the target. This leads to two types of illustrative performance metrics: minimum time capture and guaranteed capture. We validate the search strategies with respect to these two performance metrics. In addition, we study the effect of different target distributions on the performance of the search strategies. We also consider the practical realization of the proposed algorithms for multi-target search. The search strategies are analytically evaluated, through simulations and illustrative deployments, in open-water with an Autonomous Surface Vehicle (ASV) and drifting sensor targets.
In this paper we present an algorithm for finding a distance optimal rendezvous location with res... more In this paper we present an algorithm for finding a distance optimal rendezvous location with respect to both initial and target locations of the mobile agents. These agents can be humans or robots, who need to meet and split while performing a collaborative task. Our aim is to embed the meeting process within a background activity such that the agents travel through the rendezvous location while taking the shortest paths to their respective target locations. We analyze this problem in a street network scenario with two agents who are given their individual scheduled routes to complete with an underlying common goal. The agents are allowed to select any combination of the waypoints along their routes as long as they travel the shortest path and pass through the same potential rendezvous location. The total number of path combinations that the agents need to evaluate for the shortest path increases rapidly with the number of waypoints along their routes. We address this computational cost by proposing a combination of Euclidean and street network distances for a trade-off between the number of queries and a distance optimal solution.
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
We address the problem of learning vision-based, collision-avoiding, and target-selecting control... more We address the problem of learning vision-based, collision-avoiding, and target-selecting controllers in 3D, specifically in underwater environments densely populated with coral reefs. Using a highly maneuverable, dynamic, six-legged (or flippered) vehicle to swim underwater, we exploit real time visual feedback to make close-range navigation decisions that would be hard to achieve with other sensors. Our approach uses computer vision as the sole mechanism for both collision avoidance and visual target selection. In particular, we seek to swim close to the reef to make observations while avoiding both collisions and barren, coral-deprived regions. To carry out path selection while avoiding collisions, we use monocular image data processed in real time. The proposed system uses a convolutional neural network that takes an image from a forward-facing camera as input and predicts unscaled and relative path changes. The network is trained to encode our desired obstacle-avoidance and reef-exploration objectives via supervised learning from human-labeled data. The predictions from the network are transformed into absolute path changes via a combination of a temporally-smoothed proportional controller for heading targets and a low-level motor controller. This system enables safe and autonomous coral reef navigation in underwater environments. We validate our approach using an untethered and fully autonomous robot swimming through coral reef in the open ocean. Our robot successfully traverses 1000 m of the ocean floor collision-free while collecting close-up footage of coral reefs.
2020 IEEE International Conference on Robotics and Automation (ICRA), 2020
We present a method for learning to drive on smooth terrain while simultaneously avoiding collisi... more We present a method for learning to drive on smooth terrain while simultaneously avoiding collisions in challenging off-road and unstructured outdoor environments using only visual inputs. Our approach applies a hybrid model-based and model-free reinforcement learning method that is entirely self-supervised in labeling terrain roughness and collisions using on-board sensors. Notably, we provide both first-person and overhead aerial image inputs to our model. We find that the fusion of these complementary inputs improves planning foresight and makes the model robust to visual obstructions. Our results show the ability to generalize to environments with plentiful vegetation, various types of rock, and sandy trails. During evaluation, our policy attained 90% smooth terrain traversal and reduced the proportion of rough terrain driven over by 6.1 times compared to a model using only firstperson imagery. Video and project details can be found at www.cim.mcgill.ca/mrl/offroad driving/.
Springer Proceedings in Advanced Robotics, 2017
This paper presents experimental insights from the deployment of an ensemble of heterogeneous autonomous sensor systems over a shallow coral reef. Visual, inertial, GPS, and ultrasonic data collected are compared and correlated to produce a comprehensive view of the health of the coral reef. Coverage strategies are discussed with a focus on the use of informed decisions to maximize the information collected during a fixed period of time.
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
In this paper, we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV; the 6D pose in camera coordinates is then determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation and orientation error over state-of-the-art methods. The code is publicly available.
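The robust-estimation step named in this abstract is RANSAC, which fits a model to minimal random samples and keeps the hypothesis with the most inliers. The generic loop can be sketched as below; for self-containment the example fits a 2D line rather than a 6D pose, and all names and thresholds are illustrative, not the paper's implementation (which fits a pose to 2D-3D keypoint correspondences via PnP).

```python
import random

def ransac(points, fit, error, sample_size, threshold, iterations=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and return the model with the largest inlier set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = rng.sample(points, sample_size)
        model = fit(sample)
        inliers = [p for p in points if error(model, p) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy model: a 2D line y = a*x + b fitted through a two-point sample.
def fit_line(sample):
    (x1, y1), (x2, y2) = sample
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_error(model, point):
    a, b = model
    x, y = point
    return abs(a * x + b - y)
```

Swapping `fit_line`/`line_error` for a PnP solver and a reprojection-error measure yields the pose-estimation variant used in the paper.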
2017 14th Conference on Computer and Robot Vision (CRV), 2017
This paper addresses distributed data sampling in marine environments using robotic devices. We present a method to strategically sample locally observable features using two classes of sensor platforms. Our system consists of a sophisticated autonomous surface vehicle (ASV) which strategically samples based on information provided by a team of inexpensive sensor nodes. The sensor nodes effectively extend the observational capabilities of the vehicle by capturing georeferenced samples from disparate and moving points across the region. The ASV uses this information, along with its own observations, to plan a path so as to sample points which it expects to be particularly informative. We compare our approach to a traditional exhaustive survey approach and show that we are able to effectively represent a region with less energy expenditure. We validate our approach through simulations and test the system on real robots in the field.
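Planning a path toward points expected to be informative, as this abstract describes, can be sketched with a greedy budgeted selection. The scoring here (distance to the nearest existing observation as a stand-in for model uncertainty) and the parameter names are illustrative assumptions, not the paper's own information measure.

```python
import math

def greedy_sample_plan(candidates, observed, budget, start):
    """Greedily pick candidate sampling sites, preferring points far from
    existing observations while charging travel distance against a budget."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    plan, pos, remaining = [], start, budget
    pool = list(candidates)
    obs = list(observed)
    while pool:
        # Score = crude informativeness (nearest-observation distance)
        # minus the cost of travelling there from the current position.
        best = max(pool, key=lambda c: min(dist(c, o) for o in obs) - dist(pos, c))
        if dist(pos, best) > remaining:
            break
        remaining -= dist(pos, best)
        plan.append(best)
        obs.append(best)  # a planned sample counts as a future observation
        pool.remove(best)
        pos = best
    return plan
```

In the paper's setting, `observed` would include the georeferenced samples reported by the drifting sensor nodes, so the ASV avoids re-sampling regions they already cover.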
Autonomous Robots, 2021
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box b defining an object, a descriptor q of that object produced by a Convolutional Neural Network, and a set of classical point features within b. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
Robotics and Autonomous Systems, 2017
Due to recent technological progress, Human-Robot Interaction (HRI) has become a major field of research in both engineering and artistic realms, particularly in the last decade. Mainstream interests are, however, extremely diverse: challenges shift continuously, and the evolution of robot skills, together with advances in methods for understanding their environment, radically impacts the design and implementation of research prototypes. When directly deployed in public installations or artistic performances, robots help foster the next level of understanding in HRI. To this end, this paper presents a successful interdisciplinary art-science-technology project, the Aerostabiles, leading to a new way of conducting HRI research. The project consists of developing a mechatronic, intelligent platform embodied in multiple geometric blimps (cubes) that hover and move in the air. The artistic context of this project required a number of advances in engineering on the aspects of localization and control systems, flight dynamics, and interaction strategies, and their evolution through periods of collective activity called "research-creation residencies". These events involve artists, engineers, and performers working in close collaboration, sometimes over several weeks at a time. They generate fruitful exchanges between all researchers, but most of all, they present a unique and creative way to direct and focus the robotics development. This paper presents an overview of the technical contributions from a range of expertise, guided by the artistic drive of the Aerostabiles project.
Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No.98CH36190)
This paper presents an approach to vision-based mobile robot localization. In an attempt to capitalize on the benefits of both image and landmark-based methods, we describe a method that combines their strengths. Images are encoded as a set of visual features called landmarks. Potential landmarks are detected using an attention mechanism implemented as a measure of uniqueness. They are then selected and represented by an appearance-based encoding. Localization is performed using a landmark tracking and interpolation method which obtains an estimate accurate to a fraction of the environment sampling density. Experimental results are shown to confirm the feasibility and accuracy of the method.
Autonomous Robots, 2015
This paper presents a novel approach to modeling curiosity in a mobile robot, which is useful for monitoring and adaptive data collection tasks, especially in the context of long-term autonomous missions where preprogrammed missions are likely to have limited utility. We use a real-time topic modeling technique to build a semantic perception model of the environment, which we then use to plan a path through the locations in the world with high semantic information content. The lifelong learning behavior of the proposed perception model makes it suitable for long-term exploration missions. We validate the approach in simulated exploration experiments using aerial and underwater data, and demonstrate an implementation on the Aqua underwater robot in a variety of scenarios. We find that the proposed exploration paths, which are biased towards locations with high topic perplexity, produce better terrain models with high discriminative power. Moreover, we show that the proposed algorithm implemented on the Aqua robot is able to perform tasks such as coral reef inspection, diver following, and sea floor exploration, without any prior training or preparation.
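Biasing a path toward locations with high topic perplexity, as this abstract describes, can be illustrated with a one-step greedy selection. The scoring function and the Manhattan step cost are assumptions for illustration, not the paper's exact planner.

```python
def choose_next_location(current, neighbours, perplexity, weight=1.0):
    """Pick the adjacent grid location maximizing topic perplexity minus a
    travel penalty.  `perplexity` maps a location to the topic model's
    surprise there; high perplexity marks semantically novel terrain."""
    def cost(a, b):
        # Manhattan distance as a simple grid step cost.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    return max(neighbours, key=lambda n: perplexity[n] - weight * cost(current, n))
```

Repeating this choice as the topic model updates online yields a curiosity-driven trajectory that lingers where the semantic model is still uncertain.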
Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180)
We examine the problem of learning a visual map of the environment while maintaining an accurate pose estimate. Our approach is based on using two robots in a simple collaborative scheme; in practice, one of these robots can be much less capable than the other. In many mapping contexts, a robot moves about collecting data (images, in particular) which are later used to assemble a map; we can think of map construction as a training process. Without outside information, as a robot collects training images, its position estimate accumulates errors, thus corrupting its knowledge of the positions from which observations are taken. We address this problem by deploying a second robot to observe the first one as it explores, thereby establishing a virtual tether, and enabling an accurate estimate of the robot's position while it constructs the map. We refer to this process as cooperative localization. The images collected during this process are assembled into a representation that allows vision-based position estimation from a single image at a later date. In addition to developing a formalism and concept, we validate our results experimentally and present quantitative results demonstrating the performance of the method in over 90 trials.
2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422)
We discuss a software environment for multi-robot, multi-platform mobile robot control and simulation. Like others, we have observed that mobile robotics research is greatly facilitated by the availability of a suitable simulator for both vehicle kinematics and sensing, and have created an environment that permits this while allowing a large measure of device independence. By using a multiprocessor internet-based architecture, our platform permits multiple users to use a variety of programming interfaces (visual, script-based, or various application programming interfaces (APIs)) to rapidly prototype methods to control multiple heterogeneous robots both in simulation and in real-world settings. We present an overview of our architecture and discuss its future directions.
2014 IEEE International Conference on Robotics and Automation (ICRA), 2014
In this paper we present an algorithm for finding a distance optimal rendezvous location with respect to both initial and target locations of the mobile agents. These agents can be humans or robots, who need to meet and split while performing a collaborative task. Our aim is to embed the meeting process within a background activity such that the agents travel through the rendezvous location while taking the shortest paths to their respective target locations. We analyze this problem in a street network scenario with two agents who are given their individual scheduled routes to complete with an underlying common goal. The agents are allowed to select any combination of the waypoints along their routes as long as they travel the shortest path and pass through the same potential rendezvous location. The total number of path combinations that the agents need to evaluate for the shortest path increases rapidly with the number of waypoints along their routes. We address this computational cost by proposing a combination of Euclidean and street network distances for a trade-off between the number of queries and a distance optimal solution.
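The core objective in this abstract, picking a rendezvous that minimizes the agents' combined detour, can be sketched as below. This is a simplified Euclidean-only sketch under the assumption that candidates are waypoints shared by both routes; the paper itself mixes Euclidean and street-network distances and considers a richer candidate set.

```python
import math

def best_rendezvous(route_a, route_b):
    """Among waypoints appearing on both agents' routes, pick the
    rendezvous minimizing the combined detour.  Each route is a list of
    2D waypoints; detour is the extra Euclidean path length of going
    start -> candidate -> goal versus start -> goal directly."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def detour(route, m):
        start, goal = route[0], route[-1]
        return d(start, m) + d(m, goal) - d(start, goal)

    shared = [w for w in route_a if w in route_b]
    if not shared:
        return None  # no common waypoint: no rendezvous under this simplification
    return min(shared, key=lambda m: detour(route_a, m) + detour(route_b, m))
```

Replacing `d` with a street-network shortest-path query gives the more expensive but distance-optimal variant discussed in the paper.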
Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)
This paper presents a new sensing modality for multirobot exploration. The approach is based on using a pair of robots that observe each other, and act in concert to reduce odometry errors. We assume the robots can both directly sense nearby obstacles and see each other. The proposed approach improves the quality of the map by reducing the inaccuracies that occur over time from dead reckoning errors. Furthermore, by exploiting the ability of the robots to see each other, we can detect opaque obstacles in the environment independently of their surface reflectance properties. Two different algorithms, based on the size of the environment, are introduced, with a complexity analysis, and experimental results in simulation and with real robots.
2011 Canadian Conference on Computer and Robot Vision, 2011
In this paper we present the computer vision component of a 6DOF pose estimation algorithm to be used by an underwater robot. Our goal is to evaluate which feature trackers enable us to accurately estimate the 3D positions of features, as quickly as possible. To this end, we perform an evaluation of available detectors, descriptors, and matching schemes, over different underwater datasets. We are interested in identifying combinations in this search space that are suitable for use in structure from motion algorithms, and more generally, vision-aided localization algorithms that use a monocular camera. Our evaluation includes frame-by-frame statistics of desired attributes, as well as measures of robustness expressed as the length of tracked features. We compare the fit of each combination based on the following attributes: number of extracted keypoints per frame, length of feature tracks, average tracking time per frame, and number of false-positive matches between frames. Several datasets were used, collected in different underwater locations and under different lighting and visibility conditions.
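One of the robustness measures named in this abstract, the length of tracked features, reduces to simple summary statistics over per-feature track lengths. A minimal sketch (the 5-frame cutoff is an arbitrary illustration, not the paper's choice):

```python
def track_length_stats(tracks):
    """Summarize tracked-feature robustness.

    `tracks` is a list of per-feature track lengths, i.e. the number of
    consecutive frames each feature survived matching.  Returns the mean
    track length and the fraction of tracks lasting at least 5 frames.
    """
    if not tracks:
        return 0.0, 0.0
    mean = sum(tracks) / len(tracks)
    long_frac = sum(1 for t in tracks if t >= 5) / len(tracks)
    return mean, long_frac
```

Computing such statistics per detector/descriptor/matcher combination gives a direct basis for the comparison the paper reports.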
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011
This paper presents an adaptation of a vision and inertial-based state estimation algorithm for use in an underwater robot. The proposed approach combines information from an Inertial Measurement Unit (IMU) in the form of linear accelerations and angular velocities, depth data from a pressure sensor, and feature tracking from a monocular downward facing camera to estimate the 6DOF pose of the vehicle. To validate the approach, we present extensive experimental results from field trials conducted in underwater environments with varying lighting and visibility conditions, and we demonstrate successful application of the technique underwater.
2014 Canadian Conference on Computer and Robot Vision, 2014
This paper presents a novel approach to modeling curiosity in a mobile robot, which is useful for monitoring and adaptive data collection tasks. We use ROST, a real-time topic modeling framework, to build a semantic perception model of the environment, which we then use to plan a path through the locations in the world with high semantic information content. We demonstrate the approach using the Aqua robot in a variety of scenarios, and find that the robot is able to perform tasks such as coral reef inspection, diver following, and sea floor exploration, without any prior training or preparation.