Map Partitioning to Approximate an Exploration Strategy in Mobile Robotics
Related papers
Interleaving Planning and Control of Mobile Robots in Urban Environments Using Road-Map
2013
This paper presents a robotic solution that allows a robot to automatically reach a set of assigned goals. The challenge is to design autonomous robots that perform missions without a predefined plan. We address the stochastic salesman problem, where the goal is to visit a set of points of interest. A stochastic Road-Map is defined as a topological representation of an unstructured environment with uncertainty on path completion. The Road-Map allows us to separate deliberation from reactive control. The proposed decision making uses Markov Decision Processes (MDPs) to plan the reactive tasks to perform while some goals remain unreached. Finally, after a brief explanation of how the approach can be extended to multi-robot missions, experiments in real conditions evaluate the proposed architecture on multi-robot stochastic salesman missions.
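As a loose illustration of the kind of MDP computation described in this abstract, the sketch below runs value iteration on a tiny hand-built stochastic road-map in which each edge traversal succeeds with some probability and otherwise leaves the robot where it was. The graph, success probabilities, costs and single goal are all invented for the example; the paper's actual model and goal-set handling are richer.

```python
# Value iteration on a toy stochastic road-map (illustrative only).
# Nodes are way-points; an action is "try to traverse edge (u, v)" and
# succeeds with probability p, otherwise the robot stays at u.

GAMMA = 0.95
EDGES = {                      # hypothetical road-map: u -> [(v, p_success, cost), ...]
    "dock":  [("hall", 0.9, 1.0)],
    "hall":  [("dock", 0.9, 1.0), ("lab", 0.7, 2.0), ("yard", 0.8, 3.0)],
    "lab":   [("hall", 0.7, 2.0)],
    "yard":  [("hall", 0.8, 3.0)],
}
GOAL = "lab"                   # single point of interest for this tiny example

def value_iteration(edges, goal, iters=200):
    v = {s: 0.0 for s in edges}
    for _ in range(iters):
        new_v = {}
        for s in edges:
            if s == goal:
                new_v[s] = 0.0          # absorbing goal, no further cost
                continue
            best = float("-inf")
            for (t, p, cost) in edges[s]:
                # expected value: pay the traversal cost, succeed with probability p
                q = -cost + GAMMA * (p * v[t] + (1.0 - p) * v[s])
                best = max(best, q)
            new_v[s] = best
        v = new_v
    return v

def greedy_policy(edges, v, goal):
    pi = {}
    for s in edges:
        if s == goal:
            continue
        pi[s] = max(edges[s],
                    key=lambda e: -e[2] + GAMMA * (e[1] * v[e[0]] + (1 - e[1]) * v[s]))[0]
    return pi

values = value_iteration(EDGES, GOAL)
print(values)
print(greedy_policy(EDGES, values, GOAL))
```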
Hierarchical map building and planning based on graph partitioning
Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), 2006
Mobile robot localization and navigation require a map, the robot's internal representation of the environment. A common problem is that path planning becomes very inefficient for large maps. In this paper we address the problem of segmenting a base-level map in order to construct a higher-level representation of the space which can be used for more efficient planning. We represent the base-level map as a graph for both geometric and appearance-based space representations.
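A minimal sketch of one common way to segment a map graph, assuming the base-level map is already available as a weighted adjacency matrix: bisect it by the sign of the Fiedler vector (the second eigenvector of the graph Laplacian). The toy "two rooms joined by a door" graph is invented, and the paper's actual partitioning criterion may differ.

```python
import numpy as np

def spectral_bisection(adjacency):
    """Split a connected map graph into two segments using the Fiedler vector."""
    a = np.asarray(adjacency, dtype=float)
    degree = np.diag(a.sum(axis=1))
    laplacian = degree - a
    # Eigenvectors of the Laplacian, sorted by eigenvalue; the second one
    # (the Fiedler vector) gives a balanced, low-cut bipartition by sign.
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    return fiedler >= 0          # boolean segment label per node

# Hypothetical 6-node map graph: two "rooms" (0-2 and 3-5) joined by one door edge.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

print(spectral_bisection(adj))   # nodes 0-2 end up in one segment, 3-5 in the other
```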
Advanced Robotics, 2014
In this paper, we present a multi-robot exploration strategy for map-building. We consider an indoor structured environment and a team of robots with different sensing and motion capabilities. We combine geometric and probabilistic reasoning to propose a solution to our problem. We formalize the proposed solution using Stochastic Dynamic Programming (SDP) in states with imperfect information. Our modeling can be considered a Partially Observable Markov Decision Process (POMDP), which is optimized using SDP. We apply the dynamic programming technique in a reduced search space that allows us to incrementally explore the environment. We propose realistic sensor models and provide a method to compute the probability of the next observation given the current state of the team of robots based on a Bayesian approach. We also propose a probabilistic motion model, which allows us to take into account errors (noise) in the velocities applied to each robot. This modeling also allows us to simulate imperfect robot motions and to estimate the probability of reaching the next state given the current state. We have implemented all our algorithms, and simulation results are presented.
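The sketch below illustrates, in a heavily simplified 1-D setting, how a probabilistic motion model can be combined with a sensor model to estimate the probability of the next observation: the executed motion is sampled around the commanded one, and the observation likelihood is averaged over those samples. The corridor, motion-noise table and detector rates are assumptions for the example, not the paper's models.

```python
import random

# Toy 1-D corridor: a robot at cell x commands a move of +1 cell, but the
# executed motion is noisy; a binary sensor reports "wall ahead" with some
# error.  We estimate P(z = wall | x, command) by marginalizing over the
# probabilistic motion model with plain Monte-Carlo sampling.

WALL_CELL = 5                    # assumed wall location
P_HIT = 0.9                      # P(sensor says wall | wall really ahead)
P_FALSE = 0.1                    # P(sensor says wall | no wall ahead)
MOTION_NOISE = [(-1, 0.1), (0, 0.2), (1, 0.7)]   # executed minus commanded offsets

def sample_motion(x, command):
    r, acc = random.random(), 0.0
    for offset, p in MOTION_NOISE:
        acc += p
        if r <= acc:
            return x + command + offset
    return x + command

def prob_next_observation(x, command, n_samples=10000):
    """Monte-Carlo estimate of P(z = 'wall ahead' | x, command)."""
    hits = 0.0
    for _ in range(n_samples):
        x_next = sample_motion(x, command)
        wall_ahead = (x_next + 1 == WALL_CELL)
        hits += P_HIT if wall_ahead else P_FALSE
    return hits / n_samples

print(prob_next_observation(x=3, command=1))
```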
Exploration and map-building under uncertainty with multiple heterogeneous robots
2011 IEEE International Conference on Robotics and Automation, 2011
In this paper, we present a multi-robot exploration strategy for map-building. We consider a team of robots with different sensing and motion capabilities. We combine geometric and probabilistic reasoning to propose a solution to our problem. We formalize the proposed solution using dynamic programming in states with imperfect information. We apply the dynamic programming technique in a reduced search space that allows us to incrementally explore the environment. We propose realistic sensor models and provide a method to compute the probability of the next sensor reading given the current state of the team of robots based on a Bayesian approach.
Real-time path planning using a simulation-based Markov Decision Process
This paper introduces a novel path planning technique called MCRT which is aimed at non-deterministic, partially known, real-time domains populated with dynamically moving obstacles, such as might be found in a real-time strategy (RTS) game. The technique combines an efficient form of Monte-Carlo tree search with the randomized exploration capabilities of rapidly exploring random tree (RRT) planning. The main innovation of MCRT is in incrementally building an RRT structure with a collision-sensitive reward function, and then re-using it to efficiently solve multiple, sequential goals. We have implemented the technique in MCRT-planner, a program which solves non-deterministic path planning problems in imperfect-information RTS games, and evaluated it in comparison to four other state-of-the-art techniques. Planners embedding each technique were applied to a typical RTS game and evaluated using the game score and the planning cost. The empirical evidence demonstrates the success of MC...
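The fragment below is a stripped-down flavour of the idea rather than the MCRT planner itself: grow a single RRT over free space, attach a collision-sensitive reward to each node, and reuse the same tree to answer several sequential goal queries. The obstacles, step size, reward weights and goal trade-off are all invented for the sketch.

```python
import math, random

OBSTACLES = [((5.0, 5.0), 1.5), ((2.0, 7.0), 1.0)]     # (centre, radius), invented
STEP, BOUNDS = 0.5, (0.0, 10.0)

def clearance(p):
    return min(math.dist(p, c) - r for c, r in OBSTACLES)

def collision_free(p):
    return clearance(p) > 0.0

def grow_rrt(start, n_nodes=400):
    nodes, parent, reward = [start], {0: None}, {0: 0.0}
    while len(nodes) < n_nodes:
        sample = (random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d) if d > 0 else near
        if collision_free(new):
            j = len(nodes)
            nodes.append(new)
            parent[j] = i
            # collision-sensitive reward: accumulate a penalty for hugging obstacles
            reward[j] = reward[i] - STEP - max(0.0, 1.0 - clearance(new))
    return nodes, parent, reward

def path_to(goal, nodes, parent, reward):
    # Reuse the tree: pick the node that trades off closeness to the goal
    # against accumulated collision penalty, then walk back to the root.
    best = max(range(len(nodes)),
               key=lambda k: reward[k] - 2.0 * math.dist(nodes[k], goal))
    path = []
    while best is not None:
        path.append(nodes[best])
        best = parent[best]
    return path[::-1]

nodes, parent, reward = grow_rrt(start=(1.0, 1.0))
for goal in [(9.0, 9.0), (8.0, 2.0)]:          # sequential goals share one tree
    print(goal, len(path_to(goal, nodes, parent, reward)))
```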
Focussed processing of MDPs for path planning
16th IEEE International Conference on Tools with Artificial Intelligence, 2004
We present a heuristic-based algorithm for solving restricted Markov decision processes (MDPs). Our approach, which combines ideas from deterministic search and recent dynamic programming methods, focusses computation towards promising areas of the state space. It is thus able to significantly reduce the amount of processing required to produce a solution. We demonstrate this improvement by comparing the performance of our approach to the performance of several existing algorithms on a robotic path planning domain.
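A small sketch of the general "focussed processing" idea in the style of Real-Time Dynamic Programming: initialise values with an admissible straight-line heuristic and only back up states visited on greedy trials from the start, rather than sweeping the whole state space. The grid, slip probability and step cost are assumptions, and the paper's algorithm is not necessarily RTDP.

```python
import random

N, GOAL, START, SLIP = 6, (5, 5), (0, 0), 0.2    # toy stochastic grid, invented
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def heuristic(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])     # Manhattan lower bound

def outcomes(s, a):
    """Intended move with probability 1-SLIP, stay in place with probability SLIP."""
    nx, ny = min(N - 1, max(0, s[0] + a[0])), min(N - 1, max(0, s[1] + a[1]))
    return [((nx, ny), 1.0 - SLIP), (s, SLIP)]

def q_value(v, s, a):
    # One step of cost plus expected cost-to-go; unseen states default to the heuristic.
    return 1.0 + sum(p * v.setdefault(t, heuristic(t)) for t, p in outcomes(s, a))

def rtdp(trials=300, horizon=100):
    v = {GOAL: 0.0}
    for _ in range(trials):
        s = START
        for _ in range(horizon):
            if s == GOAL:
                break
            best_a = min(ACTIONS, key=lambda a: q_value(v, s, a))
            v[s] = q_value(v, s, best_a)                 # focussed Bellman backup
            s = random.choices(*zip(*outcomes(s, best_a)))[0]
    return v

v = rtdp()
print(f"cost-to-go from start: {v[START]:.2f}, states touched: {len(v)}")
```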
Active Visual Planning for Mobile Robot Teams Using Hierarchical POMDPs
Key challenges to widespread deployment of mobile robots include collaboration and the ability to tailor sensing and information processing to the task at hand. Partially observable Markov decision processes (POMDPs), which are an instance of probabilistic sequential decision-making, can be used to address these challenges in domains characterized by partial observability and nondeterministic action outcomes. However, such formulations tend to be computationally intractable for domains that have large complex state spaces and require robots to respond to dynamic changes. This paper presents a hierarchical decomposition of POMDPs that incorporates adaptive observation functions, constrained convolutional policies, and automatic belief propagation, enabling robots to retain capabilities for different tasks, direct sensing to relevant locations, and determine the sequence of sensing and processing algorithms best suited to any given task. A communication layer is added to the POMDP hierarchy for belief sharing and collaboration in a team of robots. All algorithms are evaluated in simulation and on physical robots, localizing target objects in dynamic indoor domains.
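A minimal sketch of the belief-level machinery such a hierarchy rests on, under invented numbers: each robot maintains a discrete Bayes-filter belief over candidate target locations, and the team fuses beliefs by multiplying and renormalising, assuming conditionally independent observations. The rooms, detector rates and fusion rule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

ROOMS = ["office", "kitchen", "lab", "hall"]   # candidate target locations, invented
P_DETECT, P_FALSE = 0.8, 0.1                   # detector hit / false-alarm rates

def observation_update(belief, observed_room, detected):
    # Likelihood of the observation for each "target is in room i" hypothesis.
    likelihood = np.where(
        np.arange(len(belief)) == ROOMS.index(observed_room),
        P_DETECT if detected else 1.0 - P_DETECT,
        P_FALSE if detected else 1.0 - P_FALSE,
    )
    posterior = likelihood * belief
    return posterior / posterior.sum()

def fuse(beliefs):
    # Belief sharing across the team: independent-opinion pooling.
    fused = np.prod(np.vstack(beliefs), axis=0)
    return fused / fused.sum()

b1 = b2 = np.full(len(ROOMS), 0.25)                    # uniform priors
b1 = observation_update(b1, "lab", detected=True)      # robot 1 looks in the lab
b2 = observation_update(b2, "office", detected=False)  # robot 2 clears the office
print(dict(zip(ROOMS, np.round(fuse([b1, b2]), 3))))
```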
Distributed Multirobot Exploration Based on Scene Partitioning and Frontier Selection
Mathematical Problems in Engineering, 2018
In mobile robotics, the exploration task consists of navigating through an unknown environment and building a representation of it. The mobile robot community has developed many approaches to solve this problem. These methods are mainly based on two key ideas. The first is the selection of promising regions to explore, and the second is the minimization of a cost function involving the distance traveled by the robots, the time it takes for them to finish the exploration, and other factors. An option to solve the exploration problem is the use of multiple robots to reduce the time needed for the task and to add fault tolerance to the system. We propose a new method to explore unknown areas by using a scene partitioning scheme and assigning weights to the frontiers between explored and unknown areas. Energy consumption is always a concern during exploration; for this reason, our method is a distributed algorithm, which helps to reduce the number of communications between robots. By us...
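One simple instance of the "weighted frontier" idea, with an invented occupancy grid: frontier cells are free cells adjacent to unknown space, each weighted by nearby unknown area minus travel distance, and robots greedily take the best remaining frontier. The grid, weighting and greedy assignment are assumptions; the paper's distributed scheme is more involved.

```python
import math

# Tiny occupancy grid: '.' free, '#' occupied, '?' unknown (all invented).
GRID = ["..##??",
        "....??",
        ".#..??",
        "....??"]

def neighbours(r, c):
    for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(GRID) and 0 <= cc < len(GRID[0]):
            yield rr, cc

def frontiers():
    cells = []
    for r, row in enumerate(GRID):
        for c, ch in enumerate(row):
            if ch == "." and any(GRID[rr][cc] == "?" for rr, cc in neighbours(r, c)):
                cells.append((r, c))
    return cells

def weight(cell, robot):
    unknown = sum(GRID[rr][cc] == "?" for rr, cc in neighbours(*cell))
    return 2.0 * unknown - math.dist(cell, robot)     # info gain minus travel cost

robots = [(0, 0), (3, 0)]
assigned = {}
for robot in robots:                                  # greedy one-frontier-per-robot
    free = [f for f in frontiers() if f not in assigned.values()]
    assigned[robot] = max(free, key=lambda f: weight(f, robot))
print(assigned)
```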
Planning exploration strategies for simultaneous localization and mapping
Robotics and Autonomous Systems, 2006
In this paper, we present techniques that allow one or multiple mobile robots to efficiently explore and model their environment. While much existing research in the area of Simultaneous Localization and Mapping (SLAM) focuses on issues related to uncertainty in sensor data, our work focuses on the problem of planning optimal exploration strategies. We develop a utility function that measures the quality of proposed sensing locations, give a randomized algorithm for selecting an optimal next sensing location, and provide methods for extracting features from sensor data and merging these into an incrementally constructed map.
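The sketch below illustrates a randomised next-best-view selection in the spirit of this abstract: sample candidate sensing positions in the already-mapped free space, score each with a utility that rewards expected newly visible area and penalises travel distance, and keep the best sample. The map layout, sensor range and utility weights are assumptions, not the paper's utility function.

```python
import math, random

SENSOR_RANGE = 2.0
KNOWN_FREE = [(x, y) for x in range(6) for y in range(6)]      # mapped cells, invented
UNKNOWN = [(x, y) for x in range(6, 10) for y in range(6)]     # unmapped cells, invented

def utility(candidate, robot):
    # Reward the amount of unknown area the sensor would cover, penalise travel.
    new_area = sum(math.dist(candidate, u) <= SENSOR_RANGE for u in UNKNOWN)
    travel = math.dist(robot, candidate)
    return new_area - 0.5 * travel

def next_sensing_location(robot, n_candidates=200):
    # Randomised algorithm: evaluate a random sample of candidates, keep the best.
    candidates = random.choices(KNOWN_FREE, k=n_candidates)
    return max(candidates, key=lambda c: utility(c, robot))

print(next_sensing_location(robot=(0, 0)))
```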
Coordinated multi-robot exploration using a segmentation of the environment
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008
This paper addresses the problem of exploring an unknown environment with a team of mobile robots. The key issue in coordinated multi-robot exploration is how to assign target locations to the individual robots such that the overall mission time is minimized. In this paper, we propose a novel approach to distribute the robots over the environment that takes into account the structure of the environment. To achieve this, it partitions the space into segments, for example, corresponding to individual rooms. Instead of only selecting frontiers between unknown and explored areas as target locations, we send the robots to the individual segments with the task to explore the corresponding area. Our approach has been implemented and tested in simulation as well as in real world experiments. The experiments demonstrate that the overall exploration time can be significantly reduced by considering the segmentation of the environment.
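A small sketch of the assignment step such coordination relies on: given robot positions and segment centroids, enumerate assignments and keep the one with the lowest total travel distance. The positions and centroids are invented, brute force over permutations is only reasonable for a handful of robots, and the paper's coordination strategy is more elaborate.

```python
import math
from itertools import permutations

# Assign each robot one environment segment (e.g. a room) so that the summed
# travel distance is minimal.  All coordinates below are invented.
robots   = {"r1": (0.0, 0.0), "r2": (9.0, 1.0), "r3": (5.0, 8.0)}
segments = {"room_a": (1.0, 2.0), "room_b": (8.0, 2.0), "corridor": (5.0, 6.0)}

def best_assignment(robots, segments):
    names, targets = list(robots), list(segments)
    best, best_cost = None, float("inf")
    for perm in permutations(targets):
        cost = sum(math.dist(robots[r], segments[s]) for r, s in zip(names, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(names, perm)), cost
    return best, best_cost

assignment, cost = best_assignment(robots, segments)
print(assignment, round(cost, 2))
```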