Learning-based robot control with localized sparse online Gaussian process
Related papers
Gaussian Processes for Data-Efficient Learning in Robotics and Control
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically need many interactions with the system to learn controllers, which is impractical and time-consuming on real systems such as robots. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this article, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
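The abstract above hinges on a GP transition model whose posterior variance is carried into planning. A minimal sketch of the underlying GP prediction step is given below; the kernel choice, hyperparameters, and the toy 1-D "transition" data are illustrative assumptions, not taken from the paper. Note how the predictive variance collapses near observed data and reverts to the prior far from it, which is exactly the uncertainty signal the planner exploits.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-4):
    # GP posterior mean and variance at test inputs Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss - v.T @ v)
    return mean, var

# Toy 1-D transition data: next state as a function of current state.
X = np.linspace(-2, 2, 20).reshape(-1, 1)
y = np.sin(X).ravel()
mean, var = gp_predict(X, y, np.array([[0.0], [10.0]]))
```

Near the training data (x = 0) the variance is tiny; far away (x = 10) it approaches the prior variance of 1, flagging that any long-term rollout through that region is unreliable.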
Robot learning by demonstration with local Gaussian process regression
In recent years there has been tremendous progress in robotic systems, but also increased expectations: a robot should be easy to program and reliable in task execution. Learning from Demonstration (LfD) offers a very promising alternative to classical engineering approaches. LfD is a natural way for humans to interact with robots and will be an essential part of future service robots. In this work we first review heteroscedastic Gaussian processes and show how they can be used to encode a task. We then introduce a new Gaussian process regression model that clusters the input space into smaller subsets, similar to prior work. In the next step we show how these approaches fit into the Learning by Demonstration framework of [2]. Finally, we present an experiment on a real robot arm that shows how all these approaches interact.
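The idea of clustering the input space into smaller subsets, each served by its own GP, can be sketched as follows. This is a rough illustration under assumed mechanics (distance-based cluster assignment, one exact GP per cluster, nearest-center query); the class name, thresholds, and kernel settings are invented for the sketch and are not the paper's actual model.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class LocalGP:
    """Cluster inputs by distance; keep one small exact GP per cluster."""
    def __init__(self, max_dist=1.0, noise=1e-3):
        self.centers, self.data = [], []
        self.max_dist, self.noise = max_dist, noise

    def add(self, x, y):
        # Assign (x, y) to the nearest cluster, or open a new one.
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            i = int(np.argmin(d))
            if d[i] <= self.max_dist:
                X, Y = self.data[i]
                self.data[i] = (np.vstack([X, x]), np.append(Y, y))
                self.centers[i] = self.data[i][0].mean(0)
                return
        self.centers.append(x.copy())
        self.data.append((x.reshape(1, -1), np.array([y])))

    def predict(self, x):
        # Query only the nearest local model, keeping inference cheap.
        i = int(np.argmin([np.linalg.norm(x - c) for c in self.centers]))
        X, Y = self.data[i]
        K = rbf(X, X) + self.noise * np.eye(len(X))
        return float(rbf(x.reshape(1, -1), X) @ np.linalg.solve(K, Y))
```

The payoff is that each GP solve involves only one cluster's points instead of the full dataset, which is what makes this family of models viable online.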
Online Incremental Learning of Inverse Dynamics Incorporating Prior Knowledge
Lecture Notes in Computer Science, 2011
Recent approaches to model-based manipulator control involve data-driven learning of the inverse dynamics relationship of a manipulator, eliminating the need for any knowledge of the system model. Ideally, such algorithms should be able to process large amounts of data in an online and incremental manner, thus allowing the system to adapt to changes in its model structure or parameters. Locally Weighted Projection Regression (LWPR) and other non-parametric regression techniques have been applied to learn manipulator inverse dynamics. However, a common issue amongst these learning algorithms is that the system is unable to generalize well outside of regions where it has been trained. Furthermore, learning commences entirely from 'scratch', making no use of any a priori knowledge which may be available. In this paper, an online, incremental learning algorithm incorporating prior knowledge is proposed. Prior knowledge is incorporated into the LWPR framework by initializing the local linear models with a first-order approximation of the available prior information. It is shown that the proposed approach allows the system to operate well even without any initial training data, and further improves performance with additional online training.
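The key mechanic, seeding a local linear model with a first-order expansion of a prior model and then refining it online, can be sketched for a single receptive field. Everything here (class name, a single scalar input, the gradient-step update in place of LWPR's projection-based regression) is a simplifying assumption for illustration, not the paper's implementation.

```python
import numpy as np

class LocalModel:
    """One LWPR-style receptive field: a weighted local linear model.

    beta can be seeded from a first-order expansion of a prior model,
    so predictions are sensible before any data arrives."""
    def __init__(self, center, width, prior=None, eps=1e-3):
        self.c, self.D = center, width
        if prior is not None:
            # First-order approximation of the prior around the center.
            g = (prior(center + eps) - prior(center - eps)) / (2 * eps)
            self.beta = np.array([g, prior(center)])   # slope, offset
        else:
            self.beta = np.zeros(2)

    def weight(self, x):
        # Gaussian receptive-field activation.
        return np.exp(-0.5 * (x - self.c) ** 2 / self.D)

    def predict(self, x):
        return self.beta @ np.array([x - self.c, 1.0])

    def update(self, x, y, lr=0.1):
        # Weighted incremental least-squares step on (x, y).
        phi = np.array([x - self.c, 1.0])
        err = y - self.beta @ phi
        self.beta += lr * self.weight(x) * err * phi
```

Before any training data, `predict` already returns the prior's linearization; online updates then correct whatever the prior got wrong, which matches the abstract's claim of usable performance from the start.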
Exploiting Robot Redundancy for Online Learning and Control
Zenodo (CERN European Organization for Nuclear Research), 2022
Accurate trajectory tracking in the task space is critical in many robotics applications. Model-based robot controllers are able to ensure very good tracking but lose effectiveness in the presence of model uncertainties. On the other hand, online learning-based control laws can handle poor dynamic modeling, as long as prediction errors are kept small and decrease over time. However, in the case of redundant robots directly controlled in the task space, this condition is not usually met. We present an online learning-based control framework that exploits robot redundancy so as to increase the overall performance and shorten the learning transient. The validity of the proposed approach is shown through a comparative study conducted in simulation on a KUKA LWR4+ robot.
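Exploiting redundancy for a task-space objective is classically done with the pseudoinverse plus null-space projection, which the sketch below illustrates for an assumed 3-link planar arm (link lengths, function names, and the secondary velocity are all illustrative; the paper's actual framework is not reproduced here).

```python
import numpy as np

def jacobian(q, l=(1.0, 1.0, 1.0)):
    # 2x3 position Jacobian of a 3-link planar arm.
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(l[j] * np.sin(s[j]) for j in range(i, 3))
        J[1, i] = sum(l[j] * np.cos(s[j]) for j in range(i, 3))
    return J

def redundancy_resolution(q, xdot, qdot0):
    # qdot = J# xdot + (I - J# J) qdot0: track the task velocity,
    # spend the leftover null-space motion on a secondary objective
    # (e.g., steering the arm toward well-learned regions).
    J = jacobian(q)
    Jp = np.linalg.pinv(J)
    N = np.eye(3) - Jp @ J          # null-space projector
    return Jp @ xdot + N @ qdot0
```

The projector guarantees that the secondary motion `qdot0` never disturbs the task-space velocity, which is what lets redundancy be used to shape the data seen by the online learner without hurting tracking.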
On-Line Dynamic Model Learning for Manipulator Control
10th IFAC Symposium on Robot Control, 2012
This paper proposes an approach for online learning of the dynamic model of a robot manipulator. The dynamic model is formulated as a weighted sum of locally linear models, and Locally Weighted Projection Regression (LWPR) is used to learn the models based on training data obtained during operation. The LWPR model can be initialized with partial knowledge of rigid body parameters to improve the initial performance. The resulting dynamic model is used to implement a model-based controller. Both feedforward and feedback configurations are investigated. The proposed approach is tested on an industrial robot, and shown to outperform independent joint and fixed model-based control.
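The feedforward configuration mentioned above is typically a computed-torque-style law: the learned dynamic model supplies the feedforward torque and a PD term handles the residual error. The sketch below assumes that structure; the function name, gain values, and the dummy inverse-dynamics model in the usage are illustrative only.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, inv_dyn, Kp, Kd):
    # Learned feedforward term plus PD feedback on the tracking error:
    # tau = f(q, qd, qdd_des) + Kp (q_des - q) + Kd (qd_des - qd)
    tau_ff = inv_dyn(q, qd, qdd_des)
    return tau_ff + Kp @ (q_des - q) + Kd @ (qd_des - qd)
```

With an accurate learned model the feedback gains can be kept low, so the controller is compliant; as the model degrades, the PD term takes over, which is the usual trade-off these experiments probe.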
2010
Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale to complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior.
Evaluating Techniques for Learning a Feedback Controller for Low-Cost Manipulators
Robust, tractable manipulation in unstructured environments is a prominent hurdle in robotics. Learning algorithms for controlling robotic arms have introduced elegant solutions to the complexities faced in such systems. A novel Reinforcement Learning (RL) method, Gaussian Process Dynamic Programming (GPDP), yields promising results for closed-loop control of a low-cost manipulator; however, research surrounding most RL techniques lacks a breadth of comparable experiments into the viability of particular learning techniques on equivalent environments. We introduce several model-based learning agents as mechanisms to control a noisy, low-cost robotic system. The agents were tested in a simulated domain, learning closed-loop policies for a simple task with no prior information. The fidelity of the simulations is then confirmed by applying GPDP to a physical system.
Learning inverse dynamics for redundant manipulator control
2010 International Conference on Autonomous and Intelligent Systems, AIS 2010, 2010
High-performance control of robotic systems, including the new generation of humanoid, assistive and entertainment robots, requires adequate knowledge of the dynamics of the system. This can be problematic in the presence of modeling uncertainties, as the performance of classical, model-based controllers is highly dependent upon accurate knowledge of the system. In addition, future robotic systems such as humanoids are likely to be redundant, requiring a mechanism for redundancy resolution when performing lower degree-of-freedom tasks. In this paper, a learning approach to estimating the inverse dynamic equations is presented. Locally Weighted Projection Regression (LWPR) is used to learn the inverse dynamics of a manipulator in both joint and task space, and the resulting controllers are used to drive 3- and 4-DOF robots in simulation. The performance of the learning controllers is compared to a traditional model-based control method and is also shown to be a viable control method for a redundant system.
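Several of the abstracts above rely on LWPR's core prediction rule: the global output is a normalized, weighted blend of many local linear models, one per Gaussian receptive field. A minimal 1-D sketch of that blend is given below; the flat array layout for the model parameters is an assumption made for brevity.

```python
import numpy as np

def lwpr_predict(x, centers, widths, betas):
    # Gaussian receptive-field weights for each local model.
    w = np.exp(-0.5 * (x - centers) ** 2 / widths)
    # Each local model is linear in (x - center): betas[k] = (slope, offset).
    y_local = betas[:, 0] * (x - centers) + betas[:, 1]
    # Normalized weighted blend of the local predictions.
    return float(w @ y_local / w.sum())
```

Because each local model only has to be right near its own center, the blend can represent strongly nonlinear inverse dynamics while every individual update stays cheap and incremental.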
Learning to control in operational space
2008
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots (e.g., humanoid robots). In this paper, we suggest a learning approach for operational-space control as a direct inverse model learning problem.