Ignacio de Loyola Páez Ubieta | University of Alicante (Universidad de Alicante)

Papers by Ignacio de Loyola Páez Ubieta

Research paper thumbnail of Robotic Approach Trajectory Using Reinforcement Learning with Dual Quaternions

ROBOT 2024: Seventh Iberian Robotics Conference, 2024

Manipulation tasks in robotics usually involve two phases: an approach to the object and the grasp itself. The first action allows the robot to reach a pose in space from which the object is likely to be manipulable. Reinforcement Learning (RL) techniques allow a policy to be learned through experience given a set of states and actions, making RL a powerful tool for developing controllers for specific tasks such as positioning the robot at a particular point in space. However, when manipulating an object, orientation is as relevant as position. For this reason, an RL method for positioning the robot's end effector, in simulation, in a position and orientation suitable for manipulation is presented. This approach models the distance computation for the reward function using a dual quaternion parameterisation, an element that can represent the position and attitude of a rigid body in Euclidean space in a compact way without having to apply any constraints.
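The core idea, scoring how far the end effector is from a target pose with a single dual-quaternion distance, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact reward: the quaternion convention ([w, x, y, z]), the distance metric and the example poses are all assumptions.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of two quaternions in [w, x, y, z] order
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def dual_quat(rot_q, t):
    """Unit dual quaternion (real, dual) from a rotation quaternion and a translation:
    real = rotation, dual = 0.5 * t_quat * real."""
    real = rot_q / np.linalg.norm(rot_q)
    dual = 0.5 * quat_mul(np.array([0.0, *t]), real)
    return real, dual

def pose_distance(dq_a, dq_b):
    """Distance between two unit dual quaternions, handling the double cover
    (q and -q represent the same rigid-body pose)."""
    ra, da = dq_a
    rb, db = dq_b
    if np.dot(ra, rb) < 0.0:          # pick the closer antipodal representative
        rb, db = -rb, -db
    return np.linalg.norm(ra - rb) + np.linalg.norm(da - db)

# Example: reward shaping for an approach policy (poses are illustrative)
goal = dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.5, 0.2, 0.3]))
ee   = dual_quat(np.array([0.924, 0.0, 0.383, 0.0]), np.array([0.4, 0.1, 0.3]))
reward = -pose_distance(ee, goal)     # closer pose (position AND orientation) -> higher reward
```

Because the dual quaternion encodes rotation and translation jointly, a single scalar penalises both errors at once, avoiding hand-tuned weighting between a positional and an angular term.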

Research paper thumbnail of Trajectory Generation Using Dual-Robot Haptic Interface for Reinforcement Learning from Demonstration

Robot 2023: Sixth Iberian Robotics Conference, 2023

In learning robotics, techniques such as Learning from Demonstration (LfD) and Reinforcement Learning (RL) have become widely popular among developers. However, these approaches can result in inefficient strategies when it comes to training more than one agent interacting in the same space with several objects and unknown obstacles. To address this problem, Reinforcement Learning from Demonstration (RLfD) allows the agent to learn and evaluate its performance from a set of demonstrations provided by a human expert, while generalising from them through RL training. In dual-robot applications this approach is suitable for training agents that perform collaborative tasks. For this reason, a dual-robot haptic interface has been designed to produce dual-manipulation trajectories with which to feed an RLfD agent. Haptics allows high-quality demonstrations to be performed following an impedance control approach. The trajectories obtained are used as positive demonstrations so that the training environment can generate automatic ones. As a result, this dual-robot haptic interface provides a small set of trajectory demonstrations of dual manipulation with which to train agents using RL strategies. The aim of this research is to generate trajectories with this dual-robot haptic interface to train one or more agents following RLfD paradigms. Results show that trajectories performed with this interface present less error and deviation than those performed with a non-haptic interface, increasing the quality of the training data.
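The impedance control approach behind the haptic demonstrations can be illustrated with a minimal Cartesian spring-damper law. The gain values and function names below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def impedance_force(x_d, x, xd_dot, x_dot, K, D):
    """Cartesian impedance law: the robot behaves like a spring-damper pulled
    towards the operator's commanded pose x_d, so demonstrations feel compliant."""
    return K @ (x_d - x) + D @ (xd_dot - x_dot)

# Illustrative gains: stiffness (N/m) and damping (N*s/m) per Cartesian axis
K = np.diag([300.0, 300.0, 300.0])
D = np.diag([30.0, 30.0, 30.0])

# Commanded vs. actual end-effector position (m), both velocities at rest
f = impedance_force(np.array([0.5, 0.0, 0.3]), np.array([0.48, 0.0, 0.31]),
                    np.zeros(3), np.zeros(3), K, D)
```

Lower stiffness makes the demonstration arm easier to guide by hand; higher stiffness tracks the leader robot more faithfully, which is the usual trade-off when capturing training trajectories.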

Research paper thumbnail of Detection and depth estimation for domestic waste in outdoor environments by sensors fusion

22nd IFAC World Congress, 2023

In this work, we estimate the depth at which domestic waste is located in space from a mobile robot in outdoor scenarios. As this computation covers a broad range of space (0.3-6.0 m), we use RGB-D camera and LiDAR fusion. Within this range, we compare several methods, such as the average, nearest, median and centre point, applied to the points that fall inside a reduced or non-reduced Bounding Box (BB). These BBs are obtained from representative segmentation and detection methods such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a BB reduction of 40% returns the same output as segmenting the object and applying the average method, while the detection method is faster and lighter than the segmentation one. The median error committed in the conducted experiments was 0.0298 ± 0.0544 m.
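The best-performing combination reported, averaging the depths of fused points inside a bounding box shrunk by 40%, can be sketched as follows. The point format (u, v, z) and the helper names are assumptions for illustration.

```python
import numpy as np

def shrink_bbox(x0, y0, x1, y1, factor=0.40):
    """Shrink a bounding box by `factor` (40% as in the paper) around its centre,
    discarding the border region that tends to contain background points."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * (1 - factor), (y1 - y0) * (1 - factor)
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

def bbox_depth(points_uvz, bbox):
    """Average depth of projected LiDAR / RGB-D points (u, v, z) inside the box."""
    x0, y0, x1, y1 = bbox
    u, v, z = points_uvz[:, 0], points_uvz[:, 1], points_uvz[:, 2]
    mask = (u >= x0) & (u <= x1) & (v >= y0) & (v <= y1)
    return float(np.mean(z[mask])) if mask.any() else None
```

With a synthetic object at 2 m surrounded by background returns at 6 m, the full box averages both, while the reduced box recovers the object depth, which is the effect the comparison in the paper exploits.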

Research paper thumbnail of Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments

Journal of Intelligent & Robotic Systems, 2023

The accumulation of litter is increasing in many places and is consequently becoming a problem th... more The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a manipulator robotic system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoFs) is then estimated for a robot arm with a gripper for the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and then employed in order to provide the gripper with a sense of touch. This work uses two low-cost visual-based tactile sensors at the fingertips. One of them addresses the detection of contact (which is obtained from tactile images) between the gripper and solid waste, while another has been designed to detect slippage in order to prevent the objects grasped from falling. Our proposal was successfully tested by carrying out extensive experimentation with different objects varying in size, texture, geometry and materials in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). Our system achieved an average score of 94% for the detection and Collection Success Rate (CSR) as regards its overall performance, and of 80% for the collection of items of litter at the first attempt.
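A minimal sketch of what the two image-based tactile cues can look like, under the simplifying assumption that both reduce to intensity change in the tactile image (against a no-contact reference for contact, between consecutive frames for slip). The thresholds and function names are illustrative; the paper's actual algorithms are not reproduced here.

```python
import numpy as np

def contact_detected(tactile_img, reference_img, thresh=8.0):
    """Contact cue: the current tactile image deviates from the stored
    no-contact reference image by more than a threshold."""
    diff = np.abs(tactile_img.astype(float) - reference_img.astype(float))
    return float(np.mean(diff)) > thresh

def slip_detected(prev_img, curr_img, thresh=4.0):
    """Slip cue: the gel surface moves between consecutive frames while the
    gripper is static, producing inter-frame intensity change."""
    diff = np.abs(curr_img.astype(float) - prev_img.astype(float))
    return float(np.mean(diff)) > thresh
```

In a grasp loop, the contact cue would gate the closing of the gripper and the slip cue would trigger an increase of grip force before the object falls.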

Research paper thumbnail of LiDAR data augmentation by interpolation on spherical range image

2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), 2023

LiDAR sensors are used for mapping tasks, LiDAR odometry and 3D environment reconstruction. Several of them have a high number of vertical layers, which increases their price and prevents research groups from carrying out experiments and scientific advances. In this paper, we propose a method for augmenting point cloud data by bilinear interpolation in a Spherical Range Image. Our method improves on others in the state of the art by means of standard deviation filtering of the newly generated layers. The system operates at a frequency greater than 10 Hz for data interpolation of up to 20 times. In addition, we present two applications of our approach, LiDAR odometry and LiDAR-camera fusion, obtaining better results than alternatives that do not apply data augmentation. Finally, we make a package developed on ROS (Robot Operating System) available to the scientific community. The code is available at https://github.com/EPVelasco/lidar-camera-fusion
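The layer-interpolation-plus-filtering idea can be sketched on a range image: insert a synthetic row between each pair of adjacent vertical layers, then discard interpolated pixels that fall on depth discontinuities. The vertical-only interpolation and the threshold value are simplifying assumptions (the paper interpolates bilinearly and filters by standard deviation).

```python
import numpy as np

def interpolate_layers(range_img, std_thresh=0.5):
    """Insert one synthetic layer between each pair of adjacent vertical layers
    of a spherical range image by averaging, invalidating interpolated pixels
    where the two parent layers disagree strongly (likely object edges)."""
    rows, cols = range_img.shape
    out = np.empty((2 * rows - 1, cols), dtype=range_img.dtype)
    out[0::2] = range_img                      # keep the original layers
    upper, lower = range_img[:-1], range_img[1:]
    mid = (upper + lower) / 2.0
    # std of two samples equals half their absolute difference; mask discontinuities
    mid[np.abs(upper - lower) / 2.0 > std_thresh] = 0.0   # 0 == invalid range
    out[1::2] = mid
    return out
```

Filtering the new layers is what keeps the augmented cloud usable downstream: interpolating across an edge would hallucinate points in mid-air between a near object and the far background.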

Research paper thumbnail of GeoGraspEvo: grasping points for multifingered grippers

2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), 2023

The task of grasping objects is a simple and routine action for humans, but it is complex for robots. To integrate robots into everyday tasks, they have to be equipped with human-like dexterity. Along this line, we propose an analytic method, called GeoGraspEvo, to compute grasping points to be used by robotic hands with three, four or more fingers. Our proposal uses features computed from the visible surfaces of objects captured in a single RGBD image of a scene. Additionally, it takes as input some configurable kinematic parameters so that the grasp can be carried out depending on the hand morphology. The method computes grasping points with no training process.
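The exact geometry used by GeoGraspEvo is not given in the abstract; purely as a hedged illustration of the same family of training-free analytic methods, the sketch below uses PCA on an object's visible surface cloud to pick two opposing contact candidates. Every detail here is an assumption, not the paper's algorithm.

```python
import numpy as np

def grasp_axis_candidates(points):
    """Illustrative analytic baseline: centre the visible surface cloud of one
    object, find its principal axes via SVD (PCA), and return the two extreme
    surface points along the second axis, i.e. opposing contacts across the
    object's narrow dimension rather than along its elongation."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    _, _, vt = np.linalg.svd(centred, full_matrices=False)  # rows of vt = principal axes
    grasp_dir = vt[1]                 # axis perpendicular to the main elongation
    proj = centred @ grasp_dir
    return points[np.argmin(proj)], points[np.argmax(proj)]
```

A multifingered version would then distribute the remaining fingertips around these two contacts subject to the hand's kinematic parameters, which is where a method like GeoGraspEvo goes beyond this two-contact sketch.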

Research paper thumbnail of Visual-tactile manipulation to collect household waste in outdoor

Revista Iberoamericana de Automática e Informática industrial (RIAI), 2023

This paper presents a perception system oriented towards robotic manipulation, capable of assisting in outdoor navigation, classification and collection of household waste. The system is composed of optical tactile sensors, RGBD cameras and a LiDAR, integrated into a mobile platform that carries a manipulator robot with a gripper. The system consists of three software modules, two visual and one tactile. The visual modules implement CNN architectures for the localisation and recognition of solid waste, as well as estimating grasping points. The tactile module, also based on CNNs and image processing, regulates the opening of the gripper to control the grasp from contact information. Our proposal has localisation errors of around 6%, a recognition accuracy of 98%, and guarantees grasp stability 91% of the time. The three modules run in under 750 ms.

Research paper thumbnail of Desarrollos en BLUE para Manipulación Móvil en Entornos Exteriores No Estructurado

Spanish National Automation Conference (JA), 2022

Robotics research requires experimental platforms with an open architecture. Several commercial systems with these characteristics exist, but their high cost is an obstacle to algorithm development. In this paper we present the advances implemented in our unmanned ground vehicle BLUE, focused on the research and development of methods for mobile manipulation in unstructured outdoor environments. In addition, we detail the hardware and software added to BLUE in recent years and present the packages and experiments developed both for localisation and navigation tasks and for object manipulation operations in mobile robotics.

Research paper thumbnail of HACIA LA RECOGIDA DE RESIDUOS DOMÉSTICOS EN EXTERIORES: ENFOQUE VISUAL-TÁCTIL

Spanish National Robotics, Education and Bioengineering Conference (JREB), 2022

This paper presents a robotic manipulation system for the collection of certain types of household waste outdoors. The system is composed of a robot with a gripper equipped with tactile sensors, several RGBD cameras and a LiDAR. The system implements three software modules, two based on visual perception and a third on tactile perception. One visual module finds and localises possible litter, while the other recognises the object as litter, classifies it and estimates its grasp. The tactile module is used to control the grasp from contact information. The perception modules are currently being tested in outdoor environments, obtaining a mean localisation error of 6% of the object distance, a mAP75 in detection and recognition of 98% and a tactile accuracy of 91%, all with mean total execution times below 350 ms.

Research paper thumbnail of Project-Based Learning in Robotics Subject of a Master's Degree

International Symposium on Project Approaches in Engineering Education; Active Learning in Engineering Education Workshop; International Conference on Active Learning in Engineering Education. (PAEE/ALE'2021), Jul 7, 2021

The work presented here is part of the development and implementation of methodologies that promote more reflective, autonomous, collaborative, participatory and meaningful learning, based on entrepreneurship and learning to learn. It introduces project-based learning into the Robotics subject of the master's degree in Automation and Robotics. This subject had been taught for the previous 10 academic years in a traditional way: theoretical classes in a lecture format, in which the teacher presented the theoretical contents orally, with questions from the students during or after the presentation, complemented by a series of guided practice exercises in which students consolidated the theory and acquired the practical skills of the subject. With the proposed use of projects to manage student learning, the intention is that students themselves acquire the knowledge needed to solve the projects. The teacher becomes a director of the learning process, providing the material necessary for students to develop their real project while understanding each theoretical concept involved in it. One of the main problems that teachers have encountered over the last 10 years in this subject is that students come from different Engineering profiles. Project-based learning allows teachers to assess the depth of learning of specific concepts, so that the subject can add value to all students. To gather this information, an anonymous survey was carried out covering various aspects of the proposed methodology. As students worked in groups, we also asked about how each group functioned, and both an individual and a group report of the work carried out were requested. All these documents allow teachers to discover the strengths and weaknesses of the proposed methodology.
