Stefano Ghidini - Academia.edu

Papers by Stefano Ghidini

A Layered Control Approach to Human-Aware Task and Motion Planning for Human-Robot Collaboration

2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Combining task and motion planning efficiently in human-robot collaboration (HRC) entails several challenges because of the uncertainty conveyed by human behavior. Task plan execution should be continuously monitored and updated based on the actual behavior of the human and the robot to maintain productivity and safety. We propose a control-based approach based on two layers, i.e., task planning and action planning. Each layer reasons at a different level of abstraction: task planning considers high-level operations without taking into account their motion properties; action planning optimizes the execution of high-level operations based on the current human state and geometric reasoning. The result is a hierarchical framework where the bottom layer gives feedback to the top layer about the feasibility of each task, and the top layer uses this feedback to (re)optimize the process plan. The method is applied to an industrial case study in which a robot and a human worker cooperate to assemble a mosaic.
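A minimal sketch of the two-layer idea from this abstract: a task layer that (re)orders high-level operations using cost/feasibility feedback from an action layer that accounts for the current human state. All class names, the cost model, and the replanning trigger are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative two-layer planner: the top (task) layer greedily replans
# using feedback from the bottom (action) layer, which scores each task
# against the current human state. Cost model is a made-up placeholder.
import random

class ActionLayer:
    """Bottom layer: estimates execution cost of a task given the human state."""
    def evaluate(self, task, human_state):
        # Placeholder "geometric reasoning": tasks near the human cost more.
        penalty = 5.0 if task["zone"] == human_state["zone"] else 0.0
        return task["duration"] + penalty

class TaskLayer:
    """Top layer: keeps the process plan and reorders it using feedback."""
    def __init__(self, tasks, action_layer):
        self.pending = list(tasks)
        self.action_layer = action_layer

    def next_task(self, human_state):
        # Pick the currently cheapest task (greedy replanning on feedback).
        costs = [(self.action_layer.evaluate(t, human_state), t)
                 for t in self.pending]
        cost, task = min(costs, key=lambda c: c[0])
        self.pending.remove(task)
        return task, cost

tasks = [{"name": "place_tile_A", "zone": "left",  "duration": 3.0},
         {"name": "place_tile_B", "zone": "right", "duration": 2.0},
         {"name": "place_tile_C", "zone": "left",  "duration": 1.5}]

planner = TaskLayer(tasks, ActionLayer())
while planner.pending:
    human = {"zone": random.choice(["left", "right"])}  # mock human tracking
    task, cost = planner.next_task(human)
    print(f"human in {human['zone']}: execute {task['name']} (cost {cost:.1f})")
```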

Hands-Free: a robot augmented reality teleoperation system

2020 17th International Conference on Ubiquitous Robots (UR)

This paper presents the novel teleoperation method "Hands-Free": a vision-based augmented reality system that allows users to teleoperate a robot end-effector with their hands in real time. The system leverages the OpenPose neural network to detect the human operator's hand in a given workspace, achieving an average inference time of 0.15 s. The position of the user's index finger is extracted from the image and converted into real-world coordinates to move the robot end-effector in a different workspace. The user's hand skeleton is visualized in real time moving in the actual robot workspace, allowing the user to teleoperate the robot intuitively, regardless of the differences between the user workspace and the robot workspace. Since a set of calibration procedures is involved in converting the index position to the robot end-effector position, we designed three experiments to determine the different errors introduced by the conversion. A detailed explanation of the mathematical principles adopted in this work is provided in the paper. Finally, the proposed system has been developed using ROS and is publicly available at the following GitHub repository: https://github.com/Krissy93/hands-free-project.
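A hedged sketch of the kind of image-to-workspace conversion the abstract's calibration procedures imply, using a planar homography. The calibration correspondences below are invented for illustration; the actual procedure and its error analysis are in the paper and repository.

```python
# Illustrative mapping of a detected index-finger pixel to workspace-plane
# coordinates via a planar homography (one plausible calibration form).
# The point values here are invented, not from the paper.
import numpy as np
import cv2

# Four known correspondences: image pixels -> points on the user's
# workspace plane (metres), e.g. from markers placed at calibration time.
img_pts = np.array([[100, 80], [540, 90], [550, 400], [110, 410]], np.float32)
world_pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.35], [0.0, 0.35]],
                     np.float32)
H, _ = cv2.findHomography(img_pts, world_pts)

def pixel_to_world(u, v):
    """Map an image pixel (e.g. the OpenPose index keypoint) to plane coords."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

x, y = pixel_to_world(320, 240)
print(f"index finger at ({x:.3f}, {y:.3f}) m on the workspace plane")
# A second, analogous transform would then scale/offset these coordinates
# into the (different) robot workspace before commanding the end-effector.
```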

Robust Tuning Rules for Series Elastic Actuator PID Cascade Controllers

IFAC-PapersOnLine

This paper deals with the control of a collaborative robot manipulator with series elastic actuators. In particular, robust tuning rules for cascade control of the joints are presented. Both the motor velocity and link position control loops are considered. The proposed tuning rules allow the online computation of robust control parameters to cope with the varying link reflected inertia. Experimental results show the effectiveness of the method in real applications.
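A minimal sketch of the cascade structure the abstract describes: an outer link-position loop whose output is the setpoint of an inner motor-velocity loop. The gains and the toy plant model are illustrative assumptions; the paper's robust tuning rules, which adapt the parameters to the link reflected inertia, are not reproduced here.

```python
# Cascade PID sketch: outer position loop commands the inner velocity loop,
# which commands torque on a toy single-inertia plant. Gains are illustrative.
class PID:
    def __init__(self, kp, ki=0.0, kd=0.0, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.001
outer = PID(kp=8.0, dt=dt)             # link position -> velocity setpoint
inner = PID(kp=40.0, ki=200.0, dt=dt)  # motor velocity -> torque command

pos, vel = 0.0, 0.0                    # toy plant state
inertia = 0.05                         # in practice varies with configuration
for _ in range(2000):
    vel_sp = outer.step(1.0, pos)      # track a 1.0 rad position target
    torque = inner.step(vel_sp, vel)
    vel += (torque / inertia) * dt     # integrate toy dynamics (Euler)
    pos += vel * dt
print(f"final position: {pos:.3f} rad")
```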

RemindLy: A Personal Note-bot Assistant

Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020

In this project we present the design concept of RemindLy, a small egg-shaped robot whose purpose is to remind its user of the daily to-do list. Users can interact with RemindLy using both voice commands and physical interactions, designed as small-to-large tilts and rotations of the robot body. A small camera is placed on the robot to add a wake-up function triggered when the owner's face is recognized. To set up the notes and customize the robot, a phone app will be developed.

MEGURU: a gesture-based robot program builder for Meta-Collaborative workstations

Robotics and Computer-Integrated Manufacturing, 2021

This paper presents the Meta-Collaborative Workstation concept and a gesture-based robot program builder named MEGURU. The software is ROS-based and publicly available on GitHub. A hand-gesture language has been developed to create a fast and easy-to-use communication method, in which single-hand gestures are combined into composed Commands, allowing the user to create a customized, powerful, and flexible Gestures Dictionary. Gestures are recognized using an R-FCN object detector fine-tuned on a custom dataset developed for this work. The system has been tested in two experiments. The first evaluated the user experience of people of different ages, sexes, and professional backgrounds with the proposed communication method. The second compared the traditional teach-pendant programming method with MEGURU in the task of assembling a small Moka coffee maker. The results of both experiments highlight that MEGURU is a promising robot programming method, especially for non-expert users.
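A toy sketch of the idea of composing recognized single-hand gestures into higher-level commands through a user-defined dictionary. The gesture names and the pairing grammar below are invented for illustration; MEGURU's actual dictionary and recognition pipeline are described in the paper and its GitHub repository.

```python
# Toy gestures dictionary: pairs of recognized single-hand gestures are
# looked up as composed commands. Names and grammar are invented here.
GESTURE_DICTIONARY = {
    ("point", "open_palm"): "MOVE_TO_TARGET",
    ("fist",  "open_palm"): "CLOSE_GRIPPER",
    ("point", "thumb_up"):  "SAVE_WAYPOINT",
}

def build_command(gesture_stream):
    """Pair consecutive recognized gestures and look them up as commands."""
    program = []
    for first, second in zip(gesture_stream[::2], gesture_stream[1::2]):
        command = GESTURE_DICTIONARY.get((first, second))
        if command is None:
            raise ValueError(f"unknown gesture pair: {(first, second)}")
        program.append(command)
    return program

# E.g. the per-frame output of the gesture detector, already debounced
# into a discrete sequence:
stream = ["point", "open_palm", "fist", "open_palm", "point", "thumb_up"]
print(build_command(stream))
# ['MOVE_TO_TARGET', 'CLOSE_GRIPPER', 'SAVE_WAYPOINT']
```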
