João Filipe Ferreira | Universidade de Coimbra

Papers by João Filipe Ferreira

Research paper thumbnail of SEMFIRE forest dataset for semantic segmentation and data augmentation

SEMFIRE Datasets (Forest environment dataset). These datasets are used for semantic segmentation and data augmentation and contain various forestry scenes. They were collected as part of the research work conducted by the Institute of Systems and Robotics, University of Coimbra team within the scope of the Safety, Exploration and Maintenance of Forests with Ecological Robotics (SEMFIRE, ref. CENTRO-01-0247-FEDER-032691) research project coordinated by Ingeniarius Ltd. The semantic segmentation algorithms attempt to identify various semantic classes (e.g. background, live flammable materials, trunks, canopies, etc.) in the images of the datasets. The datasets include diverse image types, e.g. original camera images and their labeled images. In total, the SEMFIRE datasets include about 1700 image pairs. Each dataset includes corresponding .bag files; to launch those .bag files in your ROS environment, use the instructions in the following GitHub repository.

Description of each dataset:
- 2019_2020_quinta_do_bolao_coimbra: robot moving on a path through a forest environment
- 2020_ctcv_parking_lot_coimbra: robot moving in a circle in a parking lot for testing
- 2020_sete_fontes_forest: a set of forest images acquired with a hand-held apparatus

Each dataset consists of the following directories:
- images directory: diverse image types, e.g. original camera images and their labeled images
- rosbags directory: .bag files corresponding to the images directory

Each images directory consists of the following directories:
- img: original camera images
- lbl: single-channel images (ground truth) with corresponding labels for each image in img
- lbl_colored: images in lbl colorized according to the different semantic classes (for more details see t [...]
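
As an informal sketch (not part of the dataset's official tooling), the snippet below shows how the img/lbl pairing described above could be iterated in Python; identical file names across the two folders and the class ids are assumptions to be checked against the accompanying GitHub repository.

```python
import os
import cv2  # OpenCV for image I/O

def load_pairs(images_dir):
    """Yield (image, label) pairs from a SEMFIRE-style images directory.

    Assumes the layout described above: an 'img' folder with original camera
    images and an 'lbl' folder with single-channel ground-truth masks sharing
    the same file names (an assumption, not guaranteed by this page).
    """
    img_dir = os.path.join(images_dir, "img")
    lbl_dir = os.path.join(images_dir, "lbl")
    for name in sorted(os.listdir(img_dir)):
        image = cv2.imread(os.path.join(img_dir, name), cv2.IMREAD_COLOR)
        label = cv2.imread(os.path.join(lbl_dir, name), cv2.IMREAD_GRAYSCALE)
        if image is not None and label is not None:
            yield image, label

# Example usage: count how many labelled pixels each class id has in one dataset.
# (Class ids such as background / live flammable material / trunk / canopy are
#  dataset-specific; consult the repository documentation.)
if __name__ == "__main__":
    counts = {}
    for _, label in load_pairs("2020_sete_fontes_forest/images"):
        ids, freqs = list(label.flatten()), None
        for cid in set(ids):
            counts[cid] = counts.get(cid, 0) + ids.count(cid)
    print(counts)
```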

Research paper thumbnail of MoDSeM: Towards Semantic Mapping with Distributed Robots

Towards Autonomous Robotic Systems, 2019

This paper presents MoDSeM, a software framework for cooperative perception supporting teams of robots. MoDSeM aims to provide a flexible semantic mapping framework able to represent all spatial information perceived in missions involving teams of robots, and to formalize the development of perception software, promoting the implementation of reusable modules that can fit varied team constitutions. We provide an overview of MoDSeM, and describe how it can be applied to multi-robot systems, discussing several sub-problems such as history and memory, or centralized vs distributed perception. Aiming to demonstrate the functionality of our prototype, preliminary experiments took place in simulation, using a \(100 \times 100 \times 100\) m simulated map to demonstrate its ability to receive, store and retrieve information stored in semantic voxel grids, using ROS as a transport layer and OpenVDB as a grid storage mechanism. Results show the appropriateness of ROS and OpenVDB as a back-end for supporting the prototype, achieving a promising performance in all aspects of the task. Future developments will make use of these results to apply MoDSeM in realistic scenarios, including multi-robot indoor surveillance and precision forestry operations.
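
A minimal sketch, in the spirit of the semantic voxel grids mentioned above, of storing per-voxel class ids in an OpenVDB grid via the pyopenvdb bindings. The voxel size, the use of a FloatGrid to hold class ids, and the helper names are assumptions for illustration; MoDSeM's actual storage layer may differ.

```python
import pyopenvdb as vdb

VOXEL_SIZE = 0.1  # metres per voxel (assumed resolution)

grid = vdb.FloatGrid(background=0.0)   # 0.0 interpreted here as "unknown"
grid.name = "semantic_labels"
accessor = grid.getAccessor()

def to_index(x, y, z):
    """Convert a metric (x, y, z) position to integer voxel coordinates."""
    return (int(round(x / VOXEL_SIZE)),
            int(round(y / VOXEL_SIZE)),
            int(round(z / VOXEL_SIZE)))

def set_label(x, y, z, class_id):
    accessor.setValueOn(to_index(x, y, z), float(class_id))

def get_label(x, y, z):
    return accessor.getValue(to_index(x, y, z))

set_label(1.25, -0.4, 0.9, 3)   # e.g. class 3 = "trunk" (hypothetical id)
print(get_label(1.25, -0.4, 0.9))
```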

Research paper thumbnail of ROS Integration of an Instrumented Bobcat T190 for the SEMFIRE Project

Studies in Computational Intelligence, 2021

Forestry and agricultural robotics are growing areas of research within the field of Robotics. Recent developments in planning, perception and mobility in unstructured outdoor environments have allowed the proliferation of innovative autonomous machines. The SEMFIRE project proposes a robotic system to support the removal of flammable material in forests, thus assisting in landscaping maintenance tasks and avoiding the deflagration of wildfires. In this work, we describe work in progress on the development of the Ranger, a large heavy-duty forestry machine based on the well-known Bobcat T190, which is the main actor of SEMFIRE. We present the design of the machine, which has been expanded with several sensors, its full integration in the Robot Operating System, as well as preliminary results.

Research paper thumbnail of The CASIR-IMPEP Attention Framework for Social Interaction with Robots

This document provides all the information that concerns the Artificial Attention System (AAS) implementation developed within the scope of the CASIR project. The goal is to ultimately make an open source artificial attention library available to the robotics community.

Research paper thumbnail of Experimental Setup and Configuration for Joint Attention in CASIR

The aim of this report is to present the experimental equipment, setup and configuration to be used within the scope of the CASIR project, designed with the purpose of studying the influence of automatic (i.e. involuntary) attentional processes in social interaction with robots. A detailed description of the architecture will be presented, namely in what regards the hardware and computational resources supporting the robotic active head to be used as the prototypical robotic agent used throughout the project. Moreover, the configuration of the experimental space will be described, focussing on its potential for enacting the intended human-robot interaction (HRI) scenarios.

Research paper thumbnail of Touch attention Bayesian models for object feature extraction in robotic blind manipulation

Nowadays, robotic platforms tend to be equipped with a conjugation of multi-modal artificial perception systems to navigate and interact with the surrounding environment and persons. The complexity and dynamic characteristics of those environments have led to the development of attention mechanisms that filter the sensory overload, so that only the relevant sensory data are sensed, perceived and processed. This work presents Bayesian models of the attentional mechanisms involved in blind manipulation of objects, addressing the detection of borders, border following and corner detection on object surfaces. These object features can be used as structural references of the manipulated object in subsequent manipulation stages. The perception of those stimuli requires coordination between the attention, perception and action mechanisms in order to direct (attention) and promote (action) the contact between the touch sensors and the object, to maximize the acquisition of i...

Research paper thumbnail of Robotics Use Case Scenarios

Towards Ubiquitous Low-power Image Processing Platforms, 2020

Research paper thumbnail of Object Classification for Robotic Platforms

Advances in Intelligent Systems and Computing, 2019

Computer vision has been revolutionised in recent years by increased research in convolutional neural networks (CNNs); however, many challenges remain to be addressed in order to ensure fast and accurate image processing when applying these techniques to robotics. These challenges include handling extreme changes in scale, illumination, noise, and viewing angle of a moving object. The project's main contribution is to provide insight on how to properly train a convolutional neural network (CNN), a specific type of deep neural network (DNN), for object tracking in the context of industrial robotics. The proposed solution uses a combination of documented approaches to replicate a pick-and-place task with an industrial robot, with computer vision feeding a YOLOv3 CNN. Experimental tests, designed to investigate the requirements of training the CNN in this context, were performed using a variety of objects that differed in shape and size in a controlled environment. The general focus was to detect the objects based on their shape; as a result, a suitable and secure grasp could be selected by the robot. The findings in this article reflect the challenges of training the CNN through brute force. The article also highlights the different methods of annotating images and the results obtained after training the neural network.
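
A hedged sketch of running a YOLOv3 detector through OpenCV's DNN module, loosely matching the detection stage described above. The configuration and weights file names, input size and confidence threshold are placeholders; the paper's network was trained on its own pick-and-place object set.

```python
import cv2
import numpy as np

# Assumed file names for a Darknet-format YOLOv3 model.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(image, conf_threshold=0.5):
    """Return (class_id, confidence, x, y, w, h) tuples for one image."""
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)
    h, w = image.shape[:2]
    boxes = []
    for output in outputs:
        for det in output:                    # det = [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((class_id, confidence,
                              int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```

In a pick-and-place loop, the returned boxes would then be used to select a grasp based on the detected object's shape, as the abstract describes.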

Research paper thumbnail of Requirements Specification and Integration Architecture for Perception in a Cooperative Team of Forestry Robots

Towards Autonomous Robotic Systems, 2020

Research paper thumbnail of Fast Exact Bayesian Inference for High-Dimensional Models

In this text, we present the principles that allow the tractable implementation of exact inference processes for a group of widespread classes of Bayesian generative models, which until recently were deemed intractable whenever formulated using high-dimensional joint distributions. We demonstrate the usefulness of such a principled approach with a real-time OpenCL implementation, running on GPUs, of a full-fledged computer-vision-based model to estimate gaze direction in human-robot interaction (HRI).
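
As a minimal illustration of what "exact inference over a discrete hypothesis space" means here (not the paper's gaze-estimation model or its OpenCL kernels), the sketch below computes an exact, normalised posterior over a discretised hypothesis grid with NumPy; the grid size, Gaussian likelihood and observations are placeholders.

```python
import numpy as np

n_hypotheses = 10_000                        # discretised hypothesis space H
hypotheses = np.linspace(0.0, 1.0, n_hypotheses)
prior = np.full(n_hypotheses, 1.0 / n_hypotheses)

def likelihood(data, hyps, sigma=0.1):
    """P(D = data | H) for every hypothesis; simple Gaussian observation model."""
    return np.exp(-0.5 * ((data - hyps) / sigma) ** 2)

posterior = prior.copy()
for d in [0.42, 0.45, 0.40]:                 # stream of observations
    posterior *= likelihood(d, hypotheses)   # Bayes' rule, unnormalised
    posterior /= posterior.sum()             # exact normalisation, no sampling

print("MAP estimate:", hypotheses[np.argmax(posterior)])
```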

Research paper thumbnail of Brief survey on computational solutions for Bayesian inference

In this paper, we present a brief review of research work attempting to tackle the issue of tractability in Bayesian inference, including an analysis of the applicability and trade-offs of each proposed solution. In recent years, the Bayesian approach has become increasingly popular, endowing autonomous systems with the ability to deal with uncertainty and incompleteness. However, these systems are also expected to be efficient, while Bayesian inference in general is known to be an NP-hard problem, making it paramount to develop approaches that deal with this complexity in order to allow the implementation of usable Bayesian solutions. Novel computational paradigms, together with major developments in massively parallel computation technologies such as multi-core processors, GPUs and FPGAs, provide us with an inkling of the roadmap for Bayesian computation in upcoming years.

Research paper thumbnail of SEMFIRE: Towards a new generation of forestry maintenance multi-robot systems

2019 IEEE/SICE International Symposium on System Integration (SII), 2019

Europe has been affected by an alarming number of wildfires every year, ravaging nearly half a million hectares of forestry areas and resulting in an unacceptable number of human losses. The year 2017 was one of the worst ever recorded, with devastating wildfires raging across the globe. Bearing these shocking facts in mind, this position paper aims to lay the foundations of crucial new upcoming research in the field of forestry robotics by presenting an overview of the SEMFIRE project. SEMFIRE proposes the development of a robotic system designed to reduce the accumulation of combustible material in forests, thus assisting in landscaping maintenance tasks for wildfire prevention. This paper describes the multi-robot system envisaged by SEMFIRE, combined with pervasive local positioning, multi-robot navigation and coordination in forests, and an innovative artificial perception architecture.

Research paper thumbnail of HSMD: An object motion detection algorithm using a Hybrid Spiking Neural Network Architecture

Research paper thumbnail of A visualisation and simulation framework for local and remote HRI experimentation

2016 23° Encontro Português de Computação Gráfica e Interação (EPCGI), Nov 1, 2016

Research paper thumbnail of MoDSeM: Modular Framework for Distributed Semantic Mapping

Embedded Intelligence: Enabling & Supporting RAS Technologies, 2019

This work explored the requirements of accurately and reliably predicting user intention using a deep learning methodology when performing fine-grained movements of the human hand. The focus was on combining a feature engineering process with the effective capability of deep learning to further identify salient characteristics from a biological input signal. Three time-domain features (root mean square, waveform length, and slope sign changes) were extracted from the surface electromyography (sEMG) signal of 17 hand and wrist movements performed by 40 subjects. The feature data were mapped to 6 bend-sensor resistance readings from a CyberGlove II system, representing the associated hand kinematic data. These sensors were located at specific joints of interest on the human hand (the thumb's metacarpophalangeal joint, the proximal interphalangeal joint of each finger, and the radiocarpal joint of the wrist). All datasets were taken from database 2 of the NinaPro online database repository. A 3-layer long short-term memory model with dropout was developed to predict the 6 glove sensor readings using the corresponding sEMG feature vector as input. Initial results from trials using test data from the 40 subjects produce an average mean squared error of 0.176. This indicates a viable pathway to follow for this method of predicting hand movement data, although further work is needed to optimize the model and to analyze the data with a more detailed set of metrics.
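
A hedged sketch of the kind of 3-layer LSTM regression model described above, mapping windows of sEMG features to 6 glove-sensor readings in Keras. Layer widths, window length, feature count and training settings are assumptions for illustration, not the paper's exact architecture or hyperparameters.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 50       # time steps per sEMG feature window (assumed)
N_FEATURES = 36   # e.g. 3 time-domain features x 12 electrodes (assumed)
N_OUTPUTS = 6     # glove bend-sensor readings to predict

model = keras.Sequential([
    keras.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(128, return_sequences=True),
    layers.Dropout(0.3),
    layers.LSTM(128, return_sequences=True),
    layers.Dropout(0.3),
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(N_OUTPUTS),                  # linear output for regression
])
model.compile(optimizer="adam", loss="mse")   # mean squared error, as reported

# Dummy shapes only; real inputs would come from NinaPro DB2 preprocessing.
X = np.random.rand(8, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(8, N_OUTPUTS).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```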

Research paper thumbnail of Unconventional computing for Bayesian inference

International Journal of Approximate Reasoning, 2017

This special issue focuses on recent advances and future directions in unconventional computational solutions to perform Bayesian inference. It is a follow-up to a workshop held within the context of a major robotics conference, the IROS 2015 (IEEE/RSJ International Conference on Intelligent Robots and Systems 2015) workshop on Unconventional Computing for Bayesian Inference (UCBI2015), addressing Bayesian inference for autonomous robots, insights from computational biology, and related topics. The special issue includes novel contributions beyond the scope of robotics. Contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex environments. The major reason for this failure is the lack of cognitive systems able to efficiently deal with uncertainty when behaving in real-world situations. One of the challenges of robotics is endowing devices with adequate computational power to dwell in uncertainty and decide with incomplete data, with limited resources and power, as biological beings have done for a long time. In this context, Bayesian approaches have been used to deal with incompleteness and uncertainty, exhibiting promising results. However, all these works, even if they propose probabilistic models, still rely on a classical computing paradigm that imposes a bottleneck on performance and scalability. Improved and novel electronic devices have broadened the spectrum of devices available for computation, such as GPUs, FPGAs and hybrid systems, allowing unconventional approaches to better explore parallelisation and tackle power consumption. The flexibility of current reprogrammable logic devices provides a test bed for novel stochastic processors and unconventional computing. In fact, Bayesian inference has by now found its way into all branches of science, including natural sciences (e.g. [1–8]), humanities and social sciences (e.g. [9–13]), and also applied sciences, such as medicine (e.g. [14,15]). It has become so pervasive that even the scientific method has been occasionally interpreted as an application of Bayesian inference [16]. In most cases, Bayesian inference has been used as an alternative to the more “traditional” frequentist inference methods for statistical analysis of data. On the flip side, for several reasons (in most cases of a technical, computational, or even epistemological nature), the application of Bayesian inference to model cognition and reasoning – either to study the animal brain, as in neuroscience, or to synthesise artificial brains, as in artificial intelligence (AI) or robotics – was not as common for many years. However, in the past couple of decades, as confirmed, for example, by Chater, Tenenbaum, and Yuille [17], the Bayesian approach has become ubiquitous. In AI, numerous recent examples of the popularity of the Bayesian approach can be found in the literature, such as [18–20]. Robotics, as a consequence, follows suit, as can be seen in [21–24] and in the interest of the robotics community in the UCBI workshop. The application of the Bayesian approach to modelling consists in establishing a joint distribution describing a generative model, therefore encoding the probability of any set of data \(D\) being generated from any given hypothesis \(H\).
Bayesian inference is therefore the process of determining information based on the conditional probabilities of any hypothesis given the available data by applying Bayes’ rule. Information, in this context, can mean the exact values or estimates of (1) the probability of a specific hypothesis or set of hypotheses \([H = h]\) given a specific instantiation of the data \([D = d]\), expressed as \(P(h \mid d)\); (2) the full posterior distribution, \(P(H \mid d)\); or it may also mean (3) a probabilistic decision based on the posterior distribution in favour of specific hypotheses, \(f_h(P(H \mid d))\). We can therefore define Bayesian computation in generic terms as the act of executing Bayesian inference. If \(H\) and \(D\) are continuous random variables, there are in general no closed-form expressions for computing inference (with the notable exception of Kalman filters). If, on the other hand, the generative model is based on discrete random variables, although inference is computable, it is frequently intractable due to the so-called curse of dimensionality caused by the cardinality of the space of hypotheses \(H\). Moreover, determining the actual computational expression for inference in the discrete case has been
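
For reference, the inference step described above can be written compactly as Bayes’ rule over a discrete hypothesis space (using the editorial’s notation; the normalising sum runs over all hypotheses):

```latex
P(H \mid D = d) \;=\; \frac{P(D = d \mid H)\, P(H)}{\sum_{h} P(D = d \mid H = h)\, P(H = h)}
```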

Research paper thumbnail of Integration of touch attention mechanisms to improve the robotic haptic exploration of surfaces

Neurocomputing, 2017

This text presents the integration of touch attention mechanisms to improve the efficiency of the action-perception loop typically involved in active haptic exploration of surfaces by robotic hands. The progressive inference of the regions of the workspace that should be probed by the robotic system uses information related to haptic saliency extracted from the perceived haptic stimulus map (exploitation) and a "curiosity"-inducing prioritisation based on the reconstruction's inherent uncertainty and inhibition-of-return mechanisms (exploration), modulated by top-down influences stemming from current task objectives and updated at each exploration iteration. This work also extends the scope of the top-down modulation of information presented in a previous work, by integrating into the decision process the influence of shape cues from the current exploration path. The Bayesian framework proposed in this work was tested in a simulation environment. A scenario made of three different materials was explored autonomously by a robotic system. The experimental results show that the system was able to perform three different haptic discontinuity-following tasks with good structural accuracy, demonstrating the selectivity and generalization capability of the attention mechanisms. These experiments confirmed the fundamental contribution of the haptic saliency cues to the success and accuracy of the execution of the tasks.
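
A hedged sketch of the attention principle outlined above: combine a haptic saliency map (exploitation), an uncertainty map ("curiosity"-driven exploration) and an inhibition-of-return map into a priority map, then pick the next region to probe. The weights, decay factors and map sizes are illustrative assumptions, not the paper's tuned Bayesian model.

```python
import numpy as np

H, W = 64, 64
saliency = np.random.rand(H, W)      # stand-in for the perceived haptic stimulus map
uncertainty = np.ones((H, W))        # reconstruction uncertainty (1 = unknown)
ior = np.zeros((H, W))               # inhibition of return

W_SAL, W_UNC, IOR_DECAY = 0.6, 0.4, 0.9   # top-down weights (task-dependent)

def next_probe():
    """Return the cell to probe next and update exploration state."""
    global ior
    priority = W_SAL * saliency + W_UNC * uncertainty - ior
    idx = np.unravel_index(np.argmax(priority), priority.shape)
    ior *= IOR_DECAY                  # older inhibitions fade over iterations
    ior[idx] = 1.0                    # suppress immediately revisiting this cell
    uncertainty[idx] *= 0.2           # probing reduces local uncertainty
    return idx

for _ in range(5):
    print("probe at", next_probe())
```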

Research paper thumbnail of State-of-the-Art Report: 3D structure and motion multimodal perception

Research paper thumbnail of Synthesis of Bayesian Machines On FPGAs Using Stochastic Arithmetic

Probabilistic inference allows artificial systems to cope with uncertainty, but it can be computationally demanding. Inspired by biological neural systems, stochastic arithmetic modules on reconfigurable hardware can provide massively parallel systems with limited resources. This work presents a framework to automatically implement Bayesian Machines to perform computations using stochastic bitstreams.
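
A hedged software illustration of the stochastic-arithmetic principle behind such Bayesian machines: a probability is encoded as a random bitstream whose duty cycle equals that probability, and a bitwise AND multiplies two probabilities. This is only an analogy of what the FPGA fabric does in parallel, not the paper's synthesis flow.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                   # bitstream length (precision vs. latency trade-off)

def bitstream(p, n=N):
    """Bernoulli bitstream whose mean encodes probability p."""
    return rng.random(n) < p

likelihood = bitstream(0.8)
prior = bitstream(0.3)
product = likelihood & prior                  # AND gate = multiplication of probabilities

print("expected 0.8 * 0.3 = 0.24, estimated:", product.mean())
```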

Research paper thumbnail of Hierarchical Log-Spherical Inference Grid – An Unconventional Approach to Robotic Perception and Action

In this text, an unconventional paradigm for robotic multisensory perception and action will be presented in the form of a generalisation of a framework devised in previous work by the authors, the Bayesian Volumetric Map (BVM). The BVM, a log-spherical inference grid providing an egocentric and probabilistic representation of spatial information, was designed to bridge multisensory perception and actuation by means of a unified framework. The underlying paradigm represents a fundamental departure from traditional outlooks on robotic perception and spatial reasoning, in that it embodies a non-Cartesian and egocentric approach as opposed to the conventional Cartesian and allocentric perspective.
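
A hedged sketch of what indexing into a log-spherical, egocentric grid of the BVM kind might look like: range is discretised logarithmically while azimuth and elevation are discretised linearly. The cell counts and range bounds are illustrative assumptions, not the BVM's actual parameters.

```python
import numpy as np

N_RHO, N_THETA, N_PHI = 32, 64, 32       # log-range, azimuth, elevation bins (assumed)
RHO_MIN, RHO_MAX = 0.1, 50.0             # metres (assumed sensing range)

def cell_index(x, y, z):
    """Map an egocentric Cartesian point to (rho, theta, phi) cell indices."""
    rho = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)                         # azimuth in (-pi, pi]
    phi = np.arcsin(z / max(rho, 1e-9))              # elevation in [-pi/2, pi/2]
    i = int(N_RHO * (np.log(rho / RHO_MIN) / np.log(RHO_MAX / RHO_MIN)))
    j = int(N_THETA * (theta + np.pi) / (2 * np.pi))
    k = int(N_PHI * (phi + np.pi / 2) / np.pi)
    return (int(np.clip(i, 0, N_RHO - 1)),
            int(np.clip(j, 0, N_THETA - 1)),
            int(np.clip(k, 0, N_PHI - 1)))

print(cell_index(2.0, 1.0, 0.5))
```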
