Alessio Angius - Academia.edu
Papers by Alessio Angius
Electronic Notes in Theoretical Computer Science, 2011
Most Markov chains that describe networks of stochastic reactions have a huge state space. This makes exact analysis infeasible and hence the only viable approach, apart from simulation, is approximation. In this paper we derive a product form approximation for the transient probabilities of such Markov chains. The approximation can be interpreted as a set of interacting time inhomogeneous Markov chains.
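As a rough illustration of the decoupling idea behind such a product form, the sketch below evolves the marginal distributions of a toy two-species network as interacting time-inhomogeneous birth-death chains, each seeing only the mean of the other. The species, rate constants and truncation level are invented for illustration; this is not the approximation derived in the paper.

```python
# Illustrative sketch (not the paper's algorithm): product-form approximation of the
# transient probabilities of a toy two-species network.  Each species keeps its own
# marginal distribution, evolved as a time-inhomogeneous birth-death chain whose
# rates depend on the mean of the other marginal.  All constants are assumed values.
import numpy as np

N = 30                          # truncation of each marginal state space: 0..N
dt, T = 0.001, 5.0              # Euler step and time horizon
k_prod_a, k_prod_b = 4.0, 2.0   # production rates (assumed)
k_deg = 0.5                     # degradation rate per molecule
k_conv = 0.1                    # A is additionally consumed at a rate driven by E[B]

def step(p, birth, death):
    """One explicit-Euler step of the forward equations of a birth-death chain."""
    b = np.full_like(p, birth); b[-1] = 0.0   # no births out of the truncated top state
    dp = np.zeros_like(p)
    dp[1:]  += b[:-1] * p[:-1]                # gain from state n-1 via a birth
    dp[:-1] += death[1:] * p[1:]              # gain from state n+1 via a death
    dp      -= (b + death) * p                # loss out of state n
    return p + dt * dp

states = np.arange(N + 1)
pa = np.zeros(N + 1); pa[0] = 1.0             # marginal of species A, starts empty
pb = np.zeros(N + 1); pb[0] = 1.0             # marginal of species B

for _ in range(int(T / dt)):
    mean_b = states @ pb
    pa = step(pa, birth=k_prod_a, death=(k_deg + k_conv * mean_b) * states)
    pb = step(pb, birth=k_prod_b, death=k_deg * states)
    pa /= pa.sum(); pb /= pb.sum()            # renormalise against truncation error

joint_approx = np.outer(pa, pb)               # product-form joint distribution at time T
print("E[A] ~ %.3f, E[B] ~ %.3f" % (states @ pa, states @ pb))
```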
Control of nonlinear systems is challenging in real time. Decision making, performed many times per second, must ensure system safety. Designing input to perform a task often involves solving a nonlinear system of differential equations, a computationally intensive, if not intractable, problem. This article proposes sampling-based task learning for control-affine nonlinear systems through the combined learning of both state and action-value functions in a model-free approximate value iteration setting with continuous inputs. A quadratic negative definite state-value function implies the existence of a unique maximum of the action-value function at any state. This allows the replacement of the standard greedy policy with a computationally efficient policy approximation that guarantees progression to a goal state without knowledge of the system dynamics. The policy approximation is consistent, i.e., it does not depend on the action samples used to calculate it. This method is appropriate for mechanical systems with high-dimensional input spaces and unknown dynamics performing constraint-balancing tasks. We verify it both in simulation and experimentally for a UAV carrying a suspended load, and in simulation, for the rendezvous of heterogeneous robots.
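The closed-form greedy step that replaces the standard greedy policy can be illustrated with a toy sketch: if the action-value is modelled as a concave quadratic in the input, its maximiser is obtained by solving a linear system rather than by sampling actions. The quadratic parameterisation, the box clipping and the numbers below are assumptions for illustration, not the paper's construction.

```python
# Minimal sketch: a concave quadratic action-value model admits a closed-form greedy
# action, so no action sampling is needed.  Q(s, a) = V(s) + a^T P(s) a + q(s)^T a
# with P(s) negative definite is an assumed parameterisation for illustration.
import numpy as np

def greedy_action(P, q, a_min, a_max):
    """argmax_a  a^T P a + q^T a  for negative definite P, clipped to the input box."""
    a_star = -0.5 * np.linalg.solve(P, q)     # unconstrained maximiser of the quadratic
    return np.clip(a_star, a_min, a_max)      # crude saturation to respect input limits

# Example with a 2-dimensional input (made-up numbers)
P = np.array([[-2.0, 0.3],
              [ 0.3, -1.5]])                  # negative definite curvature in the action
q = np.array([1.0, -0.4])
print(greedy_action(P, q, a_min=-1.0, a_max=1.0))
```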
Control Engineering Practice
18th International Conference on Database and Expert Systems Applications (DEXA 2007), 2007
Dynamic taxonomies integrated into e-learning tools play a double role: on the one hand they are a powerful retrieval system in the usually large content base of an e-learning environment, on the other hand they allow and strongly encourage orthogonal visits of available learning resources by exploiting associations the user would not have thought of (and which are the specific contribution of dynamic taxonomies). These two roles are of interest both for teachers, who may use the search engine to retrieve hints for presentations, assignments, etc., and for students, who may explore the whole learning environment in a new, profitable way which, for example, makes immediately available different aspects of the same subject dealt with in different courses. In the paper we describe the integration of dynamic taxonomies into Moodle, a Course Management System for cooperative learning.
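For readers unfamiliar with dynamic taxonomies, the toy sketch below shows the core reduction operation that makes this kind of orthogonal navigation possible: after a concept is selected, the taxonomy is reduced to the concepts that still classify at least one surviving resource. The resources, concept names and data structures are invented and are not taken from the Moodle integration described in the paper.

```python
# Toy sketch of the dynamic-taxonomy reduction operation (conceptual only, not Moodle code).
resources = {
    "lesson-queueing": {"course:performance", "topic:markov-chains", "type:slides"},
    "lab-petri-nets":  {"course:modelling",   "topic:petri-nets",    "type:exercise"},
    "seminar-biochem": {"course:systems-bio", "topic:markov-chains", "type:video"},
}

def reduced_taxonomy(selection: set[str]) -> dict[str, set[str]]:
    """Return, for each surviving concept, the resources matching the current selection."""
    survivors = {name for name, concepts in resources.items() if selection <= concepts}
    reduced: dict[str, set[str]] = {}
    for name in survivors:
        for concept in resources[name]:
            reduced.setdefault(concept, set()).add(name)
    return reduced

# Selecting "topic:markov-chains" reveals that two different courses touch the subject.
print(reduced_taxonomy({"topic:markov-chains"}))
```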
It is well known, mainly because of the work of Kurtz, that density dependent Markov chains can be approximated by sets of ordinary differential equations (ODEs) when their indexing parameter grows very large. This approximation cannot capture the stochastic nature of the process and, consequently, it can provide an erroneous view of the behavior of the Markov chain if the indexing parameter is not sufficiently high. Important phenomena that cannot be revealed include non-negligible variance and bi-modal population distributions. A less-known approximation proposed by Kurtz applies stochastic differential equations (SDEs) and provides information about the stochastic nature of the process. In this paper we apply and extend this diffusion approximation to study stochastic Petri nets. We identify a class of nets whose underlying stochastic process is a density dependent Markov chain whose indexing parameter is a multiplicative constant which identifies the population level expressed by the initial marking, and we provide means to automatically construct the associated set of SDEs. Since the diffusion approximation of Kurtz considers the process only up to the time when it first exits an open interval, we extend the approximation by a machinery that mimics the behavior of the Markov chain at the boundary and thus allows the approach to be applied to a wider set of problems. The resulting process is of the jump-diffusion type. We illustrate by examples that the jump-diffusion approximation, which extends to bounded domains, can be much more informative than that based on ODEs, as it can provide accurate quantity distributions even when they are multi-modal and even for relatively small population levels. Moreover, we show that the method is faster than simulating the original Markov chain.
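A minimal sketch of the kind of diffusion approximation involved, integrated with Euler-Maruyama for a single density dependent birth-death reaction: the reflecting boundary used below is a simplification standing in for the paper's jump-diffusion treatment, and the rates and population level are invented.

```python
# Rough sketch of a diffusion (Langevin) approximation of a density dependent
# birth-death process, integrated with Euler-Maruyama.  Reflecting at 0 replaces the
# paper's boundary jumps; all constants are assumed values.
import numpy as np

rng = np.random.default_rng(0)
N = 100                        # population level (indexing parameter)
lam, mu = 1.0, 1.2             # birth rate and per-capita death rate (assumed)
dt, T = 0.01, 20.0
n_paths = 2000

x = np.full(n_paths, 0.5)      # density X/N, started at 0.5
for _ in range(int(T / dt)):
    drift = lam - mu * x                     # net flow of the two reactions
    diff  = np.sqrt((lam + mu * x) / N)      # noise amplitude shrinks as N grows
    x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(n_paths)
    x = np.abs(x)                            # crude reflection instead of boundary jumps

print("mean density %.3f, std %.3f" % (x.mean(), x.std()))
```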
Performance evaluation models are used by companies to design, adapt, manage and control their production systems. In the literature, most of the effort has been dedicated to the development of efficient methodologies to estimate the first moment performance measures of production systems, such as the expected production rate, the buffer levels and the mean completion time. However, there is industrial evidence that the variability of the production output may drastically impact the capability of managing the system operations, causing the observed system performance to be significantly different from what is expected. This paper presents a general theory and a methodology to analyze the cumulated output and the lot completion time variability of unreliable machines and systems characterized by general Markovian models. Both discrete models and continuous reward models are considered. We then discuss two simple examples that show how the theory developed in this paper can be applied to analyse the dependency of the output variability on the system parameters.
Mathematical models are widely used to describe complex biochemical systems. Model reduction, in order to limit the complexity of a system, is an important topic in the analysis of the model. A way to lower the complexity is to identify simple and recurrent sets of reactions and to substitute them with one or more reactions in such a way that the important properties are preserved but the analysis is easier. In this paper we consider the typical recurrent reaction scheme E + S ⇌ ES → E + P, which describes the mechanism by which an enzyme, E, binds a substrate, S, and the resulting substrate-bound enzyme, ES, gives rise to the generation of the product, P. If the initial quantities and the reaction rates are known, the temporal behaviour of all the quantities involved in the above reactions can be described exactly by a set of differential equations. It is often the case, however, that, as not all necessary information is available, only approximate analysis can be carried out. The most well-known approximate approach for the enzyme mechanism is provided by the kinetics of Michaelis-Menten. We propose, based on the concept of the flow-equivalent server which is used in Petri nets for model reduction, an alternative approximate kinetics for the analysis of enzymatic reactions. We evaluate the goodness of the proposed approximation with respect to both the exact analysis and the approximate kinetics of Michaelis and Menten. We show that the proposed new approximate kinetics gives a satisfactory approximation not only in the standard deterministic setting but also in the case when the behaviour is modelled by a stochastic process.
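For context, the sketch below integrates the full mass-action ODEs of the scheme E + S ⇌ ES → E + P side by side with the Michaelis-Menten reduction, which is the baseline the paper compares against. Rate constants and initial amounts are invented, and the flow-equivalent kinetics proposed in the paper is not reproduced here.

```python
# Full mass-action ODEs of E + S <=> ES -> E + P versus the Michaelis-Menten
# reduction, integrated with plain Euler.  All constants are assumed values.
import numpy as np

k1, km1, k2 = 1.0, 0.5, 0.3          # binding, unbinding, catalysis rates (assumed)
E0, S0 = 1.0, 10.0
dt, T = 0.001, 60.0

# Full model: state = (E, S, ES, P)
E, S, ES, P = E0, S0, 0.0, 0.0
# Reduced model: only the substrate is tracked, dS/dt = -Vmax * S / (Km + S)
Vmax, Km = k2 * E0, (km1 + k2) / k1
S_mm = S0

for _ in range(int(T / dt)):
    v_bind, v_unbind, v_cat = k1 * E * S, km1 * ES, k2 * ES
    E  += dt * (-v_bind + v_unbind + v_cat)
    S  += dt * (-v_bind + v_unbind)
    ES += dt * ( v_bind - v_unbind - v_cat)
    P  += dt * v_cat
    S_mm += dt * (-Vmax * S_mm / (Km + S_mm))

print("substrate left, full model: %.3f   Michaelis-Menten: %.3f" % (S, S_mm))
```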
CIRP Annals - Manufacturing Technology, 2015
Lecture Notes in Computer Science, 2015
Theoretical Computer Science, 2015
In this paper we consider large state space continuous time Markov chains arising in the field of systems biology. For a class of such models, namely, for density dependent families of Markov chains that model the interaction of large groups of identical objects, Kurtz has proposed two kinds of approximations. One is based on ordinary differential equations and provides a deterministic approximation while the other uses a diffusion process with which the resulting approximation is stochastic. The computational cost of the deterministic approximation is significantly lower but the diffusion approximation retains stochasticity and is able to reproduce relevant random features like variance, bimodality, and tail behavior that cannot be captured by a single deterministic quantity.
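A toy illustration of the difference: with a bistable drift the deterministic ODE settles into a single equilibrium, whereas Euler-Maruyama paths of the corresponding diffusion split between the two equilibria and exhibit the bimodality mentioned above. The drift, noise level and horizon below are invented and do not correspond to a specific model from the paper.

```python
# ODE vs diffusion approximation for a toy bistable model: the ODE gives a single
# trajectory, the diffusion paths produce a bimodal endpoint distribution.
import numpy as np

rng = np.random.default_rng(1)

def drift(x):
    return -x * (x - 0.5) * (x - 1.0)        # stable points at 0 and 1, unstable at 0.5

sigma, dt, T, n_paths = 0.15, 0.01, 40.0, 5000

x_ode = 0.45                                  # deterministic approximation
x_sde = np.full(n_paths, 0.45)                # diffusion approximation (Euler-Maruyama)
for _ in range(int(T / dt)):
    x_ode = x_ode + dt * drift(x_ode)
    x_sde = x_sde + dt * drift(x_sde) + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print("ODE endpoint: %.2f" % x_ode)
print("fraction of diffusion paths near 0: %.2f, near 1: %.2f"
      % ((x_sde < 0.5).mean(), (x_sde >= 0.5).mean()))
```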
Proceedings of the 18th IFAC World Congress, 2011
Performance evaluation models are used by companies to design, adapt, manage and control their production systems. In the literature, most of the effort has been dedicated to the development of efficient methodologies to estimate the first moment performance measures of production systems, such as the expected production rate, the buffer levels and the mean completion time. However, there is industrial evidence that the variability of the production output may drastically impact the capability of managing the system operations, causing the observed system performance to be significantly different from what is expected. This paper presents a general theory and a methodology to analyze the cumulated output and the lot completion time variability of unreliable machines and systems characterized by general Markovian models. Both discrete models and continuous reward models are considered. We then discuss two simple examples that show how the theory developed in this paper can be applied to analyse the dependency of the output variability on the system parameters.
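As a simple point of comparison, the Monte Carlo sketch below estimates the mean and variance of the cumulated output of a single unreliable machine over a finite horizon. The two-state up/down model, unit production rate and exponential failure/repair rates are assumptions for illustration; the paper's analytical Markov reward machinery is not reproduced here.

```python
# Monte Carlo estimate of the mean and variance of the cumulated output of a single
# unreliable machine (two-state up/down CTMC, unit production rate while up).
# Failure/repair rates and the horizon are assumed values.
import numpy as np

rng = np.random.default_rng(2)
fail_rate, repair_rate = 0.1, 0.5     # exponential failure and repair rates (assumed)
T, n_runs = 100.0, 20000

outputs = np.empty(n_runs)
for i in range(n_runs):
    t, up, produced = 0.0, True, 0.0
    while t < T:
        rate = fail_rate if up else repair_rate
        dwell = min(rng.exponential(1.0 / rate), T - t)
        if up:
            produced += dwell         # unit production rate while the machine is up
        t += dwell
        up = not up
    outputs[i] = produced

print("cumulated output over T=%.0f: mean %.1f, variance %.1f"
      % (T, outputs.mean(), outputs.var()))
```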
Proceedings of the 8th International Conference on Performance Evaluation Methodologies and Tools, 2015