Pierre Dersin - Academia.edu

Papers by Pierre Dersin

Harnessing AI for Reliability and Maintenance

Game Theory and Cyber Kill Chain: A Strategic Approach to Cybersecurity

Confidence Intervals for RUL: A New Approach based on Time Transformation and Reliability Theory

Proceedings of the 33rd European Safety and Reliability Conference

Tolerable hazard rate for function with independent safety barrier acting as failure detection and negation mechanism

We consider safety devices that include a barrier which detects unsafe failures and negates them by bringing the system to a safe fallback state when they occur. Devices of this type are found, for example, in power plants and grids, in the process industries, and in railway systems. The question addressed is the allocation of quantitative safety targets to the main function and to the barrier, so as to limit the accident risk while complying with the prescribed tolerable hazard rate (THR). We present a method which, unlike for instance the one underlying Annex A4 of the CENELEC EN 50129 railway-signalling standard, does not assume an immediate return of the system to the nominal state from the safe fallback state. In the approach presented here, we obtain the time-dependent probability that the system resides in a safe state, as well as the time-dependent and asymptotic transition rate toward a hazardous state. The methodology rests on solving the Chapman-Kolmogorov equations in the transient regime for the Markov chain describing the device. Comparison with the EN 50129 method confirms that the latter can lead to optimistic predictions, and thus potentially to an underestimation of the risks. Before applying any formula for computing the system THR, it is important to state clearly all the underlying assumptions regarding maintenance and the operating regime of the system under study.
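The transient computation described in this abstract can be sketched on a minimal three-state Markov model (nominal, safe fallback, hazardous). All rates below are illustrative assumptions, not values from the paper; the hazard rate is obtained from the transient Chapman-Kolmogorov solution rather than from a closed-form THR formula.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state Markov model (illustrative rates, not from the paper):
# state 0 = nominal, state 1 = safe fallback, state 2 = hazardous (absorbing).
lam = 1e-4   # unsafe failure rate of the main function (per hour)
c   = 0.99   # coverage: fraction of unsafe failures detected and negated
mu  = 0.1    # restoration rate from the fallback state to nominal

Q = np.array([
    [-lam,   c * lam,  (1 - c) * lam],
    [ mu,   -mu,        0.0         ],
    [ 0.0,   0.0,       0.0         ],   # hazardous state is absorbing
])

def hazard_rate(t, p0=np.array([1.0, 0.0, 0.0])):
    """Hazard transition rate at time t: probability flux into the
    hazardous state divided by the probability of still being safe."""
    p = p0 @ expm(Q * t)          # transient Chapman-Kolmogorov solution
    flux = p[0] * (1 - c) * lam   # only the nominal state feeds the hazard
    return flux / (p[0] + p[1])

for t in [1.0, 1e3, 1e5]:
    print(f"t = {t:>8.0f} h   hazard rate = {hazard_rate(t):.3e} /h")
```

For short times this reproduces the intuitive value (1 - c) * lam; at longer horizons the rate reflects the time the system spends in the fallback state, which is exactly the effect a formula assuming instantaneous restoration would miss.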

Diagnostic automatisé d'aiguillage ferroviaire par apprentissage statistique (Automated diagnosis of railway switches via statistical learning)

HAL (Le Centre pour la Communication Scientifique Directe), Oct 13, 2020

The aim of this study is to compare the performance of functional data analysis methods against an approach based on domain-expert descriptors for automating the diagnosis of railway switches.

Data-driven undervoltage analysis of a 25kV Traction Sub-Station

2022 IEEE 9th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON)

Estimation of Markovian Reliability Systems with Logistics via Cross-Entropy

HAL (Le Centre pour la Communication Scientifique Directe), Jul 1, 2018

Urban passenger rail systems are large-scale systems comprising highly reliable redundant structures and logistics (e.g., availability of spares and repair personnel, inspection protocols, etc.). To meet strict contractual obligations, the steady-state unavailability of such systems needs to be accurately estimated as a measure of a solution's life-cycle costs. We use Markovian Stochastic Petri Net (SPN) models to conveniently represent the systems. We propose a multi-level Cross-Entropy (CE) optimization scheme in which we exploit the regenerative structure of the underlying continuous-time Markov chain (CTMC) to determine optimal Importance Sampling (IS) rates in the case of rare events [3]. The CE scheme is used in a pre-simulation and applied only to the failure transitions of the Markovian SPN models. The proposed method divides a rare problem into a series of less rare problems by considering increasingly rare component failures. In the first stage, a standard regenerative simulation is used for non-rare system failures. At each subsequent stage, the rarity is progressively increased (by decreasing the component failure rates) and the IS rates of transitions obtained from the previous problem are used at the current stage. The final pre-simulation stage provides a vector of optimized IS rates that are used in the main simulation. The experimental results show the bounded relative error (BRE) property as the rarity of the original problem increases, and consequently a considerable variance reduction and gain (in terms of work-normalized variance).
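As a toy illustration of the CE pre-simulation idea (not the paper's SPN/CTMC setting), the sketch below tunes the rate of an exponential importance-sampling distribution with the closed-form CE update and then runs the main IS estimation for the rare event that both units of a redundant pair fail within a short window. All numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: probability that BOTH units of a redundant pair
# fail before t0, with true failure rate lam -> a rare event.
lam, t0, n = 1e-3, 1.0, 20_000
true_p = (1 - np.exp(-lam * t0)) ** 2

def is_run(lam_tilt):
    """One IS run: sample failure times at the tilted rate, reweight."""
    x = rng.exponential(1 / lam_tilt, size=(n, 2))
    hit = (x < t0).all(axis=1)
    # likelihood ratio of the two exponential densities
    w = (lam / lam_tilt) ** 2 * np.exp(-(lam - lam_tilt) * x.sum(axis=1))
    return hit, w, x

# CE pre-simulation: a few iterations tuning the tilted rate via the
# closed-form CE update for the exponential family.
lam_tilt = 1.0  # start with a non-rare tilt
for _ in range(3):
    hit, w, x = is_run(lam_tilt)
    wi = w * hit
    lam_tilt = 2 * wi.sum() / (wi[:, None] * x).sum()

# Main simulation with the optimized IS rate
hit, w, _ = is_run(lam_tilt)
est = (hit * w).mean()
print(f"true p = {true_p:.3e}   IS estimate = {est:.3e}")
```

The same principle carries over to the multi-level scheme in the abstract: each pre-simulation stage hands its tuned rates to the next, rarer, stage.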

The class of life time distributions with a mean residual life linear in time

Safety and Reliability – Safe Societies in a Changing World, 2018

Optimization of maintenance policy based on operational reliability analysis: application to railway switches & crossings

The present communication reports on a collaboration between ALSTOM Transport and Luleå University of Technology, under the sponsorship of Trafikverket, the Swedish infrastructure manager. For 2020, th ...

Optimization of preventive maintenance policy based on operational reliability analysis (Application to tramway access doors)

RAM performance monitoring and MTBF demonstration on the SBB SA-NBS project

Theory and Applications, 2009

Large scale system effectiveness analysis. First annual milestone report, September 30, 1977--September 30, 1978

Modeling Remaining Useful Life Dynamics in Reliability Engineering

Prognostics and Health Management in Railways

Industrial maintenance has evolved considerably over the last 70 years. Roughly speaking, one could say that the first generation, until approximately 1950, was characterized by a purely "corrective" perspective, i.e., failures led to repairs; then came the second generation, characterized by scheduled overhauls and by maintenance control and planning tools (roughly the period 1950-1980). From the 1980s onward, the notion of condition-based maintenance (CBM) gained ground. With the turn of the twenty-first century, great interest in "predictive maintenance" has emerged, along with the concept of prognostics and health management (PHM). A number of rail companies and original equipment manufacturers now have a PHM department.

Some Properties of the Dual Adaptive Stochastic Control Algorithm

The purpose of this paper is to compare analytically the properties of the suboptimal dual adaptive stochastic control algorithm when the plant dynamics contain multiplicative white-noise parameters. A simple scalar example is used for this analysis.

Reliability demonstration tests: Decision rules and associated risks

Approximate Zero-Variance Importance Sampling for static network reliability estimation with node failures and application to rail systems

2016 Winter Simulation Conference (WSC), Dec 1, 2016

To accurately estimate the reliability of highly reliable rail systems and comply with contractual obligations, rail system suppliers such as ALSTOM require efficient reliability estimation techniques. Standard Monte Carlo methods in their crude form are inefficient at estimating the static network reliability of highly reliable systems. Importance Sampling techniques are an advanced class of variance reduction techniques used for rare-event analysis. In static network reliability estimation, the graph models often deal with failing links. In this paper, we propose an adaptation of an approximate Zero-Variance Importance Sampling method to evaluate the reliability of real transport systems where nodes are the failing components. This is more representative of railway telecommunication system behavior. Robustness measures of the accuracy of the estimates, the bounded and vanishing relative error properties, are discussed, and results from a real network (the Data Communication System used in an automated train control system), showing the bounded relative error property, are presented. The current approach to predicting the availability of such a system involves the creation of a Markov model which characterizes the different failure paths of the network. Typically, failure paths up to third order are included. The selection of which paths to model is made by reliability modeling experts. The resulting models are hard to validate, both by other experts and by the end user. Furthermore, it is not clear whether there exist relevant failure paths that have not been modeled.
Modeling the communication network as a graph, with communication equipment as nodes and communication paths as links, overcomes both shortcomings: first, the model can easily be validated by the design expert and the client; and second, by defining successful communication as the existence of a path between the communicating devices, no modeling of failure paths is needed, because path-finding algorithms can be used to establish connectivity. The static network reliability problem deals with the estimation of the probability that a given set of nodes in a graph model are connected when each individual component (link or node) is in an UP/DOWN (working/failed) state according to its respective probability. The case where links are the failing elements is essential in many applications and has been extensively studied (Cancela, Khadiri, and Rubino 2009). However, there is a wide range of applications where nodes are the failing components, such as the DCS, e.g., models of network survivability (Gertsbakh, Shpungin, and Vaisman 2014). This requires an adaptation of the existing methods to the case of node failures. Formally, a node failure means that the node becomes nonfunctional and its associated links useless. In the 2-terminal or source-to-terminal reliability problem, two nodes of the graph are fixed, and the reliability of the network is defined as the probability of having a path between those two nodes. In such an analysis, a node failure causes a higher number of s-t paths to become nonfunctional than a link failure does (depending on the node's degree). Thus, the reliability of a network is affected more severely in the case of node failures. Computing the unreliability of highly reliable systems (e.g., the DCS) requires efficient simulation techniques. For large graphs, an exact computation of the unreliability u becomes an NP-hard problem that is impractical to solve analytically (L'Ecuyer et al. 2011).
Crude Monte Carlo (CMC) can estimate u by sampling n stochastically independent realizations of the graph and computing the proportion of these n realizations for which s and t are not connected (L'Ecuyer et al. 2011). For rare events, when u << 1, the accuracy of the simulation process is captured by the relative error RE (the ratio of the standard deviation to the mean of the estimator), which scales as 1/sqrt(nu) (L'Ecuyer et al. 2011, Rubino and Tuffin 2009). Thus, as u → 0, a fixed RE requires excessively large values of n, which increases the computational effort and cost. Importance Sampling (IS) is an advanced class of variance reduction techniques for rare-event estimation problems, based on changing the sampling probabilities of the components (i.e., nodes in our case) so that system failure occurs more frequently. The bias of the estimator is removed by multiplying the original estimator by an appropriate likelihood ratio (the ratio of the original probability to the new sampling probability), and the estimator is the average over the n realizations (Rubino and Tuffin 2009). This is the general framework of the IS method. Finding this change of measure is the main difficulty in IS, because if the sampling probabilities which lead to frequent failure are not properly selected, the likelihood ratio may have a huge variance, resulting in a bad estimate even if the failure event is no longer rare (L'Ecuyer, Mandjes, and Tuffin 2009). The robustness of the estimators in such cases is assessed through relative error properties such as bounded relative error (BRE) or vanishing relative error (VRE), as described by L'Ecuyer et al. (2011). If the relative width of the confidence interval (CI) on u, based on the central limit theorem (CLT) for a fixed n, remains bounded as u → 0, the BRE property holds; VRE holds if it tends to zero (L'Ecuyer et al. 2011, L'Ecuyer et al. 2010).
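The crude Monte Carlo estimator and its relative error can be sketched on a small illustrative graph (a toy topology, not the actual DCS): nodes rather than links fail, and u_hat is the fraction of sampled node-state vectors that disconnect s from t.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy graph (illustrative): adjacency list over nodes 0..4, with
# source s = 0 and terminal t = 4; only interior nodes can fail.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
interior = [1, 2, 3]
q_node = 0.01          # failure probability of each interior node

def connected(up):
    """Depth-first search from s to t over UP nodes (s, t assumed perfect)."""
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        if v == 4:
            return True
        for w in adj[v]:
            if w not in seen and (w not in interior or up[w]):
                seen.add(w)
                stack.append(w)
    return False

# Crude Monte Carlo estimate of the unreliability u = P(s-t disconnected)
n = 100_000
fails = 0
for _ in range(n):
    up = {v: rng.random() > q_node for v in interior}
    fails += not connected(up)
u_hat = fails / n
re = np.sqrt(u_hat * (1 - u_hat) / n) / u_hat   # relative error of the estimate
print(f"u_hat = {u_hat:.3e}   RE = {re:.1%}")
```

With q_node this large the event is not yet rare; shrinking q_node toward DCS-like values makes the RE blow up for fixed n, which is exactly the regime that motivates IS.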
The aim of this paper is to propose and adapt the dynamic importance sampling method based on Monte Carlo simulations, as described by L'Ecuyer et al. (2011), to node failures, and to demonstrate its application on an existing example of a communication network (the DCS). We propose an approximation of the zero-variance IS method based on minimal cuts having relatively high failure probability in the subgraph that remains after removing the nonfunctional nodes and their associated links (a link is removed, whether functional or not, if one of its associated nodes has failed), while enforcing the states of the nodes
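A stripped-down illustration of the IS reweighting step (a toy three-node failure structure with a single uniform biased probability, not the paper's minimal-cut approximation): node failures are sampled at an inflated probability, and the failure indicator is reweighted by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting (illustrative, not the DCS): interior nodes 1, 2, 3 fail
# with small probability q; the network is down if node 3 fails or if
# nodes 1 and 2 both fail.
q, q_is = 1e-4, 0.5          # true and biased node-failure probabilities
n = 50_000

down = rng.random((n, 3)) < q_is            # biased node-failure indicators
system_down = down[:, 2] | (down[:, 0] & down[:, 1])
# likelihood ratio: product over nodes of (true prob. / biased prob.)
w = np.prod(np.where(down, q / q_is, (1 - q) / (1 - q_is)), axis=1)
u_hat = (system_down * w).mean()

u_exact = q + (1 - q) * q * q               # node 3, or nodes 1 and 2
print(f"u_hat = {u_hat:.3e}   exact u = {u_exact:.3e}")
```

A crude estimator with this q would almost never observe a failure at n = 50,000, while the reweighted estimator remains accurate; the zero-variance approximation in the paper goes further by choosing per-node biases from the minimal cuts instead of one uniform value.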

Designing for RAM in Railway Systems: An Application to the Railway Signaling Subsystem

Keynote speech: PHM in railways: Big data or smart data?

2017 Prognostics and System Health Management Conference (PHM-Harbin), 2017

Prognostics & Health Management (PHM) has undergone very fast development since the beginning of the new century and holds promise for making a number of industrial systems, such as railway systems, both more reliable and more cost-effective in terms of maintenance. The impressive developments in data science in recent years ("Big Data") provide powerful tools for extracting useful and actionable information from data acquired in the field or from test benches. Purely data-driven approaches require no physical understanding and are quite flexible, but they do require large volumes of data (pertaining to both healthy and degraded conditions), and their performance is highly dependent on the quality of those data. The computational load can be very high. But railway suppliers have accumulated decades of know-how on the physics of their systems, in both normal and degraded conditions. This knowledge can be exploited to the fullest by designing "virtual prototypes", i.e., multiphysics models of the actual systems. The key challenge is taking uncertainty into account, for instance uncertainty in future operating conditions. Hybrid approaches, i.e., combining knowledge of physical processes with information from sensor readings to enhance diagnostic and prognostic capabilities, seem to combine the advantages of both methods. Model predictions can be adjusted using measured data (either off-line or on-line). The above considerations are illustrated on railway subsystems such as HVAC (heating, ventilation and air-conditioning).
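The "adjust model predictions using measured data" step of a hybrid approach can be sketched with a scalar Kalman filter correcting a simple physics-based degradation model; all rates and noise levels below are illustrative assumptions, not values from any railway system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hybrid setup: a physics model predicts a linearly growing
# health index, and a scalar Kalman filter corrects that prediction each
# time a noisy sensor measurement arrives.
a, dt = 0.02, 1.0            # assumed degradation rate (physics model)
q_var, r_var = 1e-4, 4e-2    # process and measurement noise variances
x_hat, p = 0.0, 1e-2         # initial health-index estimate and variance

true_x = 0.0
for k in range(50):
    # --- physics-model prediction step ---
    x_hat += a * dt
    p += q_var
    # --- measurement update: the actual rate differs from the model ---
    true_x += 0.025 * dt
    z = true_x + rng.normal(0.0, np.sqrt(r_var))
    k_gain = p / (p + r_var)
    x_hat += k_gain * (z - x_hat)        # correct the model with the data
    p *= (1 - k_gain)

print(f"model-only: {a * dt * 50:.2f}   hybrid: {x_hat:.2f}"
      f"   true: {true_x:.2f}")
```

The model-only prediction drifts away from the true degradation because its assumed rate is wrong; the hybrid estimate tracks the truth, which is the essence of combining a virtual prototype with field measurements.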
