Nam Dinh - Academia.edu

Papers by Nam Dinh

Demonstration of a Data-Driven Physics-Based Approach for Computationally Efficient CFD Prediction of Two-Phase Bubbly Flow

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Mar 30, 2020

To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multi-scale physics-guided data-driven approach, the Feature Similarity Measurement (FSM) technique, was developed for error estimation in two-phase flow simulations using coarse-mesh CFD, to achieve accuracy comparable to fine-mesh simulations while remaining fast-running. By defining physics-guided parameters and variable gradients as physical features, FSM can capture the underlying local patterns in the coarse-mesh CFD simulation. Massive low-fidelity data and corresponding high-fidelity data are used to explore the underlying information relevant to the main simulation errors and the effects of phenomenological scaling. By learning from previous simulation data, a surrogate model using a deep feedforward neural network (DFNN) can be developed and trained to estimate the simulation error of coarse-mesh CFD. In a demonstration case of two-phase bubbly flow, the DFNN model captured and corrected the unphysical "peaks" in the velocity and void fraction profiles near the wall in the coarse-mesh configuration, even for extrapolative predictions. The research documented here supports the feasibility of physics-guided deep learning methods for coarse-mesh CFD simulations, which have potential for efficient industrial design.
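The core idea of the abstract above, a feedforward network that learns a mapping from local physical features to the coarse-mesh simulation error, can be sketched in a few lines of NumPy. Everything here (the two features, the synthetic "error" target, the network size) is invented for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "physical features" per coarse-mesh cell, e.g. a void-fraction
# gradient and a velocity gradient (names and values are synthetic).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
# Hypothetical coarse-mesh "simulation error" target: a smooth nonlinear
# function of the features, standing in for fine-minus-coarse discrepancy.
y = np.tanh(X[:, 0]) * X[:, 1]

# One-hidden-layer feedforward network, trained by plain full-batch
# gradient descent on mean-squared error.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    return h, (h @ W2 + b2).ravel()     # predicted error per cell

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)       # loss before training

lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    g = 2.0 * (pred - y)[:, None] / len(y)    # dL/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss1 = np.mean((pred - y) ** 2)        # loss after training
```

The trained surrogate would then be evaluated on features from a new coarse-mesh run to estimate (and subtract) its error; the papers use a deeper network and real simulation data rather than this toy target.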

Development and Assessment of a Nearly Autonomous Management and Control System for Advanced Reactors

arXiv (Cornell University), Aug 28, 2020

This paper develops a Nearly Autonomous Management and Control (NAMAC) system for advanced reactors. The development process of NAMAC is characterized by a three-layer architecture: the knowledge base, the Digital Twin (DT) development layer, and the NAMAC operational layer. The DT is described as a knowledge acquisition system built from the knowledge base for intended uses in the NAMAC system. A set of DTs with different functions is developed with acceptable performance and assembled according to the NAMAC operational workflow to furnish recommendations to operators. To demonstrate the capability of the NAMAC system, a case study is designed in which a baseline NAMAC is implemented for operating a simulator of the Experimental Breeder Reactor II during a single loss-of-flow accident. When NAMAC is operated within its training domain, it can provide reasonable recommendations that prevent the peak fuel centerline temperature from exceeding a safety criterion.
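The recommendation step described above (a digital twin predicts a safety variable for candidate actions, and the system recommends only actions that respect the criterion) can be sketched as follows. The surrogate relation, the flow-fraction actions, and the 800 C limit are all illustrative stand-ins, not EBR-II data:

```python
# Hypothetical DT surrogate: peak fuel centerline temperature (C) as a toy
# function of the remaining coolant flow fraction after a loss-of-flow event.
def dt_peak_temperature(flow_fraction):
    return 600.0 + 400.0 * (1.0 - flow_fraction)

SAFETY_LIMIT_C = 800.0  # illustrative safety criterion

def recommend(candidate_actions):
    """Return the least-intervention action predicted to stay below the limit."""
    safe = [a for a in candidate_actions
            if dt_peak_temperature(a) < SAFETY_LIMIT_C]
    # Prefer the lowest acceptable flow fraction (least intervention),
    # mirroring the idea of choosing among acceptable recommendations.
    return min(safe) if safe else None

best = recommend([0.2, 0.4, 0.6, 0.8])   # -> 0.6 with these toy numbers
```

In the paper this loop runs against a reactor simulator with trained DTs rather than a closed-form surrogate, and the recommendation logic is far richer; the sketch only shows the shape of the workflow.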

Machine Learning from RANS and LES to Inform Coarse Grid Simulations

A Framework for Assessment of Severe Accident Management Effectiveness in Nordic BWR Plants

Expected value and standard deviation of the probability of early containment failure.

Validation and Uncertainty Quantification for Wall Boiling Closure Relations in Multiphase-CFD Solver

arXiv (Cornell University), Sep 24, 2018

The two-fluid-model-based Multiphase Computational Fluid Dynamics (MCFD) approach is considered one of the most promising tools for investigating two-phase flow and boiling systems for engineering purposes. The MCFD solver requires closure relations to make the conservation equations solvable. The wall boiling closure relations, for example, provide predictions of wall superheat and heat partitioning. The accuracy of these closure relations significantly influences the predictive capability of the solver. In this paper, a study of validation and uncertainty quantification (VUQ) for the wall boiling closure relations in an MCFD solver is performed. The work has three purposes: (i) identify parameters influential to the quantities of interest of the boiling system through sensitivity analysis; (ii) evaluate the parameter uncertainty through Bayesian inference with the support of multiple datasets; and (iii) quantitatively measure the agreement between solver predictions and datasets. The widely used Kurul-Podowski wall boiling closure relation is studied. Several statistical methods are used, including the Morris screening method for global sensitivity analysis, Markov Chain Monte Carlo (MCMC) for inverse Bayesian inference, and the confidence interval as the validation metric. The VUQ results indicate that the current empirical-correlation-based wall boiling closure relations achieve satisfactory agreement on wall superheat predictions. However, the closure relations also demonstrate intrinsic inconsistency and fail to give consistently accurate predictions for all quantities of interest over the well-developed nucleate boiling regime.
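The Morris screening step named in the abstract ranks parameters by their mean absolute elementary effect. A minimal sketch, with a toy three-parameter model standing in for a wall-boiling closure (the model and parameter ranges are invented, not the Kurul-Podowski correlation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a closure: "wall superheat" as a function of three
# parameters, with deliberately different sensitivities.
def model(p):
    a, b, c = p
    return 2.0 * a + 0.5 * b ** 2 + 0.01 * c

def morris_mu_star(f, dim, trajectories=20, delta=0.1):
    """Mean absolute elementary effect per parameter (Morris screening)."""
    mu_star = np.zeros(dim)
    for _ in range(trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=dim)   # random base point
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta                             # one-at-a-time step
            mu_star[i] += abs(f(xp) - fx) / delta      # elementary effect
    return mu_star / trajectories

mu_star = morris_mu_star(model, dim=3)
ranking = list(np.argsort(mu_star)[::-1])   # most influential first
```

With this toy model the ranking recovers parameter 0 as most influential and parameter 2 as negligible; in the paper the same screening is applied to the actual solver to decide which closure parameters enter the Bayesian inference.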

Direct Numerical Simulation of Low and Unitary Prandtl Number Fluids in Reactor Downcomer Geometry

Nuclear Technology, Jun 16, 2023

The buoyancy effect on convective heat transfer of non-conventional coolants, such as liquid metals and molten salts, under low-flow conditions is a crucial safety factor for advanced reactors in transient or accident scenarios. The distinct heat transfer characteristics of non-unitary-Prandtl fluids and the inherent complexity of low-flow mixed convection require the development of novel turbulence and heat transfer models that adapt to the different spatiotemporal scales involved in mixed convection heat transfer. In this work, direct numerical simulation of low-flow mixed convection is carried out at low-to-unitary Prandtl numbers of industrial interest. Time-averaged statistics, the turbulent Prandtl number, and time signals are analyzed to investigate the mixed convection phenomenon. The time-averaged statistics show that the buoyant plume altered the velocity boundary layer as well as the intensity of fluctuations near both walls and the channel centerline. The buoyancy effect also produced different degrees of convective heat transfer enhancement and impairment depending on the Prandtl and Richardson numbers. Time-series analysis was conducted on the sodium mixed convection case to emphasize low-Pr mixed convection behavior in the transition region. The resulting power spectral density and wavelet spectrogram suggest a possible large convective structure in the transition region. Future work will focus on providing broader data coverage of the Pr-Ri parameter space to facilitate a more comprehensive analysis of mixed convection.
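The power-spectral-density analysis mentioned above can be illustrated with a synthetic probe signal: a low-frequency oscillation (standing in for a large convective structure) buried in noise, recovered via a one-sided periodogram. The sampling rate and 2 Hz structure frequency are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 100.0                         # sampling frequency (Hz), illustrative
t = np.arange(0.0, 20.0, 1.0 / fs)
f_struct = 2.0                     # hypothetical structure frequency (Hz)
# Synthetic "velocity probe" signal: coherent oscillation plus noise.
signal = np.sin(2 * np.pi * f_struct * t) + 0.3 * rng.standard_normal(t.size)

# One-sided power spectral density via the periodogram.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * t.size)

peak_freq = freqs[np.argmax(psd)]  # dominant frequency recovered from noise
```

The DNS study additionally uses wavelet spectrograms to localize such structures in time, which a plain periodogram cannot do.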

Advanced Transient Diagnostic with Ensemble Digital Twin Modeling

arXiv (Cornell University), May 23, 2022

The use of machine learning (ML) models as digital twins for reduced-order modeling (ROM) in lieu of system codes has gained traction over the past few years. However, due to the complex and nonlinear nature of nuclear reactor transients, as well as the large range of tasks required, it is infeasible for a single ML model to generalize across all tasks. In this paper, we incorporate issue-specific digital-twin ML models into ensembles to enhance the prediction outcome. The ensemble also utilizes an indirect probabilistic tracking method of surrogate state variables to produce accurate predictions of unobservable safety goals. The method, named Ensemble Diagnostic Digital-twin Modeling (EDDM), can not only select the most appropriate predictions from the incorporated diagnostic digital-twin models but can also reduce the generalization error associated with training, as compared to single models.
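The selection idea, routing each input to the issue-specific model rather than averaging all models, can be sketched as follows. The two "twins", the regime classifier, and the threshold are toy stand-ins, not the paper's trained models:

```python
# Each "diagnostic digital twin" is a simple predictor specialized for one
# transient regime (illustrative stand-ins for trained ML models).
twins = {
    "low_flow":  lambda x: 2.0 * x,          # toy low-flow regime model
    "high_flow": lambda x: 0.5 * x + 10.0,   # toy high-flow regime model
}

def classify_regime(x):
    """Toy issue classifier routing the input to the appropriate twin."""
    return "low_flow" if x < 10.0 else "high_flow"

def ensemble_predict(x):
    # Select the issue-specific model instead of blending all of them,
    # mirroring the "most appropriate prediction" selection in EDDM.
    return twins[classify_regime(x)](x)

pred_low = ensemble_predict(4.0)     # routed to the low_flow twin
pred_high = ensemble_predict(20.0)   # routed to the high_flow twin
```

The paper's ensemble adds probabilistic tracking of surrogate state variables on top of this routing, which the sketch omits.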

Failure Mechanism Traceability and Application in Human System Interface of Nuclear Power Plants using RESHA

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), May 24, 2022

In recent years, there has been considerable effort to modernize existing and new nuclear power plants with digital instrumentation and control (DI&C) systems. However, there has also been considerable concern, from both industry and regulatory bodies, about the risk and consequence analysis of these systems. Of particular concern are digital common cause failures (CCFs), specifically those related to software defects. These "misbehaviors" by the software can occur in both the control and the monitoring of a system. While many new methods have been proposed to identify potential software failure modes, such as Systems-theoretic Process Analysis (STPA) and Hazard and Consequence Analysis for Digital Systems (HAZCADS), these methods focus primarily on the control action pathway of a system. In contrast, the information feedback pathway lacks unsafe control actions (UCAs), which are typically related to software basic events; thus, the assessment of software basic events in such systems is unclear. In this work, we present the concepts of intermediate processors and unsafe information flow (UIF) to help safety analysts trace failure mechanisms in the feedback pathway, and we show how they can be integrated into a fault tree for improved assessment capability. The concepts are demonstrated in two comprehensive case studies: a smart sensor integrated platform for unmanned autonomous vehicles, and a representative advanced human system interface (HSI) for safety-critical plant monitoring. The qualitative software basic events are identified, and a fault tree analysis is conducted based on a modified Redundancy-guided Systems-theoretic Hazard Analysis (RESHA) methodology. The case studies demonstrate the use of UIFs and intermediate processors in the fault tree to improve traceability of software failures in highly complex digital instrumentation feedback. The improved method can also clarify fault tree construction when multiple component dependencies are present in the system. * In this work, 'failure' is used generally to mean an undesirable outcome of the system, which includes unintended design or 'misbehaviors.'
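A fault tree incorporating a feedback-pathway failure can be sketched with simple Boolean gates. The event names (sensor fault, processor software defect, display defect) are illustrative, not the basic events from the RESHA case studies:

```python
# Minimal fault-tree evaluation with OR/AND gates over Boolean basic events.
def OR(*events):  return any(events)
def AND(*events): return all(events)

def hsi_misleads_operator(sensor_fault, processor_sw_defect, display_sw_defect):
    # Unsafe information flow (UIF): the feedback chain is corrupted if any
    # stage - sensor, intermediate processor, or display - fails.
    uif = OR(sensor_fault, processor_sw_defect, display_sw_defect)
    # Top event: corrupted feedback AND the operator relying on it
    # (reliance modeled as always true for this toy example).
    return AND(uif, True)

# A defect in the intermediate processor alone is enough to raise the top event.
outcome = hsi_misleads_operator(False, True, False)
```

In the actual methodology the basic events carry probabilities and the tree is quantified; the sketch only shows how an intermediate-processor event enters the feedback-pathway logic.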

Systems-Theoretic Hazard Analysis of Digital Human-System Interface Relevant to Reactor Trip

12th Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies (NPIC&HMIT 2021), 2021

The human-system interface (HSI) is one of the key advanced design features applied to modern digital instrumentation and control systems of nuclear power plants. The conventional design is based on a compact workstation-based system within the control room. The compact workstation provides both a strategic operating environment and a convenient display of the plant status information necessary to the operator. The control environment is further enhanced through display panels, visual and auditory alarms, and procedure systems. However, just like the legacy control room, the HSI should incorporate diversity to demonstrate sufficient defense-in-depth protection against common cause failures of the safety system. Furthermore, the vulnerability of the HSI is affected by a plethora of factors, such as human error, cyberattacks, and software common cause failures, that complicate the design and analysis. Therefore, this work aims to identify and evaluate existing system vulnerabilities to support the licensing, deployment, and operation of HSI designs, especially the functions relevant to a reactor trip. We performed a systematic hazard analysis to investigate potential vulnerabilities within the HSI design using the novel redundancy-guided systems-theoretic hazard analysis. This method was developed and demonstrated by Idaho National Laboratory under a project initiated by the Risk-Informed Systems Analysis Pathway of the U.S. Department of Energy's Light Water Reactor Sustainability Program. The goal of the project is to develop a strong technical basis for risk assessment strategies that support effective, reliable, and licensable digital instrumentation and control technologies.

Application of Orthogonal Defect Classification for Software Reliability Analysis

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), May 24, 2022

The modernization of existing and new nuclear power plants with digital instrumentation and control (DI&C) systems is a recent and highly trending topic. However, there is no strong consensus on best-estimate reliability methodologies from either the United States (U.S.) Nuclear Regulatory Commission (NRC) or the industry. This has resulted in hesitation about further modernization projects until a more unified methodology is realized. In this work, we develop an approach called Orthogonal-defect Classification for Assessing Software Reliability (ORCAS) to quantify the probabilities of various software failure modes in a DI&C system. The method utilizes accepted industry methodologies for software quality assurance that are also verified by experimental or mathematical formulations. In essence, the approach combines a semantic failure classification model with a reliability growth model to predict (and quantify) the potential failure modes of a DI&C software system. The semantic classification model addresses the question: how do latent defects in software contribute to different software failure root causes? Reliability growth models are then used to address the question: given the connection between latent defects and software failure root causes, how can we quantify the reliability of the software? A case study was conducted on a representative I&C platform (ChibiOS) running smart sensor acquisition software developed by Virginia Commonwealth University (VCU). The testing and evidence collection guidance in ORCAS was applied, and defects were uncovered in the software. Qualitative evidence, such as condition coverage, was used to gauge the completeness and trustworthiness of the assessment, while quantitative evidence was used to determine the software failure probabilities. The reliability of the software was then estimated and compared to existing operational data for the sensor device. It is demonstrated that, by using ORCAS, a semantic reasoning framework can be developed to justify whether the software is reliable (or unreliable) while still leveraging the strength of existing methods.
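The reliability-growth step can be illustrated with the classic Goel-Okumoto NHPP model, m(t) = a(1 - e^(-bt)), fitted here by a coarse grid search. The defect counts are synthetic, not the VCU case-study data, and the grid search is a simple stand-in for maximum-likelihood estimation:

```python
import math

# Cumulative defects discovered after each test phase (synthetic counts).
test_time = [1.0, 2.0, 3.0, 4.0, 5.0]
cum_defects = [8, 13, 16, 18, 19]

# Goel-Okumoto mean-value function: m(t) = a * (1 - exp(-b * t)),
# where a = expected total defects and b = detection rate.
def sse(a, b):
    return sum((a * (1.0 - math.exp(-b * t)) - m) ** 2
               for t, m in zip(test_time, cum_defects))

# Coarse grid search over plausible (a, b) values.
best = min(((sse(a, b / 100.0), a, b / 100.0)
            for a in range(15, 31) for b in range(10, 101)),
           key=lambda item: item[0])
_, a_hat, b_hat = best

# Expected residual (undiscovered) defects after the last test phase:
# a proxy for the remaining failure potential of the software.
residual = a_hat * math.exp(-b_hat * test_time[-1])
```

In ORCAS such a growth model is combined with qualitative evidence (e.g., condition coverage) before any failure probability is claimed; the fit alone is not the whole assessment.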

Demonstration of a Data-Driven Approach for Error Estimation in Two-Phase Flow Simulation Using Coarse-Mesh CFD

Deep Learning Interfacial Momentum Closures in Coarse-Mesh CFD Two-Phase Flow Simulation Using Validation Data

arXiv (Cornell University), May 7, 2020

Multiphase flow phenomena are widely observed in industrial applications, yet they remain a challenging, unsolved problem. Three-dimensional computational fluid dynamics (CFD) approaches resolve the flow fields on finer spatial and temporal scales, which can complement dedicated experimental study. However, closures must be introduced to reflect the underlying physics in multiphase flow. Among them, the interfacial forces, including the drag, lift, turbulent-dispersion, and wall-lubrication forces, play an important role in bubble distribution and migration in liquid-vapor two-phase flows. Development of these closures has traditionally relied on experimental data and analytical derivation with simplified assumptions that usually cannot deliver a universal solution across a wide range of flow conditions. In this paper, a data-driven approach, named feature-similarity measurement (FSM), is developed and applied to improve the simulation capability of two-phase flow with a coarse-mesh CFD approach. Interfacial momentum transfer in adiabatic bubbly flow serves as the focus of the present study. Both a mature and a simplified set of interfacial closures are taken as the low-fidelity data. Validation data (including relevant experimental data and validated fine-mesh CFD simulation results) are adopted as high-fidelity data. Qualitative and quantitative analyses performed in this paper reveal that FSM can substantially improve the prediction of the coarse-mesh CFD model, regardless of the choice of interfacial closures, and that it provides scalability and consistency across discontinuous flow regimes. This demonstrates that data-driven methods can aid multiphase flow modeling by exploring the connections between local physical features and simulation errors.
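Of the interfacial forces listed above, the drag closure is the simplest to sketch. The example below uses the standard Schiller-Naumann drag coefficient in the two-fluid-model form F_drag = (3/4)(C_D/d_b) * alpha * rho_l * |u_r| * u_r; this is a common textbook closure, not necessarily the specific closure set used in the paper, and the fluid properties are illustrative air-water-like values:

```python
import math

def drag_coefficient(Re):
    """Schiller-Naumann bubble drag coefficient with the Re > ~1000 cap."""
    if Re <= 0.0:
        return 0.0
    return max(24.0 / Re * (1.0 + 0.15 * Re ** 0.687), 0.44)

def drag_force_per_volume(alpha, rho_l, mu_l, d_b, u_rel):
    """Interfacial drag force per unit volume (N/m^3) in the two-fluid model."""
    Re = rho_l * abs(u_rel) * d_b / mu_l          # bubble Reynolds number
    cd = drag_coefficient(Re)
    return 0.75 * cd / d_b * alpha * rho_l * abs(u_rel) * u_rel

# Illustrative conditions: 10% void, water-like liquid, 3 mm bubbles,
# 0.2 m/s relative velocity.
F = drag_force_per_volume(alpha=0.1, rho_l=1000.0, mu_l=1e-3,
                          d_b=0.003, u_rel=0.2)
```

FSM does not replace such a closure; it learns a correction to the coarse-mesh result on top of whichever closure set (mature or simplified) is used.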

Computationally Efficient CFD Prediction of Bubbly Flow using Physics-Guided Deep Learning

arXiv (Cornell University), Oct 17, 2019

To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multiscale framework is proposed in this paper by applying a physics-guided data-driven approach. Instrumental to this framework, the Feature Similarity Measurement (FSM) technique was developed for error estimation in two-phase flow simulations using coarse-mesh CFD, to achieve accuracy comparable to fine-mesh simulations while remaining fast-running. By defining physics-guided parameters and variable gradients as physical features, FSM can capture the underlying local patterns in the coarse-mesh CFD simulation. Massive low-fidelity data and corresponding high-fidelity data are used to explore the underlying information relevant to the main simulation errors and the effects of phenomenological scaling. By learning from previous simulation data, a surrogate model using a deep feedforward neural network (DFNN) can be developed and trained to estimate the simulation error of coarse-mesh CFD. In a demonstration case of two-phase bubbly flow, the DFNN model captured and corrected the unphysical "peaks" in the velocity and void fraction profiles near the wall in the coarse-mesh configuration, even for extrapolative predictions. The research documented here supports the feasibility of physics-guided deep learning methods for coarse-mesh CFD simulations, which have potential for efficient industrial design.

Uncertainty Quantification for Multiphase Computational Fluid Dynamics Closure Relations with a Physics-Informed Bayesian Approach

Nuclear Technology, Feb 16, 2023

A Study of BWR Mark I Station Blackout Accident with GOTHIC Modeling

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Apr 1, 2016

The overall objective of this study is to develop a high-fidelity simulation capability for the analysis of accident progression during a Station Blackout (SBO) scenario in a Boiling Water Reactor (BWR) Mark I plant. The motivation is to predict the effectiveness of containment venting within the accident mitigation strategy and to determine how to avoid core melt by choosing a proper time to initiate primary system venting. In this work, a GOTHIC model has been developed to support characterization of reactor safety system performance and evaluation of the venting strategy. The GOTHIC model provides a seamlessly coupled simulation of the reactor coolant system and the containment system based on the design of the Peach Bottom-2 plant. Effort is also devoted to analysis of the primary system venting strategy. The safe time intervals (the so-called "safe venting window") for initiating primary system depressurization, in order to optimize the early cooling strategy by adding fire water and avoid core melt, are studied based on the heat removal capability (HRC) of the reactor vessel coolant. This concept is instructive for the operation of the safety systems during SBO accident mitigation.
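The "safe venting window" idea, comparing integrated decay heat against the coolant's heat removal capability, can be sketched with a toy energy balance. The decay-heat correlation (the classic 0.066 * t^-0.2 approximation), the 3300 MW rated power, and the HRC value are illustrative, not Peach Bottom-2 plant data or GOTHIC results:

```python
# Toy energy balance for a "safe venting window" estimate.

def decay_heat_mw(t_s, p0_mw=3300.0):
    """Approximate decay power (MW) via the classic t^-0.2 correlation."""
    return 0.066 * p0_mw * t_s ** -0.2 if t_s > 0 else 0.066 * p0_mw

HRC_MJ = 2.0e5  # hypothetical heat removal capability of the vessel coolant (MJ)

def time_to_exhaust_hrc(dt_s=60.0):
    """March forward until integrated decay heat exceeds the HRC."""
    t, absorbed = dt_s, 0.0
    while absorbed < HRC_MJ:
        absorbed += decay_heat_mw(t) * dt_s   # MW * s = MJ
        t += dt_s
    return t  # depressurization should start before this time

t_limit = time_to_exhaust_hrc()   # roughly an hour with these toy numbers
```

A plant-scale analysis must also track pressure, coolant inventory, and fire-water injection, which is what the coupled GOTHIC model provides; the sketch only conveys the HRC-based bounding logic.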

Design of Validation Experiments for Model Improvement of Dispersed Flow Film Boiling in COBRA-TF

Transactions, Nov 1, 2018

Study of Data-Driven Mesh-Model Optimization in System Thermal-Hydraulic Simulation

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Jun 1, 2018

On Modeling of Collisions in Direct Numerical Simulation of High-Speed Multiphase Flows

Springer eBooks, Sep 26, 2006

Development and Assessment of a Reactor System Prognosis Model with Physics-Guided Machine Learning

Social Science Research Network, 2022

Uncertainty quantification for Multiphase-CFD simulations of bubbly flows: a machine learning-based Bayesian approach supported by high-resolution experiments

arXiv (Cornell University), Oct 21, 2020

In this paper, we develop a machine learning-based Bayesian approach to inversely quantify and reduce the uncertainties of the two-fluid-model-based multiphase computational fluid dynamics (MCFD) approach for bubbly flow simulations. The proposed approach is supported by high-resolution two-phase flow measurement techniques, including double-sensor conductivity probes, high-speed imaging, and particle image velocimetry. Local distributions of key physical quantities of interest (QoIs), including void fraction and phasic velocities, are obtained to support the modular Bayesian inference. In the process, the epistemic uncertainties of the closure relations are inversely quantified, while the aleatory uncertainties from stochastic fluctuation of the system are evaluated based on experimental uncertainty analysis. The combined uncertainties are then propagated through the MCFD solver to obtain uncertainties of the QoIs, based on which probability boxes are constructed for validation. The proposed approach relies on three machine learning methods: feedforward neural networks and principal component analysis for surrogate modeling, and Gaussian processes for model-form uncertainty modeling. The whole process is implemented within the framework of the open-source deep learning library PyTorch with graphics processing unit (GPU) acceleration, thus ensuring the efficiency of the computation. The results demonstrate that, with the support of high-resolution data, the uncertainty of MCFD simulations can be significantly reduced.
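The forward-propagation step, pushing the quantified parameter uncertainty through the solver (here a surrogate) and adding aleatory scatter to get QoI bounds, can be sketched with plain Monte Carlo. The surrogate function, the posterior for the closure parameter, and the noise level are all invented for the example, not outputs of the paper's modular Bayesian inference:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy surrogate standing in for the MCFD solver: a void-fraction QoI as a
# smooth function of one closure parameter theta.
def surrogate_void_fraction(theta):
    return 0.2 + 0.1 * np.tanh(theta - 1.0)

# Posterior samples of theta, as if obtained from Bayesian calibration:
# here simply a normal distribution around the "calibrated" value 1.0.
theta_samples = rng.normal(loc=1.0, scale=0.2, size=5000)

# Propagate the epistemic (parameter) uncertainty through the surrogate,
# then add aleatory measurement scatter, giving a QoI distribution.
qoi = surrogate_void_fraction(theta_samples) + rng.normal(0.0, 0.005, 5000)

# Interval bounds on the QoI, akin to one slice of a probability box.
lo, hi = np.percentile(qoi, [2.5, 97.5])
```

The paper's workflow additionally uses PCA-compressed neural-network surrogates and a Gaussian-process model-form term; the sketch keeps only the propagation skeleton.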

Research paper thumbnail of Demonstration of a Data-Driven Physics-Based Approach for Computationally Efficient CFD Prediction of Two-Phase Bubbly Flow

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Mar 30, 2020

To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multi-sca... more To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multi-scale physics-guided data-driven approach, Feature Similarity Measurement (FSM) technique was developed for error estimation in two-phase flow simulation using coarse-mesh CFD, to achieve a comparable accuracy as fine-mesh simulations with fast-running feature. By defining physics-guided parameters and variable gradients as physical features, FSM has the capability to capture the underlying local patterns in the coarsemesh CFD simulation. Massive low-fidelity data and respective high-fidelity data are used to explore the underlying information relevant to the main simulation errors and the effects of phenomenological scaling. By learning from previous simulation data, a surrogate model using deep feedforward neural network (DFNN) can be developed and trained to estimate the simulation error of coarse-mesh CFD. In a demonstration case of two-phase bubbly flow, the DFNN model well captured and corrected the unphysical "peaks" in the velocity and void fraction profiles near the wall in the coarse-mesh configuration, even for extrapolative predictions. The research documented supports the feasibility of the physics-guided deep learning methods for coarse mesh CFD simulations which has a potential for the efficient industrial design.

Research paper thumbnail of Development and Assessment of a Nearly Autonomous Management and Control System for Advanced Reactors

arXiv (Cornell University), Aug 28, 2020

This paper develops a Nearly Autonomous Management and Control (NAMAC) system for advanced reacto... more This paper develops a Nearly Autonomous Management and Control (NAMAC) system for advanced reactors. The development process of NAMAC is characterized by a three layer-layer architecture: knowledge base, the Digital Twin (DT) developmental layer, and the NAMAC operational layer. The DT is described as a knowledge acquisition system from the knowledge base for intended uses in the NAMAC system. A set of DTs with different functions is developed with acceptable performance and assembled according to the NAMAC operational workflow to furnish recommendations to operators. To demonstrate the capability of the NAMAC system, a case study is designed, where a baseline NAMAC is implemented for operating a simulator of the Experimental Breeder Reactor II during a single loss of flow accident. When NAMAC is operated in the training domain, it can provide reasonable recommendations that prevent the peak fuel centerline temperature from exceeding a safety criterion.

Research paper thumbnail of Machine Learning from Rans And Les to Inform Coarse Grid Simulations

Research paper thumbnail of A Framework for Assessment of Severe Accident Management Effectiveness in Nordic BWR Plants

Expected value and standard deviation of the probability of early containment failure .

Research paper thumbnail of Validation and Uncertainty Quantification for Wall Boiling Closure Relations in Multiphase-CFD Solver

arXiv (Cornell University), Sep 24, 2018

The two-fluid model based Multiphase Computational Fluid Dynamics (MCFD) has been considered as o... more The two-fluid model based Multiphase Computational Fluid Dynamics (MCFD) has been considered as one of the most promising tools to investigate two-phase flow and boiling system for engineering purposes. The MCFD solver requires closure relations to make the conservation equations solvable. The wall boiling closure relations, for example, provide predictions on wall superheat and heat partitioning. The accuracy of these closure relations significantly influences the predictive capability of the solver. In this paper, a study of validation and uncertainty quantification (VUQ) for the wall boiling closure relations in MCFD solver is performed. The work has three purposes: i). identify influential parameters to the quantities of interest of the boiling system through sensitivity analysis; ii). evaluate the parameter uncertainty through Bayesian inference with the support of multiple datasets; iii). quantitatively measure the agreement between solver predictions and datasets. The widely used Kurul-Podowski wall boiling closure relation is studied in this paper. Several statistical methods are used, including Morris screening method for global sensitivity analysis, Markov Chain Monte Carlo (MCMC) for inverse Bayesian inference, and confidence interval as the validation metric. The VUQ results indicated that the current empirical correlations-based wall boiling closure relations achieved satisfactory agreement on wall superheat predictions. However, the closure relations also demonstrate intrinsic inconsistency and fail to give consistently accurate predictions for all quantities of interest over the well-developed nucleate boiling regime.

Research paper thumbnail of Direct Numerical Simulation of Low and Unitary Prandtl Number Fluids in Reactor Downcomer Geometry

Nuclear Technology, Jun 16, 2023

Buoyancy effect on low-flow condition convective heat transfer of non-conventional coolants, such... more Buoyancy effect on low-flow condition convective heat transfer of non-conventional coolants, such as liquid metal and molten salts, is a crucial safety factor to advanced reactors under transient or accidental scenarios. The distinct heat transfer characteristics of non-unitary Prandtl fluids and the inherent complexity of the low-flow mixed convection phenomena requires the development of novel turbulent and heat transfer models that are adaptive to different spatiotemporal scales involved in the mixed convection heat transfer. In this work, direct numerical simulation of low-flow mixed convection is carried out at low-to-unitary Prandtl numbers that are of industrial interest. Time-averaged statistics, turbulent Prandtl number, as well as time signals are analyzed to investigate mixed convection phenomenon. From the time-averaged statistics, buoyant plume altered velocity boundary layer as well as the intensity of the fluctuation near both walls and channel centerline. Buoyancy effect also rendered different degree of convective heat transfer enhancement and impairment depends on Prandtl and Richardson number. Analysis of time series was conducted on the sodium mixed convection case to emphasize on the low-Pr mixed convection behavior at transition region. Resulting power spectra density and wavelet spectrogram suggests possible large convective structure in transition region. Future work will focus on providing broader data coverage on − − parameter space to facilitate more comprehensive analysis of mixed convection.

Research paper thumbnail of Advanced Transient Diagnostic with Ensemble Digital Twin Modeling

arXiv (Cornell University), May 23, 2022

The use of machine learning (ML) model as digital-twins for reduced-order-modeling (ROM) in lieu ... more The use of machine learning (ML) model as digital-twins for reduced-order-modeling (ROM) in lieu of system codes has grown traction over the past few years. However, due to the complex and non-linear nature of nuclear reactor transients as well as the large range of tasks required, it is infeasible for a single ML model to generalize across all tasks. In this paper, we incorporate issue specific digital-twin ML models with ensembles to enhance the prediction outcome. The ensemble also utilizes an indirect probabilistic tracking method of surrogate state variables to produce accurate predictions of unobservable safety goals. The unique method named Ensemble Diagnostic Digital-twin Modeling (EDDM) can select not only the most appropriate predictions from the incorporated diagnostic digital-twin models but can also reduce generalization error associated with training as opposed to single models.

Research paper thumbnail of Failure Mechanism Traceability and Application in Human System Interface of Nuclear Power Plants using RESHA

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), May 24, 2022

In recent years, there has been considerable effort to modernize existing and new nuclear power p... more In recent years, there has been considerable effort to modernize existing and new nuclear power plants with digital instrumentation and control systems (DI&C). However, there has also been considerable concern both by industry and regulatory bodies for the risk and consequence analysis of these systems. Of particular concern are digital common cause failures (CCFs) specifically related to software defects. These "misbehaviors" by the software can occur in both the control and monitoring of a system. While many new methods have been proposed to identify potential software failure modes, such as Systems-theoretic Process Analysis (STPA), Hazard and Consequence Analysis for Digital Systems (HAZCADS), etc., these methods are focused primarily on the control action pathway of a system. In contrast, the information feedback pathway lacks unsafe control actions (UCAs), which are typically related to software basic events; thus, assessment of software basic events in such systems is unclear. In this work, we present the idea of intermediate processors and unsafe information flow (UIF) to help safety analysts trace failure mechanisms in the feedback pathway and how they can be integrated into a fault tree for improved assessment capability. The concepts presented are demonstrated in two comprehensive case studies, a smart sensor integrated platform for unmanned autonomous vehicles and another on a representative advanced human system interface (HSI) for safety critical plant monitoring. The qualitative software basic events are identified, and a fault tree analysis is conducted based on a modified Redundancy-guided Systems-theoretic Hazard Analysis (RESHA) methodology. The case studies demonstrate the use of UIF and intermediate processors in the fault tree to improve traceability of software failures in highly complex digital instrumentation feedback. 
The improved method can also clarify fault tree construction when multiple component dependencies are present in the system. * In this work, 'failure' is used generally as an undesirable outcome by the system, which includes unintended design or 'misbehaviors.'
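The gate algebra behind that integration can be illustrated with a minimal sketch. Assuming independent basic events and purely illustrative event names and probabilities (none of the values below come from the paper), a UIF basic event on the feedback pathway combines with control-pathway events through ordinary AND/OR fault-tree arithmetic:

```python
def p_or(*probs):
    """P(at least one of several independent events) = 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """P(all of several independent events) = prod(p_i)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical basic-event probabilities (illustrative only).
p_sensor_hw   = 1e-4   # sensor hardware failure
p_uif         = 5e-5   # unsafe information flow at an intermediate processor
p_sw_uca      = 2e-5   # software unsafe control action (control pathway)
p_actuator_hw = 1e-4   # actuator hardware failure

# Feedback pathway fails if the sensor fails OR a UIF occurs;
# control pathway fails on a UCA OR an actuator fault.
p_feedback = p_or(p_sensor_hw, p_uif)
p_control  = p_or(p_sw_uca, p_actuator_hw)

# Illustrative top gate: misleading monitoring AND a failed control demand.
p_top = p_and(p_feedback, p_control)
print(f"P(feedback path) = {p_feedback:.3e}")
print(f"P(control path)  = {p_control:.3e}")
print(f"P(top event)     = {p_top:.3e}")
```

Treating the UIF as one more basic event under an OR gate is what lets the feedback pathway contribute to the top event even though it carries no UCA of its own.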

Research paper thumbnail of Systems-Theoretic Hazard Analysis of Digital Human-System Interface Relevant to Reactor Trip

12th Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies (NPIC&HMIT 2021), 2021

Human-system interface (HSI) is one of the key advanced design features applied to modern digital instrumentation and control systems of nuclear power plants. The conventional design is based on a compact workstation-based system within the control room. The compact workstation provides both a strategic operating environment and a convenient display of the plant status information necessary to the operator. The control environment is further enhanced through display panels, visual and auditory alarms, and procedure systems. However, just like legacy controls, the HSI should incorporate diversity to demonstrate sufficient defense-in-depth protection against common cause failures of the safety system. Furthermore, the vulnerability of the HSI is affected by a plethora of factors, such as human error, cyberattacks, and software common cause failures, that complicate the design and analysis. Therefore, this work aims to identify and evaluate existing system vulnerabilities to support the licensing, deployment, and operation of HSI designs, especially the functions relevant to a reactor trip. We performed a systematic hazard analysis to investigate potential vulnerabilities within the HSI design using the novel redundancy-guided systems-theoretic hazard analysis. This method was developed and demonstrated by Idaho National Laboratory under a project initiated by the Risk-Informed Systems Analysis Pathway of the U.S. Department of Energy's Light Water Reactor Sustainability Program. The goal of the project is to develop a strong technical basis for risk assessment strategies to support effective, reliable, and licensable digital instrumentation and control technologies.

Research paper thumbnail of Application of Orthogonal Defect Classification for Software Reliability Analysis

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), May 24, 2022

The modernization of existing and new nuclear power plants with digital instrumentation and control systems (DI&C) is a recent and highly trending topic. However, there is not yet strong consensus between the United States (U.S.) Nuclear Regulatory Commission (NRC) and the industry on best-estimate reliability methodologies. This has resulted in hesitation over further modernization projects until a more unified methodology is realized. In this work, we develop an approach called Orthogonal-defect Classification for Assessing Software Reliability (ORCAS) to quantify the probabilities of various software failure modes in a DI&C system. The method utilizes accepted industry methodologies for software quality assurance that are also verified by experimental or mathematical formulations. In essence, the approach combines a semantic failure classification model with a reliability growth model to predict (and quantify) the potential failure modes of a DI&C software system. The semantic classification model addresses the question: how do latent defects in software contribute to different software failure root causes? Reliability growth models then address the question: given the connection between latent defects and software failure root causes, how can we quantify the reliability of the software? A case study was conducted on a representative I&C platform (ChibiOS) running smart sensor acquisition software developed by Virginia Commonwealth University (VCU). The testing and evidence collection guidance in ORCAS was applied, and defects were uncovered in the software. Qualitative evidence, such as condition coverage, was used to gauge the completeness and trustworthiness of the assessment, while quantitative evidence was used to determine the software failure probabilities.
The reliability of the software was then estimated and compared to existing operational data of the sensor device. It is demonstrated that, using ORCAS, a semantic reasoning framework can be developed to justify whether the software is reliable (or unreliable) while still leveraging the strengths of existing methods.
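The abstract does not say which reliability growth model ORCAS adopts, but one common choice for mapping cumulative defect counts to residual-defect estimates is the Goel-Okumoto NHPP model, m(t) = a(1 - exp(-b t)). A minimal sketch with hypothetical defect-discovery data (and a crude grid search standing in for maximum-likelihood fitting):

```python
import math

# Hypothetical cumulative defect counts per cumulative test hours
# (illustrative only, not the VCU/ChibiOS case-study data).
test_times  = [10, 20, 30, 40, 50, 60]
cum_defects = [8, 14, 18, 21, 23, 24]

def mean_value(a, b, t):
    """Goel-Okumoto expected cumulative defects at test time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors of the model against the observed counts."""
    return sum((mean_value(a, b, t) - n) ** 2
               for t, n in zip(test_times, cum_defects))

# Crude grid search over (a, b) in place of maximum-likelihood estimation.
a_hat, b_hat = min(((a, b) for a in range(20, 60)
                    for b in [i / 1000.0 for i in range(1, 200)]),
                   key=lambda ab: sse(*ab))

# Estimated latent defects still in the software after the last test interval.
residual = a_hat - cum_defects[-1]
print(f"a = {a_hat}, b = {b_hat:.3f}, est. residual defects = {residual}")
```

The fitted asymptote `a_hat` is the model's estimate of total (found plus latent) defects, so `a_hat - cum_defects[-1]` is the quantity a failure-probability argument would build on.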

Research paper thumbnail of Demonstration of a Data-Driven Approach for Error Estimation in Two-Phase Flow Simulation Using Coarse-Mesh CFD

Research paper thumbnail of Deep Learning Interfacial Momentum Closures in Coarse-Mesh CFD Two-Phase Flow Simulation Using Validation Data

arXiv (Cornell University), May 7, 2020

Multiphase flow phenomena are widely observed in industrial applications, yet their modeling remains a challenging, unsolved problem. Three-dimensional computational fluid dynamics (CFD) approaches resolve the flow fields on finer spatial and temporal scales, which can complement dedicated experimental study. However, closures must be introduced to reflect the underlying physics in multiphase flow. Among them, the interfacial forces, including drag, lift, turbulent-dispersion, and wall-lubrication forces, play an important role in bubble distribution and migration in liquid-vapor two-phase flows. Development of these closures has traditionally relied on experimental data and analytical derivation with simplified assumptions, which usually cannot deliver a universal solution across a wide range of flow conditions. In this paper, a data-driven approach, named feature-similarity measurement (FSM), is developed and applied to improve the simulation capability of two-phase flow with a coarse-mesh CFD approach. Interfacial momentum transfer in adiabatic bubbly flow serves as the focus of the present study. Both a mature and a simplified set of interfacial closures are taken as the low-fidelity data. Validation data (including relevant experimental data and validated fine-mesh CFD simulation results) are adopted as high-fidelity data. Qualitative and quantitative analyses performed in this paper reveal that FSM can substantially improve the prediction of the coarse-mesh CFD model regardless of the choice of interfacial closures, and that it provides scalability and consistency across discontinuous flow regimes. This demonstrates that data-driven methods can aid multiphase flow modeling by exploring the connections between local physical features and simulation errors.
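The core idea — estimating a coarse-mesh cell's simulation error from training cells with similar physical features — can be sketched with toy numbers. Here a distance-weighted nearest-neighbour lookup stands in for the paper's learned regression; the feature vectors and error values are invented for illustration:

```python
import math

# Training set: (physical-feature vector for a coarse-mesh cell,
# known coarse-vs-high-fidelity error).  All values are illustrative.
train = [
    ([0.10, 0.8, 0.02], 0.011),
    ([0.20, 0.7, 0.05], 0.025),
    ([0.35, 0.5, 0.09], 0.060),
    ([0.50, 0.3, 0.12], 0.095),
]

def similarity(x, y):
    """Inverse-distance similarity between two feature vectors."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return 1.0 / (d + 1e-9)

def estimate_error(features, k=2):
    """Similarity-weighted mean error of the k most similar training cells."""
    ranked = sorted(train, key=lambda fe: -similarity(features, fe[0]))[:k]
    w = [similarity(features, f) for f, _ in ranked]
    return sum(wi * e for wi, (_, e) in zip(w, ranked)) / sum(w)

query = [0.22, 0.68, 0.055]   # features of a new coarse-mesh cell
err = estimate_error(query)
print(f"estimated local error = {err:.4f}")
```

Because the lookup keys on local physical features rather than geometry or flow-regime labels, the same estimator can in principle be queried across regimes, which is the scalability property the abstract points to.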

Research paper thumbnail of Computationally Efficient CFD Prediction of Bubbly Flow using Physics-Guided Deep Learning

arXiv (Cornell University), Oct 17, 2019

To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multiscale framework is proposed in this paper by applying a physics-guided data-driven approach. Instrumental to this framework, the Feature Similarity Measurement (FSM) technique was developed for error estimation in two-phase flow simulation using coarse-mesh CFD, to achieve accuracy comparable to fine-mesh simulations while remaining fast-running. By defining physics-guided parameters and variable gradients as physical features, FSM can capture the underlying local patterns in a coarse-mesh CFD simulation. Massive low-fidelity data and the respective high-fidelity data are used to explore the underlying information relevant to the main simulation errors and the effects of phenomenological scaling. By learning from previous simulation data, a surrogate model using a deep feedforward neural network (DFNN) can be developed and trained to estimate the simulation error of coarse-mesh CFD. In a demonstration case of two-phase bubbly flow, the DFNN model well captured and corrected the unphysical "peaks" in the velocity and void fraction profiles near the wall in the coarse-mesh configuration, even for extrapolative predictions. The research documented here supports the feasibility of physics-guided deep learning methods for coarse-mesh CFD simulations, which have potential for efficient industrial design.
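The surrogate itself is a regression from physical features to simulation error. A deliberately tiny pure-Python stand-in for the paper's DFNN — one tanh hidden layer trained by plain gradient descent on a synthetic feature-to-error map (the target function and all sizes are invented for illustration):

```python
import math, random

random.seed(0)

# Synthetic training set: two normalized "physical features" -> scalar error.
train = [([x1 / 4.0, x2 / 4.0], 0.5 * (x1 / 4.0) + 0.2 * (x2 / 4.0) ** 2)
         for x1 in range(5) for x2 in range(5)]

H = 4                                    # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One tanh hidden layer, linear output."""
    h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
         for wi, bi in zip(w1, b1)]
    return h, sum(vj * hj for vj, hj in zip(w2, h)) + b2

lr = 0.05
for _ in range(3000):                    # per-sample gradient descent
    for x, y in train:
        h, yhat = forward(x)
        g = yhat - y                     # dLoss/dyhat for 0.5*(yhat-y)^2
        for j in range(H):
            gh = g * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * g * h[j]
            for i in range(2):
                w1[j][i] -= lr * gh * x[i]
            b1[j] -= lr * gh
        b2 -= lr * g

mse = sum((forward(x)[1] - y) ** 2 for x, y in train) / len(train)
print(f"training MSE = {mse:.5f}")
```

The predicted error would then be subtracted from the coarse-mesh solution to recover a corrected profile; the actual DFNN is of course much wider and deeper and is trained on simulation data rather than a synthetic map.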

Research paper thumbnail of Uncertainty Quantification for Multiphase Computational Fluid Dynamics Closure Relations with a Physics-Informed Bayesian Approach

Nuclear Technology, Feb 16, 2023

Research paper thumbnail of A Study of BWR Mark I Station Blackout Accident with GOTHIC Modeling

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Apr 1, 2016

The overall objective of this study is to develop a high-fidelity simulation capability for the analysis of accident progression during a Station Blackout (SBO) scenario in a Boiling Water Reactor (BWR) Mark I plant. The motivation is to predict the effectiveness of containment venting within the accident mitigation strategy and to determine how core melt can be avoided by choosing a proper time to initiate primary system venting. In this work, a GOTHIC model has been developed to support characterization of reactor safety system performance and evaluation of the venting strategy. The GOTHIC model provides a seamlessly coupled simulation of the reactor coolant system and the containment system based on the design of the Peach Bottom-2 plant. The primary system venting strategy is also analyzed. The safe time intervals (the so-called "safe venting window") within which to initiate primary system depressurization, in order to optimize the early cooling strategy by adding fire water and avoid core melt, are studied based on the heat removal capability (HRC) of the reactor vessel coolant. This concept is instructive for the operation of the safety systems during SBO accident mitigation.
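The "safe venting window" logic admits a back-of-envelope sketch: decay heat boils off vessel inventory, and depressurization plus fire-water injection must begin before the remaining coolant can no longer cover the core. All numbers below are rough assumptions for illustration (a generic power-law decay-heat fit and invented inventory figures), not the GOTHIC model or plant data:

```python
# Illustrative boil-off estimate -- NOT plant data.
P0      = 3293e6   # rated thermal power, W (Peach Bottom-like BWR)
m_water = 2.5e5    # initial vessel coolant inventory, kg (assumed)
m_min   = 1.0e5    # inventory below which the core starts to uncover, kg (assumed)
h_fg    = 1.5e6    # latent heat of vaporization at elevated pressure, J/kg (approx.)

def decay_heat(t):
    """Simple power-law decay-heat approximation, t in seconds after scram."""
    return 0.066 * P0 * max(t, 10.0) ** -0.2

# March forward in time, boiling off inventory with decay heat.
t, dt, m = 10.0, 10.0, m_water
while m > m_min:
    m -= decay_heat(t) * dt / h_fg
    t += dt

print(f"latest time to start venting/injection ~ {t / 3600.0:.1f} h after scram")
```

Even this crude balance shows why the window is measured in hours, not days, and why the GOTHIC study ties it to the heat removal capability of the vessel coolant rather than a fixed clock time.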

Research paper thumbnail of Design of Validation Experiments for Model Improvement of Dispersed Flow Film Boiling in COBRA-TF

Transactions, Nov 1, 2018

Research paper thumbnail of Study of Data-Driven Mesh-Model Optimization in System Thermal-Hydraulic Simulation

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Jun 1, 2018

Research paper thumbnail of On Modeling of Collisions in Direct Numerical Simulation of High-Speed Multiphase Flows

Springer eBooks, Sep 26, 2006

Research paper thumbnail of Development and Assessment of a Reactor System Prognosis Model with Physics-Guided Machine Learning

Social Science Research Network, 2022

Research paper thumbnail of Uncertainty quantification for Multiphase-CFD simulations of bubbly flows: a machine learning-based Bayesian approach supported by high-resolution experiments

arXiv (Cornell University), Oct 21, 2020

In this paper, we develop a machine learning-based Bayesian approach to inversely quantify and reduce the uncertainties of the two-fluid-model-based multiphase computational fluid dynamics (MCFD) approach for bubbly flow simulations. The proposed approach is supported by high-resolution two-phase flow measurement techniques, including double-sensor conductivity probes, high-speed imaging, and particle image velocimetry. Local distributions of key physical quantities of interest (QoIs), including void fraction and phasic velocities, are obtained to support the modular Bayesian inference. In the process, the epistemic uncertainties of the closure relations are inversely quantified, while the aleatory uncertainties from stochastic fluctuation of the system are evaluated based on experimental uncertainty analysis. The combined uncertainties are then propagated through the MCFD solver to obtain uncertainties of the QoIs, based on which probability boxes are constructed for validation. The proposed approach relies on three machine learning methods: feedforward neural networks and principal component analysis for surrogate modeling, and Gaussian processes for model-form uncertainty modeling. The whole process is implemented within the framework of the open-source deep learning library PyTorch with graphics processing unit (GPU) acceleration, thus ensuring computational efficiency. The results demonstrate that, with the support of high-resolution data, the uncertainty of MCFD simulations can be significantly reduced.
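The inverse-quantification step can be illustrated at toy scale: infer a multiplier on a closure coefficient from noisy "measurements" of a QoI. A random-walk Metropolis sampler and a one-parameter response stand in here for the paper's surrogate-accelerated inference and GP model-form treatment; the model, data, and prior below are all synthetic:

```python
import math, random

random.seed(1)

def model(theta, x):
    """Toy stand-in for the MCFD QoI response to a closure multiplier theta."""
    return theta * x / (1.0 + x)

# Synthetic "experiment": true multiplier 1.4, Gaussian measurement noise.
true_theta, sigma = 1.4, 0.02
xs = [0.2 * i for i in range(1, 11)]
obs = [model(true_theta, x) + random.gauss(0.0, sigma) for x in xs]

def log_post(theta):
    """Gaussian likelihood with a flat prior on (0.1, 5)."""
    if not 0.1 < theta < 5.0:
        return -math.inf
    return -sum((model(theta, x) - y) ** 2
                for x, y in zip(xs, obs)) / (2.0 * sigma ** 2)

# Random-walk Metropolis: 5000 burn-in steps, 15000 retained samples.
theta, lp, samples = 1.0, log_post(1.0), []
for it in range(20000):
    prop = theta + random.gauss(0.0, 0.05)
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        theta, lp = prop, lp_prop
    if it >= 5000:
        samples.append(theta)

mean = sum(samples) / len(samples)
print(f"posterior mean multiplier = {mean:.3f}")
```

The posterior concentrating near the true multiplier is the toy analogue of the paper's result: with informative high-resolution data, the epistemic uncertainty of a closure relation shrinks, and the reduced posterior can then be propagated through the solver.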