Greg Stanley | Northwestern University
Videos by Greg Stanley
Biosphere 2 was constructed as a demonstration/test site for prototyping sealed life support systems to support future space colonization, and to better model how Earth's ecosystems work. Eight people were sealed in the 3.14-acre facility for two years starting in 1991. It holds the record as the world's largest and longest-running closed-environment test. The facility is still there and open to the public, although it is no longer sealed. The presentation offers a retrospective on its unique contributions to understanding the complexities of sustaining life outside Earth.
DOI: 10.13140/RG.2.2.10085.01768
Conference: AIAA Houston Annual Technical Symposium, October 2020
Papers by Greg Stanley
This paper gives an overview of industrial applications of real-time knowledge-based expert systems (KBESs) in the process industries. After a brief overview of the features of a KBES useful in process applications, the general roles of KBESs are covered. A particular focus is diagnostic applications, one of the major application areas. Many applications are seen as an expansion of supervisory control. The lessons learned from numerous online applications are summarized.
Chemical Engineering Science, 1981
This paper gives an overview of industrial applications of real-time knowledge-based expert systems (KBESs) for process control. After a brief overview of the features of a KBES useful in process control, several case studies are reviewed. The lessons learned are summarized.
Instrument faults and equipment problems can be detected by pattern analysis tools such as neural networks. While pattern recognition alone may be used to detect problems, accuracy may be improved by "building in" knowledge of the process. When models are known, accuracy, sensitivity, training, and robustness for interpolation and extrapolation should be improved by building in process knowledge. This can be done by analyzing the patterns of model errors, or the patterns of measurement adjustments in a data reconciliation procedure. Using a simulation model, faults are hypothesized during "training" for later matching at run time. Each fault generates specific model deviations. When measurement standard deviations can be assumed, data reconciliation can be applied, and the measurement adjustments can be analyzed using a neural network. This approach is tested with simulation of flows and pressures in a liquid flow network. A generic, graphically-configured simu...
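To make the abstract's core idea concrete, here is a minimal sketch (not the paper's implementation) of training a small neural network on simulated fault signatures and then classifying the pattern of measurement adjustments at run time. The fault names, signatures, and noise model are illustrative assumptions; scikit-learn is assumed available.

```python
# Hypothetical sketch: classify fault signatures from data reconciliation
# adjustments with a small neural network. Fault names and signature
# vectors are invented for illustration, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# "Training": for each hypothesized fault, simulate the pattern of
# measurement adjustments (here faked as noise around a signature vector).
fault_signatures = {
    "none":        np.array([0.0, 0.0, 0.0, 0.0]),
    "bias_flow_1": np.array([2.0, -1.0, -1.0, 0.0]),
    "leak_node_2": np.array([0.0, 1.5, -1.5, 1.0]),
}
X, y = [], []
for fault, sig in fault_signatures.items():
    for _ in range(200):
        X.append(sig + rng.normal(scale=0.2, size=sig.size))
        y.append(fault)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(X), y)

# Run time: reconcile new measurements, then classify the adjustment vector.
adjustments = np.array([1.9, -0.8, -1.2, 0.1])
print(clf.predict(adjustments.reshape(1, -1)))   # -> ['bias_flow_1']
```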
ONLINE DATA RECONCILIATION FOR PROCESS CONTROL. Combined data reconciliation with estimation of slowly changing parameters has been implemented for closed-loop control in a chemical plant. Goals include streamlining use of redundant measurements for backing up failed instruments, filtering noise, and, in some cases, reducing steady-state estimation errors. Special considerations include bumpless transfer from failed instruments and automatic equipment up/down classification. Parameters are calculated and filtered, then held fixed during each data reconciliation. INTRODUCTION. The purposes of this paper are to clarify the nature of the online estimation problem and provide a "tool kit" for practical online data reconciliation applications. In this paper, "online" data reconciliation implies use in closed-loop control or optimization. It will be seen that there are different considerations in using data reconciliation in a process control environment than when doing typical offline appli...
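For orientation, the core steady-state reconciliation step underneath such a scheme can be sketched as a weighted least-squares adjustment subject to linear flow balances. This is a minimal illustration assuming known measurement standard deviations; the paper's online scheme wraps parameter estimation, bumpless transfer, and equipment up/down classification around this kind of core.

```python
# A minimal data reconciliation sketch, assuming linear balances A @ x = 0
# and known measurement standard deviations. Illustrative only.
import numpy as np

def reconcile(y, A, sigma):
    """Weighted least-squares adjustment of measurements y so that A @ x = 0."""
    V = np.diag(sigma ** 2)                      # measurement covariance
    K = V @ A.T @ np.linalg.inv(A @ V @ A.T)     # maps imbalance -> adjustment
    adjustments = -K @ (A @ y)                   # adjustments removing imbalance
    return y + adjustments, adjustments

# Two nodes, three streams in series: stream 0 -> node A -> stream 1 -> node B -> stream 2.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([10.2, 9.7, 10.1])                  # noisy flow measurements
sigma = np.array([0.2, 0.2, 0.2])
x_hat, adj = reconcile(y, A, sigma)
print(x_hat, adj)                                # reconciled flows satisfy A @ x = 0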
Traditional approaches to CIM (Computer-Integrated Manufacturing) involve numerous data interfaces between applications. Some emphasize a centralized database or a hierarchical structure. However, traditional approaches suffer by focusing on data flow: (1) redundant, possibly inconsistent model information is encoded in multiple applications, complicating development and maintenance; (2) plant models are not explicit enough for easy review by many people; and (3) multiple developer and end-user interfaces exist. In reality, there is more commonality between applications than just the data. For instance, multiple applications such as scheduling, control, simulation, monitoring, and diagnostics all need common information, such as connectivity from plant schematics, recipes, manufacturing procedure sequences and constraints, routing information, equipment models, part-of relationships, and goals. Much information about products, equipment, events, and paperwork can be best organized in...
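A toy sketch of the shared-model idea, under the assumption of a simple object-oriented design: one explicit model of equipment, connectivity, and part-of relationships that scheduling, control, simulation, monitoring, and diagnostic applications all query instead of each keeping its own copy. All names here are invented for illustration.

```python
# Hypothetical shared plant model: one place for connectivity and
# part-of relationships, queried by many applications.
from dataclasses import dataclass, field

@dataclass
class Equipment:
    name: str
    parent: "Equipment | None" = None             # part-of relationship
    downstream: list["Equipment"] = field(default_factory=list)  # connectivity

    def connect_to(self, other: "Equipment") -> None:
        self.downstream.append(other)

unit = Equipment("Reactor Section")
pump = Equipment("P-101", parent=unit)
tank = Equipment("T-201", parent=unit)
pump.connect_to(tank)

# Any application walks the same model rather than encoding its own:
print([e.name for e in pump.downstream])          # -> ['T-201']
```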
Journal of Process Control, 2018
This paper introduces a new approach to estimation and control problems called “BDAC,” for Big Data Approximating Control. It includes a training process and an estimation & control process. The training process creates and maintains an online training set of representative trajectories, updating them for adaptation to changing processes or sustained unmeasured disturbances. Trajectories are acquired by online monitoring, usually with some automated testing to speed up the process. Training and adaptation can occur in manual mode, test mode, and under closed loop control by BDAC or other controls. BDAC does not use models or state space representation, bypassing the usual “silos” of model identification, state estimation, and control. BDAC solves estimation and control problems by approximate pattern matching directly on the training set. It should benefit from rapid progress in “Big Data” techniques such as nearest neighbor search and clustering. A new data clustering technique and its specialization for real time filtering in causal systems is also introduced: “Real Time Exponential Cluster Filtering” (RTECF). BDAC is centered on solutions or approximate solutions to the “BDAC approximation problem,” which includes multivariable overdetermined or underdetermined control problems. A linear approach based on orthogonalization is given, as well as a nonlinear approach based on nearest neighbor interpolation. Even the linear method captures nonlinearity in individual training set trajectories, and the “kernel trick” is demonstrated for directly addressing nonlinearity with the linear controller. Simulation results in supplementary materials demonstrate combined feedforward and feedback control, dealing with setpoint and load changes, nonlinearities, dead times, integrating processes, adaptation, noise, unmeasured disturbances, co-linearity in measurements, estimating missing sensor values, and control while some controller outputs remain in manual or test modes. The well-known quadruple tank simulation shows control of a process switching between nonminimum phase behavior and minimum phase behavior. Integral action and adaptation approaches to avoid offset from setpoints due to changing processes or unmeasured disturbances are demonstrated. Comparisons of parts of the overall process are drawn to K-means clustering, Kalman filtering, linear optimal control/LQG, multivariable predictive control (MPC), and missing data replacement based on Kalman filtering, PCA or autoassociative neural networks.
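The central idea, control by pattern matching against stored trajectories rather than a model, can be caricatured in a few lines. This is a rough illustrative sketch of nearest-neighbor interpolation on a toy training set, not the paper's algorithm; the variable layout and training data are assumptions.

```python
# Rough sketch of BDAC-style control: look up the current situation in a
# training set of stored trajectories and interpolate the control move
# from the nearest neighbors. Toy data, invented for illustration.
import numpy as np

# Each row: [y(t-1), y(t), setpoint] -> control move u that worked.
trajectories = np.array([
    [0.0, 0.1, 1.0, 0.9],
    [0.5, 0.6, 1.0, 0.5],
    [0.9, 1.0, 1.0, 0.0],
    [1.0, 0.9, 0.5, -0.4],
])
patterns, moves = trajectories[:, :3], trajectories[:, 3]

def nn_control_move(pattern, k=2):
    """Interpolate the move from the k nearest stored patterns."""
    d = np.linalg.norm(patterns - pattern, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                    # inverse-distance weights
    return np.average(moves[idx], weights=w)

print(nn_control_move(np.array([0.4, 0.55, 1.0])))  # move near the 0.5 example
```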
A new, extensible graphical language GDL (Graphical Diagnostic Language) addresses fault diagnosis in static or dynamic systems. GDL is used to detect faults, classify the root causes of the faults, initiate corrective actions, recognize recurring problems, plan and execute tests, and manage alarm displays and messages. GDL is an environment for specification, development, run-time use, and maintenance. GDL consists of blocks defined in an object-oriented environment. Each block can transform, combine, or manipulate incoming data via a predefined algorithm. Blocks are connected graphically to form information flow diagrams (IFDs). IFDs provide both system specification and run-time interface, complete with status indication by color and animation. Techniques necessary in real-time systems are supported, including task prioritization, asynchronous concurrent operations, and real-time task scheduling. Signal processing and statistical process control blocks generate events from hi...
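The block-and-connection idea behind an information flow diagram can be sketched minimally: each block applies a predefined algorithm to incoming data and propagates the result downstream. This is an illustrative assumption-laden sketch, not GDL itself, which is a graphical, object-oriented environment.

```python
# Hypothetical sketch of IFD-style blocks: each block transforms incoming
# data via a predefined algorithm and passes results to connected blocks.
class Block:
    def __init__(self, name, fn):
        self.name, self.fn, self.outputs = name, fn, []

    def connect(self, downstream):
        self.outputs.append(downstream)

    def receive(self, value):
        result = self.fn(value)                  # the block's predefined algorithm
        for block in self.outputs:
            block.receive(result)

alarm = Block("high-temp-alarm", lambda ok: print("ok" if ok else "ALARM"))
threshold = Block("threshold-check", lambda t: t < 150.0)
threshold.connect(alarm)

threshold.receive(162.0)                         # -> ALARM
threshold.receive(120.0)                         # -> ok
```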
M.S. (Chemical Engineering) -- Northwestern University, 1973.
Industrial & Engineering Chemistry Process Design and Development
This paper shows how information inherent in the process constraints and measurement statistics can be used to enhance flow and inventory data. Two important graph-theoretic results are derived and used to simplify the reconciliation of conflicting data and the estimation of unmeasured process streams. The scheme was implemented and evaluated on a CDC-6400 computer. For a 32-node, 61-stream problem, the results indicate a 42 to 60% reduction in total absolute errors for the three cases in which the numbers of measured streams were 36, 50, and 61, respectively. A gross error detection criterion based on nodal imbalances is proposed. This criterion can be evaluated prior to any reconciliation calculations and appeared to be effective for errors of 20% or more for the simulation cases studied. A logically consistent scheme for identifying the error sources was developed using this criterion. Such a scheme could be used as a diagnostic aid in process analysis.
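A nodal-imbalance test of the kind the abstract describes can be illustrated briefly: flag a node when its flow imbalance is larger than measurement noise alone could plausibly explain. The 3-sigma threshold and the example network are assumptions for illustration, not the paper's exact criterion.

```python
# Minimal sketch of pre-reconciliation gross error detection from nodal
# imbalances. Threshold (3 sigma) and data are illustrative assumptions.
import numpy as np

A = np.array([[1.0, -1.0, 0.0],                  # node incidence matrix
              [0.0, 1.0, -1.0]])
y = np.array([10.0, 13.0, 10.1])                 # stream 1 carries a gross error
sigma = np.array([0.2, 0.2, 0.2])

imbalance = A @ y                                # nodal imbalances, pre-reconciliation
imbalance_sd = np.sqrt((A ** 2) @ sigma ** 2)    # stddev of each imbalance
suspect_nodes = np.abs(imbalance) > 3.0 * imbalance_sd
print(imbalance, suspect_nodes)                  # both nodes touching stream 1 flagged
```

Note that a gross error on one stream inflates the imbalance at every node it touches, which is what makes a logically consistent source-identification scheme possible.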
(Overheads for presentation; the conference paper is available separately.) Instrument faults and equipment problems can be detected by pattern analysis tools such as neural networks. While pattern recognition alone may be used to detect problems, accuracy may be improved by "building in" knowledge of the process. When models are known, accuracy, sensitivity, training, and robustness for interpolation and extrapolation should be improved by building in process knowledge. This can be done by analyzing the patterns of model errors, or the patterns of measurement adjustments in a data reconciliation procedure. Using a simulation model, faults are hypothesized during "training" for later matching at run time. Each fault generates specific model deviations. When measurement standard deviations can be assumed, data reconciliation can be applied, and the measurement adjustments can be analyzed using a neural network. This approach is tested with simulation of flows an...