Designing combined physical and computer experiments to maximize prediction accuracy

On the Use of Minimum-Bias Computer Experimental Designs

2011

Computer experimental designs are used to generate data for metamodeling of multiresponse engineering systems. Metamodels, also called surrogate models, offer more efficient prediction of system responses but add errors when used as surrogates for the simulators. The size of these errors depends on the computer experimental design. Only bias errors are incurred in deterministic computer experiments; however, the majority of experiments reported in the literature are not optimized for minimum bias. Box and Draper, the pioneers of response surface methodology, originated the work on minimum-bias designs in the late 1950s. Space-filling designs such as Latin hypercubes are mainly in current use, sometimes even for response surface models. This work is a practical study, via a number of analytical and electronic-circuit examples, of the use of minimum-bias designs for response surface metamodels. Some minimum-bias designs in hypercuboidal spaces are also introduced.
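Since this abstract contrasts minimum-bias designs with the space-filling Latin hypercubes that dominate current practice, a minimal sketch of Latin hypercube sampling may be useful. This is the generic construction in Python, not a design from the paper; the function name lhs_design is my own.

```python
import numpy as np

def lhs_design(n_points, n_dims, rng=None):
    """Basic Latin hypercube sample on the unit hypercube [0, 1]^d.

    Each dimension is cut into n_points equal strata; one point is drawn
    uniformly from each stratum, and the strata are permuted independently
    per dimension so that every 1-D projection is space-filling.
    """
    rng = np.random.default_rng(rng)
    design = np.empty((n_points, n_dims))
    for j in range(n_dims):
        design[:, j] = (rng.permutation(n_points) + rng.uniform(size=n_points)) / n_points
    return design

# Example: a 10-run design in 3 factors.
X = lhs_design(10, 3, rng=0)
print(X)
```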

Ch. 7. A review of design and modeling in computer experiments

Handbook of Statistics, 2003

In this paper, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is primarily on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. The organization of our review separates the two primary tasks of metamodeling: (1) select an experimental design; (2) fit a statistical model. We provide an overview of the general strategy and discuss applications in electrical engineering, chemical engineering, mechanical engineering, and dynamic programming. We then dedicate a section to statistical modeling methods, followed by a section on experimental designs. Designs are discussed in two paradigms, model-dependent and model-independent, to emphasize their different objectives. Both classical and modern methods are discussed.

Comparing designs for computer simulation experiments

2008 Winter Simulation Conference, 2008

The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for the results of a computer simulation experiment has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. This research uses prediction variance over the volume of the design region to evaluate computer simulation experiments, assuming the modeler is interested in fitting a second-order polynomial or a Gaussian process model to the response data. Both space-filling and optimal designs are considered.
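The evaluation criterion named here can be made concrete: for a second-order polynomial with model matrix F built from the design, the scaled prediction variance at a point x is n * f(x)' (F'F)^{-1} f(x), which can be averaged over Monte Carlo samples of the design region. The Python sketch below illustrates the computation with a random placeholder design, not one of the designs compared in the paper.

```python
import numpy as np

def quad_model_matrix(X):
    """Full second-order model terms: intercept, linear, interactions, squares."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def scaled_prediction_variance(design, points):
    """n * f(x)' (F'F)^{-1} f(x) at each evaluation point (sigma^2 omitted)."""
    F = quad_model_matrix(design)
    FtF_inv = np.linalg.inv(F.T @ F)
    f = quad_model_matrix(points)
    return design.shape[0] * np.einsum('ij,jk,ik->i', f, FtF_inv, f)

# Average SPV of a candidate design over the cuboidal region [-1, 1]^2.
rng = np.random.default_rng(1)
grid = rng.uniform(-1, 1, size=(5000, 2))
design = rng.uniform(-1, 1, size=(12, 2))   # placeholder 12-run design
print(scaled_prediction_variance(design, grid).mean())
```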

Book Review: Experiments: Planning, Analysis, and Parameter Design Optimization. By C. F. Jeff Wu and Michael Hamada. John Wiley & Sons, Inc., New York, 2000, 630 p., U.S.$89.95 (hb), ISBN 0-471-25511-4

Mathematical Geology, 2003

Design of experiments (DOE) and the analysis of data resulting from the designs make up a major part of statistical tools. Pioneered in the 1930s, DOE found its primary applications in agriculture and biology, and the special characteristics of these types of experiments led to the development of now-standard statistical methods. DOE expanded in the 1950s to include process-based (e.g., chemical) applications and the optimization of processes. More recently, quality control and the reduction of variation have been important goals. All of these uses of DOE assume that an observed response variable results from the effects of one or more other variables (factors) that can be controlled by the investigator.

A Novel Hybrid Sequential Design Strategy for Global Surrogate Modeling of Computer Experiments

SIAM Journal on Scientific Computing, 2011

Many complex real-world systems can be accurately modeled by simulations. However, high-fidelity simulations may take hours or even days to compute. Because this can be impractical, a surrogate model is often used to approximate the dynamic behavior of the original simulator. This model can then be used as a cheap, drop-in replacement for the simulator. Because simulations can be very expensive, the data points required to build the model must be chosen as optimally as possible. Sequential design strategies offer a huge advantage over one-shot experimental designs because they can use information gathered from previous data points to determine the location of new ones. Each sequential design strategy must perform a trade-off between exploration and exploitation: the former involves selecting data points in unexplored regions of the design space, while the latter suggests adding data points in regions previously identified as interesting (for example, highly nonlinear regions). In this paper, a novel hybrid sequential design strategy is proposed which uses a Monte Carlo-based approximation of a Voronoi tessellation for exploration and local linear approximations of the simulator for exploitation. The advantage of this method over other sequential design methods is that it is independent of the model type and can therefore be used in heterogeneous modeling environments, where multiple model types are used at the same time. The new method is demonstrated on a number of test problems, showing that it is a robust, competitive, and efficient sequential design strategy.
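The exploration half of the strategy described above can be sketched generically: estimate each design point's Voronoi cell volume by Monte Carlo and treat the point with the largest cell as sitting in the emptiest region. The Python below is a minimal illustration of that idea, not the authors' implementation; names such as voronoi_volume_estimates are my own.

```python
import numpy as np

def voronoi_volume_estimates(design, n_mc=20000, rng=None):
    """Monte Carlo estimate of the relative Voronoi cell volume of each
    design point inside the unit hypercube: draw random test points and
    count how many fall closest to each design point."""
    rng = np.random.default_rng(rng)
    samples = rng.uniform(size=(n_mc, design.shape[1]))
    # Nearest design point for every Monte Carlo sample.
    dists = np.linalg.norm(samples[:, None, :] - design[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    counts = np.bincount(nearest, minlength=design.shape[0])
    return counts / n_mc

design = np.random.default_rng(2).uniform(size=(15, 2))
vols = voronoi_volume_estimates(design, rng=3)
print("most isolated design point:", design[vols.argmax()])
```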

Design and analysis of computer experiments

AStA Advances in Statistical Analysis, 2010

The design and analysis of computer experiments as a relatively young research field is not only of high importance for many industrial areas but also presents new challenges and open questions for statisticians. This editorial introduces a special issue devoted to the topic. The included papers present an interesting mixture of recent developments in the field as they cover fundamental research on the design of experiments, models and analysis methods as well as more applied research connected to real-life applications.

Towards the optimal design of numerical experiments

Neural Networks, IEEE …, 2008

This paper addresses the problem of the optimal design of numerical experiments for the construction of nonlinear surrogate models. We describe a new method, called learner disagreement from experiment resampling (LDR), which borrows ideas from active learning and from resampling methods: the analysis of the divergence of the predictions provided by a population of models, constructed by resampling, allows an iterative determination of the point of input space where a numerical experiment should be performed in order to improve the accuracy of the predictor. The LDR method is illustrated on neural network models with bootstrap resampling and on orthogonal polynomials with leave-one-out resampling. Other methods of experimental design, such as random selection and D-optimal selection, are investigated on the same benchmark problems.
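The core LDR loop can be sketched as a query-by-committee step: fit a population of models on bootstrap resamples and query the candidate point where their predictions diverge most. The Python below is an illustrative reconstruction under that reading of the abstract, not the authors' code; the toy cubic-polynomial committee and the function name ldr_style_next_point are my own.

```python
import numpy as np

def ldr_style_next_point(X, y, candidates, fit, n_boot=20, rng=None):
    """Pick the candidate where bootstrap-resampled models disagree most.

    `fit(X, y)` must return a callable predictor; the variance of the
    committee's predictions serves as the selection score, in the spirit
    of the LDR idea described above.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        model = fit(X[idx], y[idx])
        preds.append(model(candidates))
    disagreement = np.var(preds, axis=0)
    return candidates[disagreement.argmax()]

# Toy run with a cubic-polynomial committee on a 1-D problem.
f = lambda x: np.sin(3 * x)
X = np.linspace(0, 1, 8); y = f(X)
fit = lambda X, y: np.poly1d(np.polyfit(X, y, 3))
cand = np.linspace(0, 1, 201)
print("next experiment at x =", ldr_style_next_point(X, y, cand, fit, rng=4))
```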

Optimal designs for the propagation of uncertainty in computer experiments

Response surfaces, or meta-models, and design of experiments are widely used in experimental work. But numerous physical phenomena are studied through complex and costly numerical simulators. In such cases, the response is influenced by factors, but the link between these variables is deterministic. Nevertheless, the factors are often known only with uncertainty, and the influence of this ignorance matters to the practitioner. Due to the computing time, it is not possible to obtain the uncertainty of the response through a standard Monte Carlo method, so an approximation of the simulator, a meta-model, is needed. We present an optimality criterion, the MC-V, in order to evaluate the probability distribution of the response with minimal error. We chose to apply the criterion to parts of two real cases derived from the petroleum industry. The simulator, the second-order polynomial meta-model, and the three distributions of input factors (uniform, Gaussian, triangular) are among those used…
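The abstract does not define the MC-V criterion itself, but the workflow it rests on, propagating input distributions through a cheap meta-model instead of the costly simulator, can be sketched. The Python below uses a hypothetical two-factor simulator and a second-order polynomial meta-model; all names and the input-distribution parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for an expensive simulator (2 input factors).
simulator = lambda x: np.exp(-x[:, 0]) * np.sin(4 * x[:, 1]) + x[:, 0] * x[:, 1]

# 1. Fit a second-order polynomial meta-model on a small design.
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(15, 2))             # 15 "expensive" runs
terms = lambda X: np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                   X[:, 0] * X[:, 1], X[:, 0]**2, X[:, 1]**2])
beta, *_ = np.linalg.lstsq(terms(X), simulator(X), rcond=None)

# 2. Propagate the input uncertainty through the cheap meta-model.
inputs = np.column_stack([rng.normal(0.5, 0.1, 100_000),        # Gaussian factor
                          rng.triangular(0, 0.5, 1, 100_000)])  # triangular factor
y_mc = terms(inputs) @ beta
print("response mean and sd:", y_mc.mean(), y_mc.std())
```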