Design and analysis of computer experiments
Related papers
Ch. 7. A review of design and modeling in computer experiments
Handbook of Statistics, 2003
In this paper, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is primarily on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. The organization of our review separates the two primary tasks for metamodeling: (1) select an experimental design; (2) fit a statistical model. We provide an overview of the general strategy and discuss applications in electrical engineering, chemical engineering, mechanical engineering, and dynamic programming. Then, we dedicate a section to statistical modeling methods followed by a section on experimental designs. Designs are discussed in two paradigms, model-dependent and model-independent, to emphasize their different objectives. Both classical and modern methods are discussed.
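To make the two-step workflow above concrete, here is a minimal sketch assuming a toy deterministic simulator, a Latin hypercube design from scipy, and a Gaussian-process metamodel from scikit-learn; these specific choices are illustrative and not taken from the paper itself.

```python
# Minimal sketch of the two-step metamodeling workflow:
# (1) select an experimental design, (2) fit a statistical model.
# The "simulator" is a toy stand-in; the design and model choices
# (Latin hypercube + GP regression) are illustrative assumptions.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    """Toy deterministic 'computer model' with two inputs."""
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# Step 1: space-filling design (Latin hypercube) on [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=30)

# Step 2: fit a GP metamodel to the deterministic responses.
y = simulator(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X, y)

# Predict (with uncertainty) at new input settings.
X_new = sampler.random(n=5)
mean, std = gp.predict(X_new, return_std=True)
print(np.c_[mean, std])
```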
NPUA: A new approach for the analysis of computer experiments
Chemometrics and Intelligent Laboratory Systems, 2010
The main issue in the analysis of computer experiments is the uncertainty of predictions and related inferences. To address this uncertainty, Bayesian analysis of deterministic computer models has been actively developed over the last decade. In the Bayesian approach, the uncertainty is expressed through a Gaussian process model, and the resulting analysis is consequently rather sensitive to the prior assumptions encoded in that model. Moreover, for high-dimensional data this approach leads to time-consuming computations.
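The sensitivity to prior assumptions noted above can be illustrated with a small sketch: the same six deterministic runs, interpolated under two different fixed GP priors (RBF versus Matérn kernels, both illustrative choices), give different predictive means and standard deviations.

```python
# Illustration of prior sensitivity: identical data, two different fixed
# GP kernels (held fixed via optimizer=None), different predictive uncertainty.
# Data and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)   # six deterministic runs
y = np.sin(4 * X).ravel()

x_test = np.array([[0.55]])
for kernel in (RBF(length_scale=0.2), Matern(length_scale=0.2, nu=1.5)):
    gp = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(X, y)
    mean, std = gp.predict(x_test, return_std=True)
    print(kernel, "->", mean[0], "+/-", std[0])
```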
Recent Advances in Computer Experiment Modeling
2014
Abstract of the dissertation "Recent Advances in Computer Experiment Modeling" by Yufan Liu (Dissertation Director: Ying Hung). This dissertation develops methodologies for the analysis of computer experiments and related theory. Computer experiments are becoming increasingly important in science, and Gaussian process (GP) models are widely used in their analysis. The dissertation focuses on two settings: massive data observed on irregular grids, and quantiles of correlated data. We first develop a Latin hypercube design-based block bootstrap method. We then investigate quantiles of computer experiments in which correlated data are observed and propose penalized quantile regression with an asymmetric Laplace process. The computational issue that hinders GP models from broader application, especially for massive data observed on irregular grids, is recognized. To overcome this issue, we introduce an efficient framework based on...
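The quantile-regression setting above builds on the standard check (pinball) loss; the penalized, asymmetric-Laplace-process formulation is the dissertation's own contribution and is not reproduced here. The generic sketch below only shows how minimizing the check loss recovers an empirical quantile.

```python
# Generic check (pinball) loss sketch; NOT the dissertation's penalized method.
import numpy as np

def check_loss(residuals, tau):
    """Pinball loss: tau * r if r >= 0, (tau - 1) * r otherwise."""
    return np.where(residuals >= 0, tau * residuals, (tau - 1) * residuals)

rng = np.random.default_rng(0)
y = rng.normal(size=500)
tau = 0.9

# Minimizing the average check loss over a grid of candidate values
# approximately recovers the empirical tau-th quantile.
grid = np.linspace(-3, 3, 601)
losses = [check_loss(y - q, tau).mean() for q in grid]
print(grid[int(np.argmin(losses))], np.quantile(y, tau))
```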
Practical Aspects for Designing Statistically Optimal Experiments
Journal of Statistical Science and Application, 2014
Due to operational or physical considerations, standard factorial and response surface method (RSM) designs of experiments (DOE) often prove unsuitable. In such cases a computer-generated, statistically optimal design fills the breach. This article explores vital mathematical properties for evaluating alternative designs, with a focus on what is really important for industrial experimenters. To assess "goodness of design," such evaluations must consider the model choice, specific optimality criteria (in particular D and I), precision of estimation based on the fraction of design space (FDS), the number of runs needed to achieve the required precision, lack-of-fit testing, and so forth. With a focus on RSM, all these issues are considered at a practical level, keeping engineers and scientists in mind. This brings to the forefront such considerations as subject-matter knowledge from first principles and experience, factor choice, and the feasibility of the experimental design.
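As a hedged illustration of the evaluation criteria mentioned above, the sketch below computes the D-criterion det(X'X) and a Monte Carlo approximation of the scaled-prediction-variance profile (the quantity behind FDS plots) for an illustrative face-centered central composite design and a second-order model in two factors; the design choice is an example, not a recommendation from the article.

```python
# D-criterion and scaled prediction variance (SPV) for a second-order model
# in two coded factors, evaluated on an illustrative face-centered CCD.
import numpy as np

def model_matrix(points):
    """Expand (x1, x2) into the second-order model terms."""
    x1, x2 = points[:, 0], points[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Face-centered central composite design in coded units [-1, 1].
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
axial = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
center = np.zeros((3, 2))
design = np.vstack([factorial, axial, center])

X = model_matrix(design)
print("D-criterion det(X'X):", np.linalg.det(X.T @ X))

# Scaled prediction variance N * x0' (X'X)^{-1} x0 at random points in the
# design region; its distribution over the region is what an FDS plot shows.
rng = np.random.default_rng(1)
x0 = model_matrix(rng.uniform(-1, 1, size=(10_000, 2)))
spv = len(design) * np.einsum("ij,jk,ik->i", x0, np.linalg.inv(X.T @ X), x0)
print("median / 95th percentile SPV:", np.median(spv), np.percentile(spv, 95))
```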
A Brief Introduction to Design of Experiments
Design of experiments is a series of tests in which purposeful changes are made to the input variables of a system or process and the effects on response variables are measured. Design of experiments is applicable to both physical processes and computer simulation models. Experimental design is an effective tool for maximizing the amount of information gained from a study while minimizing the amount of data to be collected. Factorial experimental designs investigate the effects of many different factors by varying them simultaneously instead of changing only one factor at a time. Factorial designs allow estimation of the sensitivity to each factor and also to the combined effect of two or more factors. Experimental design methods have been successfully applied to several Ballistic Missile Defense sensitivity studies to maximize the amount of information with a minimum number of computer simulation runs. In a highly competitive world of testing and evaluation, an efficient method for testing many factors is needed.
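A minimal sketch of the factorial idea described above: a 2^3 full factorial in coded units, with main effects and two-factor interactions estimated by least squares; the response function is a toy stand-in for a simulation model.

```python
# 2^3 full factorial in coded (-1, +1) units; effects estimated by least squares.
import itertools
import numpy as np

# All 8 runs of the 2^3 factorial.
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def simulate(run):
    a, b, c = run
    return 10 + 3 * a - 2 * b + 1.5 * a * b + 0.5 * c   # toy response

y = np.array([simulate(run) for run in design])

# Model matrix: intercept, main effects, two-factor interactions.
a, b, c = design.T
X = np.column_stack([np.ones(8), a, b, c, a * b, a * c, b * c])
effects, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, est in zip(["mean", "A", "B", "C", "AB", "AC", "BC"], effects):
    print(f"{name}: {est:+.2f}")
```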
Design and Analysis of Computational Experiments: Overview
Experimental Methods for the Analysis of Optimization Algorithms, 2010
This chapter presents an overview of the design and analysis of computational experiments with optimization algorithms. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs, including fractional factorial two-level designs for first-order polynomial models; resolution-IV and resolution-V designs for two-factor interactions; and designs for second-degree polynomials, including central composite designs. It also reviews factor screening in experiments with very many factors, focusing on the Sequential Bifurcation method. Furthermore, it reviews Kriging models and their designs. Finally, it discusses experiments aimed at the optimization of the parameters of a given optimization algorithm, allowing multiple random experimental outputs. This optimization may use either Generalized Response Surface Methodology or Kriging combined with Mathematical Programming; the discussion also covers Taguchian robust optimization.
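As an illustration of one classic design family mentioned above, the following sketch constructs the textbook resolution-III 2^(7-4) fractional factorial by aliasing four extra factors with interactions of a 2^3 base design; this is a standard construction, not code from the chapter.

```python
# Resolution-III 2^(7-4) fractional factorial: seven factors in eight runs,
# using generators D = AB, E = AC, F = BC, G = ABC.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=int)
a, b, c = base.T
design = np.column_stack([a, b, c, a * b, a * c, b * c, a * b * c])

print(design)            # 8 runs x 7 factors, coded -1 / +1
# Each column is balanced and orthogonal to every other column, which is
# what makes the design suitable for a first-order polynomial model.
print(design.T @ design)  # equals 8 * I, confirming mutual orthogonality
```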
New Research Directions in Computer Experiments: -Clustered Designs
2012
We explore the existence, properties, and applications of exact optimal designs for computer experiments, under Gaussian-process (GP), fixed-Gaussian-covariance-parameter, zero-nugget assumptions, that prescribe a cluster of two or more design points as closely spaced as practical, without being identically located. We define such designs as -clustered and define subcases, e.g., twin-point, triplet-point, etc. designs, and review the history of these designs. We also define the phase of a design, based on its symmetry properties, and we introduce the concept of phase transitions between phases. We prove that the 0th- and 1st-degree terms in the expansion of the determinant of the covariance matrix, in powers of the separation distance from the center of a twin-point cluster to one of the twins, are zero. Using this fact, we outline a proof that, in two or more factors, the IMSE function is a truncated rational function, with leading powers of at least two in the series expansion ...
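A small numerical check of the stated twin-point result, under an assumed Gaussian (squared-exponential) correlation with fixed parameters and illustrative design points: the determinant of the covariance matrix scales like the square of the half-separation d, so its 0th- and 1st-degree terms vanish.

```python
# Numerical illustration: det(K) for a design containing a twin-point cluster
# behaves like d^2 as the half-separation d -> 0, under an assumed Gaussian
# correlation with fixed theta. Kernel and points are illustrative choices.
import numpy as np

def gaussian_corr(X, theta=5.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

fixed = np.array([[0.1, 0.1], [0.9, 0.2], [0.3, 0.8], [0.7, 0.7]])
center, direction = np.array([0.5, 0.5]), np.array([1.0, 0.0])

for d in [1e-1, 1e-2, 1e-3, 1e-4]:
    twins = np.array([center - d * direction, center + d * direction])
    K = gaussian_corr(np.vstack([fixed, twins]))
    det = np.linalg.det(K)
    print(f"d={d:.0e}  det(K)={det:.3e}  det/d^2={det / d**2:.3e}")
```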
A User's Guide to the Brave New World of Designing Simulation Experiments
2003
Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. In this paper, we discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments. Furthermore, we provide a research agenda listing problems
Comparing designs for computer simulation experiments
2008 Winter Simulation Conference, 2008
The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for the results of a computer simulation experiment has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. This research uses prediction variance over the volume of the design region to evaluate computer simulation experiments, assuming the modeler is interested in fitting a second-order polynomial or a Gaussian process model to the response data. Both space-filling and optimal designs are considered.
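A sketch of the design-comparison idea above, under illustrative assumptions: two candidate 10-run designs in [0, 1]^2 (a Latin hypercube versus uniform-random points) are compared by Gaussian-process prediction variance at random points over the design region, with kernel parameters held fixed rather than estimated.

```python
# Compare two candidate designs by GP prediction variance over the region.
# Designs, test points, and kernel settings are illustrative assumptions.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
test_points = rng.uniform(0, 1, size=(2000, 2))

lhs = qmc.LatinHypercube(d=2, seed=1).random(10)
rand = rng.uniform(0, 1, size=(10, 2))

for name, design in [("Latin hypercube", lhs), ("uniform random", rand)]:
    # optimizer=None holds the kernel fixed; the dummy zero response is fine
    # because the GP prediction variance depends only on the design points.
    gp = GaussianProcessRegressor(kernel=RBF(0.3), optimizer=None)
    gp.fit(design, np.zeros(len(design)))
    _, std = gp.predict(test_points, return_std=True)
    print(f"{name}: mean var={np.mean(std**2):.4f}  max var={np.max(std**2):.4f}")
```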