Generalizability in Non-Gaussian Longitudinal Clinical Trial Data Based on Generalized Linear Mixed Models
Related papers
Controlled Clinical Trials, 2004
Repeated measures are exploited to study reliability in the context of the psychiatric health sciences. It is shown how test-retest reliability can be derived using linear mixed models when the scale is continuous or quasi-continuous. The advantage of this approach is that the full modeling power of mixed models can be used: repeated measures with a different mean structure can still be used to study reliability, correction for covariate effects is possible, and a complicated variance-covariance structure between measurements is allowed. In case the variance structure reduces to a random intercept (compound symmetry), classical methods are recovered. With more complex variance structures (e.g., including random slopes of time and/or serial correlation), time-dependent reliability functions are obtained. The methodology is motivated by and applied to data from five double-blind randomized clinical trials comparing the effects of risperidone to conventional antipsychotic agents for the treatment of chronic schizophrenia. Model assumptions are investigated through residual plots and by examining the effect of influential observations.
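As a minimal sketch of the compound-symmetry case described above: under a random-intercept linear mixed model, test-retest reliability reduces to the usual intra-class correlation. The data frame `trial` and its columns (`panss`, `week`, `treat`, `id`) are hypothetical.

```r
# Sketch only: reliability under a random-intercept (compound-symmetry) LMM,
# assuming a data frame `trial` with columns panss, week, treat, id.
library(lme4)

fit <- lmer(panss ~ week * treat + (1 | id), data = trial)

vc       <- as.data.frame(VarCorr(fit))
sigma2_b <- vc$vcov[vc$grp == "id"]   # between-subject (true-score) variance
sigma2_e <- sigma(fit)^2              # residual (measurement-error) variance

reliability <- sigma2_b / (sigma2_b + sigma2_e)
reliability
```

Adding a random slope of time (e.g. `(1 + week | id)`) makes the true-score variance, and hence the reliability, a function of time, as the abstract describes.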
Marginal Correlation in Longitudinal Binary Data Based on Generalized Linear Mixed Models
Communications in Statistics - Theory and Methods, 2010
This work aims at investigating marginal correlation within and between longitudinal data sequences. Useful and intuitive approximate expressions are derived based on generalized linear mixed models. Data from four double-blind randomized clinical trials are used to estimate the intra-class coefficient of reliability for a binary response. Additionally, the correlation between such a binary response and a continuous response is derived to evaluate the criterion validity of the binary response variable and the established continuous response variable.
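The abstract does not reproduce its approximate expressions, but a first-order (Taylor/delta-method) linearization of a logistic random-intercept GLMM conveys the flavour of such an approximation; the notation below is illustrative rather than the paper's own.

```latex
% Logistic random-intercept GLMM:
% Y_{ij} \mid b_i \sim \mathrm{Bernoulli}(\mu_{ij}^{b}), \quad
% \mathrm{logit}(\mu_{ij}^{b}) = x_{ij}'\beta + b_i, \quad b_i \sim N(0, \sigma_b^2).
% Linearizing around b_i = 0, with \Delta_{ij} = \partial\mu_{ij}/\partial\eta_{ij}
% = \mu_{ij}(1-\mu_{ij}) evaluated at \eta_{ij} = x_{ij}'\beta:
\operatorname{Cov}(Y_{ij}, Y_{ik}) \approx \Delta_{ij}\,\Delta_{ik}\,\sigma_b^2, \qquad
\operatorname{Var}(Y_{ij}) \approx \mu_{ij}(1-\mu_{ij}) + \Delta_{ij}^2\,\sigma_b^2,
\qquad
\rho_{jk} \approx \frac{\Delta_{ij}\,\Delta_{ik}\,\sigma_b^2}
  {\sqrt{\bigl(\mu_{ij}(1-\mu_{ij}) + \Delta_{ij}^2\sigma_b^2\bigr)
         \bigl(\mu_{ik}(1-\mu_{ik}) + \Delta_{ik}^2\sigma_b^2\bigr)}}.
```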
A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data
Psychometrika, 2007
A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the covariance parameters obtained from a linear mixed model. The method provides a single number to express the reliability of the scale, but allows for the study of the reliability’s time evolution. The method is illustrated using a case study in schizophrenia.
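Written out, the classical definition used here, together with one concrete mixed-model instance (random intercept and slope; the notation is illustrative, not necessarily the paper's):

```latex
% Classical definition: reliability = true-score variance / total variance
R = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_\varepsilon}.

% With a random intercept and slope, b_i = (b_{0i}, b_{1i})' \sim N(0, D),
% the true-score variance at time t is
\sigma^2_T(t) = d_{11} + 2\,d_{12}\,t + d_{22}\,t^2,
% so the reliability becomes a function of time:
R(t) = \frac{d_{11} + 2\,d_{12}\,t + d_{22}\,t^2}
            {d_{11} + 2\,d_{12}\,t + d_{22}\,t^2 + \sigma^2_\varepsilon}.
```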
The Estimation of Reliability in Longitudinal Models
International Journal of Behavioral Development, 1998
Despite the increasing attention devoted to the study and analysis of longitudinal data, relatively little consideration has been directed toward understanding the issues of reliability and measurement error. Perhaps one reason for this neglect has been that traditional methods of estimation (e.g. generalisability theory) require assumptions that are often not tenable in longitudinal designs. This paper first examines applications of generalisability theory to the estimation of measurement error and reliability in longitudinal research, and notes how factors such as missing data, correlated errors, and true score instability prohibit traditional variance component estimation. Next, we discuss how estimation methods using restricted maximum likelihood can account for these factors, thereby providing many advantages over traditional estimation methods. Finally, we provide a substantive example illustrating these advantages, and include brief discussions of programming and software...
Analyzing Incomplete Discrete Longitudinal Clinical Trial Data
Statistical Science, 2006
Commonly used methods to analyze incomplete longitudinal clinical trial data include complete case analysis (CC) and last observation carried forward (LOCF). However, such methods rest on strong assumptions, including missing completely at random (MCAR) for CC and an unchanging profile after dropout for LOCF. Such assumptions are too strong to generally hold. Over the last decades, a number of full longitudinal data analysis methods have become available, such as the linear mixed model for Gaussian outcomes, that are valid under the much weaker missing at random (MAR) assumption. Such a method is useful, even if the scientific question is in terms of a single time point, for example, the last planned measurement occasion, and it is generally consistent with the intention-to-treat principle. The validity of such a method rests on the use of maximum likelihood, under which the missing data mechanism is ignorable as soon as it is MAR. In this paper, we will focus on non-Gaussian outcomes, such as binary, categorical or count data. This setting is less straightforward since there is no unambiguous counterpart to the linear mixed model. We first provide an overview of the various modeling frameworks for non-Gaussian longitudinal data, and subsequently focus on generalized linear mixed-effects models, whose parameters can be estimated by full likelihood, on the one hand, and on generalized estimating equations, a non-likelihood method that therefore requires modification to be valid under MAR, on the other. We briefly comment on the position of models that assume missingness not at random and argue that they are most useful for sensitivity analysis.
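A hedged sketch of the two routes contrasted above, for a binary endpoint in a hypothetical data frame `trial` (columns resp, week, treat, id, and inverse-probability weights ipw obtained from a separate dropout model, not shown): the GLMM is fitted by maximum likelihood and is valid under MAR as it stands, whereas the GEE needs the weights.

```r
# Sketch: direct-likelihood GLMM vs. weighted GEE for an incomplete binary outcome.
# Assumed data frame `trial`: resp (0/1), week, treat, id, ipw (IPW from a dropout model).
library(lme4)
library(geepack)

# (1) Generalized linear mixed model, full likelihood: ignorable under MAR.
glmm_fit <- glmer(resp ~ week * treat + (1 | id),
                  family = binomial, data = trial)

# (2) GEE: a non-likelihood method, so weights are needed for validity under MAR.
gee_fit <- geeglm(resp ~ week * treat, id = id, weights = ipw,
                  family = binomial, corstr = "independence", data = trial)

summary(glmm_fit); summary(gee_fit)
```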
Statistical Methods in Medical Research, 2015
Different types of outcomes (e.g. binary, count, continuous) can be simultaneously modeled with multivariate generalized linear mixed models by assuming: (1) same or different link functions, (2) same or different conditional distributions, and (3) conditional independence given random subject effects. Others have used this approach for determining simple associations between subject-specific parameters (e.g. correlations between slopes). We demonstrate how more complex associations (e.g. partial regression coefficients between slopes adjusting for intercepts, time lags of maximum correlation) can be estimated. Reparameterizing the model to directly estimate coefficients allows us to compare standard errors based on the inverse of the Hessian matrix with more usual standard errors approximated by the delta method; a mathematical proof demonstrates their equivalence when the gradient vector approaches zero. Reparameterization also allows us to evaluate significance of coefficients with likelihood ratio tests and to compare this approach with more usual Wald-type t-tests and Fisher's z transformations. Simulations indicate that the delta method and inverse Hessian standard errors are nearly equivalent and consistently overestimate the true standard error. Only the likelihood ratio test based on the reparameterized model has an acceptable type I error rate and is therefore recommended for testing associations between stochastic parameters. Online supplementary materials include our medical data example, annotated code, and simulation details.
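To illustrate the delta-method standard errors compared in this abstract, here is a generic sketch (not the authors' code): given a vector of estimated covariance parameters and their asymptotic covariance matrix, the standard error of a derived quantity, such as the correlation between two random slopes, follows from the gradient. All names and numbers below are illustrative.

```r
# Generic delta-method sketch: SE of a correlation between two random slopes,
# rho = d12 / sqrt(d11 * d22), from estimates `theta` = c(d11, d12, d22)
# and their asymptotic covariance matrix `V` (both assumed available from the fit).
library(numDeriv)

rho_fun <- function(theta) theta[2] / sqrt(theta[1] * theta[3])

delta_se <- function(theta, V) {
  g <- grad(rho_fun, theta)        # gradient of rho at the estimates
  sqrt(drop(t(g) %*% V %*% g))     # first-order (delta-method) standard error
}

# Example with made-up numbers:
theta <- c(d11 = 2.0, d12 = 0.6, d22 = 0.9)
V     <- diag(c(0.20, 0.05, 0.10))
c(rho = rho_fun(theta), se = delta_se(theta, V))
```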
Generalized reliability estimation using repeated measurements
British Journal of Mathematical and Statistical Psychology, 2006
Reliability can be studied in a generalized way using repeated measurements. Linear mixed models are used to derive generalized test-retest reliability measures. The method allows for repeated measures with a different mean structure due to correction for covariate effects. Furthermore, different variance-covariance structures between measurements can be implemented. When the variance structure reduces to a random intercept (compound symmetry), classical methods are recovered. With more complex variance structures (e.g. including random slopes of time and/or serial correlation), time-dependent reliability functions are obtained. The effect of time lag between measurements on reliability estimates can be evaluated. The methodology is applied to a psychiatric scale for schizophrenia.
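As an illustration of how the time lag can enter (a sketch of one common variance structure, not necessarily the one used in the paper): with a random-intercept variance d, a stationary exponential serial process with variance \tau^2 and range \phi, and measurement-error variance \sigma^2, the correlation between measurements a lag u apart is

```latex
\rho(u) = \frac{d + \tau^2 \exp(-u/\phi)}{d + \tau^2 + \sigma^2},
```

so reliability at lag zero is (d + \tau^2)/(d + \tau^2 + \sigma^2) and decays toward d/(d + \tau^2 + \sigma^2) as the lag grows.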
Pharmaceutical Statistics, 2016
There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, e.g., using the classical Pearson correlation or the intra-class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals, appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software.
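The paper points to the CorrMixed package and a SAS implementation; without reproducing that API, a generic way to obtain a standard error and confidence interval for a covariate-adjusted reliability estimate is a nonparametric bootstrap over subjects, sketched below for a hypothetical data frame `dat` (columns y, age, rater, id).

```r
# Sketch: bootstrap CI for a covariate-adjusted reliability (not the CorrMixed API).
# Assumed data frame `dat` with columns y, age, rater, id.
library(lme4)

reliability <- function(d) {
  fit <- lmer(y ~ age + rater + (1 | id), data = d)
  vc  <- as.data.frame(VarCorr(fit))
  s2b <- vc$vcov[vc$grp == "id"]
  s2b / (s2b + sigma(fit)^2)
}

set.seed(42)
ids  <- unique(dat$id)
boot <- replicate(500, {
  take <- sample(ids, length(ids), replace = TRUE)   # resample whole subjects
  newd <- do.call(rbind, lapply(seq_along(take), function(k) {
    d <- dat[dat$id == take[k], ]
    d$id <- k      # fresh id so duplicated subjects remain distinct clusters
    d
  }))
  reliability(newd)
})
quantile(boot, c(0.025, 0.975))                      # percentile 95% CI
```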
Generalized Linear Mixed Models for Longitudinal Data
International Journal of Probability and Statistics, 2012
The study of longitudinal data plays a significant role in medicine, epidemiology, and the social sciences. Typically, interest lies in the dependence of an outcome variable on covariates. Generalized Linear Models (GLMs) unify the regression approach for a wide variety of discrete and continuous outcomes, but the responses in longitudinal data are usually correlated, so an extension of the GLM that accounts for this correlation is needed. This can be achieved by including random effects in the linear predictor, yielding Generalized Linear Mixed Models (GLMMs), also called random-effects models. Here, maximum likelihood estimates (MLEs) are obtained for the regression parameters of a logit model when the traditional assumption of normal random effects is relaxed in favour of a more convenient distribution, such as the lognormal. Adding non-normal random effects to the GLMM, however, considerably complicates likelihood estimation, and direct numerical evaluation techniques (such as Newton-Raphson) become analytically and computationally tedious. To overcome these problems, we propose and develop a Monte Carlo EM (MCEM) algorithm to obtain the maximum likelihood estimates. The proposed method is illustrated using simulated data.
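The abstract gives no code; the following is a minimal importance-sampling flavour of MCEM for a logistic random-intercept model on simulated data. For brevity the random intercept is normal; the lognormal case discussed above only changes the prior density marked in the comments. All names and settings are illustrative, not the authors' implementation.

```r
# Minimal MCEM sketch for a logistic random-intercept GLMM (illustrative only).
# E-step: self-normalised importance sampling of each b_i, proposal = current prior.
# M-step: maximise the Monte Carlo Q-function with optim().
set.seed(1)
n <- 100; m <- 5
id <- rep(1:n, each = m)
x  <- rnorm(n * m)
b  <- rep(rnorm(n, sd = 1), each = m)                 # true random intercepts
y  <- rbinom(n * m, 1, plogis(-0.5 + x + b))
rows_by_id <- split(seq_along(id), id)

bernoulli_ll <- function(r, eta) {                    # column-wise log-likelihoods
  p  <- plogis(eta)
  ym <- matrix(y[r], nrow = length(r), ncol = ncol(eta))
  colSums(ym * log(p) + (1 - ym) * log1p(-p))
}

beta <- c(0, 0); sigma <- 1; M <- 200
for (it in 1:20) {
  ## E-step
  bs <- matrix(rnorm(n * M, sd = sigma), n, M)        # samples from the current prior
  w  <- t(sapply(1:n, function(i) {
    r  <- rows_by_id[[i]]
    ll <- bernoulli_ll(r, outer(beta[1] + beta[2] * x[r], bs[i, ], "+"))
    wi <- exp(ll - max(ll)); wi / sum(wi)              # normalised importance weights
  }))
  ## M-step
  negQ <- function(par) {
    bta <- par[1:2]; sg <- exp(par[3]); val <- 0
    for (i in 1:n) {
      r  <- rows_by_id[[i]]
      ll <- bernoulli_ll(r, outer(bta[1] + bta[2] * x[r], bs[i, ], "+")) +
            dnorm(bs[i, ], 0, sg, log = TRUE)          # swap in dlnorm() for lognormal effects
      val <- val + sum(w[i, ] * ll)
    }
    -val
  }
  fit   <- optim(c(beta, log(sigma)), negQ)
  beta  <- fit$par[1:2]; sigma <- exp(fit$par[3])
}
c(beta = beta, sigma = sigma)                          # roughly (-0.5, 1, 1) here
```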
Analyzing longitudinal data and use of the generalized linear model in health and social sciences
Quality & Quantity, 2015
In the health and social sciences, longitudinal data have often been analyzed without taking into account the dependence between observations of the same subject. Furthermore, consideration is rarely given to the fact that longitudinal data may come from a non-normal distribution. In addition to describing the aims and types of longitudinal designs this paper presents three approaches based on generalized estimating equations that do take into account the lack of independence in data, as well as the type of distribution. These approaches are the marginal model (population-average model), the random effects model (subject-specific model), and the transition model (Markov model or auto-correlation model). Finally, these models are applied to empirical data by means of specific procedures included in SAS, namely GENMOD, MIXED, and GLIMMIX.
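The paper's own examples use SAS; rough R analogues of the three approaches (illustrative formulas, assuming a binary outcome y, covariate x, subject id, time, and a lagged response y_lag already present in a data frame d) might look like the sketch below.

```r
# Rough R analogues of the three approaches (the paper itself uses SAS GENMOD/MIXED/GLIMMIX).
# Assumed data frame `d`: y (0/1), x, id, time, y_lag (previous observation of y).
library(geepack)   # marginal (population-average) model via GEE
library(lme4)      # random-effects (subject-specific) model via GLMM

marginal   <- geeglm(y ~ x + time, id = id, family = binomial,
                     corstr = "exchangeable", data = d)
subjectsp  <- glmer(y ~ x + time + (1 | id), family = binomial, data = d)
transition <- glm(y ~ x + time + y_lag, family = binomial, data = d)  # first-order Markov
```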