Wim Van den Noortgate | Katholieke Universiteit Leuven (Kortrijk Campus)

Papers by Wim Van den Noortgate

Atypical visual processing in ASD as a global deficit or local bias: A meta-analysis

The Effects of Early Prevention Programs for Families with Young Children at Risk for Physical Child Abuse and Neglect: A Meta-Analysis

Child Maltreatment, 2004

In this article, a meta-analysis is presented of 40 evaluation studies, mostly with nonrandomized designs, of early prevention programs for families with young children at risk for physical child abuse and neglect. The main aim of all programs was to prevent physical child abuse and neglect by providing early family support. For the meta-analysis, a multilevel approach was used. A significant overall positive effect was found, pointing to the potential usefulness of these programs. The study demonstrated a significant decrease in the manifestation of abusive and neglectful acts and a significant risk reduction in factors such as child functioning, parent-child interaction, parent functioning, family functioning, and context characteristics.

Bias Corrections for Standardized Effect Size Estimates Used With Single-Subject Experimental Designs

The Journal of Experimental Education, 2014

A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated four approaches to correct for this bias. First, the standardized effect sizes are adjusted using Hedges' small-sample bias correction. Second and third, the within-subject standard deviation is estimated either by a two-level model per study or by a regression model in which the subjects are identified using dummy predictor variables. Fourth, the effect sizes are corrected using an iterative raw-data parametric bootstrap procedure. The results indicate that the first and last approaches succeed in reducing the bias of the fixed-effects estimates. Given the difference in complexity, we recommend the first approach.
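
The small-sample correction named above is, in its common form, a multiplicative factor that depends only on the degrees of freedom; a minimal sketch (using the standard approximation J = 1 - 3/(4·df - 1), not code from the article):

```python
def hedges_correction(d, df):
    """Apply Hedges' small-sample bias correction to a standardized
    effect size d, given its degrees of freedom df.

    Uses the common approximation J = 1 - 3 / (4*df - 1)."""
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * d

# With few measurement occasions the correction is substantial;
# with many it is nearly negligible.
short_series = hedges_correction(1.0, 8)
long_series = hedges_correction(1.0, 100)
```

With df = 8 the factor shrinks the effect size by roughly 10%, which illustrates why the correction matters most for the short series typical of single-subject designs.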

Three-Level Analysis of Single-Case Experimental Data: Empirical Validation

The Journal of Experimental Education, 2014

One approach for combining single-case data involves the use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time trend, the number of studies, cases, and measurements, and the between-case and between-study variance. The study shows that the three-level approach results in unbiased estimates of both kinds of treatment effects. To have reasonable power for testing the treatment effects, the authors recommend that researchers use a homogeneous set of studies and include a minimum of 30 studies. The number of measurements and cases is of less importance.
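
The three-level structure described here (measurements nested in cases nested in studies) can be made concrete with a small simulation; all parameter values below are hypothetical choices for illustration, not the conditions from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

gamma1 = 2.0                               # true immediate treatment effect (hypothetical)
sd_study, sd_case, sd_err = 0.5, 0.8, 1.0  # between-study, between-case, within-case SDs

rows = []
for study in range(30):                    # a homogeneous set of 30 studies, as recommended
    u_study = rng.normal(0, sd_study)      # study-specific deviation of the treatment effect
    for case in range(4):
        u_case = rng.normal(0, sd_case)    # case-specific deviation of the treatment effect
        for t in range(20):
            phase = 1 if t >= 10 else 0    # baseline vs. treatment phase
            y = (gamma1 + u_study + u_case) * phase + rng.normal(0, sd_err)
            rows.append((study, case, t, phase, y))

data = np.array(rows)

# Naive check: the treatment-phase mean minus the baseline-phase mean
# approximates gamma1 (plus sampling noise from the random effects).
effect = data[data[:, 3] == 1, 4].mean() - data[data[:, 3] == 0, 4].mean()
```

A full analysis would fit a three-level mixed model to these data; the naive phase contrast is shown only to make the data-generating effect visible.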

From a single-level analysis to a multilevel analysis of single-case experimental designs

Journal of School Psychology, 2013

Multilevel modeling provides one approach to synthesizing single-case experimental design data. In this study, we present the multilevel model (the two-level and the three-level models) for summarizing single-case results over cases, over studies, or both. In addition to the basic multilevel models, we elaborate on several plausible alternative models. We apply the proposed models to real datasets and investigate to what extent the estimated treatment effect depends on the modeling specifications and the underlying assumptions. By considering a range of plausible models and assumptions, researchers can determine the degree to which the effect estimates and conclusions are sensitive to the specific assumptions made. If the same conclusions are reached across a range of plausible assumptions, confidence in the conclusions is enhanced. We advise researchers not to focus on one model but to conduct multiple plausible multilevel analyses and to investigate whether the results depend on the modeling options.

The Three-Level Synthesis of Standardized Single-Subject Experimental Data: A Monte Carlo Simulation Study

Multivariate Behavioral Research, 2013

Previous research indicates that three-level modeling is a valid statistical method to make inferences from unstandardized data from a set of single-subject experimental studies, especially when a homogeneous set of at least 30 studies is included (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a). When single-subject data from multiple studies are combined, however, it often occurs that the dependent variable is measured on a different scale, requiring standardization of the data before combining them over studies. One approach is to divide the dependent variable by the residual standard deviation. In this study we use Monte Carlo methods to evaluate this approach. We examine how well the fixed effects (e.g., the immediate treatment effect and the treatment effect on the time trend) and the variance components (the between- and within-subject variance) are estimated under a number of realistic conditions. The three-level synthesis of standardized single-subject data is found appropriate for the estimation of the treatment effects, especially when many studies (30 or more) and many measurement occasions within subjects (20 or more) are included and when the studies are rather homogeneous (with small between-study variance). The estimates of the variance components are less accurate.
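
Dividing the dependent variable by the residual standard deviation, as described above, can be sketched for one case with an ordinary least-squares phase model; the data below are invented for illustration:

```python
import numpy as np

# Illustrative single-case data: phase indicator and outcome
phase = np.array([0] * 10 + [1] * 10)      # 10 baseline, 10 treatment occasions
y = np.array([3., 4., 2., 5., 3., 4., 3., 2., 4., 3.,
              7., 8., 6., 9., 7., 8., 7., 6., 8., 7.])

# Fit y = b0 + b1*phase by least squares and compute the residual SD
X = np.column_stack([np.ones_like(phase), phase])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
s = np.sqrt(resid @ resid / (len(y) - X.shape[1]))  # residual SD (df = n - 2)

y_std = y / s      # standardized outcome, comparable across scales
d = coef[1] / s    # standardized immediate treatment effect
```

In a synthesis, this per-case standardization would be applied within each study before the three-level model is fitted to the standardized scores.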

Estimating Intervention Effects Across Different Types of Single-Subject Experimental Designs: Empirical Illustration

School Psychology Quarterly, 2014

The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates obtained using univariate three-level models differ from those obtained using a multivariate three-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend that researchers use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs.

Estimating causal effects from multiple-baseline studies: Implications for design and analysis

Traditionally, average causal effects from multiple-baseline data are estimated by aggregating individual causal effect estimates obtained through within-series comparisons of treatment-phase trajectories to baseline extrapolations. Concern that these estimates may be biased due to event effects, such as history and maturation, motivates our proposal of a between-series estimator that contrasts participants in the treatment phase to those in the baseline phase. The accuracy of the new method was assessed and compared in a series of simulation studies in which participants were randomly assigned to intervention start points. The within-series estimator was found to have greater power to detect treatment effects but also to be biased due to event effects, leading to faulty causal inferences. The between-series estimator remained unbiased and controlled the Type I error rate independent of event effects. Because the between-series estimator is unbiased under different assumptions, the two estimates complement each other, and the difference between them can be used to detect inaccuracies in the modeling assumptions. The power to detect inaccuracies associated with event effects was found to depend on the size and type of event effect. We empirically illustrate the methods using a real data set and then discuss implications for researchers planning multiple-baseline studies.
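
The contrast between the two estimators can be sketched on a toy staggered multiple-baseline dataset; the within-series version below uses a flat-baseline extrapolation for simplicity, and all numbers are hypothetical:

```python
import numpy as np

# Hypothetical multiple-baseline data: 3 participants with staggered
# intervention start points; columns are measurement occasions 0..8.
starts = [3, 5, 7]                              # intervention start per participant
y = np.array([
    [2., 3., 2., 6., 7., 6., 7., 8., 7.],      # participant 0, treated from t=3
    [3., 2., 3., 2., 3., 7., 8., 7., 8.],      # participant 1, treated from t=5
    [2., 3., 2., 3., 2., 3., 2., 7., 8.],      # participant 2, treated from t=7
])

# Within-series: per participant, treatment mean minus baseline mean
# (flat-baseline extrapolation for simplicity), averaged over participants.
within = np.mean([y[i, s:].mean() - y[i, :s].mean() for i, s in enumerate(starts)])

# Between-series: at each occasion where both phases are represented,
# contrast currently-treated participants with those still in baseline.
contrasts = []
for t in range(y.shape[1]):
    treated = [y[i, t] for i, s in enumerate(starts) if t >= s]
    baseline = [y[i, t] for i, s in enumerate(starts) if t < s]
    if treated and baseline:
        contrasts.append(np.mean(treated) - np.mean(baseline))
between = float(np.mean(contrasts))
```

A shared event effect (e.g., all scores rising at a fixed calendar time) would inflate the within-series estimate but cancel out of the between-series contrasts, which is the complementarity the abstract describes.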

Multilevel meta-analysis of single-subject experimental designs: A simulation study

Behavior Research Methods, 2012

One way to combine data from single-subject experimental design studies is by performing a multilevel meta-analysis, with unstandardized or standardized regression coefficients as the effect size metrics. This study evaluates the performance of this approach. The results indicate that a multilevel meta-analysis of unstandardized effect sizes results in good estimates of the effect. The multilevel meta-analysis of standardized effect sizes, on the other hand, is suitable only when the number of measurement occasions for each subject is 20 or more. The effect of the treatment on the intercept is estimated with enough power when the studies are homogeneous or when the number of studies is large; the effect on the slope is estimated with enough power only when the number of studies and the number of measurement occasions are large.
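
As a rough two-step analogue of the multilevel approach, per-study regression coefficients can be pooled with inverse-variance random-effects weights; the sketch below uses the DerSimonian-Laird estimator of between-study variance, which is a classical alternative to, not the method of, the article, and the coefficients and standard errors are hypothetical:

```python
import numpy as np

# Hypothetical per-study treatment-effect coefficients and standard errors
b = np.array([2.1, 1.8, 2.5, 1.2, 2.9])
se = np.array([0.40, 0.35, 0.50, 0.30, 0.60])

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / se**2
b_fixed = np.sum(w * b) / np.sum(w)            # fixed-effect pooled estimate
q = np.sum(w * (b - b_fixed) ** 2)             # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(b) - 1)) / c)

# Random-effects pooled estimate and its standard error
w_star = 1 / (se**2 + tau2)
b_re = np.sum(w_star * b) / np.sum(w_star)
se_re = np.sqrt(1 / np.sum(w_star))
```

Unlike this two-step pool, the multilevel meta-analysis evaluated in the article models the raw series directly, which is what allows it to borrow strength across levels.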

A multilevel meta-analysis of single-case and small-n research on interventions for reducing challenging behavior in persons with intellectual disabilities

Research in Developmental Disabilities, 2012

A multilevel meta-analysis of single-case and small-n research on interventions for reducing challenging behavior in persons with intellectual disabilities. Research in Developmental Disabilities, 33, 766-780.

Disentangling instructional roles: the case of teaching and summative assessment

Studies in Higher Education, 2011

While in some higher education contexts a separation of teaching and summative assessment is assumed to be self-evident, in other contexts the opposite is regarded as obvious. In this article, the different arguments supporting either position are analyzed. Based on a systematic literature review, arguments for and against are classified at the micro-, meso-, and macro-level. Articles specifically discuss…

P1-139 Maternal anxiety during pregnancy and reactivity, self-regulation and internalizing problems in childhood and adolescence

Early Human Development, 2007

Cross-Classification Multilevel Logistic Models in Psychometrics

Journal of Educational and Behavioral Statistics, 2003

Simple imputation methods versus direct likelihood analysis for missing item scores in multilevel educational data

Behavior Research Methods, 2012

Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were either missing completely at random or missing at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation versions of the two-way mean and corrected item-mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item-mean substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
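
The two-way mean substitution method mentioned above imputes a missing item score from the person mean, the item mean, and the grand mean; a minimal sketch on an invented score matrix (note that for binary items the imputed values need not fall in [0, 1], which is one motivation for the corrected variants):

```python
import numpy as np

# Illustrative persons-by-items score matrix with missing entries (NaN)
scores = np.array([
    [1., 0., 1., np.nan],
    [0., 0., np.nan, 1.],
    [1., 1., 1., 1.],
    [0., np.nan, 0., 0.],
])

# Means computed over the observed entries only
person_mean = np.nanmean(scores, axis=1, keepdims=True)
item_mean = np.nanmean(scores, axis=0, keepdims=True)
grand_mean = np.nanmean(scores)

# Two-way imputation: person mean + item mean - grand mean at each missing cell;
# observed scores are left untouched.
two_way = person_mean + item_mean - grand_mean
imputed = np.where(np.isnan(scores), two_way, scores)
```

Item-mean substitution, by contrast, would fill each missing cell with the item mean alone, ignoring how able the person is, which is one reason it performed worse in the comparison above.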

Vocational trainees’ views and experiences regarding the learning and teaching of communication skills in general practice

Patient Education and Counseling, 2010

Optimizing the utility of communication OSCEs: Omit station-specific checklists and provide students with narrative feedback

To evaluate how the utility (reliability, validity, acceptability, feasibility, cost, and educational impact) of a communication OSCE was influenced by whether station-specific (StSp) checklists were used together with a generic instrument and whether narrative feedback was provided to students. At ten stations, faculty members rated standardized patient-student interactions using the Common Ground (CG) instrument (at all stations) and StSp checklists. Both raters and patients provided written feedback. The impact of changing the design on the various utility parameters was assessed: reliability by means of a generalizability study, cost using the Reznick model, and the other utility parameters by means of a survey. Use of the generic instrument (CG) proved more reliable (G coefficient = 0.67) than using the StSp checklists (G = 0.47) or both (G = 0.65), while there was a high correlation between the two scale scores (Pearson's r = 0.86). The cost was 6.5% higher when StSp checklists were used and 5% higher when narrative feedback was provided. The utility of a communication OSCE can be enhanced by omitting StSp checklists and by providing narrative feedback to students. The same generic assessment scale can be used at all stations of a communication OSCE. Providing feedback to students is promising, but it increases the costs.
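
The G coefficients reported above come from a generalizability study; for a fully crossed students-by-stations design with one rating per cell, a relative G coefficient can be sketched from two-way ANOVA variance components (the scores below are invented):

```python
import numpy as np

# Illustrative scores: rows = students, columns = OSCE stations (one rating each)
scores = np.array([
    [6., 7., 5., 6., 7.],
    [8., 9., 8., 7., 9.],
    [5., 6., 5., 6., 5.],
    [7., 7., 8., 6., 8.],
])
n_p, n_s = scores.shape

# Two-way ANOVA sums of squares for the crossed persons x stations design
grand = scores.mean()
ss_p = n_s * np.sum((scores.mean(axis=1) - grand) ** 2)
ss_res = np.sum((scores - scores.mean(axis=1, keepdims=True)
                 - scores.mean(axis=0, keepdims=True) + grand) ** 2)
ms_p = ss_p / (n_p - 1)
ms_res = ss_res / ((n_p - 1) * (n_s - 1))

# Variance components and the relative G coefficient for n_s stations
var_p = max(0.0, (ms_p - ms_res) / n_s)      # between-person variance
var_res = ms_res                             # residual (person x station) variance
g = var_p / (var_p + var_res / n_s)
```

Raising the number of stations shrinks the error term `var_res / n_s`, which is how a G-study informs how many stations are needed for a target reliability.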

Ontwikkeling van getalbegrip bij vijf- tot zevenjarigen. Een vergelijking tussen Vlaanderen en Nederland [Development of number concept in five- to seven-year-olds: A comparison between Flanders and the Netherlands]
