Telephone versus internet administration of self-report measures of social anxiety, depressive symptoms, and insomnia: psychometric evaluation of a method to reduce the impact of missing data
Related papers
Computers in Human Behavior, 2010
The Internet has become increasingly popular as a way to administer self-report questionnaires, especially in the field of Internet-delivered psychological treatments. Collecting questionnaire data over the Internet has advantages, such as ease of administration and automated scoring. However, psychometric properties cannot be assumed to be identical to those of the paper-and-pencil versions. The aim of this study was to test the equivalence of paper-and-pencil and Internet-administered versions of self-report questionnaires used in social phobia research. We analyzed data from two trials in which samples were recruited in a similar manner. One sample (N = 64) completed the paper-and-pencil versions of the questionnaires and the second sample (N = 57) completed the same measures online. We included the Liebowitz Social Anxiety Scale-self-assessment (LSAS-SR), the Social Interaction and Anxiety Scale (SIAS), and the Social Phobia Scale (SPS) as measures of social anxiety. Also included were the Montgomery Åsberg Depression Rating Scale-self-assessment (MADRS-S), the Beck Anxiety Inventory (BAI), and the Quality of Life Inventory (QOLI). Results showed equivalent psychometric properties across administration formats. Cronbach's α ranged between 0.77 and 0.94. There was an indication of somewhat higher construct validity when participants filled out the questionnaires using paper-and-pencil. We conclude that the LSAS-SR, SIAS, and SPS can be administered via the Internet with maintained psychometric properties.
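Cronbach's α, the internal-consistency statistic reported above, can be computed directly from an item-score matrix. A minimal sketch (not from the study; the function name and toy data are illustrative):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                              # number of items
    item_var_sum = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Three respondents answering two perfectly consistent items
scores = [[1, 1], [2, 2], [3, 3]]
print(cronbach_alpha(scores))  # perfectly consistent items give 1.0
```

Values in the 0.77-0.94 range reported above indicate that item responses covary strongly relative to the spread of total scores.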
Computers in Human Behavior, 2007
The aim of this study was to investigate the psychometric properties of Internet-administered questionnaires used in panic research. Included were 494 people who had registered for an Internet-based treatment program for panic disorder (PD). Participants were randomly assigned to fill in the questionnaires either on the Internet or on paper, and then to fill in the same questionnaires again the next day using the other format. The questionnaires were the Body Sensations Questionnaire [BSQ; Chambless, D. L., Caputo, G. C., Bright, P., & Gallagher, R. (1984). Assessment of fear of fear in agoraphobics: the body sensations questionnaire and the agoraphobic cognitions questionnaire. Journal of Consulting and Clinical Psychology, 52, 1090-1097], the Agoraphobic Cognitions Questionnaire (ACQ; from the same reference), the Quality of Life Inventory [QOLI; Frisch, M. B., Cornell, J., Villanueva, M., & Retzlaff, P. J. (1992). Clinical validation of the quality of life inventory. A measure of life satisfaction for use in treatment planning and outcome assessment. Psychological Assessment, 4, 92-101], and the Montgomery Åsberg Depression Rating Scale [MADRS; Svanborg, P., & Åsberg, M. (1994). A new self-rating scale for depression and anxiety states based on the comprehensive psychopathological rating scale. Acta Psychiatrica Scandinavica, 89, 21-28]. Results showed largely equivalent psychometric properties for the two administration formats (Cronbach's α between 0.79 and 0.95). The results also showed high and significant correlations between the Internet and the paper-and-pencil versions.
Analyses of order effects showed an interaction effect for the BSQ and the MI (Accompanied subscale), and main effects for the ACQ, MI-Alone, BAI, and BDI-II. However, in contrast to previous research, the Internet versions did not consistently generate higher scores, and effect sizes for the differences were generally small. Given the presence of an interaction effect, we recommend that the administration format be kept constant across measurement points in research. Finally, the findings suggest that Internet versions of questionnaires used in PD research can be used with confidence.
Computers in Human Behavior, 2009
Although measures of anxious and depressive symptomatology are increasingly administered over the internet for both clinical and research purposes, limited research has examined their psychometric properties in that format. Internet administration of questionnaires offers many advantages for assessment, treatment monitoring, and research. The aim of this study was to examine the psychometric properties of two common clinical measures, the Penn State Worry Questionnaire and the Depression Anxiety Stress Scales, in an internet-administered format (N = 1138). Results suggest that these two measures may be used with confidence in an online format in terms of reliability and validity.
Psychiatry Research
Introduction: With the start of the COVID-19 pandemic, the social distancing policies imposed in Hong Kong have prompted psychiatrists to consider telepsychiatry as an alternative to the face-to-face interview. Previous studies had limitations in sample size, methodology, and information technology, and the reliability of symptom assessment remained a concern. Aim: To evaluate the reliability of assessing psychiatric symptoms by telepsychiatry compared with face-to-face psychiatric interview. Method: This study recruited a sample of adult psychiatric patients from psychiatric wards in Queen Mary Hospital. Semi-structured interviews using standardized psychiatric assessment scales were carried out by telepsychiatry and by face-to-face interview, respectively, by two clinicians, and the reliability of the psychiatric symptoms elicited was assessed. Results: 90 patients completed the assessments. The inter-method reliability of the Hamilton Depression Rating Scale, Hamilton Anxiety Rating Scale, Columbia Suicide Severity Rating Scale, and Brief Psychiatric Rating Scale showed good agreement when compared with face-to-face interview. Conclusion: Symptom assessment by telepsychiatry is comparable to assessment conducted by face-to-face interview. Abbreviations: BPRS, Brief Psychiatric Rating Scale; C-SSRS, Columbia Suicide Severity Rating Scale; HAM-A, Hamilton Anxiety Rating Scale; HDRS, Hamilton Depression Rating Scale; ICC, intraclass correlation coefficient; ICD, International Statistical Classification of Diseases and Related Health Problems; IT, information technology; QMH, Queen Mary Hospital; YMRS, Young Mania Rating Scale.
Depression and Anxiety, 2008
Although the use of telemedicine in psychiatry has a long history in providing clinical care to patients, its use in clinical trials research has not yet been commonly employed. Telemedicine allows for the remote assessment of study patients, which could be done by a centralized, highly calibrated, and impartial cohort of raters independent of the study site. This study examined the comparability of remote administration of the Montgomery-Asberg Depression Rating Scale (MADRS) by videoconference and by telephone to traditional face-to-face administration. Two parallel studies were conducted: one compared face-to-face with videoconference administration (N = 35), and the other compared face-to-face with telephone administration (N = 35). In each study, depressed patients were interviewed independently twice: once in the traditional face-to-face manner, and a second time by either videoconference or telephone, in counterbalanced order. The mean MADRS score for interviews conducted remotely by videoconference was not significantly different from the mean score for face-to-face administration (mean difference = 0.51 points, P = .388; intraclass correlation (ICC) = .94, P < .0001). Similarly, the mean MADRS score for telephone interviews was not significantly different from the mean score for face-to-face administration (mean difference = 0.74 points, P = .270; ICC = .93, P < .0001). Results of the study support the comparability of remote administration of the MADRS, by both telephone and videoconference, to face-to-face administration. Comparability of administration modes allows for remote assessment of patients in both research and clinical applications.
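The intraclass correlation used above to quantify agreement between remote and face-to-face ratings can be estimated from a subjects × raters score matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape                                         # n subjects, k raters
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()    # between-subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()    # between-raters
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement over three subjects
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

ICC values of .93-.94, as reported above, indicate that remote and face-to-face ratings are nearly interchangeable at the level of individual patients.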
Interformat Reliability of Digital Psychiatric Self-Report Questionnaires: A Systematic Review
Journal of Medical Internet Research, 2014
Background: Research on Internet-based interventions typically uses digital versions of pen-and-paper self-report symptom scales. However, adaptation into the digital format could affect the psychometric properties of established self-report scales. Several studies have investigated differences between digital and pen-and-paper versions of instruments, but no systematic review of the results has yet been conducted. Objective: This review aims to assess the interformat reliability of self-report symptom scales used in digital or online psychotherapy research. Methods: Three databases (MEDLINE, Embase, and PsycINFO) were systematically searched for studies investigating the reliability between digital and pen-and-paper versions of psychiatric symptom scales. Results: From a total of 1504 publications, 33 were included in the review, and the interformat reliability of 40 different symptom scales was assessed. Significant differences in mean total scores between formats were found in 10 of 62 analyses. These differences were concentrated in a few studies, which indicates that they reflect study and sample effects rather than unreliable instruments. The interformat reliability ranged from r=.35 to r=.99; however, the majority of instruments showed a strong correlation between format scores. The quality of the included studies varied, and several had insufficient power to detect small differences between formats. Conclusions: When digital versions of self-report symptom scales are compared to pen-and-paper versions, most scales show high interformat reliability. This supports the reliability of results obtained in psychotherapy research on the Internet and their comparability to traditional psychotherapy research. Some instruments, however, consistently show low interformat reliability, so these conclusions cannot be generalized to all questionnaires.
Most studies had at least some methodological issues, insufficient statistical power being the most common. Future studies should describe the transformation of the instrument into digital format and the data collection procedure in more detail.
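The two quantities this review compares across formats, the interformat Pearson correlation and the mean total-score difference, can be sketched as follows (illustrative only; the function name and data are hypothetical):

```python
import numpy as np

def interformat_stats(paper_scores, digital_scores):
    """Pearson r between formats and the mean digital-minus-paper difference."""
    p = np.asarray(paper_scores, dtype=float)
    d = np.asarray(digital_scores, dtype=float)
    r = np.corrcoef(p, d)[0, 1]   # interformat reliability
    mean_diff = (d - p).mean()    # systematic format effect on total scores
    return r, mean_diff

# Digital totals track paper totals perfectly but run 2 points higher
r, diff = interformat_stats([10, 20, 30], [12, 22, 32])
print(r, diff)  # 1.0 2.0
```

Note that the two statistics answer different questions: a scale can correlate near-perfectly across formats (high r) while still showing a systematic mean shift, which is why the review reports both.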
JMIR mental health, 2021
Background: e-Mental health apps targeting depression have gained increased attention in mental health care. Daily self-assessment is an essential part of e-mental health apps. The Self-administered PsychoTherApy SystemS (SELFPASS) app is a self-management app for managing depressive and comorbid anxiety symptoms in patients with a depression diagnosis. A self-developed item pool with 40 depression items and 12 anxiety items is included to provide symptom-specific suggestions for interventions. However, the psychometric properties of the item pool have not yet been evaluated. Objective: The aim of this study is to investigate the validity and reliability of the SELFPASS item pool. Methods: A weblink with the SELFPASS item pool and validated mood assessment scales was distributed to healthy subjects and to patients who had received a diagnosis of a depressive disorder within the last year. Two scores were derived from the SELFPASS item pool: SELFPASS depression (SP-D) and SELFPASS anxiety (SP-A). Reliability was examined using Cronbach α. Construct validity was assessed through Pearson correlations with the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder Scale-7 (GAD-7), and the WHO-5 Well-Being Scale (WHO-5). Logistic regression analysis was performed as an indicator of concurrent criterion validity of SP-D and SP-A. Factor analysis was performed to provide information about the underlying factor structure of the item pool. Item-scale correlations were calculated in order to determine item quality. Results: A total of 284 participants were included, with 192 (67.6%) healthy subjects and 92 (32.4%) patients. Cronbach α was .94 for SP-D and .88 for SP-A. We found significant positive correlations between SP-D and PHQ-9 scores (r=0.87; P<.001) and between SP-A and GAD-7 scores (r=0.80; P<.001), and negative correlations between SP-D and WHO-5 scores (r=-0.80; P<.001) and between SP-A and WHO-5 scores (r=-0.69; P<.001).
Increasing scores of SP-D and SP-A led to increased odds of belonging to the patient group (SP-D: odds ratio 1.03, 95%
Computers in Human Behavior, 2007
The purpose of this study was to examine the reliability, equivalence, and respondent preference of computerized versions of the General Health Questionnaire (GHQ-12), Symptom Checklist (SCL-90-R), Medical Outcomes Study Social Support Survey (MOS-SSS), Perceived Stress Scale (PSS), and Utrecht Coping List (UCL) in comparison with the original versions in a general adult population. Internal consistency, equivalence, and preference between the two administration modes were assessed in a group of participants (n = 130) who first completed the computerized questionnaire, followed by the traditional questionnaire and a post-assessment evaluation measure. Test-retest reliability was measured in a second group of participants (n = 115), who completed the computerized questionnaire twice. In both groups, the interval between first and second administration was set at one week. Reliability of the PC versions was acceptable to excellent; internal consistency ranged from α = 0.52-0.98, and ICCs for test-retest reliability ranged from 0.58-0.92. Equivalence was fair to excellent, with ICCs ranging from 0.54-0.91. Interestingly, more participants preferred the computerized questionnaires over the traditional ones (computerized: 39.2%, traditional: 21.6%, no preference: 39.2%). These results support the use of computerized assessment for these five instruments in a general population of adults.
Routinely administered questionnaires for depression and anxiety: systematic review
BMJ, 2001
Objectives To examine the effect of routinely administered psychiatric questionnaires on the recognition, management, and outcome of psychiatric disorders in non-psychiatric settings. Data sources Embase, Medline, PsycLIT, CINAHL, Cochrane Controlled Trials Register, and hand searches of key journals. Methods A systematic review of randomised controlled trials of the administration and routine feedback of psychiatric screening and outcome questionnaires to clinicians in non-psychiatric settings. Narrative overview of key design features and end points, together with a random effects quantitative synthesis of comparable studies. Main outcome measures Recognition of psychiatric disorders after feedback of questionnaire results; interventions for psychiatric disorders; and outcome of psychiatric disorders. Results Nine randomised studies were identified that examined the use of common psychiatric instruments in primary care and general hospital settings. Studies compared administration of these instruments with feedback of the results to clinicians against administration without feedback. Meta-analytic pooling was possible for four of these studies (2457 participants), which measured the effect of feedback on the recognition of depressive disorders. Routine administration and feedback of scores for all patients (irrespective of score) did not increase the overall rate of recognition of mental disorders such as anxiety and depression (relative risk of detection of depression by clinician after feedback 0.95, 95% confidence interval 0.83 to 1.09). Two studies showed that routine administration followed by selective feedback for only high scorers increased the rate of recognition of depression (relative risk of detection of depression after feedback 2.64, 1.62 to 4.31). This increased recognition, however, did not translate into an increased rate of intervention.
Overall, studies of routine administration of psychiatric measures did not show an effect on patient outcome. Conclusions The routine measurement of outcome is a costly exercise. Little evidence shows that it is of benefit in improving psychosocial outcomes of those with psychiatric disorder managed in non-psychiatric settings.