Pilot randomized controlled trial of an online intervention for problem gamblers

Addict Behav Rep. 2019 Jun; 9: 100175.

John A. Cunningham,a,b,c Alexandra Godinho,a and David C. Hodginsd

aCentre for Addiction and Mental Health, Toronto, Canada

bUniversity of Toronto, Toronto, Canada

cAustralian National University, Canberra, Australia

dUniversity of Calgary, Calgary, Canada

⁎Corresponding author at: Centre for Addiction and Mental Health, 33 Russell St., Toronto, Ontario M5S 2S1, Canada. john.cunningham@camh.ca

Received 2019 Jan 28; Revised 2019 Feb 28; Accepted 2019 Mar 3.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Abstract

Introduction

This pilot randomized controlled trial sought to evaluate whether an online intervention for problem gambling could lead to improved gambling outcomes compared to a no intervention control. Participants were recruited through a crowdsourcing platform.

Methods

Participants were recruited to complete an online survey about their gambling through the Mechanical Turk platform. Those who scored 5 or more on the Problem Gambling Severity Index and were thinking about quitting or reducing their gambling were invited to complete 6-week and 6-month follow-ups. Each potential participant who agreed was sent a unique password. Participants who used their password to log onto the study portal were randomized to either access an online intervention for gambling or to a no intervention control.

Results

A total of 321 participants were recruited, of whom 87% and 88% were followed up at 6 weeks and 6 months, respectively. Outcome analyses revealed that, while there were reductions in gambling from baseline to the follow-ups, there was no significant observable impact of the online gambling intervention compared to the no intervention control condition.

Conclusions

While the current trial observed no impact of the intervention, replication is merited with a larger sample size, and with participants who are not recruited through a crowdsourcing platform.

Trial registration: ClinicalTrials.gov NCT03124589

Keywords: Amazon Mechanical Turk, Internet, Online web, Data collection, Research methods, Problem gambling


1. Introduction

There are a number of challenges to providing adequate services for problem gamblers. Primarily, the large majority do not access traditional treatment services or Gamblers Anonymous for help with their concerns (as much as 94%) (Cunningham, 2005; Suurvali, Hodgins, Toneatto, & Cunningham, 2008). There are a number of reasons for this, including a lack of availability (particularly outside of urban areas), concerns about stigma, and a desire for self-reliance (Slutske, 2006; Suurvali, Hodgins, Toneatto, & Cunningham, 2012). There is, however, considerable interest among problem gamblers in alternate options for care, including Internet interventions (Cunningham, Hodgins, & Toneatto, 2008).

There has been little published work to date investigating the efficacy of online interventions for problem gamblers, either with or without therapist assistance. A Swedish randomized controlled trial (RCT) examined the impact of an Internet-delivered intervention with therapist assistance in a small sample of pathological gamblers, finding some impact of the intervention compared to a wait list control at 8-week follow-up (Carlbring & Smit, 2008). While promising, this intervention merits systematic replication to confirm the results, because a waiting list control design, when implemented such that those in the waiting list condition are told that they will have to wait for the intervention, can act as a confound when interpreting the results (note: the wording of the waiting list manipulation in the Carlbring et al. trial is not stated in the publication) (Cunningham, Kypri, & McCambridge, 2013).

Two further RCTs evaluated a personalized feedback intervention for gambling that is available online (Cunningham, Hodgins, & Toneatto, 2011). However, while both trials demonstrated some minor impact of the intervention on reducing problem gambling, neither trial had participants access the intervention directly through an online portal. Instead, the final personalized report was generated by the researchers and sent by mail to the participants (Cunningham, Hodgins, Toneatto, & Murphy, 2012; Cunningham, Hodgins, Toneatto, Rai, & Cordingley, 2009). Further, the second trial demonstrated that it was unclear what component of the intervention might have had an impact (Cunningham et al., 2012). As such, these trials can only be taken as limited support for the possibilities of online interventions for problem gamblers.

Luquiens et al. (2016) examined the impact of personalized feedback, Internet-based cognitive behavioural therapy (iCBT), or therapist-assisted iCBT on problem gamblers. While the study had the strength of using naturalistic recruitment from an online gambling website, it suffered from a very high attrition rate (83%) and was unable to demonstrate an impact of the interventions on gambling outcomes. Casey et al. (2017) compared an iCBT program to an active monitoring online control condition and to a waitlist control. Both the iCBT and the active control showed improved gambling outcomes compared to the waitlist control at a 6-week follow-up. Also encouraging, the iCBT condition demonstrated some superiority to the active control at 6 weeks (Casey et al., 2017). Finally, Hodgins et al. have translated a paper and pencil self-help intervention into an online format (Diskin & Hodgins, 2009; Hodgins, Currie, Currie, & Fick, 2009; Hodgins, Currie, & el-Guebaly, 2001; Hodgins, Fick, Murray, & Cunningham, 2013). An RCT was unable to demonstrate an increased impact of this online intervention in comparison to a personalized feedback intervention (Hodgins, Cunningham, Murray, & Hagopian, in press). A separate trial, which employed a separately programmed version of these same self-help booklets, compared the benefits of providing the online gambling intervention with or without a companion mental health intervention among participants with co-occurring mental health distress and problem gambling (Cunningham et al., 2016). However, the purpose (and design) of this trial was not to provide evidence of the efficacy of the gambling intervention. Similarly, while an RCT employing a waiting list design found that providing an online intervention for depression led to improvements in gambling outcomes compared to those told they would have to wait for 8 weeks, the trial did not examine the possible impact of an online intervention targeting problem gambling (Bucker, Bierbrodt, Hand, Wittekind, & Moritz, 2018).

Given the limited evidence base for online problem gambling interventions, further research in this area is merited. The present pilot RCT employed a crowdsourcing website as a quick and inexpensive means of recruiting participants to evaluate the impact of the online gambling intervention developed by Hodgins and colleagues (Hodgins et al., 2013) compared to a no intervention control. While there are limitations associated with employing a sample recruited from Mechanical Turk (the crowdsourcing website) for this purpose (Cunningham et al., 2017a, Cunningham et al., 2017b), the results have value given the dearth of research in this area (although other trials with published protocols are ongoing, e.g., Merkouris et al., 2017). It was hypothesized that the gambling intervention would lead to improved gambling outcomes at the 6-week and 6-month follow-ups compared to the no intervention control group.

2. Methods

2.1. Recruitment

Conduct of the trial was approved by the standing research ethics board of the Centre for Addiction and Mental Health. Participants were recruited from the United States and Canada using an advertisement posted on the Mechanical Turk website asking them to take part in a survey about their gambling. Potential participants filled in a brief eligibility survey (aged 18 or over; gambled weekly or more often). Following recommendations for the conduct of research using Mechanical Turk, only participants who had completed at least 100 jobs on the platform, with a quality rating of 95% or higher, were shown the advertisement (Peer, Vosgerau, & Acquisti, 2014). Those found eligible were sent to an online consent form. Eligible participants were told that they would be asked to complete a survey about their gambling as well as to provide other information about themselves. They were further told that the survey would take <15 min and that they would be paid US$1.50 for completion. Finally, participants were told that some people would be asked if they were interested in participating in another study but that we did not know at this point whether they would be asked. Those not eligible were thanked for their interest. It is important to note that the eligibility screener was set up to dissuade Mechanical Turk workers from using automated programs to complete the survey, and so that it could only be completed once for each Mechanical Turk account. More specifically, Mechanical Turk has some controls in place to make it challenging for one person to have more than one account, such as requiring Social Security Number (SSN) data for each user registration. In addition, Mechanical Turk worker IDs were captured using HTML coding, and users attempting the survey more than once were not included in the study. Lastly, participants could only receive compensation for survey completion if codes provided at the end of the survey (generated via third-party survey software) were manually entered into the Mechanical Turk platform; codes were then visually inspected for each participant by study staff.
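The duplicate-screening and payment-verification steps described above could be implemented along the following lines. This is a minimal illustrative sketch only; the authors' actual code is not published, and the column names (worker_id, submitted_at, completion_code) are hypothetical.

```python
import pandas as pd

# Illustrative sketch of duplicate screening and completion-code checking;
# column names are hypothetical, not taken from the study's dataset.
def keep_first_attempt(responses: pd.DataFrame) -> pd.DataFrame:
    """Retain only the first survey attempt for each Mechanical Turk worker ID."""
    return responses.sort_values("submitted_at").drop_duplicates(
        subset="worker_id", keep="first"
    )

def verify_completion_codes(responses: pd.DataFrame, issued_codes: set) -> pd.DataFrame:
    """Keep only respondents whose entered code matches one issued by the survey software."""
    return responses[responses["completion_code"].isin(issued_codes)]
```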

The online survey included four attention check questions nested within other items. Respondents who answered all four attention check questions correctly, who said that they had provided accurate responses, who scored 5 or more on the Problem Gambling Severity Index (PGSI), indicating current moderate/problem gambling (Currie, Hodgins, & Casey, 2013; Ferris & Wynne, 2001), and who stated that they were thinking about cutting down or quitting their gambling were invited to take part in a study in which they would be asked to complete two additional surveys (a 6-week survey for a US$5 payment and a 6-month survey for a US$10 payment). In addition, participants were told that they would be sent an email with a link and password to log into a website containing information about gambling. Further, they were told that the type of material they would access would be determined entirely by chance. Finally, participants were told that only those who used the password to access the website would be sent the follow-up surveys.
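As a rough illustration of how these invitation criteria could be applied to the baseline survey data, a minimal sketch follows; the data layout and column names are hypothetical, and the authors' actual screening procedure is not published.

```python
import pandas as pd

# Minimal sketch, assuming a hypothetical data frame of baseline survey responses.
def eligible_for_followup(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the follow-up invitation criteria described above."""
    mask = (
        (df["attention_checks_correct"] == 4)   # all four attention checks passed
        & df["reported_accurate_responses"]     # self-reported accurate responding
        & (df["pgsi_score"] >= 5)               # PGSI cut-off for moderate/problem gambling
        & df["thinking_about_change"]           # thinking about cutting down or quitting
    )
    return df[mask]
```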

2.2. Randomization, experimental conditions and follow-up

Participants who agreed to the follow-up study were sent a link and a password (unique password for each participant) to log onto the study website. Those logging on were randomized (1:1 ratio with no stratification) to receive their respective materials. Those assigned to the intervention condition were provided with the online intervention developed from the Hodgins self-help booklets (Diskin & Hodgins, 2009; Hodgins et al., 2001; Hodgins et al., 2009; Hodgins et al., 2013). Those assigned to the control condition were provided a brief survey asking what types of tools they might find helpful in a website designed to help problem gamblers deal with their gambling concerns.
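A simple, unstratified 1:1 allocation of this kind can be generated in a few lines of code. The sketch below is illustrative only, as the paper does not report the exact randomization mechanism used.

```python
import secrets

def assign_condition() -> str:
    """Simple 1:1 randomization with no stratification or blocking (illustrative)."""
    return "intervention" if secrets.randbelow(2) == 0 else "control"

# Example: draw an allocation when a participant first logs onto the study portal.
print(assign_condition())
```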

2.3. Sample size estimate

The power analysis was conducted based on the findings of earlier work conducted by Hodgins et al. employing the paper and pencil version of this same intervention (Hodgins et al., 2001; Hodgins et al., 2009). Using the specifications of an alpha of 0.05, a power of 0.80, and a correlation of 0.5 between baseline and follow-up on the outcome variable, number of days gambled in the last month, a sample of 112 participants per condition was needed to detect a differential impact of the intervention of 2 days per month compared to the control condition at the 6-month follow-up. A 20% attrition rate was allowed for, leading to a targeted sample size of 280 participants.
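For readers who want to reproduce the logic of such a calculation, the sketch below shows one way the stated parameters could be combined. The standard deviation of days gambled used here is a placeholder chosen for illustration (the value the authors took from the earlier Hodgins et al. trials is not reported in this paper), so the output only approximates the 112 per arm figure.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Parameters stated in the text.
alpha, power, r, difference = 0.05, 0.80, 0.5, 2.0   # 2-day difference at 6 months

# Placeholder SD of days gambled per month; NOT a value reported in this paper.
sd_raw = 6.2
sd_adjusted = sd_raw * math.sqrt(1 - r**2)   # SD after adjustment for the baseline measure
effect_size = difference / sd_adjusted       # the 2-day difference expressed as Cohen's d

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(round(n_per_arm))          # ~114 with this illustrative SD, close to the reported 112
print(math.ceil(2 * 112 / 0.8))  # 224 participants inflated for 20% attrition = 280
```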

2.4. Data analysis

2.4.1. Outcome variables

The primary outcome variable was the past 3-month version of the NORC DSM-IV Screen for Gambling Problems (NODS; measured at baseline and 6-month follow-up), which indicates DSM-IV gambling severity (Toce-Gerstein & Volberg, 2004; Wulfert et al., 2005). The secondary outcome variables, the number of days gambled in the last 30 days and the Gambling Symptom Assessment Scale (G-SAS) (Kim, Grant, Potenza, Blanco, & Hollander, 2009), were measured at baseline, 6 weeks, and 6 months.

2.4.2. Analysis plan

Bivariate comparisons of baseline demographic and outcome variables were conducted. Outcome analyses employed mixed effect repeated measures models and used all available data for each time point. Overall, three separate mixed-effect models were fit to examine the effects of time, intervention condition, and the time by intervention condition interaction on each outcome variable. Missing data were handled using a maximum likelihood approach.
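As a hedged illustration of this modelling approach (the paper does not report the software or syntax used), a random-intercept model with time, condition, and their interaction could be fit as follows; the data frame and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch only. 'long_df' is assumed to be a long-format data frame with
# one row per completed assessment and hypothetical columns: participant_id,
# time ('baseline', '6 weeks', '6 months'), condition ('control', 'intervention'),
# and the outcome score (e.g., gsas).
def fit_outcome_model(long_df: pd.DataFrame, outcome: str):
    model = smf.mixedlm(
        f"{outcome} ~ C(time) * C(condition)",   # time, condition, time x condition
        data=long_df,
        groups=long_df["participant_id"],        # random intercept per participant
    )
    return model.fit(reml=False)                 # maximum likelihood estimation

# e.g. results = fit_outcome_model(long_df, "gsas"); print(results.summary())
```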

3. Results

Fig. 1 provides a CONSORT chart for the trial. A total of 321 (318 from the United States; 3 from Canada) eligible participants logged onto the website and were randomized to condition. The average (SD) age was 36.5 (10.9), 44.9% were male, 69.2% had some post-secondary education, 71.7% were full-time employed, 49.5% were married/common law, 17.8% reported a family income of less than $20,000, and 78.5% were Caucasian. At baseline, the mean (SD) PGSI score was 11.5 (5.0) and participants reported gambling an average (SD) of 16.5 (8.5) days in the past 30 days. Approximately half of the participants (48.3%) stated that they had ever accessed help for their gambling. The primary gambling concerns voiced by this sample were instant or scratch tickets (66.7%), slot machines (54.2%), lottery-type games (51.4%) and casino games (31.5%). Bivariate comparisons found no significant differences between intervention and control conditions (p > 0.05), with the exception of a reported family income of less than $20,000 (23.5% control condition; 11.3% intervention condition; p = 0.004).

Fig. 1. CONSORT chart for the trial.

Follow-up rates were excellent (86.6% at 6 weeks and 87.9% at 6 months). Caution should be taken in interpreting the 6-month follow-up results, as the difference in follow-up rates between conditions approached significance (84.1% intervention condition; 91.2% control condition; p = 0.053).

Mixed effect models revealed that the sample as a whole experienced significant and consistent reductions in gambling symptoms and severity across time (see Table 1 for the estimated marginal means from these analyses, and Tables 2 and 3 for the results of the mixed effect models). In particular, the sample experienced significant reductions in GSAS scores (6 weeks and 6 months; p < 0.001) and in the number of days gambled in the past 30 days (6 weeks and 6 months; p < 0.001). However, a significant time by intervention interaction was not observed, indicating that the intervention and control groups did not differ significantly across time in the level of reduction (GSAS p = 0.695; number of days gambled p = 0.403). Similarly, the sample as a whole also experienced significant reductions in NODS scores from baseline to the 6-month follow-up (p < 0.001); however, no significant time by intervention interaction was observed (p = 0.095).

Table 1

Outcome variable means by time and intervention.

| Time | Randomization | GSAS, Mean (SE) | # days gambled in past 30, Mean (SE) | NODS, Mean (SE) |
| --- | --- | --- | --- | --- |
| Baseline | Control | 8.21 (0.23) | 15.71 (0.60) | 5.35 (0.21) |
| Baseline | Intervention | 8.26 (0.24) | 17.41 (0.64) | 5.13 (0.22) |
| 6 weeks | Control | 6.82 (0.24) | 9.12 (0.63) | – |
| 6 weeks | Intervention | 7.16 (0.26) | 9.71 (0.68) | – |
| 6 months | Control | 6.05 (0.24) | 7.09 (0.62) | 3.86 (0.22) |
| 6 months | Intervention | 6.19 (0.26) | 7.86 (0.68) | 4.18 (0.24) |

Table 2

Mixed-effect models results of time, intervention, and time by intervention interaction on GSAS scores and the number of days gambled in the past 30.

| Effect | GSAS: Estimate ± SE | t | p | # days gambled in past 30: Estimate ± SE | t | p |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | 8.25 ± 0.24 | 33.84 | <0.001 | 17.41 ± 0.64 | 27.22 | <0.001 |
| Time: 6 weeks (Ref: Baseline) | −1.10 ± 0.25 | −4.37 | <0.001 | −7.71 ± 0.66 | −11.65 | <0.001 |
| Time: 6 months (Ref: Baseline) | −2.07 ± 0.25 | −8.21 | <0.001 | −9.55 ± 0.66 | −14.41 | <0.001 |
| Condition (Ref: Intervention condition) | −0.05 ± 0.34 | −0.16 | 0.876 | −1.70 ± 0.88 | −1.94 | 0.053 |
| Time × intervention (Ref: Baseline × intervention condition) | F = 0.365 | – | 0.695 | F = 0.911 | – | 0.403 |

Table 3

Mixed-effect models results of time, intervention, and time by intervention interaction on NODS.

| Effect | NODS: Estimate ± SE | t | p |
| --- | --- | --- | --- |
| Intercept | 5.13 ± 0.22 | 23.14 | <0.001 |
| Time: 6 months (Ref: Baseline) | −0.95 ± 0.24 | −3.99 | <0.001 |
| Condition (Ref: Intervention) | −0.21 ± 0.30 | 0.70 | 0.482 |
| Time × intervention (Ref: Baseline × intervention condition) | F = 2.799 | – | 0.095 |

Use of the intervention among those in the intervention condition was minimal. While 62.9% of participants accessed the initial gambling quiz in the online intervention (i.e., the PGSI), only 42.4% completed it. A total of 13.9% completed at least 1 of 15 different gambling self-help tools and 8.6% completed the ‘Monitor your Gambling Urges’ tool (but only 2.0% viewed the report generated). Only 8.6% logged into the intervention more than once.

4. Discussion

This pilot study sought to provide preliminary evidence of the efficacy of an online intervention for problem gamblers. Participants in both the intervention and control conditions reduced their gambling from baseline to follow-up. However, the study was unable to demonstrate an impact of providing access to the intervention on improvements in gambling over and above the reductions observed in the control group.

While the study had a number of strong features, including a good follow-up rate and the inclusion of a no intervention control condition, it may be too early to conclude that the intervention under study is ineffective, despite these negative results. This is primarily due to the use of participants recruited through Mechanical Turk for the trial. This study is the fifth in a series of pilot RCTs by this group employing this crowdsourcing platform to recruit participants for intervention research. Of the other four trials (all targeting unhealthy alcohol use), two were able to demonstrate some small impact of the intervention under study (Bertholet, Godinho, & Cunningham, 2019; Cunningham et al., 2017a, Cunningham et al., 2017b; Cunningham, Godinho, & Bertholet, under review). However, all of the trials, including the current one, had difficulty getting participants to engage with the intervention. In the present study, only 9% of participants logged into the intervention more than once, indicating a very limited amount of use. Future research should evaluate strategies for improving participant engagement with online interventions conducted on Mechanical Turk before interventions can be reliably evaluated using these samples.

More troubling, the other RCT conducted to evaluate this same intervention (in which the intervention was compared to a brief personalized feedback control) also failed to demonstrate an impact of the intervention (Hodgins et al., in press). Common to both of these RCTs was powering of the trial based on the assumption of a medium effect size. While the paper and pencil version of these same materials, when combined with telephone-based therapist support, demonstrated this strength of effect, it is possible that the online version should be tested based on the assumption of a small effect size (as is the case for other online interventions targeting addictive behaviors when provided without any therapist support) (Riper et al., 2014). Also relevant is that the present study proactively recruited participants, whereas in the Hodgins et al. RCT of the same intervention, participants responded to an advertisement asking for people concerned about their gambling. It is possible that the mode of recruitment will also need to be considered when evaluating research trials (and may also have implications for the degree of impact of the intervention).

One interpretation of the need to power for a small effect size when evaluating this intervention could be that online versions of materials are less impactful than their equivalent paper and pencil versions. While this is possible, an equally likely explanation is that the therapist support was instrumental in driving the size of the improvements in the earlier studies of the paper and pencil intervention materials (Hodgins et al., 2001; Hodgins et al., 2009). Alternatively, online trials may be attracting a different segment of the problem gambling population who then engage with the materials in a different way. Or, materials that are optimized for paper and pencil viewing may require more extensive work to be translated into a form that will have an optimum impact in an online format. Whatever the reasons, online interventions appear to be an attractive medium for problem gamblers seeking help. As such, more research is merited to attempt to develop effective interventions to address this unmet need.

Role of funding source

This research was supported by a Canada Research Chair awarded to John Cunningham. The funder had no input on the design of the research; the collection, analysis, and interpretation of the data; the write-up of the manuscript; or the decision to submit the manuscript for publication.

Contributions

JAC, AG, and DH designed the study. AG conducted the study and the data analyses. JAC wrote the first draft of the manuscript. All authors contributed to and approved the final manuscript.

Conflict of interest

The authors have no conflicts of interest to declare.

Acknowledgements

We thank Sylvia Hagopian for providing access to the online gambling intervention, and for her assistance in setting up the intervention portal. We thank Christina Schell for her assistance with conducting the analyses. John Cunningham is supported by a Canada Research Chair in Addictions.

References

