The effect on treatment comparisons of different measurement frequencies in human immunodeficiency virus observational databases

A Joint Modeling Approach to Assess the Impact of CD4 Cell Count on the Risk of Loss to Follow up in HIV/AIDS Patients on Antiretroviral Therapy

Joint modeling has recently been recognized as an efficient technique for analyzing combinations of longitudinal and survival data generated by medical studies. In this paper we develop a joint model to simultaneously study the longitudinal repeated measures of CD4 cell counts and the time-to-event process (the event being loss to follow-up) in HIV/AIDS patients undergoing antiretroviral therapy (ART) at Dr. Ram Manohar Lohia Hospital’s ART centre, New Delhi, India. Apart from increasing the risk that HIV infection progresses to AIDS, loss to follow-up in patients undergoing ART seriously interferes with the development, improvement, and validation of the treatment techniques used in the therapy. The fact that the problem of loss to follow-up from ART has been understated in studies based on ART centres in India motivated us to investigate the effect of various clinical, socioeconomic, and demographic factors on the hazard of loss to follow-up in patients undergoing ART. The results of the joint model are compared with those of separate analyses of the longitudinal and survival data. The parameter estimates from both methods are consistent; however, the joint analysis supports the dependence of the hazard of loss to follow-up from ART on the rate of change in CD4 counts, in addition to the patient’s baseline CD4 count. The estimated overall survival probability for HIV/AIDS patients retained on antiretroviral therapy was 0.81 (95% CI: 0.76–0.87).
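The abstract does not specify the model equations; a common shared-parameter formulation of such a joint model (in our notation, not necessarily the authors') links a linear mixed submodel for the CD4 trajectory to a proportional-hazards submodel for loss to follow-up:

```latex
% Longitudinal submodel: observed (possibly transformed) CD4 count
% for patient i at time t, with random intercept b_{0i} and slope b_{1i}
y_i(t) = m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t

% Survival submodel: hazard of loss to follow-up depends on the current
% true CD4 level m_i(t) and its slope m_i'(t), plus baseline covariates w_i
h_i(t) = h_0(t)\, \exp\!\bigl\{ \gamma^{\top} w_i
        + \alpha_1\, m_i(t) + \alpha_2\, m_i'(t) \bigr\}
```

A non-zero α₂ corresponds to the abstract's finding that the hazard of loss to follow-up depends on the rate of change in CD4 counts, not only on its level.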

Effect of ignoring the time of HIV seroconversion in estimating changes in survival over calendar time in observational studies: results from CASCADE*

AIDS, 2000

Objective: To compare estimates of changes in HIV survival over time derived from seroconverter and prevalent cohorts. Design and methods: Using pooled data from 19 seroconverter cohorts (CASCADE), the relative risk of death from HIV seroconversion by calendar time at risk from 1 January 1991 was examined. The analyses were repeated, ignoring knowledge of the time of seroconversion but adjusting for the CD4 cell count at the time the participant came under observation, thus mimicking a prevalent cohort. Estimates from the 'prevalent' cohort approach were compared with those obtained from the seroconverter cohort. Results: Of 5428 subjects at risk on 1 January 1991 or later, 1312 (24.2%) had died. In the analysis based on time from seroconversion, estimates of the effect of calendar year showed marked reductions in mortality in 1997–1999 only, with no evidence of a linear trend over the period 1991–1996 (P-trend = 0.85). Using the prevalent cohort approach, a decrease in the relative risk of death was observed from 1991 to 1998–1999, with a statistically significant trend of decreasing risk from 1991 to 1996 (P-trend = 0.002). Similar findings were observed when the analyses were repeated taking the start date of the cohort as 1 January 1988. Conclusion: Lack of knowledge of HIV infection duration may lead to biased and exaggerated estimates of survival improvements over time. Adjustment for duration of infection in prevalent HIV cohorts through laboratory markers may compensate inadequately for this.

Methods to Assess Population Effectiveness of Therapies in Human Immunodeficiency Virus Incident and Prevalent Cohorts

American Journal of Epidemiology, 2001

Two methods are presented for measuring population effectiveness (i.e., reduction of disease in a population in which only some receive treatment) of antiretroviral therapy among human immunodeficiency virus (HIV)-infected men at risk for acquired immunodeficiency syndrome (AIDS) and followed between January 1, 1986, and June 30, 1999, in the Multicenter AIDS Cohort Study. Method I, requiring use of a seroincident cohort, estimates relative hazards of AIDS for persons at equal duration of infection. Method II, allowing use of a seroprevalent cohort, estimates relative hazards since the beginning of therapy eras for persons starting at equal levels of prognostic markers of disease stage (CD4 cell count and HIV type 1 RNA). The follow-up interval was divided into four calendar periods to characterize different eras of antiretroviral therapy. For method I, the relative hazards were 1.52 (95% confidence interval (CI): 0.93, 2.49), 0.91 (95% CI: 0.66, 1.26), and 0.30 (95% CI: 0.18, 0.51) for the eras of no therapy, dual nucleoside therapy, and potent combination antiretroviral therapy, respectively (monotherapy was the reference era). For method II, the corresponding relative hazards were 1.52 (95% CI: 1.10, 2.09), 1.03 (95% CI: 0.77, 1.38), and 0.31 (95% CI: 0.21, 0.45). These results extend the measurement of population effectiveness from incident to prevalent cohorts and demonstrate the ability of cohort studies to complement information provided by clinical trials. Am J Epidemiol 2001;154:675–81. acquired immunodeficiency syndrome; cohort studies; epidemiologic methods; HIV
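The relative hazards above come from Cox models, which are not reproduced in the abstract. As a rough illustration of how calendar-era contrasts behave, the toy simulation below computes crude person-time rate ratios of AIDS by therapy era; the era-specific hazards are invented for the sketch and are not the MACS data:

```python
import random

random.seed(1)

# Hypothetical era-specific AIDS hazards (events per person-year);
# monotherapy is the reference era, potent combination therapy the lowest.
TRUE_HAZARD = {"none": 0.12, "mono": 0.08, "dual": 0.075, "potent": 0.025}
FOLLOW_UP = 5.0  # administrative censoring at 5 years
N = 4000         # subjects per era

def simulate_era(hazard):
    """Return (events, person_years) for one era's cohort."""
    events, pyears = 0, 0.0
    for _ in range(N):
        t = random.expovariate(hazard)  # exponential time to AIDS
        if t < FOLLOW_UP:
            events += 1
            pyears += t
        else:
            pyears += FOLLOW_UP
    return events, pyears

rates = {}
for era, h in TRUE_HAZARD.items():
    ev, py = simulate_era(h)
    rates[era] = ev / py  # crude incidence rate, MLE of the hazard here

# Crude rate ratios relative to the monotherapy era
rate_ratios = {era: rates[era] / rates["mono"] for era in rates}
for era, rr in rate_ratios.items():
    print(f"{era:>6}: RR = {rr:.2f}")
```

With exponential event times, events divided by person-years recovers the hazard, so the rate ratios land near the ratios of the assumed hazards; the published methods instead stratify by infection duration (method I) or by marker levels at era start (method II) to make the eras comparable.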

Models of survival in HIV infection and their use in the quantification of treatment benefits

1998

Because acquired immunodeficiency syndrome (AIDS) is a shifting endpoint and sufficient follow-up data now allow modeling of survival time (i.e., time from human immunodeficiency virus (HIV) seroconversion to death), the authors evaluated non-parametric and parametric models of mortality with the use of data from 554 seropositive participants in the Vancouver Lymphadenopathy-AIDS Study. The authors then applied these models to quantify treatment benefits at the national level in Canada, using back-calculation and forward-projection based on death registries. The study revealed that the lognormal model better describes survival time than the Weibull model. Relative to observations prior to 1987, later observations (in the era of treatment) revealed a statistically significant change in disease progression: the median survival time increased from 10.1 to 12.0 years, but no further survival improvements were observed in the early 1990s. Concurrent with the increase in availability of treatment, the authors have observed pronounced treatment benefits at the national level: prior to 1995, approximately 1,500 deaths were prevented and 4,200 person-years of life were saved. Also, mortality rates were observed to level off in the mid-1990s due to the shape of the historical HIV infection curve and the accumulating availability of treatment in Canada. Am J Epidemiol 1998;148:487–96.
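Comparing lognormal and Weibull survival models is a standard parametric exercise. The sketch below, on synthetic uncensored data (unlike the study's censored follow-up, and with an invented median and spread), fits both families by maximum likelihood and compares AIC; the Weibull fit uses a crude grid search rather than a proper optimizer:

```python
import math
import random

random.seed(7)

# Synthetic survival times (years), drawn lognormal with median ~11 years
times = [random.lognormvariate(math.log(11.0), 0.7) for _ in range(600)]

def lognormal_aic(x):
    """Lognormal MLE has a closed form: mean and sd of the log-times."""
    logs = [math.log(v) for v in x]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
    ll = sum(-math.log(v * sigma * math.sqrt(2 * math.pi))
             - (math.log(v) - mu) ** 2 / (2 * sigma ** 2) for v in x)
    return 2 * 2 - 2 * ll  # AIC with 2 parameters

def weibull_aic(x):
    """Crude grid-search MLE for Weibull(shape k, scale lam)."""
    def ll(k, lam):
        return sum(math.log(k / lam) + (k - 1) * math.log(v / lam)
                   - (v / lam) ** k for v in x)
    best = max((ll(k / 10, lam / 10), k / 10, lam / 10)
               for k in range(5, 60, 2)        # shapes 0.5 .. 5.9
               for lam in range(50, 200, 5))   # scales 5.0 .. 19.5
    return 2 * 2 - 2 * best[0]

aic_ln, aic_wb = lognormal_aic(times), weibull_aic(times)
print(f"AIC lognormal = {aic_ln:.1f}, AIC Weibull = {aic_wb:.1f}")
```

On data actually generated from a lognormal, the lognormal fit attains the lower AIC, mirroring the direction of the study's conclusion; the study itself had to handle censoring, which this sketch omits.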

Modeling Outcomes of First-Line Antiretroviral Therapy and Rate of CD4 Counts Change among a Cohort of HIV/AIDS Patients in Ethiopia: A Retrospective Cohort Study

PloS one, 2016

Antiretroviral therapy has been shown to be effective in reducing morbidity and mortality in patients infected with HIV over the past couple of decades. However, there remains a need to better understand the characteristics of long-term treatment outcomes in resource-poor settings. The main aim of this study was to determine and compare the long-term response of patients on nevirapine- and efavirenz-based first-line antiretroviral therapy regimens in Ethiopia. A hospital-based retrospective cohort study was conducted from January 2009 to December 2013 at a university hospital located in Northwest Ethiopia. Human subject research approval for this study was received from the University of Gondar Research Ethics Committee and the medical director of the hospital. A Cox proportional hazards model was used to assess the effect of baseline covariates on the composite outcome, and a semi-parametric mixed-effects model was used to investigate the CD4 count response to treatment. A total of 2386 treatment-naive HIV/AIDS pati...
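The semi-parametric mixed-effects analysis of CD4 response is not shown in the abstract. A much cruder stand-in, sketched below on synthetic data (all regimen names reused from the abstract, all parameter values invented), estimates each patient's square-root-CD4 slope by ordinary least squares and contrasts the regimen means:

```python
import random

random.seed(3)

def ols_slope(ts, ys):
    """Least-squares slope of y on t for one patient's visits."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# Hypothetical mean sqrt(CD4) slopes (per year on ART) for the two regimens
TRUE_SLOPE = {"nevirapine": 2.0, "efavirenz": 2.6}
visits = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]  # years on ART at each visit

slopes = {reg: [] for reg in TRUE_SLOPE}
for reg, beta in TRUE_SLOPE.items():
    for _ in range(400):  # patients per regimen
        b0 = random.gauss(14.0, 2.0)        # random intercept (sqrt scale)
        b1 = beta + random.gauss(0.0, 0.6)  # random slope around regimen mean
        ys = [b0 + b1 * t + random.gauss(0.0, 1.0) for t in visits]
        slopes[reg].append(ols_slope(visits, ys))

mean_slope = {reg: sum(v) / len(v) for reg, v in slopes.items()}
diff = mean_slope["efavirenz"] - mean_slope["nevirapine"]
print({reg: round(m, 2) for reg, m in mean_slope.items()},
      f"difference = {diff:.2f}")
```

A mixed-effects model pools these per-patient fits with proper weighting and handles irregular visit schedules; the two-stage OLS version here only illustrates the estimand, the regimen difference in mean CD4 slope.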

Practice of Epidemiology: Do Increasing Rates of Loss to Follow-up in Antiretroviral Treatment Programs Imply Deteriorating Patient Retention?

In several studies of antiretroviral treatment (ART) programs for persons with human immunodeficiency virus infection, investigators have reported that there has been a higher rate of loss to follow-up (LTFU) among patients initiating ART in recent years than among patients who initiated ART during earlier time periods. This finding is frequently interpreted as reflecting deterioration of patient retention in the face of increasing patient loads. However, in this paper we demonstrate by simulation that transient gaps in follow-up could lead to bias when standard survival analysis techniques are applied. We created a simulated cohort of patients with different dates of ART initiation. Rates of ART interruption, ART resumption, and mortality were assumed to remain constant over time, but when we applied a standard definition of LTFU, the simulated probability of being classified LTFU at a particular ART duration was substantially higher in recently enrolled cohorts. This suggests that much of the apparent trend towards increased LTFU may be attributed to bias caused by transient interruptions in care. Alternative statistical techniques need to be used when analyzing predictors of LTFU, for example using "prospective" definitions of LTFU in place of "retrospective" definitions. Similar considerations may apply when analyzing predictors of LTFU from treatment programs for other chronic diseases.
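The paper's simulation is not reproduced in the abstract, but the mechanism can be sketched in a stripped-down form (all rates invented): patients interrupt and resume care at constant rates in every cohort, yet a retrospective LTFU definition (no visit in the months before database closure) classifies far more recent enrollees as lost at a given ART duration, because their ongoing gaps have had no time to resolve before the data were frozen.

```python
import random

random.seed(11)

CLOSE = 120   # database closes at calendar month 120
P_INT = 0.04  # monthly probability of interrupting care
P_RET = 0.20  # monthly probability of returning from a gap
WINDOW = 6    # no visit in the last 6 months => classified LTFU

def last_visit(enroll):
    """Simulate monthly care status; return calendar month of last visit."""
    last, in_gap = enroll, False
    for month in range(enroll, CLOSE):
        if in_gap:
            in_gap = random.random() >= P_RET  # stay in the gap, or return
        else:
            in_gap = random.random() < P_INT   # start a new gap
        if not in_gap:
            last = month
    return last

def ltfu_within(enroll_months, duration=12):
    """Fraction classified LTFU with last visit in the first `duration`
    months of ART, under the retrospective definition."""
    n_ltfu = 0
    for e in enroll_months:
        lv = last_visit(e)
        if CLOSE - lv > WINDOW and lv - e <= duration:
            n_ltfu += 1
    return n_ltfu / len(enroll_months)

N = 3000
early = [random.randint(0, 24) for _ in range(N)]     # enrolled years before closure
recent = [random.randint(103, 107) for _ in range(N)]  # enrolled 13-17 months before

ltfu_early, ltfu_recent = ltfu_within(early), ltfu_within(recent)
print(f"classified LTFU in first 12 months of ART: "
      f"early {ltfu_early:.3f}, recent {ltfu_recent:.3f}")
```

Behavior is identical in both cohorts; only the distance to database closure differs. An early enrollee's month-10 gap is almost always followed by a return visit long before closure, so it never becomes an LTFU event, while the same gap in a recent enrollee often remains open at closure and is counted as LTFU at a short ART duration.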

Impact of Epidemic and Individual Heterogeneity on the Population Distribution of Disease Progression Rates: An Example from Patient Populations in Trials of Human Immunodeficiency Virus Infection

American Journal of Epidemiology, 1996

Patients at the same stage of a chronic disease may have had different rates of disease progression. The authors developed a mathematical modeling approach that allows reconstructing and comparing populations in terms of the disease progression rates of their participants when the disease onset and progression rates are unknown for individual patients. Human immunodeficiency virus 1 infection was used as an example. Both published and hypothetical models were used to describe the human immunodeficiency virus 1 epidemic (epidemic heterogeneity) and incubation and survival functions for different disease stages (individual heterogeneity). Reconstructions of populations with late disease (e.g., acquired immunodeficiency syndrome patients) show a marked predominance of rapid progressors, unless the incidence of new infections has been decreasing for a long time. Rapid progressors would also predominate in populations of acute seroconverters, unless diagnosis is based on repeated serologic screening rather than symptoms. Populations of patients who have not progressed beyond an early stage of the disease (e.g., patients with CD4 cell counts >500/µl) tend to overrepresent slow progressors, especially if the epidemic has been decreasing for a long time. With this approach, one can assess whether the target population of a clinical trial is comparable with other patient populations at different places and times. Epidemic and individual diversity may even affect trial results if patients with different progression rates experience different benefits from a treatment. By modeling the targeted populations in trials of early versus deferred antiretroviral treatment, the authors observed larger treatment benefits in trials in which rapid progressors probably predominated, compared with trials of slow progressors.
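The paper's reconstruction machinery is analytical, but the core selection effect can be illustrated with a toy simulation (invented incubation distributions, flat incidence): among patients who have reached AIDS by a given date, rapid progressors are overrepresented relative to their share of all infections.

```python
import random

random.seed(5)

YEARS = 15.0  # epidemic runs from year 0 to year 15 (the observation date)
MEAN_INCUBATION = {"rapid": 4.0, "slow": 12.0}  # hypothetical means (years)

population, aids_by_now = [], []
for _ in range(20000):
    t_inf = random.uniform(0.0, YEARS)       # infection time, flat incidence
    kind = random.choice(["rapid", "slow"])  # 50/50 mix at infection
    incubation = random.expovariate(1.0 / MEAN_INCUBATION[kind])
    population.append(kind)
    if t_inf + incubation <= YEARS:          # developed AIDS by observation
        aids_by_now.append(kind)

frac_rapid_infected = population.count("rapid") / len(population)
frac_rapid_aids = aids_by_now.count("rapid") / len(aids_by_now)
print(f"rapid share among all infected: {frac_rapid_infected:.2f}, "
      f"among AIDS cases: {frac_rapid_aids:.2f}")
```

Conditioning on having reached the late-disease stage selects for short incubation times, so the rapid share among AIDS cases rises well above its 50% share at infection; the same logic, run in reverse with a declining epidemic, produces the slow-progressor excess the abstract describes for early-stage populations.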