Measuring Patient Expectations: Does the Instrument Affect... (Medical Care)

Patients visiting their physician generally arrive with expectations for the care that they will receive. These expectations range from a desire for information or psychosocial support to expectations for specific tests or treatments. Fulfillment of patient expectations may affect visit satisfaction, 1–6 may influence health care utilization 4,6 and costs, 7,8 and can be used as an indicator of the quality of care. 9–12

Despite the importance of patients’ expectations to clinicians, policy makers, and researchers, and despite explicit calls to clarify and standardize the measurement of patients’ expectations, 13 no standardized assessment instrument exists for studying patients’ expectations. 14 In the literature, many different instruments have been used to assess expectations without regard to how the structure or content of the instrument may affect expectation measurement. The structure of expectation instruments ranges from a single open-ended question to a list of several possible expectations to an extensive list of potentially desired services.

Additionally, the wording of expectation instruments is not consistent. The most common “value” approach interprets expectations to mean expressions of patients’ desires. 13 Among instruments with such an interpretation, the wording varies. Some of these instruments ask patients what they “want” or “would like” in a visit, 1,15 while others ask what they think is “necessary.”2,5 Expectations may also be interpreted to mean expected outcomes, by asking patients what they think is likely to occur. 13 Some instruments do not specify the interpretation and simply ask about “expectations.”16

How such differences might affect measurement of patients’ expectations is not known. Different assessment tools may influence the number of reported expectations or patient satisfaction by creating expectations where none previously existed or by increasing patients’ awareness of whether their expectations were met. 2,13 Conversely, one method may be more sensitive than another in measuring true patient expectations.

We conducted this study to examine how the type of instrument used to measure patients’ expectations affects the number of reported expectations, the number of unfulfilled expectations, and subsequent patient satisfaction. We compared 2 expectation assessment instruments, similar to ones commonly used in previous studies, that differ in both structure (the number of potential expectations from which to choose) and wording of the questions (“want” vs. “necessary”). We focused on patients’ expectations for 3 categories of services: diagnostic tests, specialist referrals, and new drug prescriptions. These particular expectations are common, 1,2 are potentially costly, 8,17,18 and are targets of managed care policies designed to reduce expenses.

Methods

Subjects

The study was conducted in a VA primary care medicine outpatient clinic over a 3-month period. We recruited subjects from among patients waiting to see their physicians. We randomly selected half-day clinics from which to draw subjects. There were 10 half-day clinics per week, and each clinic had, on average, 7 physicians. Once a clinic was selected, all patients seeing a physician in that clinic were potential study subjects. Patients were identified and approached with the use of daily appointment schedules. Patients were eligible for the study if they had a scheduled appointment, had not yet seen their physician for that day, and scored at least 8 of 10 on the Short Portable Mental-Status Questionnaire. A score of 0 to 2 errors on the Short Portable Mental-Status Questionnaire indicates intact intellectual functioning. 19 Because the clinic population was overwhelmingly male (98%), we excluded women from the study.

Design

Trained interviewers (C.M.K., K.H.A.) approached scheduled patients in the clinic waiting area and requested their participation. After obtaining informed consent, the interviewers randomly assigned patients to 1 of 3 study groups. The first group was interviewed with a “short” instrument that asked about 3 general expectations for care: tests, referrals, and new medications. The short instrument is an abbreviated version of the Patient Request for Services Schedule. 3,20 The second group was interviewed with a “long” instrument that asked about expectations for tests, referrals, and new medications but included additional questions about specific expectations and nested these questions within a longer list. The 2 instruments also differed in the wording used to ask about expectations (Figure 1). The short instrument asked patients what they wanted, while the long instrument asked patients what they thought was necessary for the doctor to do. The third group served as a control and was not asked about expectations. After the visit, all subjects were interviewed with the same instrument to assess satisfaction and services received (Figure 2). We assessed satisfaction before asking about services received to minimize influencing patients’ perceptions of the visit. All data were collected face-to-face by structured interviews. The previsit and postvisit interviews each lasted, on average, 10 minutes (timed during piloting and early data collection). The Durham VA Human Subjects Research Committee approved the study.

Fig. 1: Short and long interview instruments.

Fig. 2: Study design.

Measurements

Previsit expectations were measured with the 2 instruments described above (Figure 1). The short instrument elicited a binary response for each question (“yes” or “no”). We then added the number of reported expectations to create a summary score from 0 (patients with no expectations) to 3 (patients with an expectation for a test, referral, and new medication). The long instrument elicited 1 of 4 responses for each question (“absolutely necessary,” “somewhat necessary,” “somewhat unnecessary,” and “absolutely unnecessary”). We coded items as expected when patients rated them absolutely or somewhat necessary. Items 5, 9, and 13 to 19 in the long instrument were mapped to categories (tests, referrals, and new medications) in the short instrument. For example, if a patient indicated it was absolutely necessary for the doctor to order a prostate-specific antigen test to screen for prostate cancer, the patient was coded as having an expectation for a test. Patients who endorsed multiple items within a category (eg, prostate-specific antigen, cholesterol, and blood tests) were only coded as having 1 expectation for that category. We scored expectations this way to make the scoring comparable to the short instrument. We also computed a summary score of total expectations (ranging from 0 to 3) for patients receiving the long instrument.
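The category-level scoring of the long instrument can be sketched as follows. This is a minimal illustration, not the study's actual code; the item names in the mapping are hypothetical stand-ins for the items shown in Figure 1.

```python
# Long-instrument ratings counted as an expectation.
NECESSARY = {"absolutely necessary", "somewhat necessary"}

# Hypothetical mapping of long-instrument items to the 3 categories
# (the actual instrument maps items 5, 9, and 13-19 this way).
ITEM_CATEGORY = {
    "psa_test": "test",
    "cholesterol_test": "test",
    "specialist_referral": "referral",
    "new_medication": "new_medication",
}

def score_long_instrument(responses):
    """Collapse item-level necessity ratings into a 0/1 flag per category,
    so multiple endorsed items within a category count only once,
    making the score comparable to the short instrument."""
    expected = {"test": 0, "referral": 0, "new_medication": 0}
    for item, rating in responses.items():
        if rating in NECESSARY:
            expected[ITEM_CATEGORY[item]] = 1
    # Summary score ranges from 0 (no expectations) to 3.
    return expected, sum(expected.values())

responses = {
    "psa_test": "absolutely necessary",
    "cholesterol_test": "somewhat necessary",
    "specialist_referral": "absolutely unnecessary",
    "new_medication": "somewhat necessary",
}
expected, total = score_long_instrument(responses)
# The two endorsed test items collapse to a single test expectation,
# so total == 2 here (one test expectation, one medication expectation).
```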

Expectation fulfillment was measured with a version of the Patient Services Received Schedule. 3,20 After the visit, patients were asked whether their doctor referred them to a specialist, prescribed a new medication, or performed or ordered a test. If a particular service was expected but not received, we coded that expectation as unmet. 21 In addition, we summed the total number of unmet expectations by category from 0 to 3.
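The unmet-expectation coding described above amounts to a simple comparison of the previsit and postvisit responses, sketched here under the same 0/1 category coding as before (an illustration, not the study's code):

```python
def count_unmet(expected, received):
    """A service expected before the visit but not received afterward
    is coded as one unmet expectation; expected/received map each
    category ("test", "referral", "new_medication") to 0 or 1."""
    unmet = {c: int(expected[c] == 1 and received[c] == 0) for c in expected}
    # Total unmet expectations range from 0 to 3.
    return unmet, sum(unmet.values())

expected = {"test": 1, "referral": 1, "new_medication": 0}
received = {"test": 1, "referral": 0, "new_medication": 1}
unmet, total_unmet = count_unmet(expected, received)
# Only the referral was expected but not received, so total_unmet == 1;
# the unexpected new medication does not count as unmet.
```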

We measured satisfaction in 2 ways. We used a questionnaire developed for the American Board of Internal Medicine (ABIM) and a modified version of the visit-specific questionnaire. 22,23 The ABIM satisfaction questionnaire focuses on humanistic attributes and the interpersonal skills of the physician. 24 The visit-specific questionnaire focuses on the overall clinic experience and satisfaction with the specific visit. Both measures have strong internal reliability (α = 0.96 and α = 0.87 for the ABIM and visit-specific questionnaires, respectively) and are correlated (r = 0.73) but measure different elements of satisfaction. We included both satisfaction instruments because we believed that both aspects of satisfaction could be affected by the assessment method. Furthermore, using both instruments increased sensitivity for detecting an effect of the expectation assessment instruments.

Data Analysis

The 3 groups—short instrument, long instrument, and control groups—were compared on demographic variables with a χ2 test of general association and Fisher’s exact test when cell counts were small. The proportion endorsing an expectation in the long instrument group was compared with the proportion responding “yes” to the like items in the short instrument group with a χ2 test of general association. The distribution of the number of total expectations endorsed (0, 1, 2, or 3) was compared between the 2 groups with a Cochran-Mantel-Haenszel statistic. 25 The proportion of patients with unmet expectations in the short instrument group was compared with the proportion of patients with unmet expectations in the long instrument group with a χ2 test of general association. The distribution of the number of total unmet expectations (0, 1, 2, or 3) was compared between the 2 groups with the Cochran-Mantel-Haenszel statistic.
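The group comparisons above can be reproduced with standard library routines. The sketch below uses SciPy and hypothetical cell counts (the paper's actual counts appear in Tables 2 and 3); it is meant only to show the shape of the analysis.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = instrument group (long, short),
# columns = expectation endorsed (yes, no).
table = [[80, 16],   # long instrument
         [27, 70]]   # short instrument

# Chi-square test of general association.
chi2, p, dof, expected_counts = chi2_contingency(table)

# Fisher's exact test, used when cell counts are small
# (as for some of the demographic comparisons); counts hypothetical.
odds_ratio, p_exact = fisher_exact([[3, 9], [10, 4]])
```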

To score the 2 satisfaction questionnaires, we summed the scores on the items (excellent = 1, poor = 5) and divided by the number of answered questions. Like previous studies, we found the distribution of satisfaction scores to be highly skewed toward higher satisfaction. 11 Therefore, we used the Kruskal-Wallis test of the medians to compare the distribution of satisfaction across the 3 study groups as well as to compare satisfaction scores and unmet expectations. In addition, the satisfaction measures were dichotomized by grouping individuals who rated their satisfaction “excellent” or “very good” versus all other responses. The proportion of individuals very satisfied versus those less than very satisfied was compared for both instruments with a χ2 test of general association.
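The satisfaction scoring just described can be sketched as follows. The responses are hypothetical, and the dichotomization here is approximated from the mean item score (an assumption; the study dichotomized on the rated responses themselves):

```python
# Item scoring used by both questionnaires: excellent = 1 ... poor = 5.
RATING = {"excellent": 1, "very good": 2, "good": 3, "fair": 4, "poor": 5}

def satisfaction_score(answers):
    """Mean item score over answered questions only;
    unanswered items (None) are excluded from the denominator."""
    scored = [RATING[a] for a in answers if a is not None]
    return sum(scored) / len(scored)

def is_very_satisfied(answers):
    """Dichotomization into 'excellent'/'very good' vs. all other
    responses, approximated here by the mean item score (assumption)."""
    return satisfaction_score(answers) <= RATING["very good"]

# Hypothetical responses with one unanswered item.
answers = ["excellent", "very good", None, "good", "excellent"]
score = satisfaction_score(answers)  # (1 + 2 + 3 + 1) / 4 = 1.75
```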

Results

Study Population

We approached 409 patients to participate; 290 patients (71%) completed both the previsit and postvisit interviews. The remaining 119 patients either refused (n = 81, 20%), did not complete both the previsit and postvisit interviews (n = 32, 8%), or failed the Short Portable Mental-Status Questionnaire (n = 6, 1%) (Figure 3). The mean age of the study population was 60 years; all were male, 70% were white, and most were high school graduates. Patients randomized to the 3 groups did not differ statistically on any measured demographic characteristic (Table 1).

Fig. 3: Patient accrual.

Table 1: Demographic Variables by Interview Group

Instrument Type and Expectations

Patients completing the long instrument reported more expectations for tests (83% vs. 28%, P <0.001), referrals (40% vs. 18%, P <0.001), and new medications (45% vs. 28%, P <0.001) (Table 2). The 2 groups also differed in total numbers of expectations. The long instrument group reported, on average, 2 expectations, while the short instrument group reported an average of 1 expectation (P <0.001).

Table 2: Expectations for Tests, Referrals, or New Medication by Measurement Instrument

The control group is not listed because those patients did not receive a previsit measure of expectations.

Table 3 shows the relationship between unmet expectations and the measurement instrument. Patients interviewed with the long instrument reported significantly more unmet expectations for tests (P <0.001) and referrals (P <0.05) than patients who completed the short instrument. The 2 groups, however, did not differ in the number of unmet expectations for new medications (P = 0.211). The distribution of the total number of unmet expectations was different for the 2 groups: 40% of the patients completing the long instrument reported at least 1 unmet expectation compared with 19% of the patients interviewed with the short instrument (P <0.001).

Table 3: Unmet Expectations for Tests, Referrals, or New Medication by Measurement Instrument

The control group is not listed because those patients did not receive a previsit measure of expectations.

Instrument Type, Unmet Expectations, and Satisfaction

Despite significant differences in the number of expectations elicited and the number of unmet expectations reported, satisfaction did not differ among the comparison groups. The percentage of patients in each of the 3 groups who rated their visit or physician excellent or very good did not differ significantly (P = 0.654 and P = 0.914 for the visit-specific and ABIM questionnaires, respectively) (Table 4). Table 4 also shows that satisfaction was very high for all 3 groups. The means ranged from 1.38 to 1.78 on the 2 satisfaction measures, with a range of 1 to 5 (1 = highest satisfaction). These mean values of the visit-specific and ABIM satisfaction questionnaires did not differ among the comparison groups (P = 0.952 and P = 0.736, respectively).

Table 4: Patient Satisfaction and Measurement Instrument

We also examined the relationship between unmet expectations and patient satisfaction for the comparison groups. Patients interviewed with the short instrument had, on average, no unmet expectations. Patients interviewed with the long instrument, on the other hand, had an average of 1 unmet expectation (P <0.001). Despite this significant difference, the number of unmet expectations was not related to visit-specific satisfaction (P = 0.847) or satisfaction with physician interpersonal skill (P = 0.501). Unmet expectations were not related to satisfaction even among the subset of patients with the highest number (≥2) of unmet expectations (P = 0.23 and P = 0.16 for the visit-specific and ABIM satisfaction scales, respectively). In addition, patients with unmet expectations for each specific service (test, referral, new medication) did not differ on visit-specific satisfaction (P = 0.368, P = 0.917, and P = 0.343, respectively) or satisfaction with physician interpersonal skill (P = 0.249, P = 0.791, and P = 0.806, respectively).

Discussion

This randomized trial tested 2 different commonly used instruments for assessing patient expectations and produced several findings of interest. First, the type of instrument used to assess expectations affects the number of expectations elicited and the number of unmet expectations reported. Several aspects of the 2 instruments may have accounted for the observed differences. The long instrument presented more choices, the choices of interest were nested within a longer list of expectations, and patients were asked how necessary these items were rather than whether they wanted them. Any one of these factors could account for the observed differences. This study cannot determine why one instrument elicited more expectations but certainly shows that the instrument can affect the measurement of expectations. Previous findings, along with our results, provide some insight into which factors of the instruments might account for the differences in expectations. Previous research has shown similar associations between instrument type and the number and fulfillment of expectations. For example, Kravitz et al 21 found that patients reported more expectations and more unmet expectations by structured questionnaire than by interview. This trend suggests that when expectations are assessed, more structure and more choices generate more expectations.

The second finding of interest is the lack of association between instrument type and satisfaction, despite the observed effects of the instrument on expectations. This finding, however, is consistent with previous research. Kravitz and associates 21 found that satisfaction did not differ for patients queried by questionnaire or semistructured interview. That satisfaction did not differ among the groups suggests that participation in the study did not significantly alter the behavior of the subjects.

Finally, we found no association between unmet expectations and satisfaction. There are several potential explanations for this lack of association. First, visit satisfaction may be increased merely because patients are asked about their expectations and satisfaction and thereby perceive that others recognize their concerns as important. This is unlikely given that satisfaction in the control group, which was not asked about expectations before the visit, did not differ from the other groups. A second, more plausible, explanation is that existing measures of satisfaction are not sensitive enough to detect subtle differences. Third, fulfilled expectations for tests, referrals, and new medications may not be a significant determinant of patient satisfaction. This point is controversial. 26 Previous research has shown that unfulfilled expectations are related to lower visit satisfaction. 1–6 However, that same body of literature suggests that “nonmedical” services may be the real indicators of patient satisfaction. Brody et al 5 found that patients who received “nontechnical” interventions, such as education, stress counseling, and negotiation, were significantly more satisfied than those who did not receive these same interventions. Furthermore, patients who received “technical” interventions, such as examinations, tests, medications, and nondrug therapy, were not more satisfied with their visit. In a similar study, Froehlich and Welch 27 found that the proportion of met expectations for tests was not significantly associated with satisfaction. Instead, provider humanism was the sole significant predictor of visit-specific satisfaction. A third study found that patients with the most unmet desires for services, especially services related to information, were significantly less satisfied than those with fewer unmet desires. 1

A final explanation for the lack of association between unmet expectations and satisfaction may be found in the encounter between the physicians and patients during the visit. Physicians may have responded to patients’ expectations in ways that left patients satisfied despite not receiving the expected services. This point is particularly interesting given that patients’ satisfaction with physicians and medical visits seems relatively constant and high, despite increasing managed care practices. The role of communication style and how physicians negotiate expectations and requests in the medical visit should be examined in future work.

This study has several limitations. It was conducted in a single VA general medicine clinic with a largely male population. Although we must be cautious in generalizing to other groups, our results generally are consistent with previous studies conducted on different populations. Another concern is that interviewers asking patients about their expectations and satisfaction may have altered the patients’ natural responses. This alteration may have affected satisfaction levels and expectations for services. Yet, because physicians were blinded to study participants, it is unlikely that they altered their behavior toward the patients. Furthermore, subjects in the control group, who were not asked about expectations, showed satisfaction scores no different from subjects in the other 2 groups, who were asked about their expectations. Finally, we should note that our measures of satisfaction were general measures of overall satisfaction. We did not analyze particular dimensions of satisfaction that could be related to expectations.

Our results suggest that investigators studying patient expectations must contend with an “instrument” effect when measuring the number of expectations or the number of unmet expectations. However, it does not appear that the instrument used to measure expectations will affect patient satisfaction with the medical encounter. Ultimately, the research question should guide the instrument selection. If the central goal is the measurement of expectations, the choice of instrument will affect results. If, on the other hand, the central concern is patient satisfaction, the instrument chosen should not measurably affect results.

References

1. Joos SK, Hickam DH, Borders LM. Patients’ desires and satisfaction in general medicine clinics. Public Health Rep 1993; 108: 751–759.

2. Kravitz RL, Cope DW, Bhrany V, et al. Internal medicine patients’ expectations for care during office visits. J Gen Intern Med 1994; 9: 75–81.

3. Like R, Zyzanski SJ. Patient satisfaction with the clinical encounter: Social psychological determinants. Soc Sci Med 1987; 24: 351–357.

4. Uhlmann RF, Inui TS, Pecoraro RE, et al. Relationship of patient request fulfillment to compliance, glycemic control, and other health care outcomes in insulin-dependent diabetes. J Gen Intern Med 1988; 3: 458–463.

5. Brody DS, Miller SM, Lerman CE, et al. The relationship between patients’ satisfaction with their physicians and perceptions about interventions they desired and received. Med Care 1989; 27: 1027–1035.

6. Eisenthal S, Emery R, Lazare A, et al. “Adherence” and the negotiated approach to patienthood. Arch Gen Psychiatry 1979; 36: 393–398.

7. Eisenberg JM. Physician utilization: The state of research about physicians’ practice patterns. Med Care 1985; 23: 461–483.

8. Woolf SH, Kamerow DB. Testing for uncommon conditions: the heroic search for positive test results. Arch Intern Med 1990; 150: 2451–2458.

9. Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry 1988; 25: 25–36.

10. Laine C, Davidoff F. Patient-centered medicine: A professional evolution. JAMA 1996; 275: 152–156.

11. Rosenthal GE, Shannon SE. The use of patient perceptions in the evaluation of health-care delivery systems. Med Care 1997; 35: NS58–NS68.

12. Rubin HR, Gandek B, Rogers WH, et al. Patients’ ratings of outpatient visits in different practice settings: Results from the Medical Outcomes Study. JAMA 1993; 270: 835–840.

13. Kravitz R. Patients’ expectations for medical care: an expanded formulation based on a review of the literature. Med Care Res Rev 1996; 53: 3–27.

14. Thompson AG, Sunol R. Expectations as determinants of patient satisfaction: Concepts, theory and evidence. Int J Qual Health Care 1995; 7: 127–141.

15. Zemencuk JK, Feightner JW, Hayward RA, et al. Patients’ desires and expectations for medical care in primary care clinics. J Gen Intern Med 1998; 13: 273–276.

16. Marple RL, Kroenke K, Lucey CR, et al. Concerns and expectations in patients presenting with physical complaints: Frequency, physician perceptions and actions, and 2-week outcome. Arch Intern Med 1997; 157: 1482–1488.

17. Marton KI, Sox JHC, Wasson J, et al. The clinical value of the upper gastrointestinal tract roentgenogram series. Arch Intern Med 1980; 140: 191–195.

18. Woo B, Woo B, Cook EF, et al. Screening procedures in the asymptomatic adult: Comparison of physicians’ recommendations, patients’ desires, published guidelines, and actual practice. JAMA 1985; 254: 1480–1484.

19. Pfeiffer E. A short portable mental-status questionnaire for the assessment of organic brain deficit in elderly patients. J Am Geriatr Soc 1975; 23: 433–441.

20. Like R, Zyzanski SJ. Patient requests in family practice: A focal point for clinical negotiations. Fam Pract 1986; 3: 216–228.

21. Kravitz RL, Callahan EJ, Azari R, et al. Assessing patients’ expectations in ambulatory medical practice. J Gen Intern Med 1997; 12: 67–72.

22. PSQ Project Co-Investigators. Final report on the Patient Satisfaction Questionnaire Project. Vol. 2-E-1. Washington, DC: American Board of Internal Medicine; 1989.

23. Ware JE, Hays RD. Methods for measuring patient satisfaction with specific medical encounters. Med Care 1988; 26: 393–402.

24. Boudreau D, Tamblyn R, Dufresne L. Evaluation of consultative skills in respiratory medicine using a structured medical consultation. Am J Respir Crit Care Med 1994; 150: 1298–1304.

25. Agresti A. Categorical data analysis. New York: John Wiley & Sons; 1990.

26. Kinnersley P, Stott N, Peters T, et al. A comparison of methods for measuring patient satisfaction with consultations in primary care. Fam Pract 1996; 13: 41–51.

27. Froehlich GW, Welch HG. Meeting walk-in patients’ expectations for testing. J Gen Intern Med 1996; 11: 470–474.

Keywords:

Patient satisfaction; patient expectations; veterans.

© 2001 Lippincott Williams & Wilkins, Inc.