Integrating Health Literacy with Health Care Performance Measurement

By Darren A. DeWalt, Jonathan McNeill
July 26, 2013 | Discussion Paper

Introduction

Health literacy refers to a broad set of skills that help patients understand health information, implement basic self-care activities, and navigate health care systems. These skills include reading, writing, and math, as well as the ability to comprehend spoken communication and to make appropriate care decisions. Given the complexity of the current medical environment, navigating the health care system and understanding health information often requires advanced health literacy skills.

This paper describes opportunities for health care providers to link health literacy to quality measures and integrate health literacy performance measurement into every aspect of the patient experience. We apply a systems view of the U.S. health care system to explore the potential scope of measures and to review existing and example measures that could be considered “health literacy–related.” Finally, we address the characteristics of effective health literacy-related performance measures and important considerations that will inform the measure development process.

The overall objective of this paper is to contribute to the development of performance measures designed to improve care for people with low health literacy. The target audience of this paper is individuals who seek to create performance measures that take health literacy into account.

 

Health Literacy’s Impact on Health and Care

The 2003 National Assessment of Adult Literacy (NAAL) found that 93 million adults—43 percent of the U.S. adult population—have limited literacy (Kutner et al., 2006). Studies have demonstrated that patients at all literacy levels, but particularly those with the lowest literacy skills, have difficulty understanding medication directions and warning labels (Comings et al., 2001; Davis et al., 2006a,b,c; Madlon-Kay and Mosch, 2000; Murray et al., 2007). Limited
health literacy is also associated with poor health behaviors, inadequate self-management of chronic diseases, increased hospitalization, and higher health care costs (DeWalt et al., 2004). Incorporating health literacy into the health care quality agenda is an important step in helping clinicians provide all patient populations with the resources necessary to improve their health status.

 

Why Adopt Measures that Address Health Literacy Issues?

Patients must assume a variety of responsibilities to receive high-quality health care. Consider the tasks a patient with diabetes has to perform for a routine diabetes follow-up visit with a primary care physician (see Figure 1).

 

 

Each step of the process presents potential challenges for patients with limited health literacy and corresponding opportunities for health care providers to help patients address these challenges. These opportunities are not limited to primary care settings; indeed, the broader health care system and policy environment must operate in a manner that encourages the care necessary to help a patient succeed.

Performance measures play a critical role in a comprehensive effort to improve patient care. They allow health care professionals to track the implementation of recommended interventions and monitor the resulting effects on care processes and health outcomes. Robust measurement will empower providers to identify effective strategies to improve care for people with low health literacy and to share these approaches across larger organizations and practice networks. Furthermore, the very act of collecting data to assess progress is a sign of the systematic experimentation and learning which represent the foundation of any successful quality improvement initiative.

 

Do Health Literacy-Targeted Interventions Work?

Implementing and regularly assessing health literacy interventions requires a significant commitment of resources, as does any change in clinical practice. The following studies have shown that health literacy practices can improve patient understanding and health outcomes. Although formal cost-effectiveness studies have not been performed, the authors believe that the resource use reported in many of these studies suggests the interventions can be performed at low cost and may be cost-effective in the long run (Davis et al., 1996; Ferreira et al., 2005; Pignone et al., 2005; Rothman et al., 2004; Yin et al., 2008).

 

Decrease liquid medication errors (Yin et al., 2008)

Health literacy practice: Caregivers in the intervention group were given plain-language, pictogram-based medication instruction sheets to convey information about medication name, dose, frequency, length of treatment, preparation, storage, and adherence. They also received brief 1- to 3-minute counseling/teach-back sessions. Caregivers in the control group received standard care.

Results: Caregivers in the intervention group were less likely to make errors in knowledge of dose frequency and to report incorrect medication preparation compared with caregivers who received standard care. Intervention caregivers were also more likely to use the standardized dosing instrument and to dose medications accurately.

 

Literacy and disease management program for diabetics (Rothman et al., 2004)

Health literacy practice: Patients in the intervention group received care from clinical pharmacists in a disease management team; the care included educational sessions, clinical decision making with an evidence-based algorithm, telephone reminders and assistance in overcoming specific barriers to care, and use of specific communication techniques to improve comprehension in low-literacy populations.

Results: After 12 months, intervention patients, including those with limited health literacy, were more likely to improve their diabetes control. Other studies have shown that similar management interventions for heart failure patients can reduce the rate of exacerbations, hospitalizations, and death, and decrease annual hospital costs associated with heart failure (DeWalt et al., 2006).

 

Colon cancer screening (Ferreira et al., 2005)

Health literacy practice: Health care providers in the intervention clinic attended a 2-hour workshop on guidelines for colorectal cancer screening and improving communication for patients with low literacy skills. Patients received brochures and a video about colon cancer screening, self-efficacy, and screening instructions.

Results: Within the intervention group, patients of all literacy levels were more likely to complete colon cancer screening than those who received standard care.

 

Assess effectiveness of educational materials (Davis et al., 1996)

Health literacy practice: Patients were provided with one of two brochures about the polio vaccine: a widely used Centers for Disease Control and Prevention pamphlet or a revised pamphlet with a lower-grade-level readability index.

Results: Patients provided with the revised pamphlet recorded higher comprehension scores. Patients indicated a statistically significant preference for the revised pamphlet.

In addition to the evidence supporting specific interventions, experts have recommended and defined key aspects of clinical practice that can improve care for patients with low health literacy, such as the use of educational materials that are understandable by the patients who receive them and the use of good verbal communication strategies (Abrams et al., 2007). Although we often lack specific evidence that supports the use of these techniques in every setting, these approaches may be appropriate targets for performance measurement in an improvement context. Without clear evidence, however, any measure will be scrutinized if used for public accountability, and we recommend caution before adopting measures that are not based on clear clinical evidence.

 

Health Literacy’s Role in Patient Care: A Systems View

High-quality care is the product of interactions among various levels of the health care system. Established policies, health care organizations, and delivery systems create an influential context which plays a prominent role in determining patients’ health outcomes. Effective health literacy performance measures should, we believe, be based on a systems approach to assess the implementation of health literacy interventions at all levels of patient care. Berwick developed an influential framework which describes four levels of the U.S. health care system (Berwick, 2002):

  1. The experience of patients
  2. The microsystems of care delivery
  3. The organizations that house or otherwise support these microsystems
  4. The environment of policy, payment, regulation, accreditation, and training that shape organizational action

 

This hierarchical model asserts that the quality of a change at any level of the health care system should be defined by its effect on patients’ experiences. As such, health literacy-related performance measures should be designed with the intent of helping practices and larger systems to improve care for patients with low health literacy.

This model provides a framework for specifying measures of health care quality that focus on health literacy. We will categorize measures by the level of the health care system they assess. However, performance at any level of the system could be evaluated through the performance of the levels below it. For example, the performance of the organizations that house or support microsystems can be evaluated by the experience of patients in all the microsystems they support. The following section identifies broad categories of health literacy–related measures that could be employed in each level of the health care system:

The experience of patients: Measures at this level of the care system are focused on gathering information from the patient:

  • Patient report on receipt of service
  • Assessment of patient knowledge about key health concerns
  • Patient report on satisfaction with care

 

Microsystems of care delivery: Measures at this level focus on processes carried out by the practice and documented as such:

  • Provision of education to patients
  • Provision of health care services to patients
  • Documentation of protocols and procedures to train staff on health literacy
  • Regular assessment of health literacy activities by practice leadership
  • Existence of partnerships with community organizations

 

Organizations that house the microsystems: Measures at this level focus on the health care organizations that house and support the microsystems of care delivery:

  • Strategic plans to address health literacy concerns
  • Needs assessment for relevant patient populations
  • Systems for sharing effective health literacy-related practices

 

Environment of policy, payment, regulation, accreditation, and training: Measures at this level focus on the external environment of health care delivery:

  • Financial support for care management and health literacy-related activities
  • Accreditation credit for health literacy continuing educational sessions
  • Policies and regulations that support health literacy interventions

 

Characteristics of Effective Measures

Health literacy-related performance measures should help providers and organizations assess how well they provide care that enables people to “understand and act on health information.” In this regard, measures could assess processes, outcomes, or composites. A measured process should be recommended by guidelines and preferably based on evidence that the process produces better outcomes. Outcomes should be important to patients and providers. A report on these measures (usually expressed as a proportion) should help the practice understand its performance and lead to specific steps it could take to improve performance on that measure. Performance on a measure should improve if the practice or organization improves the quality of the services it provides in that domain.
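
As a rough, hypothetical illustration of how such a proportion-based measure might be computed from practice data (the record structure, field names, and eligibility flag below are our assumptions, not part of any measure specification cited here), consider the following Python sketch:

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    """One patient visit, with hypothetical flags for eligibility and the documented process."""
    patient_id: str
    eligible: bool            # patient falls in the measure's denominator
    process_documented: bool  # e.g., teach-back or education documented at the visit

def process_measure_rate(encounters: list[Encounter]) -> float | None:
    """Report the measure as a proportion: documented process / eligible patients."""
    denominator = [e for e in encounters if e.eligible]
    if not denominator:
        return None  # the measure is undefined when no patient qualifies
    numerator = [e for e in denominator if e.process_documented]
    return len(numerator) / len(denominator)

# Example: 3 of 4 eligible patients had the process documented -> 0.75
visits = [
    Encounter("a", True, True),
    Encounter("b", True, True),
    Encounter("c", True, False),
    Encounter("d", False, False),  # not eligible, so excluded from the denominator
    Encounter("e", True, True),
]
print(process_measure_rate(visits))  # 0.75
```

Tracking this proportion over time, rather than as a single snapshot, is what allows a practice to see whether changes in the services it provides actually move the measure.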

 

 

Example Measures

Appropriately designed performance measures can provide clinicians and systems with useful data to evaluate and improve the effectiveness of health care delivery at the level of patient care. The following example measures are taken from a variety of current performance measures (e.g., Physician Consortium for Performance Improvement, Joint Commission on Accreditation of Healthcare Organizations [JCAHO], National Committee for Quality Assurance) or were developed by us to demonstrate various approaches to assessing the quality of care related to health literacy. We do not endorse these measures, but present them as examples of how different groups have considered this issue. Each measure has limitations, and those limitations are discussed in more detail in the Appendix.

 

The Experience of Patients

Measures at this level of the care system are focused on gathering information from the patient.

 

 

 

Microsystems of Care Delivery

Measures at this level focus on processes carried out by the practice and documented as such.

 

 

 

 

Organizations That House the Microsystems of Care Delivery

Measures at this level focus on the health care organizations that house and support the microsystems of care delivery.

 

 

Environment of Policy, Payment, Regulation, Accreditation, and Training

Measures at this level focus on the external environment of health care delivery.

 

 

Important Considerations

Specific v. Generic Measures

When specifying a performance measure, one of the most challenging tasks is defining the denominator of patients for whom the process or outcome applies. We can imagine that some health literacy-related performance measures may apply to all patients, regardless of condition and disease status (e.g., “In the last 12 months, how often did this doctor give you all the information you wanted about your health?”). The advantage of a broad or generic measure is that it captures the reality that clinicians need to customize care for everyone and provide information tailored to each patient’s needs. No specific measure can adequately capture the breadth of what occurs in the health care setting. The more specific a process or outcome becomes, the less it reflects the totality of care in a practice or even for a specific patient. For example, the question “In the last 12 months, how often did this doctor ask you to describe how you were going to follow these instructions?” can only apply to those patients for whom instructions were given in the past 12 months. For those patients who have had contact with the physician only one or two times, there is not sufficient experience to adequately respond to a four- or six-point Likert-type scale. All clinicians will reasonably worry about being held accountable to such an assessment unless the denominator can be specified as “only those patients for whom I provided instructions on several occasions in the past year.” This denominator is difficult to specify in any dataset. If specific denominators cannot be defined, clinicians will have trouble taking action to improve performance on a measure.

As measure denominators become more specific, clinicians will find them more actionable and acceptable. However, such measures begin to apply to a select subset of the entire practice population. For example, a diabetes-specific measure evaluating a patient’s knowledge about the appropriate response to a hypoglycemic event is meaningful to clinicians who agree that a subset of their patients should demonstrate this knowledge. However, if this select group of patients represents a small percentage of the whole practice, then the measure does not address the quality of care for a vast majority of patients. We found that, in most cases, evidence is strongest for focused measures. Very few medical interventions apply to everyone in a measurable way (immunizations and smoking cessation assessment and counseling notwithstanding). When considering health literacy-related measures, we are often focused on specific educational objectives that are very important for health outcomes. Most of those situations occur in the context of a specific illness or clinical situation. As we broaden the population of measurement, the denominator becomes so general that the measure is no longer informative. For example, a measure that asks, “Did you provide the patient with all the information they needed?” has no way to control for whether the information was appropriate and actionable. Rather, a question that asks, “Did you tell a patient on insulin how to know when their sugar was low and what to do about it?” approaches the degree of specificity required for meaningful assessment.
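
To make the denominator tradeoff concrete, the hypothetical Python sketch below restricts the denominator to patients on insulin before computing the share who know how to respond to a low blood sugar; the field names and data layout are illustrative assumptions rather than an existing measure specification.

```python
def hypoglycemia_knowledge_rate(patients: list[dict]) -> float | None:
    """Specific measure: of patients on insulin (the denominator), the share who
    can state how to recognize and treat low blood sugar (the numerator)."""
    denominator = [p for p in patients if p.get("on_insulin")]
    if not denominator:
        return None  # undefined when no patient in the practice is on insulin
    numerator = [p for p in denominator if p.get("knows_hypoglycemia_response")]
    return len(numerator) / len(denominator)

# A generic alternative ("did the patient get all the information they needed?")
# would place every patient in the denominator, but the result would say little
# about whether the information provided was appropriate and actionable.
practice = [
    {"on_insulin": True,  "knows_hypoglycemia_response": True},
    {"on_insulin": True,  "knows_hypoglycemia_response": False},
    {"on_insulin": False, "knows_hypoglycemia_response": False},
]
print(hypoglycemia_knowledge_rate(practice))  # 0.5
```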

 

Data Sources: Patient-Reported v. Physician-Reported v. Administrative Claims Data

Each potential source of data for health literacy performance measures has advantages and disadvantages that may inform the measure development process.

  • Patient-reported data provide direct insight into patient opinions and understanding and are the richest source of information about medical care. This information is crucial to achieving the overall goal of improving health outcomes for patients with low health literacy. However, patient responses must be interpreted with caution since they can be affected by a number of external factors that do not reflect the quality of care provided by the clinician. More importantly, collecting data directly from patient reports can be expensive.
  • Physician-reported data capture health care providers’ actions and perceptions of health literacy interventions. This information can track clinical processes and provide actionable data for improvement. Health care providers may be more likely to trust data collected from self-reported instruments. However, physician-reported measures may incentivize providers to complete recommended interventions by “checking the box” and may fail to capture the quality of the intervention (e.g., the quality of patient-physician communication). This limitation may render many of these measures problematic from an accountability perspective. However, they can be extremely valuable for tracking the implementation of a new program that has been designed to provide high-quality services.
  • Administrative claims data can often be collected on a substantial proportion of the patient population with minimal adjustment to current clinical workflow other than learning appropriate coding. However, these data may fail to capture the quality of interventions and may not contain enough specific information to inform health literacy quality improvement efforts. It may be advisable to use administrative claims data to capture patient demographic information and health outcomes. But, we have not identified examples where administrative data adequately address health literacy concerns.
  • Organizational characteristics can often be described with minimal adjustment to the existing clinical workflow and may be collected with fewer resources than other types of data. However, allowing organizations to “check the box” once they have implemented desired structures or processes may fail to capture the quality of the interventions and the corresponding effect on the patient experience.

 

In summary, patient- and physician-reported measures will be the most reliable measures from an accountability perspective, but have problems of their own. We believe any source of data could provide useful information, but measure creators should pay close attention to the potential limitations and try to mitigate them with careful crafting of the numerators and denominators.

 

Accountability

Assigning accountability in the context of health literacy can be challenging. For many health care performance measures, accountability for accomplishment of the desired process and outcome is often shared between the clinician or the institution and the patient. To reach the same outcome, a person with low health literacy may need more attention or service than someone with higher health literacy. For example, a person with lower health literacy may need more explanation of self-care activities and reinforcement to develop the desired self-management capabilities. In cases like this, we must determine whether we hold clinicians and systems responsible for achieving the same outcome regardless of circumstances or if we hold them responsible for documenting a process (education) regardless of whether it achieves the desired outcome (the latter, in effect, allowing a disparity in outcome for those who have low health literacy).

To achieve our aims of improving care for people with low health literacy, we prefer to hold individuals and institutions accountable for achieving the desired outcomes rather than documenting a recommended process. At the policy level, rewarding close attention to the outcome may help to mitigate some of the costs associated with achieving better outcomes. However, practices caring for more people with low health literacy have a greater chasm to close than others and, as a result, we expect they would require more resources.

We believe that, although accountability is critically important, if presented in a “high-stakes environment,” it may breed defensiveness and, worse, gaming of the system. Examples of “high-stakes” accountability are pay-for-performance or public reporting. When such consequences are at stake, the focus is on attaining the measure goal rather than learning what the measure can tell us about health care. At this point in the development of health literacy measurement, we believe it is more important for clinicians and health systems to focus on the measures to understand the health care system than to jump into high-stakes accountability.

 

Processes and Outcomes

We consider the act of providing a service a process. When a process is recorded, it is difficult to ensure that the process was done in a way that was useful for any or all people deemed appropriate for receipt. This difficulty will always limit the usefulness of “documentation of process” measures. For health literacy-related measurement, the process will often involve providing some specified education or communication strategy. When documented by the provider, such measures are more useful for internal quality improvement projects than for accountability or public reporting. For a documentation requirement to work, the institution must believe in the importance of that process and take steps to make the process as helpful as possible. For purposes of accountability or public reporting, institutions that do not subscribe to the importance of the item can easily provide useless education but check a box (e.g., the Joint Commission on Accreditation of Healthcare Organizations heart failure education measure [The Joint Commission, 2009]). Another strategy is to require documentation of verifiable educational strategies (e.g., using a video proven to be effective, documenting patient understanding). Although they require more specific documentation than other measures, these strategies still do not ensure the provision of appropriate education or communication.

In the context of health literacy-related performance measurement, patient knowledge should be considered an outcome. Knowledge is certainly not a health outcome, but is an appropriate outcome of care that can be measured (like satisfaction). To measure knowledge, the patient would need to be tested on specific content. Many clinicians and systems will not appreciate accountability for whether their patients can actually answer knowledge questions correctly. However, if those questions (knowledge domains) are clearly related to health outcomes and should be known by most patients in a group (e.g., how to treat hypoglycemia for anyone on insulin), perhaps it is the only way to ensure the best care for the patient. Most knowledge domains, however, are not so clear-cut or so directly tied to health outcomes. Additionally, some of the accountability for knowledge probably lies with the patient.

Patient assessment of the adequacy of education or communication is yet another perspective. Such an assessment, like the Health Literacy Consumer Assessment of Healthcare Providers and Systems (HL CAHPS®) (AHRQ, 2009) represents a combination of patient satisfaction and patient documentation that a specific type of communication occurred. At this point, it remains unclear whether the results of such surveys can lead to improvements in care and, subsequently, higher scores on the surveys. By themselves, such questions cannot ensure that a given patient received the best care.

 

Appropriate Role of Performance Measurement

Any effort to implement performance measurement can lead to unintended consequences. We believe that individual health care providers want to believe they are providing the best possible care for each patient they see. Measures should draw upon that desire to help providers achieve that goal. Conversely, if measures lead to behaviors and resource use that distract from the best possible care of the patient, the whole process is undermined. Because there are very few perfect measures (in fact, we have yet to come across one), using measures for high-stakes accountability can become problematic.

All performance measures should be based on solid empirical evidence that the process or intermediate outcome is related to an important health outcome. At the level of accountability (for payment or public reporting), it is important that the performance measure closely fit the evidence in question. For example, a measure that asks if education occurred cannot adequately assess the quality of that education or whether it occurred at a standard as good as or better than what was tested in studies to derive empirical evidence. As such, a measure of education documentation may end up crediting interventions that are not evidence-based. In the area of health literacy, because of the relatively small numbers of tested interventions, and the complexity of many of the interventions, it may be difficult to specify measures closely enough to ensure meaningfulness when they are implemented for public accountability. Over time, we expect this to change with emerging evidence. Some of this evidence can come from associating use of pilot performance measures with improved health outcomes.

The most appropriate use of many performance measures is to drive internal quality improvement processes without regard to external accountability. A number of measures can be created to drive such processes, and such measures should still reflect the best medical evidence. However, the specification of the measure can focus on documentation and implementation and allow the organization to implement other aspects of quality control. For example, an organization may take the time to clearly specify the educational intervention so that it matches the research evidence and to provide sufficient training and auditing to ensure adequate implementation. The performance measure can then be used to ensure that all patients receive the intervention.

Any process of performance measurement takes resources to implement. As such, evidence that a measure is interpretable and actionable can help convince providers and systems to implement such measures. Often, because of the expense, institutions will choose to use administrative data or physician-reported data because they require fewer resources. Unfortunately, these choices generally lead to less interpretable information. So, when creating and advocating for performance measures, we must bear in mind the cost of collecting the data relative to the amount of benefit that our patients receive.

 

An Approach to Developing Actionable Health Literacy–Related Performance Measures

To improve the level of acceptability and usefulness of a performance measure, we recommend using the Model for Improvement to organize the process (Langley et al., 2009). The Model for Improvement starts with three questions:

  1. What are we trying to accomplish?
  2. How will we know a change is an improvement?
  3. What changes will result in an improvement?

 

Measure developers are answering the second question: “How will we know a change is an improvement?” To adequately answer this question, the measure developer must specify what they think is the appropriate answer to the first question about organizational goals. Historically, this is based on clinical guidelines that tell us what we are supposed to do (or accomplish) for patients. In the context of health literacy, we have two related sources of clinical guidelines. Experts have suggested several strategies to improve care for patients with low health literacy in general (Abrams et al., 2007), but most of those recommendations have not been individually evaluated in randomized clinical trials. Some have observational data supporting them (such as the teach-back method); most are expert opinion. Another source of guidelines is the recommended information needs of patients in specific clinical situations (e.g., a patient with asthma must know how to use an inhaler, and a patient with prostate cancer should know the options for treatment). Some of these knowledge outcomes are related to health outcomes (such as knowing how to use inhalers), while others are considered ethical requirements in the era of autonomy (such as knowing treatment choices). Unfortunately, current guidelines tend to assume that patients are always given the information they need. The field of health literacy research has shown this to be a faulty assumption, and we strongly encourage guideline developers to specify certain information and processes as “need-to-know” and “need-to-do” for patients in each clinical setting. Developers of health literacy–related measures must consider the source of the guideline and address why a specific information outcome is important.

Once the aim is set (based on guidelines), the measure developer can answer the question “How will we know a change is an improvement?” The answer to this question will inform how the organization assesses, or measures, its progress. Measure developers want their measure to answer this question in a way that is useful for the organization trying to improve. For example, an organization may want to use the teach-back method with all patients when appropriate. For this aim, the organization may want to use the HL CAHPS® (AHRQ, 2009) question in a survey of its patients: “In the last 12 months, how often did this doctor ask you to describe how you were going to follow these instructions?” This measure does not currently specify when use of the teach-back method is appropriate or how it is best carried out. An organization may start by measuring current performance and strive to improve, on the assumption that it has not yet reached optimal performance (even without knowing what optimal performance is). But the organization may want some guidance on determining “appropriateness” if it can collect more specific information about the encounter.

Another possible aim and measure would be to “ensure that all patients understand their medicines and are taking them appropriately.” To accomplish this, an organization may choose to measure provider documentation of medication review. It may want to become more specific and say “medication review with all pill bottles brought to the clinic.” It is easy to see, with this measure, how it would help improve the process in the practice, but also how it could easily be “gamed” with a check-box system if it is purely clinician-reported. Perhaps each patient should be asked in follow-up: “Did someone in the practice review all of your medications with you to ensure you understand your medicines?”

We believe that measure developers should walk through the appropriate steps to understand practices’ key objectives and to ensure that the information obtained from performance measurement will adequately answer the question of whether a change has resulted in an improvement.

 

After the Measure Is Developed

We strongly encourage further testing and refinement of measures after they are developed. All measures have error compared with the truth about optimal medical care. In this regard, we need to test that we are measuring what we want to measure. Validity studies can ensure a reliable process for collecting the data on the measure and can document that the measure (such as a patient report of whether education occurred) actually captures what happens (such as what the provider said in the encounter). Another important aspect of a measure is that performance can improve if appropriate steps are taken by the health system or provider. Validity studies that conduct an intervention and demonstrate improvement on a performance measure and associated health outcomes may be the most powerful strategy to solidify validity.

If one accepts that measures need such validity testing before engaging in high-stakes accountability (such as public reporting or pay for performance), it allows us to start testing new measures now. Encouraging health care systems and providers to engage in testing the measures can help demonstrate the value of incorporating health literacy-related measures into clinical care.

 

Summary

The prevalence of low health literacy represents a multifaceted challenge for patients, providers, and health care systems. We believe that each of these stakeholders should acknowledge the crucial role of performance measurement in the comprehensive effort to improve care for people with low health literacy. Effective measures should encompass all levels of the health care system and track processes and outcomes that are important to patients and providers. When developing measures, it is critical to consider the tradeoffs between generic measures and more focused, disease-specific measures. Narrowly defined measures often provide the most actionable information, yet may be applicable to only a subset of the entire patient population.

Patient-reported and physician-reported measures demonstrate the most promise in addressing health literacy concerns, but other data sources should be considered. Concerns about external accountability and the limitations of existing evidence should not deter the implementation of performance measurement for internal quality improvement initiatives. Ultimately, we believe that the increased adoption of health literacy–related performance measures can accelerate our understanding of clinical interventions to improve patient outcomes.

 

APPENDIX

Specifications and Discussion of Example Measures

THE EXPERIENCE OF PATIENTS

 

Example 1:

CAHPS® Item Set for Addressing Health Literacy (Agency for Healthcare Research and Quality, 2009)

HL 13: In the last 12 months, how often did this doctor ask you to describe how you were going to follow these instructions?

 

 

Rationale for the measure:
The AMA recommends use of the teach-back method to confirm patient understanding.

 

Accountable stakeholders:
Physicians and other health care providers tasked with providing patients with information on medication regimens, behavioral modifications, follow-up visits, and other self-care activities.

 

Data source:
Patient-reported survey results.

 

Advantages of measure:
The measure evaluates patients’ perception of health care providers’ utilization of the teach-back method. Patients who believe their physician effectively communicates health information may be more likely to engage in appropriate self-care activities. The measure is actionable by the physician.

 

Disadvantages of measure:
This patient-reported measure must be interpreted with caution. Responses may be affected by patients’ opinion of the physician and other external factors more than actual utilization of the teach-back method. Furthermore, depending on each patient’s current health status and treatment needs, the teach-back method may not be appropriate for all patient visits. In addition, if a patient sees a doctor 1-3 times per year, how does he or she decide what option to pick between “never” and “always”? As a result, a clinician does not have any rational target for what proportion of patients should respond in the affirmative on this item (or how affirmatively they should respond). Since the teach-back method is a process designed to confirm patient understanding, it may be more useful to more directly evaluate patient understanding. This approach would allow providers to determine which strategies are most effective in increasing patient understanding. Finally, collecting data directly from patient reports can be expensive.

 

Example 2:

Care Transitions Measure (CTM-15)© (Coleman, 2006)

7. When I left the hospital, I had a readable and easily understood written plan that described how all of my health care needs were going to be met.

 

 

Rationale for the measure:
Patients should be provided with easily understood instructions to facilitate ongoing care.

 

Accountable stakeholders:
All health care providers tasked with providing patients with information on ongoing health care needs, particularly at discharge.

 

Data source:
Patient or caretaker survey results.

 

Advantages of measure:
The measure evaluates patients’ perception of quality of information provided by caretakers. The measure reflects data on the entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician. The measure can be used for multiple conditions.

 

Disadvantages of measure:
The measure is reported by patients and must be interpreted with caution. In addition, the measure is not disease-specific. Utility would be enhanced if measure were tied to health outcomes. The measure captures patient’s assessment of quality of discharge instructions but does not provide specific guidance to improve patient care. Collecting data directly from patient reports can be expensive.

 

Example 3:

Care Transitions Measure (CTM-15)© (Coleman, 2006)

11. When I left the hospital, I was confident I could actually do the things I needed to do to take care of my health.

 

 

Rationale for the measure:
Patients will practice better self-care activities if provided with all information necessary to generate confidence in self-management abilities.

 

Accountable stakeholders:
All health care providers tasked with providing patients with self-management information.

 

Data source:
Patient or caretaker survey results.

 

Advantages of measure:
The measure evaluates patients’ perception of the quality of information provided by caretakers. The measure reflects data on the entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician. The measure can be used for multiple conditions.

 

Disadvantages of measure:
The measure is reported by patients and may be impacted by external factors. In addition, the measure is not disease-specific. Utility would be enhanced if measure were tied to health outcomes.

 

Example 4:

Care Transitions Measure (CTM-15)© (Coleman, 2006)

12. When I left the hospital, I had a readable and easily understood written list of the appointments or tests I needed to complete within the next several weeks.

 

 

Rationale for the measure:
Patients are more likely to follow the recommended care plan if they are provided with easily understood information about ongoing care.

 

Accountable stakeholders:
All health care providers tasked with providing patients with self-management information.

 

Data source:
Patient or caretaker survey results.

 

Advantages of measure:
The measure evaluates patients’ perception of quality of information provided by caretakers. The measure reflects data on the entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician. The measure can be used for multiple conditions.

 

Disadvantages of measure:
The measure is reported by patients and may be impacted by external factors. In addition, the measure is not disease-specific. Utility would be enhanced if measure were tied to health outcomes. The resources necessary to achieve improvement in the measure may vary by patients’ health needs.

 

Example 5:

Care Transitions Measure (CTM-15)© (Coleman, 2006)

14. When I left the hospital, I clearly understood how to take each of my medications, including how much I should take and when.

 

 

Rationale for the measure:
Patients’ understanding of medication regimen is an important aspect of self-management.

 

Accountable stakeholders:
All health care providers tasked with providing patients with self-management information.

 

Data source:
Patient or caretaker survey results.

 

Advantages of measure:
The measure evaluates patients’ perception of quality of information provided by caretakers. The measure reflects data on entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician. The measure can be used for multiple conditions.

 

Disadvantages of measure:
The measure is reported by patients and may be impacted by external factors. In addition, the measure is not disease-specific and may not reflect patient understanding of how to respond to a health episode (see Example 6 for contrast). Utility would be enhanced if measure were tied to health outcomes. Difficulty of improving measure may vary by patients’ health needs.

 

Example 6:

Created for illustration purposes

What should you do if you have a low blood sugar (below 60 mg/dl)?

 

 

Rationale for the measure:
Patient understanding of self-care activities is a critical consideration when managing chronic conditions.

 

Accountable stakeholders:
Physicians and other health care providers tasked with providing patients with self-management support.

 

Data source:
Patient-reported survey results.

 

Advantages of measure:
The measure evaluates patients’ understanding of important self-care activities directly related to health outcomes such as glycemic control and hospitalization. By evaluating patient knowledge, this measure provides an indication of the effectiveness of patient education efforts. This measure is also disease-specific and applicable to most diabetic patients, regardless of current health status and treatment needs. Similar measures could be developed for other conditions.

 

Disadvantages of measure:
Responses may be affected by patients’ education level, preexisting health knowledge, and other external factors. Conversely, it is rational to consider the percentage of patients in the denominator for whom it is acceptable that they not know the answer to the question. Although the measure is actionable by health care providers, many factors must be addressed to enhance patient understanding of self-care activities. Further, in the current form, the item must be scored as correct or incorrect by some mechanism (e.g., multiple-choice or scoring open-ended responses).

 

The Microsystems of Care Delivery

 

Example 7:

Prostate Cancer: Physician Performance Measurement Set© (Physicians Consortium for Performance Improvement, 2007)

Measure #4: Treatment Options for Patients with Clinically Localized Disease

 

 

Rationale for the measure:
To enable each prostate cancer patient with clinically localized disease to make an informed choice among options for primary therapy, patients should receive counseling on at least the four interventions listed in this measure. Additional treatment options may be offered, but fewer data are available to support their effectiveness.

 

Accountable stakeholders:
All health care providers tasked with providing patients with information on therapy options.

 

Data source:
Administrative claims data or reviews of electronic or paper medical records. Based on whether the clinician indicates that he or she provided counseling on the aforementioned treatment options (not how he or she did it or whether it was understood).

 

Advantages of measure:
The measure evaluates percentage of patients that were provided with the information necessary to make an informed decision among options for primary therapy. The measure reflects data on entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician. The measure is also disease-specific and focuses on treatment options supported by extensive research.

 

Disadvantages of measure:
The measure is binary (“counseling received” or “counseling not received”) and, as a result, does not specify the quality of counseling provided to patients. In addition, the measure is reported by care providers and does not capture any discrepancies that may exist between provider and patient perceptions of communication.

 

Example 8:

Care Transitions: Performance Measurement Set (Physicians Consortium for Performance Improvement, 2009)

Measure #1: Reconciled Medication List Received by Discharged Patients

 

 

Rationale for the measure:
The Institute of Medicine (IOM) estimated that medication errors in inpatient and outpatient settings harm 1.5 million people each year in the United States, at an annual cost of at least $3.5 billion. Many medication errors (approximately 60 percent in one inpatient study [Rozich and Resar, 2001]) occur during times of transition, when patients receive medications from different prescribers who lack access to patients’ comprehensive medication lists. Providing patients with a comprehensive, reconciled medication list at each care transition (e.g., inpatient discharge) may improve patients’ ability to manage their medication regimen properly and reduce the number of medication errors.

 

Accountable stakeholders:
All health care providers tasked with providing patients with information on medication regimen at discharge, including the physician and clinical staff involved with care transitions (e.g., nursing staff).

 

Data source:
Administrative claims data or reviews of electronic or paper medical records. Based on whether the clinician indicates that he or she provided a reconciled medication list at discharge (not how he or she did it or whether it was understood).

 

Advantages of measure:
Evaluates percentage of patients who were provided with information that may improve their ability to manage their medication regimen properly and reduce the number of medication errors. The measure reflects data on the entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician.

 

Disadvantages of measure:
The measure is binary (“medication list received” or “medication list not received”) and, as a result, does not specify the quality of information provided to patients. In addition, the measure is reported by care providers and does not capture the patient’s understanding of the reconciled medication list and its importance for future medical care.

 

Example 9:

Specifications Manual for National Hospital Quality Measures (The Joint Commission, 2009)

HF 1: Discharge Instructions

 

 

Rationale for the measure:
Patient noncompliance with diet and medications is an important reason for changes in clinical status. Health care professionals should ensure that patients and their families understand their dietary restrictions, activity recommendations, prescribed medication regimen, and the signs and symptoms of worsening heart failure.

 

Accountable stakeholders:
All health care providers tasked with providing patients with information on care activities at discharge, including the physician and clinical staff involved with care transitions (e.g., nursing staff).

 

Data source:
Administrative claims data or reviews of electronic or paper medical records. Based on whether the clinician indicates if he or she provided discharge instructions (not how he or she did it or whether it was understood).

 

Advantages of measure:
The measure evaluates the percentage of patients that were provided with information and education which may improve patients’ ability to manage their condition. The measure reflects data on entire patient population. Data elements required for the measure can be easily captured and the measure is actionable by the physician.

 

Disadvantages of measure:
The measure is binary (“educational materials received” or “educational materials not received”) and, as a result, does not specify the quality of information provided to patients. In addition, the measure is reported by care providers and does not capture the patient’s understanding of the discharge instructions and their importance for future medical care.

 

Note: The Specifications Manual for National Hospital Inpatient Quality Measures (Version 3.0b, August, 2009) is the collaborative work of the Centers for Medicare & Medicaid Services and The Joint Commission (The Joint Commission, 2009). The Specifications Manual is periodically updated by the Centers for Medicare & Medicaid Services and The Joint Commission. Users of the Specifications Manual for National Hospital Inpatient Quality Measures must update their software and associated documentation based on the published manual production timelines.

 

Example 10:

Health Literacy Universal Precautions Toolkit (DeWalt et al., 2010)

Tool 8. Using the Brown Bag Review: Verifying Patient Medications

 

 

Rationale for the measure:
This process will help practices improve communication about medications between patients and clinical staff. This process has the potential to enhance patient understanding and health outcomes.

 

Accountable stakeholders:
Practice members responsible for self-management activities.

 

Data source:
Documentation in the patient medical record indicating whether or not a medication review occurred at the visit. Identify the percent of patients who had a medication review completed.

 

Advantages of measure:
The measure evaluates the percentage of patients who were provided with information on medications. The measure reflects data on the entire patient population. Data elements required for the measure can be easily captured, and the measure is actionable by the physician and other care providers. The measure can also encourage the practice to consistently assess patients’ understanding of medication regimens.

 

Disadvantages of measure:
The measure is binary (“review occurred” or “review did not occur”) and, as a result, does not specify the quality of counseling provided to patients. In addition, the measure is reported by care providers and does not capture any discrepancies that may exist between provider and patient perceptions of communication.

Note: see Patient-Centered Medical Home©, Standard 3: Care management, Element D

 

Example 11:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations should ensure that staff at all levels and across all disciplines receive ongoing education and training in health literacy-related topics (adapted from Standard 3)

 

 

Rationale for the measure:
Implementing health literacy universal precautions in a practice requires that all staff members are aware of the challenges associated with low health literacy, know how it affects patients, and consistently work to improve communication.

 

Accountable stakeholders:
Practice leadership responsible for staff training.

 

Data source:
Organization reported results and/or records of number of staff trained.

 

Advantages of measure:
The measure evaluates an organization’s commitment to raising awareness about health literacy among staff.

 

Disadvantages of measure:
Responses may not reflect quality of training and the resulting effect on staff members’ knowledge and action. Training may not be standardized across practices. It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Example 12:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations must make available easily understood patient-related materials and post signage in the languages of the commonly encountered groups and/or groups represented in the service area (Standard 7)

 

 

Rationale for the measure:
Organizations should convey educational and other information in a manner that helps patients with low health literacy navigate the complex health care environment.

 

Accountable stakeholders:
Practice members responsible for practice signage and selection of patient educational materials.

 

Data source:
Organization-reported results, review of signage, and available educational materials.

 

Advantages of measure:
The measure evaluates an organization’s progress on making the practice more accessible to patients with lower health literacy.

 

Disadvantages of measure:
Responses may not reflect the quality of available signage and educational materials. Without a standardized review process, it may also be difficult to assess a practice’s implementation. It may be difficult to directly associate this measure with health outcomes or the patient experience.

Note: see Patient-Centered Medical Home©, Standard 4: Patient self-management support, Element B; (National Committee for Quality Assurance, 2008) Health Literacy Universal Precautions Toolkit (DeWalt et al., 2010)

 

Example 13:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations should develop participatory, collaborative partnerships with communities and utilize a variety of formal and informal mechanisms to facilitate community and patient/consumer involvement in designing and implementing health literacy-related activities (adapted from Standard 12)

 

 

Rationale for the measure:
Practices and other organizations should help patients utilize community resources to improve health outcomes.

 

Accountable stakeholders:
Practice members responsible for community engagement and self-management activities.

 

Data source:
Organization-reported efforts to collaborate with community resources, patient-reported utilization of those resources, number of referrals made for eligible patients.

 

Advantages of measure:
The measure evaluates an organization’s progress on utilizing resources outside of the practice environment.

 

Disadvantages of measure:
Responses may not reflect quality of collaborations or extent of effort. Context of practice environment (such as the availability of community resources) may largely determine performance on this measure. It may be difficult to directly associate this measure with health outcomes or the patient experience.

Note: See Patient-Centered Medical Home©, Standard 4: Patient self-management support, Element B; (National Committee for Quality Assurance, 2008) Health Literacy Universal Precautions Toolkit (DeWalt et al., 2010)

 

Example 14:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations should conduct initial and ongoing organizational self-assessments of health literacy-related activities and are encouraged to integrate health literacy-related measures into their internal audits, performance improvement programs, patient satisfaction assessments, and outcomes-based evaluations (adapted from Standard 9)

 

 

Rationale for the measure:
The measure is designed to provide a practice with a method to assess how well the organization is meeting the needs of patients in different areas of the practice. This tool may help the practice identify strengths, barriers, and opportunities for improvement as well as provide baseline data for future assessment of the selected interventions.

 

Accountable stakeholders:
Practice members responsible for health literacy interventions and process changes.

 

Data source:
Organization-reported efforts to assess health literacy-related activities.

Advantages of measure:
Measure indicates an organization’s commitment to health literacy interventions and the role of health literacy in patient outcomes. Practices that complete regular self-evaluation may be more likely to demonstrate continuous improvement.

 

Disadvantages of measure:
The measure does not indicate quality and extent of self-assessment. It may be important to link the measure to a list of recommended interventions. It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Organizations that House or Otherwise Support these Microsystems

Example 15:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations should maintain a current demographic, cultural, and epidemiological profile of the community as well as a health literacy needs assessment to accurately plan for and implement services that respond to the characteristics of the service area (adapted from Standard 11)

 

 

Rationale for the measure:
The purpose of this standard is to ensure that health care organizations obtain baseline data and update the data regularly to better understand their communities, and to accurately plan for and implement services that respond to health literacy-related needs of the area.

 

Accountable stakeholders:
Organizational leadership responsible for resource allocation and overall strategy.

 

Data source:
Organization-reported results and/or dissemination of relevant data.

 

Advantages of measure:
The measure evaluates the availability of information that may help practices, practice networks, and other organizations address health literacy-related issues.

 

Disadvantages of measure:
The measure does not capture the quality or functionality of the available data. An organization’s ability to meet the measure may depend on context (e.g., whether the practice is a member of a larger network or health system). It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Example 16:

National Standards for Culturally and Linguistically Appropriate Services in Health Care (Office of Minority Health, 2001)

Health care organizations should develop, implement, and promote a written strategic plan that outlines clear goals, policies, operational plans, and management accountability/oversight mechanisms to provide health literacy-related services (adapted from Standard 8)

 

 

Rationale for the measure:
The purpose of this standard is to ensure that health care organizations define and officially recognize a comprehensive health literacy strategy.

 

Accountable stakeholders:
Organizational leadership responsible for resource allocation and overall strategy.

 

Data source:
Organization-reported results and/or dissemination of the relevant strategic plan.

 

Advantages of measure:
The measure evaluates the presence of a strategic plan to implement health literacy-related interventions at the organization-wide level. If the plan results in changes in care delivery, the measure could be an important precursor of improved health outcomes for patients with low literacy.

 

Disadvantages of measure:
The measure does not capture the quality or degree of implementation of the strategic plan. It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Environment of Policy, Payment, Regulation, Accreditation, and Training that Shape Organizational Action

Example 17:

Created for illustration purposes

Are organizations and/or microsystems compensated for care management and other health literacy-related interventions?

 

 

Rationale for the measure:
Providing practitioners with the resources to implement health literacy-related interventions will increase the availability of these interventions, thereby improving the quality of care for patients with low health literacy.

 

Accountable stakeholders:
Policymakers and organizational leadership responsible for payment structures.

 

Data source:
Organization-reported payment structures and/or public records.

 

Advantages of measure:
The measure evaluates the availability of compensation and/or incentive structures that encourage the adoption of health literacy–related interventions. Reporting on this measure may call attention to the importance of providing practitioners with the resources necessary to address all patients’ needs.

 

Disadvantages of measure:
The measure does not capture the quality or degree of implementation of the compensation scheme. The health literacy-related resource demands of individual practices and microsystems may vary, thereby undermining support for the measure. It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Example 18:

Created for illustration purposes

Are health literacy continuing education sessions counted toward practitioners’ accreditation requirements?

 

 

Rationale for the measure:
Providing practitioners with an incentive to learn more about health literacy–related issues will increase awareness about providing high-quality care for patients with low health literacy.

 

Accountable stakeholders:
Policymakers, professional organizations.

 

Data source:
Organization-reported accreditation requirements and/or public records.

 

Advantages of measure:
The measure evaluates the inclusion of health literacy in accreditation requirements. This measure may support a strategy to increase awareness of health literacy on a large scale.

 

Disadvantages of measure:
The measure does not capture the quality or degree of implementation of the accreditation requirements. It may be difficult to directly associate this measure with health outcomes or the patient experience.

 

Example Measures Derived from Clinical Trial-Based Evidence

The following studies have shown that health literacy-related practices can improve patient understanding and health outcomes while reducing costs.

Example 1:

 

 

Study design: Randomized controlled trial

 

Setting: Bellevue Hospital Center in NYC (public hospital), pediatric emergency services.

 

Eligibility criteria: Primary caregiver >18 years old, child 30 days to 8 years old taking a prescription liquid medication, English or Spanish speaker.

 

Total sample size: 251 initially enrolled; 124 randomized to the intervention group and 121 to standard treatment; 227 total underwent follow-up assessments.

 

Sample characteristics:

  • Age (years): Child: intervention group 3.7 (2.2), standard treatment 3.4 (2.3); Caregiver: intervention group 31.1 (8.2), standard treatment 29.6 (6.9)
  • Gender: Child: intervention group 47.9% female, standard treatment 35.5% female; Caregiver: intervention group mother 87.4%, father 10.5%, other 2.4%; standard treatment mother 93.4%, father 5.8%, other 0.8%
  • Race/Ethnicity: Black 11%, Asian 7%, White 3.2%, other 78.8%
  • Average education (years): intervention group 11.5 (SD=3.6), standard treatment 11.3 (SD=3.2)

 

Literacy levels: adequate: 69.6%; marginal: 17.8%; inadequate: 12.7% (caregiver literacy measured by the Test of Functional Health Literacy in Adults [TOFHLA])

Variables:

  • Independent: intervention
  • Dependent: medication knowledge, dosing accuracy, adherence

 

Intervention: Those in the intervention group were given plain-language, pictogram-based medication instruction sheets to convey information about medication name, dose, frequency, length of treatment, preparation, storage, and adherence. They also received brief 1-3 minute counseling/teach-back sessions. Controls received standard care.

Main outcomes and results:

  1. Intervention caregivers prescribed daily-dose medications were less likely than controls to make errors in knowledge of dose frequency (0% vs. 15.1%, p=0.007), but there was no difference among caregivers administering as-needed medications.
  2. Intervention caregivers were less likely than controls to report incorrect medication preparation for daily-dose (p=0.04) and as-needed dose (p=0.006) users.
  3. Intervention caregivers were significantly more likely to report use of a standardized dosing instrument for daily-dose (p=0.008) and as-needed dose (p=0.002) users.
  4. Intervention caregivers were more likely to dose medications accurately than controls (5.4% vs. 47.8% inaccuracy, respectively); a worked example translating these proportions into an absolute effect size follows this list.
  5. Non-adherence was lower in the intervention group than among controls (9.3% vs. 38%).
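
The dosing-accuracy result in item 4 can be restated in absolute terms. The following is a minimal sketch that converts the published percentages into an absolute risk reduction (ARR) and a number needed to treat (NNT); it uses only the proportions reported above, not patient-level trial data.

def arr_and_nnt(control_rate: float, intervention_rate: float):
    """Absolute risk reduction and number needed to treat from two event rates."""
    arr = control_rate - intervention_rate   # absolute risk reduction
    nnt = 1 / arr                            # number needed to treat
    return arr, nnt

# Dosing inaccuracy: 47.8% of control caregivers vs. 5.4% of intervention caregivers
arr, nnt = arr_and_nnt(control_rate=0.478, intervention_rate=0.054)
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")   # ARR = 42.4%, NNT = 2.4

Read this way, roughly one additional caregiver doses accurately for every two to three caregivers who receive the pictogram-based instruction sheets and teach-back counseling.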

 

Comments: The intervention had many different components; as a result, it is difficult to ascertain which component was most effective. Results were not stratified by literacy level, but the authors report nearly the same effect size in the high- and low-literacy groups. The intervention seems reasonable to incorporate into clinical care because it is not as resource-intensive as other possible interventions.

 

Example 2:

 

 

Study design: Randomized controlled trial.

 

Setting: UNC internal medicine clinic.

 

Eligibility criteria: ≥18 years old, diagnosed with type 2 diabetes, currently receiving treatment for diabetes at the clinic, HbA1c >8%, English-speaking, life expectancy >6 months.

 

Total sample size: 285 referred; 217 randomized (105 control, 112 intervention); 95 control and 98 intervention participants completed the study and were included in the analysis.

 

Sample characteristics:

  • Age (years): Control: low literacy (LL) 59 (10.4), higher literacy (HL) 56 (10.9); Intervention: LL 57 (10.5), HL 51 (13)
  • Gender: Control (n=105): LL 53% female, HL 58% female; Intervention (n=112): LL 55% female, HL 65% female
  • Race/Ethnicity: Control: LL 68% African American, HL 55% African American; Intervention: LL 94% African American, HL 51% African American
  • Income: ≤$20,000: Control: LL 85%, HL 71%; Intervention: LL 82%, HL 59%
  • Insurance status: Public insurance: Control: LL 79%, HL 54%; Intervention: LL 59%, HL 26%
  • Education: Less than high school: Control: LL 82%, HL 26%; Intervention: LL 82%, HL 59%

 

Literacy levels: measured with the Rapid Estimate of Adult Literacy in Medicine (REALM)

Variables:

  • Independent: intervention and literacy
  • Dependent: HbA1c, blood pressure

Intervention: Intervention patients received care from clinical pharmacists in a disease management team; the care included one-to-one educational sessions, clinical decision making with an evidence-based algorithm, telephone reminders and assistance in overcoming specific barriers to care, and use of specific communication techniques to improve comprehension in low-literacy populations.

 

Main outcomes and results:

  1. Overall, patients in the intervention group were significantly more likely than controls to improve HbA1c levels (adjusted difference –1.0%, p=0.001) and more likely to attain HbA1c levels under 7.0% at 12-month follow-up (adjusted OR=1.9, p=0.05). Treatment effects did not differ significantly between the HL and LL groups.
  2. However, LL intervention participants had more improvement in HbA1c levels than control patients (adjusted difference –1.4%, p<0.001) and were more likely to attain the goal HbA1c <7.0% than control patients (adjusted OR=4.6, p=0.02).
  3. Intervention patients were more likely to improve systolic blood pressure than controls (adjusted difference –7.6 mm Hg, p=0.006), and the effect was about the same for HL and LL patients.

 

Comments: The authors adjusted for baseline covariates (including race, age, sex, income, insulin status, duration of disease, and baseline HbA1c and systolic blood pressure levels) when the baseline difference had p<0.20. The study provides an example of a multi-component intervention that can achieve results. The intervention required resources for educational sessions, creation of a personalized self-management plan, follow-up phone calls, and communication techniques to improve comprehension in low-literacy populations.
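
As an illustration of the kind of covariate adjustment described above, the following is a minimal sketch using ordinary least squares in Python. The data frame, file name, and column names are assumptions for illustration; this is not the trial’s actual analysis code, and the real analysis may have used a different model specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level data: one row per patient with 12-month HbA1c,
# baseline HbA1c, a 0/1 treatment indicator, and baseline covariates.
df = pd.read_csv("diabetes_trial.csv")  # placeholder file name

model = smf.ols(
    "hba1c_12mo ~ treatment + hba1c_baseline + age + C(sex) + C(income_group) + C(insulin)",
    data=df,
).fit()

# The coefficient on `treatment` is the covariate-adjusted between-group
# difference in 12-month HbA1c, analogous to the reported adjusted difference.
print(model.params["treatment"], model.pvalues["treatment"])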

 

Example 3:

 

 

Study design: Randomized controlled trial.

 

Setting: Outpatient VA primary care clinics in Chicago, IL.

 

Eligibility criteria: Men ≥50 years of age scheduled to see a provider for a health problem at the clinics. Men were excluded if they had a personal or family history of colon cancer or polyps, a personal history of IBS, a home fecal occult blood test (FOBT) in the last year, or flexible sigmoidoscopy or colonoscopy in the past 5 years.

 

Total sample size: Control: 963, 185 completed literacy assessment. Intervention: 197, 197 completed literacy assessment.

 

Sample characteristics:

  • Age (mean years): 67.8 (SD=10.5)
  • Gender: 100% men
  • Race/Ethnicity: 45% white, 50% African-American, 5% other
  • Income: NR
  • Insurance status: NR
  • Average education: 79% high school graduate

 

Literacy levels: measured with REALM. <9th grade: 33%, ≥9th grade: 67%

 

Intervention:

  • Control: One clinic-treatment and advice as usual.
  • Intervention: Health care providers in the intervention clinic attended a 2-hour workshop on guidelines for colorectal cancer screening and improving communication with patients with low literacy skills. Providers attended 1-hour feedback sessions every 4-6 months, and they received their personal recommendation and adherence rates. The patient intervention included a brochure and video focused on colon cancer screening, self-efficacy, and screening instructions.

 

Main outcomes and results: In the 6- to 18-month period after the initial visit, 69.4% of the control group patients and 76% of the intervention group patients received a recommendation for colon cancer screening (p=0.2). The intervention group had a higher rate of screening completion (41.3% vs. 32.4%, p=0.003). Among patients screened for literacy, those with low literacy in the intervention group completed screening tests more often than those with low literacy in the control group (55.7% vs. 30%, p=0.002).

 

Comments: Interesting study because intervention included both patient-focused educational materials and provider workshops on communication strategies for patients with limited health literacy.

 

Example 4:

 

 

Study design: Randomized controlled trial. Participants randomized by day of week in clinic.

 

Setting: Three clinic sites in Shreveport, Louisiana: the pediatric clinic at LSU-Shreveport, the Caddo Parish Health Unit, and a private pediatric office.

 

Eligibility criteria: Parents or other adults accompanying children being seen for immunization in June-July 1995.

 

Total sample size: 646 potential, 26 refused, 10 incomplete data, 610 included.

 

Sample characteristics:

  • Age (mean): Group 1: 28, Group 2: 29
  • Race/Ethnicity: Group 1: white: 50%, black: 49%; Group 2: white: 52%, black: 47%
  • Income: NR
  • Insurance status: NR
  • Average education (years): mean = 12.5 years; ≥9 years: 97%; ≥10 years: 86%; 1+ year of college: 307

 

Literacy levels: measured with REALM

 

Variables:

  • Independent: brochure type
  • Dependent: comprehension, pamphlet preference

 

Intervention: Group 1 received the improved CDC pamphlet (existing intervention), while Group 2 received the LSU pamphlet (new intervention). Readability was measured for both pamphlets using the Gunning Fog Index (6th-grade level) and the Flesch-Kincaid grade level (4th grade).
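
For readers unfamiliar with these indices, both are computed from simple text statistics. The sketch below shows the standard published formulas; the counts passed in are assumed to come from an upstream step, since real tools rely on dictionaries or heuristics to count syllables and “complex” (three-or-more-syllable) words.

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning Fog Index: 0.4 * (average sentence length + percent complex words)."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from words per sentence and syllables per word."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example with made-up counts for a short pamphlet passage:
print(round(gunning_fog(words=1200, sentences=110, complex_words=60), 1))         # ~6.4
print(round(flesch_kincaid_grade(words=1200, sentences=110, syllables=1600), 1))  # ~4.4

Either index can be used to check whether draft patient materials land near an intended grade level before field testing them with patients.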

 

Main outcomes and results:

  1. Comprehension: Across all reading levels, CDC: 60%, LSU: 65% (p<0.01). By reading level, the LSU pamphlet outperformed the CDC pamphlet for readers at or above the 9th-grade level (p<0.001); there was no statistically significant difference below the 9th-grade level, and comprehension scores of those in the lowest two reading levels (grades 0-3 and 4-6) were not significantly improved with the LSU pamphlet.
  2. Preference: The LSU pamphlet was preferred over the CDC pamphlet.

 

Comments: Comprehension was strikingly low regardless of pamphlet style. The study raises important issues regarding informed consent for immunization. On average, participants could comprehend only one-third to one-half of what they read. The authors conclude that written materials alone may not be sufficient. This study provides an example of assessing the effectiveness of existing educational materials and of evaluating the effectiveness of new material.


 

References

  1. Abrams, M. A., L. L. Hung, A. B. Kashuba, J. G. Schwartzberg, P. E. Sokol, and K. C. Vergara. 2007. Reducing the risk by designing a safer, shame-free, health care environment. Chicago: American Medical Association. Available at: https://psnet.ahrq.gov/issue/reducing-risk-designing-safer-shame-free-health-care-environment (accessed May 21, 2020).
  2. AHRQ (Agency for Healthcare Research and Quality). 2009. About the CAHPS item set for addressing health literacy. Rockville, MD: Agency for Healthcare Research and Quality. Available at: https://cahpsdatabase.ahrq.gov/files/CGGuidance/About%20the%20Item%20set%20for%20Addressing%20Health%20Literacy.pdf (accessed May 21, 2020).
  3. Berwick, D. M. 2002. A user’s manual for the IOM’s “Quality Chasm” report. Health Affairs 21(3):80-90. https://doi.org/10.1377/hlthaff.21.3.80
  4. Coleman, E. A. 2006. Care transitions measure (CTM-15). Denver, CO: Care Transitions Program.
  5. Comings, J., S. Reder, and A. Sum. 2001. Building a level playing field: The need to expand and improve the national and state adult education and literacy systems. Cambridge: National Center for the Study of Adult Learning and Literacy. Available at: http://www.ncsall.net/fileadmin/resources/research/op_comings2.pdf (accessed May 21, 2020).
  6. Davis, T. C., J. A. Bocchini, Jr., D. Fredrickson, C. Arnold, E. J. Mayeaux, P. W. Murphy, R. H. Jackson, N. Hanna, and M. Paterson. 1996. Parent comprehension of polio vaccine information pamphlets. Pediatrics 97(6 Pt 1):804-810. Available at: https://pubmed.ncbi.nlm.nih.gov/8657518/ (accessed May 21, 2020).
  7. Davis, T. C., D. D. Fredrickson, L. Potter, R. Brouillette, A. C. Bocchini, M. V. Williams, and R. M. Parker. 2006a. Patient understanding and use of oral contraceptive pills in a southern public health family planning clinic. Southern Medical Journal 99(7):713-718. https://doi.org/10.1097/01.smj.0000223734.77882.b2
  8. Davis, T. C., M. S. Wolf, P. F. Bass, J. A. Thompson, H. H. Tilson, M. Neuberger, and R. M. Parker. 2006b. Literacy and misunderstanding of prescription drug labels. Annals of Internal Medicine 145(12): 887-894. https://doi.org/10.7326/0003-4819-145-12-200612190-00144
  9. Davis, T. C., M. S. Wolf, P. F. Bass, M. Middlebrooks, E. Kennen, D. W. Baker, C. L. Bennett, R. Durazo-Arvizu, A. Bocchini, S. Savory, and R. M. Parker. 2006c. Low literacy impairs comprehension of prescription drug warning labels. Journal of General Internal Medicine 21(8):847-851. https://doi.org/10.1111/j.1525-1497.2006.00529.x
  10. DeWalt, D. A., N. D. Berkman, S. L. Sheridan, K. N. Lohr, and M. Pignone. 2004. Literacy and health outcomes: A systematic review of the literature. Journal of General Internal Medicine 19:1228-1239. https://doi.org/10.1111/j.1525-1497.2004.40153.x
  11. DeWalt, D. A., L. F. Callahan, V. H. Hawk, K. A. Broucksou, A. Hink, R. Rudd, and C. Brach. 2010. Health literacy universal precautions toolkit. Washington, DC: Agency for Healthcare Research and Quality. Available at: https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/quality-resources/tools/literacy-toolkit/healthliteracytoolkit.pdf (accessed May 22, 2020).
  12. Dewalt, D. A., R. M. Malone, M. E. Bryant, M. C. Kosnar, K. E. Corr, R. L. Rothman, C. A. Sueta, and M. P. Pignone. 2006. A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [isrctn11535170]. BMC Health Services Research 6(1):30. https://doi.org/10.1186/1472-6963-6-30
  13. Ferreira, M. R., N. C. Dolan, M. L. Fitzgibbon, T. C. Davis, N. Gorby, L. Ladewski, D. Liu, A. W. Rademaker, F. Medio, B. P. Schmitt, and C. L. Bennett. 2005. Health care provider-directed intervention to increase colorectal cancer screening among veterans: Results of a randomized controlled trial. Journal of Clinical Oncology 23(7):1548-1554. https://doi.org/10.1200/JCO.2005.07.049
  14. The Joint Commission. 2009. Specifications manual for national hospital quality measures. Washington, DC. Available at: https://manual.jointcommission.org/ (accessed May 22, 2020).
  15. Kutner, M., E. Greenberg, Y. Jin, and C. Paulsen. 2006. The health literacy of America’s adults: Results from the 2003 National Assessment of Adult Literacy. Washington, DC: National Center for Education Statistics. Available at: https://nces.ed.gov/pubs2006/2006483.pdf (accessed May 22, 2020).
  16. Langley, G. L., K. M. Nolan, T. W. Nolan, C. L. Norman, and L. P. Provost. 2009. The improvement guide: A practical approach to enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass. Available at: http://www.ihi.org/resources/Pages/Publications/ImprovementGuidePracticalApproachEnhancingOrganizationalPerformance.aspx (accessed May 22, 2020).
  17. Madlon-Kay, D. J., and F. S. Mosch. 2000. Liquid medication dosing errors. Journal of Family Practice 49(8):741-744. Available at: https://pubmed.ncbi.nlm.nih.gov/10947142/ (accessed May 22, 2020).
  18. Murray, M. D., J. Young, S. Hoke, W. Tu, M. Weiner, D. Morrow, K. T. Stroupe, J. Wu, D. Clark, F. Smith, I. Gradus-Pizlo, M. Weinberger, and D. C. Brater. 2007. Pharmacist intervention to improve medication adherence in heart failure: A randomized trial. Annals of Internal Medicine 146(10):714-725. https://doi.org/10.7326/0003-4819-146-10-200705150-00005
  19. National Committee for Quality Assurance. 2008. Standards and guidelines for physician practice connections—patient-centered medical home. Washington, DC: National Committee for Quality Assurance. Available at: https://www.ncqa.org/programs/health-care-providers-practices/patient-centered-medical-home-pcmh/ (accessed May 22, 2020).
  20. Office of Minority Health. 2001. National standards for culturally and linguistically appropriate services in health care. Washington, DC: U.S. Department of Health and Human Services. Available at: https://minorityhealth.hhs.gov/assets/pdf/checked/finalreport.pdf (accessed May 22, 2020).
  21. Physicians Consortium for Performance Improvement. 2007. Prostate cancer: Physician performance measurement set. Chicago: American Medical Association.
  22. Physicians Consortium for Performance Improvement. 2009. Care transitions: Performance measurement set. Chicago: American Medical Association.
  23. Pignone, M., D. A. DeWalt, S. Sheridan, N. Berkman, and K. N. Lohr. 2005. Interventions to improve health outcomes for patients with low literacy. A systematic review. Journal of General Internal Medicine 20:185-192. https://doi.org/10.1111/j.1525-1497.2005.40208.x
  24. Rothman, R., D. A. DeWalt, R. Malone, B. Bryant, A. Shintani, B. Crigler, M. Weinberger, and M. Pignone. 2004. The influence of patient literacy on the effectiveness of a primary-care based diabetes disease management program. JAMA 292(14):1711-1716. https://doi.org/10.1001/jama.292.14.1711
  25. Rozich, J. D., and R. K. Resar. 2001. Medication safety: One organization’s approach to the challenge. Journal of Clinical Outcomes Management 8:27-34. Available at: https://www.semanticscholar.org/paper/Medication-Safety%3A-One-Organization’s-Approach-to-Rozich-Resar/ababc2a8a55cffe64b3d5b7b4a50b6d92da5e64f (accessed May 22, 2020).
  26. Yin, H. S., B. P. Dreyer, L. van Schaick, G. L. Foltin, C. Dinglas, and A. L. Mendelsohn. 2008. Randomized controlled trial of a pictogram-based intervention to reduce liquid medication dosing errors and improve adherence among caregivers of young children. Archives of Pediatrics & Adolescent Medicine 162(9):814-822. https://doi.org/10.1001/archpedi.162.9.814

 

DOI

https://doi.org/10.31478/201307f

Suggested Citation

DeWalt, D. A. and J. McNeill. 2013. Integrating Health Literacy with Health Care Performance Measurement. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201307f

Disclaimer

The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

